Detection is the first step in protecting children. Databases of known abuse material, including the IWF’s hash list, which serves technology companies across the world, protect victims and survivors by enabling the identification of that material wherever it reappears. Without legal clarity, this pipeline will be broken.
Child sexual abuse and exploitation online is a complex problem that requires a range of complementary solutions to mitigate harm across the generation, consumption, and distribution of abuse material.
When it comes to the distribution of known child sexual abuse images and videos, proactive detection technology is foundational. If it is removed, the entire online safety ecosystem is compromised. User reporting, well known to be inadequate against the scale of abuse material shared online, will not touch the sides. Even when an image is reported, the harm has already happened, and nothing will stop the same image from being uploaded and shared again and again.
The landscape has also materially changed since the last legal gap: the number of AI-generated child sexual abuse videos found and assessed by the IWF jumped from 13 in 2024 to 3,440 in 2025.
The temporary derogation was never intended to be permanent. It was a bridge, allowing platforms to continue detecting and reporting abuse while the EU finalises the Child Sexual Abuse Regulation.