Charity urges ‘zero tolerance’ of ‘dangerous’ AI child sexual abuse imagery in EU as content reaches record high

Published:  Tue 24 Mar 2026

EU lawmakers are being urged to recognise the “wide-reaching harms” of AI child sexual abuse imagery as record levels are discovered online. 

A new report, published today (March 24), reveals the full scale of AI-generated child sexual abuse images and videos being discovered online by the Internet Watch Foundation (IWF).  

It shows how, in 2025, the IWF identified 8,029 AI-generated images and videos of realistic child sexual abuse – a 14% increase in criminal AI content on the previous year.  

The IWF is now reasserting its stance that AI-generated child sexual abuse material must be criminalised in all forms across the EU through new laws currently being negotiated by EU legislators. 

The report, titled Harm without limits: AI child sexual abuse material through the eyes of our Analysts, also gives “unsettling” insight into the kind of offender conversations IWF analysts are witnessing as criminals vie with each other to create ever more lifelike and extreme child sexual abuse scenarios.  

New report looks at the harms of AI-generated child sexual abuse material

Chillingly, offenders even discuss setting up and using hidden cameras to source still footage of real children, which they can then transform into AI sexual abuse video content.

They also predict how, in a few years’ time, agentic AI tools may be able to create full child sexual abuse “movies” by feeding a prompt to an uncensored AI agent. “No skills with editing or tech will be required,” remarked one dark web forum user.

In January, the IWF, which is Europe’s largest hotline dedicated to disrupting the spread of child sexual abuse imagery online, published data showing a 260-fold increase in videos of AI-generated child sexual abuse.

This new report shows the combined surge in still images and videos, as well as horrifying details of the intentions of those producing them.  

The data shows:  

  • In 2025, the IWF identified 8,029 AI-generated images and videos of realistic child sexual abuse, a 14% increase in criminal AI content on the previous year.
  • An additional 82 items were classed as prohibited images, which are actioned under UK law even when the material is not photorealistic, such as cartoons, illustrations and animations.
  • Of the 3,443 AI-generated child sexual abuse videos identified, which is a more than 260-fold increase on the 13 videos found in 2024, 65% were classified as Category A. This is the most severe legal category under UK law which encompasses offences such as rape, sexual torture and bestiality. 
  • By comparison, 43% of non-AI criminal videos seen by the IWF in 2025 were Category A – demonstrating that AI is being used to create more violent content. 

Internet Watch Foundation Senior Analyst Natalia* said: “It is very apparent from the unsettling dark web conversations observed by the IWF Hotline that AI innovations are regarded with delight by users of child sexual abuse material.

“Every new development in generative AI is extolled for its ability to enhance the realism, to heighten the severity, or make more immersive, any conceivable sexual scenario with a child. This could be through adding audio to video, being able to depict multiple people interacting or even being able to successfully manipulate imagery of a real child known to an offender.

“Instead of being a vehicle for connection, the technology only deepens offenders’ capacity to view children and victims as abstract playthings, whose likenesses can be altered endlessly for their own enjoyment. 

“We know this affects victims and survivors, as its creation and distribution is just as keenly felt as with traditional forms of child sexual abuse.” 

One offender quoted in the report describes how surprised they are at “just how uncensored” the technology is, exclaiming that the ability to edit and fine-tune is “going to be nuts”.

Another praises an AI child sexual abuse video as “an absolute masterpiece”, adding that “anything you desire is possible in extreme realism.”

Analysts have also observed discussions about generating AI imagery of children known to offenders, with one individual saying they are “impressed with the results of [AI] image to video conversions” and that they want to use hidden cameras to obtain footage of real children to convert into AI videos.

The IWF is doubling down on the call made by its Chief Executive in Rome last year that all AI child sexual abuse imagery, and the tools used to create it, should be banned across the EU. This includes the creation, possession and distribution of the content, as well as instructional materials and AI models that are fine-tuned to generate child sexual abuse imagery.

This can be implemented through a revised law, the Child Sexual Abuse Directive, which is currently being debated by EU lawmakers.  

Kerry Smith, IWF CEO

Internet Watch Foundation CEO Kerry Smith said: “Advances in technology should never come at the expense of a child’s safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life. This material is dangerous.

“We urgently need governments and technology companies to recognise the concrete and wide-reaching harms of AI child sexual abuse imagery. We are urging for a comprehensive EU ban of AI child sexual abuse content, and the tools used to create it, as a minimum standard with no exceptions. There must be zero tolerance.

“This report’s in-depth view of the risks posed to children by AI, as well as emerging areas of concern, only serves to highlight the need for companies to adopt a safety-by-design approach that ensures child protection is baked into product development. Proper implementation of the AI Act should empower tech companies to work with designated authorities like the IWF to test the risks posed by their models.

“Children, victims and survivors cannot afford for us to be complacent. New technology must be held to the highest standard. In some cases, lives are on the line.”

The report also highlights how offenders are already anticipating the next generation of AI tools and how they might exploit them.

IWF analysts have observed offenders discussing the possibilities of “agentic AI”, systems designed to carry out complex tasks autonomously. One offender wrote: “I believe in a year or two we will be able to create our own movies just by feeding a prompt to an uncensored AI agent. No skills with editing or tech will be required.” 

AI child sexual abuse content with an audio component is also an emerging area of concern. This may be in the form of recordings – audio deepfakes – which synthetically generate the sexualised voices of children. 

While the IWF does not typically assess audio-only reports, one example identified by analysts was a fully synthetic video showing a child who appeared to be between three and six years old speaking to the camera and performing a sexual act on an adult man. Both the video and audio were generated by AI.  

Angele Lefranc, Advocacy Manager at Fondation pour l’Enfance said: “IWF’s latest findings confirm our worst fears regarding the use of AI technology for child sexual abuse purposes.  

“Just a week before the expiration of the EU temporary legal framework [April 3] that allows platforms to detect CSAM and grooming, we raise the alarm once again: we are facing a child sexual abuse crisis, and we must make children’s online safety a top priority.  

“Offenders do not only seize new technologies, they anticipate them. Online services and public authorities need to be two steps ahead of them, not behind.”   

* Not her real name. IWF analysts’ identities are protected.

EU failure on temporary derogation puts children at risk


The legal protections that allow companies in the EU to voluntarily detect, find, and remove child sexual abuse material on their platforms are about to expire, as legislative negotiations grind to a halt.

17 March 2026 Statement
Why the EU’s temporary law allowing companies to detect child sexual abuse online must be extended


Child safety is on the line: the EU must extend its temporary law before vital protections are turned off.

9 March 2026 Blog
“AI child sexual abuse imagery is not a future risk – it is a current and accelerating crisis”


IWF CEO Kerry Smith calls for complete EU ban of AI abuse content at high-level meeting of global experts in Rome.

20 November 2025 News