‘Dangerous’ AI child sexual abuse reaches record high as public backs clampdown on ‘uncensored’ tools

Published: Tue 24 Mar 2026

Record levels of “dangerous” AI child sexual abuse imagery are now being discovered online as new polling reveals 82% of UK adults believe the government must now ensure “uncensored” AI systems are made safe by design.

A new report, published today (March 24), reveals the full scale of AI-generated child sexual abuse images and videos being discovered online by the Internet Watch Foundation (IWF).

It shows how, in 2025, the IWF identified 8,029 AI-generated images and videos of realistic child sexual abuse – a 14% increase in criminal AI content on the previous year.

It’s published alongside new polling from Savanta* which shows more than four in five UK adults want the government to introduce regulation to ensure AI systems are safe by design.  

New report, Harm without limits: AI CSAM through the eyes of our Analysts, looks at the harms of AI-generated child sexual abuse material

The report, titled Harm without limits: AI child sexual abuse material through the eyes of our Analysts, also gives “unsettling” insight into the kind of offender conversations IWF analysts are witnessing as criminals vie with each other to create increasingly lifelike and extreme child sexual abuse scenarios.

Chillingly, offenders even discuss setting up and using hidden cameras to source footage of real children, which they can then transform into AI sexual abuse video content.

They also predict how, in a few years’ time, agentic AI tools may be able to create full child sexual abuse “movies” by feeding a prompt to an uncensored AI agent. “No skills with editing or tech will be required,” remarked one dark web forum user.

In January, the IWF, which is Europe’s largest hotline dedicated to disrupting the spread of child sexual abuse imagery online, published data showing a more than 260-fold increase in videos of AI-generated child sexual abuse.

This new report shows the combined surge in still images and videos, as well as horrifying details of the intentions of those producing them.  

The data shows:  

  • In 2025, the IWF identified 8,029 AI-generated images and videos of realistic child sexual abuse, a 14% increase in criminal AI content on the previous year. 
  • An additional 82 items were classed as prohibited, actioned under UK law even if the material is not photorealistic, such as cartoons, illustrations and animations.
  • Of the 3,443 AI-generated child sexual abuse videos identified (a more than 260-fold increase on the 13 videos found in 2024), 65% were classified as Category A, the most severe legal category under UK law, which encompasses offences such as rape, sexual torture and bestiality.
  • By comparison, 43% of non-AI criminal videos seen by the IWF in 2025 were Category A – demonstrating that AI is being used to create more violent content. 

Internet Watch Foundation Senior Analyst Natalia** said: “It is very apparent from the unsettling dark web conversations observed by the IWF Hotline that AI innovations are regarded with delight by users of child sexual abuse material.

“Every new development in generative AI is extolled for its ability to enhance the realism, to heighten the severity, or make more immersive, any conceivable sexual scenario with a child. This could be through adding audio to video, being able to depict multiple people interacting or even being able to successfully manipulate imagery of a real child known to an offender.

“Instead of being a vehicle for connection, the technology only deepens offenders’ capacity to view children and victims as abstract playthings, whose likenesses can be altered endlessly for their own enjoyment. 

“We know this affects victims and survivors, as its creation and distribution is just as keenly felt as with traditional forms of child sexual abuse.” 

One offender quoted in the report describes how surprised they are at “just how uncensored” the technology is, exclaiming that the ability to edit and fine-tune is “going to be nuts”.

Another praises an AI child sexual abuse video, saying it is “an absolute masterpiece” and that “anything you desire is possible in extreme realism.”

Analysts have also observed discussions on the ability to generate AI imagery of children known to offenders, with one individual saying they are “impressed with the results of [AI] image to video conversions”, and how they want to use hidden cameras to obtain footage of real children to convert into AI videos.

The IWF is calling on the UK government to tighten up laws around AI and make it mandatory for tech companies to evaluate and safeguard AI models before release to make it harder for criminals to abuse AI image generators and create child sexual abuse imagery.

This is echoed by new polling* which shows more than four in five UK adults (82%) say the government should introduce regulation to ensure AI systems are safe by design and future-proofed against causing harm.

A further 78% of survey respondents agreed that AI companies should be made to test for AI-related harms before products are released to market. 

Kerry Smith, IWF CEO

Internet Watch Foundation CEO Kerry Smith said: “Advances in technology should never come at the expense of a child’s safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life. This material is dangerous.

“The UK government has made great strides in recognising the wide-reaching harms of AI child sexual abuse imagery and we welcome the move to allow designated authorities like the IWF to test AI models.

“But this report’s in-depth view of the risks posed to children by AI, as well as emerging areas of concern, only serves to highlight the need for companies to adopt a safety-by-design approach that ensures child protection is baked into product development. This non-negotiable standard in AI development must be mandated by a clear government framework.

“Children, victims and survivors cannot afford for us to be complacent. New technology must be held to the highest standard. In some cases, lives are on the line.” 

The report also highlights how offenders are already anticipating the next generation of AI tools and how they might exploit them.

IWF analysts have observed offenders discussing the possibilities of “agentic AI”, systems designed to carry out complex tasks autonomously. One offender wrote: “I believe in a year or two we will be able to create our own movies just by feeding a prompt to an uncensored AI agent. No skills with editing or tech will be required.”

AI child sexual abuse content with an audio component is also an emerging area of concern. This may be in the form of recordings – audio deepfakes – which synthetically generate the sexualised voices of children.

While the IWF does not typically assess audio-only reports, one example identified by analysts was of a fully synthetic video showing a child who appeared to be between three and six years old speaking to the camera and performing a sexual act on an adult man. Both the video and audio were generated by AI.

Helen Rance, Deputy Director of CSA threat at the National Crime Agency said: “AI generated child sexual abuse material is illegal. It harms children. And it fuels and escalates offending. Alongside policing colleagues, we are arresting nearly 1,000 offenders and safeguarding over 1,200 children every month in relation to online sexual abuse. Offenders should be under no illusion that they will be caught and the consequences for them and their families will be life changing.

“However, policing cannot tackle AI CSAM alone. We need industry around the world to invest its money, expertise and innovation in stopping this harm at source. We need to keep investing in the tools that help policing protect children at scale. And we need to equip children, parents, carers and professionals with the confidence and skills to navigate the challenges that AI brings.

“We welcome this important report from IWF and will continue to work with them and other partners to disrupt this evolving ecosystem and keep children safe.”

* The online survey was run by polling company Savanta in March 2026 and included 2,204 UK adults. Data was weighted to be representative of the UK by age, gender, region and social grade. 

** Not her real name. IWF analysts’ identities are protected.
