AI-generated child sexual abuse: why safety by design must be the next step

Published: Tue 24 Mar 2026

“I sometime wonder why there isn’t more real imagery from nurseries? With AI it doesn't matter anymore. We can create all the fun we want.”

“The more I experiment, the more surprised I am at just how uncensored [Redacted model name] is. The Edit version and any finetunes are going to be NUTS.”

These quotes appear in the Internet Watch Foundation’s (IWF) new report “Harm without limits: AI child sexual abuse material through the eyes of our Analysts”.

They come from offender discussions our Analysts have observed on dark web forums, where users openly discuss – often with clear enthusiasm – the quality of AI-generated child sexual abuse material, the ease with which they can generate their most extreme fantasies, and the opportunities AI presents to accelerate the production of illegal content.

These conversations underline an uncomfortable reality: offenders are watching the development of AI closely and actively exploring how it can be used to produce increasingly extreme material.

At the same time, public concern is clear. New polling from Savanta* shows that more than four in five UK adults want the Government to introduce regulation to ensure AI systems are safe by design. This strong public backing reflects a growing recognition that action is needed now to prevent harm.

We’re calling for an AI Bill that includes key measures to ensure safety-by-design becomes a non-negotiable standard in AI development – jump to our recommendations at the bottom of this blog.

In 2025, the IWF identified 8,029 AI-generated images and videos depicting realistic child sexual abuse. The most dramatic shift has been the emergence of AI-generated abuse videos. In 2025 alone, we identified 3,440 AI-generated child sexual abuse videos, compared with just 13 the year before. Nearly two thirds of these videos were classified as Category A, the most extreme category of abuse material.

In my previous blog, I highlighted new findings from our Hotline revealing, for the first time, AI-generated child sexual abuse images linked directly to chatbot platforms. Since then, the UK Government has committed to regulating AI chatbots – a welcome and important step.

The UK is moving in the right direction and continues to demonstrate leadership in tackling online child sexual abuse. New measures in the Crime and Policing Bill target the tools used to generate AI CSAM, as well as the guidance that enables offenders to exploit AI for this purpose.

But the speed of technological change means we cannot afford to wait until harms have already escalated before acting.

What we are seeing today – highly realistic AI-generated abuse videos and increasingly sophisticated tools – reflects a period where safety safeguards were not consistently embedded into AI systems from the outset. We should learn from that experience.

Encouragingly, the Government has already introduced provisions in the Crime and Policing Bill that will allow designated authorities such as the Internet Watch Foundation to test AI models. As a global leader in tackling child sexual abuse imagery online, we stand ready to support this work and help ensure independent scrutiny is built into the development process.

The opportunity now is to ensure safety-by-design becomes a non-negotiable standard in AI development. The best vehicle for further safeguards to prevent the generation of AI CSAM is an AI Bill. 

Key measures should include:

  • Mandatory pre-market assessment: AI systems must be tested before release to market to ensure they cannot be adapted to generate CSAM. Permitting risk assessments and testing, while very welcome, is not the same as requiring them.
  • Built-in risk mitigation: Protections should be incorporated from the outset to make abuse technically more difficult.
  • Robust content moderation: Developers should maintain strong policies and technologies to detect and prevent CSAM.
  • Use of trusted datasets: Resources like the IWF’s Hash List (over 3 million verified CSAM hashes) and URL lists should be used to block known CSAM from training data.

The UK Government now has an opportunity to go further: to embed safety at the heart of AI innovation and to act before the risks we see today become far more severe tomorrow.

* The online survey was run by polling company Savanta in March 2026 and included 2,204 UK adults. Data was weighted to be representative of the UK by age, gender, region and social grade. 
