AI imagery getting more ‘extreme’ as IWF welcomes new rules allowing thorough testing of AI tools

Published: Wed 12 Nov 2025

Proposed new rules which would allow AI tools to be thoroughly tested to make sure they cannot be used to create child sexual abuse imagery have been welcomed by the Internet Watch Foundation (IWF).

Currently, legal restrictions make it difficult to test whether criminals could use AI products to make images or videos of child sexual abuse, because testers who inadvertently created criminal imagery in the process would themselves be committing an offence.

A proposed new legal defence, announced by the Government today (November 12), would mean designated bodies like the Internet Watch Foundation, as well as AI developers and other child protection organisations, would be empowered to scrutinise AI models robustly to make sure they cannot be used to create nude or sexual imagery of children.

The announcement comes as the IWF publishes new data showing reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 (January 1 to October 31) to 426 in the same period in 2025.

According to the data, the material being created has also become more extreme, with the most serious Category A content (which can include imagery involving penetrative sexual activity, sexual activity with an animal, or sadism) rising from 2,621 items over that period in 2024 to 3,086 in 2025.

Category A content now accounts for 56% of all illegal AI material, compared with 41% last year, suggesting criminals are using the technology to make the most extreme and serious imagery.
The data showed that girls have been most commonly depicted, accounting for 94% of illegal AI images in 2025. 

Online Safety Minister Kanishka Narayan MP visits NSPCC offices in London

The IWF has welcomed the Government’s proposed new measures, and met with Online Safety Minister Kanishka Narayan MP this week to discuss the harms inflicted on children and the realities the IWF hotline is seeing every day. 

Kerry Smith, Chief Executive of the IWF said: “We welcome the Government’s efforts to bring in new measures for testing AI models to check whether they can be abused to create child sexual abuse. 

“For three decades, we have been at the forefront of preventing the spread of this imagery online – we look forward to using our expertise to help further the fight against this new threat. 

“AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material. Material which further commodifies victims’ suffering, and makes children, particularly girls, less safe on and offline.

“Safety needs to be baked into new technology by design. Today’s announcement could be a vital step to make sure AI products are safe before they are released.”

The changes would mean safeguards within AI systems could be tested from the start, with the aim of limiting the production of child sexual abuse images in the first place.

The Government said the changes, due to be tabled as an amendment to the Crime and Policing Bill, mark a major step forward in safeguarding children in the digital age.  

The Bill already contains measures which will outlaw AI models specifically designed to create AI child sexual abuse imagery, as well as guides or manuals which would help offenders create this material. 

Technology Secretary Liz Kendall (Image: Elizabeth Kendall ©House of Commons)

Technology Secretary Liz Kendall said: “We will not allow technological advancement to outpace our ability to keep children safe.

“These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk.

“By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.”

Safeguarding Minister Jess Phillips (Image: Jessica Phillips ©House of Commons)

Safeguarding Minister Jess Phillips said: “We must make sure children are kept safe online and that our laws keep up with the latest threats.

“This new measure will mean legitimate AI tools cannot be manipulated into creating vile material and more children will be protected from predators as a result.”

‘Disturbing’ AI-generated child sexual abuse images found on hidden chatbot website that simulates indecent fantasies

Internet watchdog says this is the first time it has identified imagery of child sexual abuse linked to AI chatbots.

22 September 2025 News
AI chatbots and child sexual abuse: a wake-up call for urgent safeguards

Our analysts uncovered criminal material on a platform hosting multiple chatbot “characters” designed to let users simulate sexual scenarios with child avatars.

22 September 2025 Blog