AI becoming ‘child sexual abuse machine’ adding to ‘dangerous’ record levels of online abuse, IWF warns

Published: Fri 16 Jan 2026

AI tools will become “child sexual abuse machines” without urgent action, as “extreme” AI videos fuel record levels of child sexual abuse material found online by the Internet Watch Foundation (IWF).

New data released today (January 16) by the IWF shows 2025 was the worst year on record for online child sexual abuse material found by its analysts, with increasing levels of photo-realistic AI material contributing to the “dangerous” levels.

Analysts have also seen a “frightening” 26,362% rise in photo-realistic AI videos of child sexual abuse, often including real and recognisable child victims. In 2025, the IWF discovered 3,440 AI videos of child sexual abuse compared to only 13 in 2024.

Criminals are using the improving technology to create more of the most extreme Category A imagery (material which can even include penetration, bestiality, and sexual torture).

Of all the AI-generated videos of child sexual abuse discovered by the IWF in 2025, 65% (or 2,233 videos) were so extreme they were categorised as Category A.

By comparison, 43% of the non-AI criminal videos seen by the IWF in 2025 were Category A.

This material can now be made at scale by criminals with minimal technical knowledge. It harms the children whose likenesses are co-opted into the imagery, further normalises sexual violence against children, and undermines efforts to create an internet free of child sexual abuse and exploitation.

Analysts believe offenders are using the technology in greater numbers as the sophistication of AI video tools improves.

The IWF, which works internationally to prevent the global spread of child sexual abuse imagery online, says Governments and regulators around the world must now step in and force AI companies to create products that are safe by design.

Currently, the IWF warns, it is too easy for AI tools to be abused. The results helped make 2025 a record-breaking year, with analysts taking action to remove more child sexual abuse imagery than at any point in the organisation’s 30-year history.

Today’s data shows:

  • In total, last year, the IWF took action on 312,030 reports where analysts confirmed the presence of child sexual abuse material.
  • This is a record-breaking total and marks a 7% increase on the 291,730 reports the IWF confirmed as containing child sexual abuse material in 2024.

Analysts have noted a particularly stark increase in the number of AI videos they have seen:

  • In 2025, the IWF discovered 3,440 AI videos of child sexual abuse – an increase of 26,362% on the previous year when only 13 such videos were found.
  • Of these videos, 65% (or 2,233) were so extreme they were categorised as Category A – the most severe classification in UK law. Category A imagery can contain penetration, sexual torture, and even bestiality.
  • A further 30% (or 1,020 videos) were deemed to be Category B – the second most extreme category.

Kerry Smith, IWF CEO

Kerry Smith, Chief Executive of the IWF, said: “When images and videos of children suffering sexual abuse are distributed online, it makes everyone, especially those children, less safe.

“Our analysts work tirelessly to get this imagery removed to give victims some hope. But now AI has moved on to such an extent, criminals essentially can have their own child sexual abuse machines to make whatever they want to see.

“The frightening rise in extreme Category A videos of AI generated child sexual abuse shows the kind of things criminals want. And it is dangerous. Easy availability of this material will only embolden those with a sexual interest in children, fuel its commercialisation, and further endanger children both on and offline.

“Now Governments around the world must ensure AI companies embed safety by design principles from the very beginning. It is unacceptable that technology is released which allows criminals to create this content.”

Creating, possessing, and distributing child sexual abuse imagery, AI or otherwise, is already illegal in the UK.

Liz Kendall MP, Technology Secretary (Image credit: Elizabeth Kendall ©House of Commons)

Tech Secretary Liz Kendall said: "It is utterly abhorrent that AI is being used to target women and girls in this way. We will not tolerate this technology being weaponised to cause harm, which is why I have accelerated our action to bring into force a ban on the creation of non-consensual AI-generated intimate images.

"AI should be a force for progress, not abuse, and we are determined to support its responsible use to drive growth, improve lives and deliver real benefits, while taking action where it is misused.

"That is also why we have introduced a world-leading offence targeting AI models trained or adapted to generate child sexual abuse material. Possessing, supplying, or modifying these models will soon be a crime."

Jess Phillips MP, Minister for Safeguarding and Violence Against Women and Girls (Image credit: Jessica Phillips ©House of Commons)

Minister for Safeguarding, Jess Phillips said: “This surge in AI-generated child abuse videos is horrifying – this government will not sit back and let predators generate this repulsive content.

“The UK is leading the world in cracking down on this vile trade. Soon, anyone who possesses, makes or shares tools designed to generate AI child abuse, writes guides on how to exploit legitimate AI tools for this purpose, or runs sites spreading this disgusting content will face hefty prison sentences.

“There can be no more excuses from technology companies. Take action now or we will force you to.”

Chris Sherwood, NSPCC CEO

Chris Sherwood, CEO at the NSPCC, said: “These findings are both deeply alarming and sadly predictable, showing how fast AI is amplifying the record levels of child sexual abuse already circulating online. Offenders are using these tools to create extreme material at a scale we’ve never faced before, with children paying the price.

“Tech companies cannot keep releasing AI products without building in vital protections. They know the risks, and they know the harms that can be caused. It is up to them to ensure their products can never be used to create indecent images of children.

“The UK Government and Ofcom must now step in and ensure tech companies are held to account. We are calling on Ofcom to use every tool available to them through the Online Safety Act and for Government to introduce a statutory duty of care to ensure generative AI services are required to build children’s safety into the design of their products and prevent these horrific crimes.”

AI tools will become “child sexual abuse machines” without urgent action

In November, the UK Government proposed new rules to allow AI tools to be thoroughly tested to make sure they cannot be used to create child sexual abuse imagery – a change the IWF says will help ensure products are safe before they are made available to the public.

Currently, legal restrictions make such testing difficult: anyone probing an AI product to check whether offenders could use it to make images or videos of child sexual abuse risks committing an offence if criminal imagery is inadvertently created in the process.

The proposed new legal defence would mean designated bodies like the Internet Watch Foundation, as well as AI developers and other child protection organisations, would be empowered to scrutinise AI models robustly to make sure they cannot be used to create nude or sexual imagery of children.

In December, the UK Government announced further plans to outlaw AI apps (and other tools) which digitally remove clothing or ‘nudify’ subjects of photographs.

The move came after months of campaigning by the IWF and others who have argued the technology makes it too easy to create fake nude or sexual imagery of real children.

The IWF is calling on the Government to introduce the ban swiftly and to ensure it covers all services which provide nudifying tools.

Vicky Young, Head of the Lucy Faithfull Foundation’s Stop It Now helpline

Vicky Young is Head of the Stop It Now helpline at the Lucy Faithfull Foundation, a UK child protection charity which works to help people change their own behaviour if they are at risk of offending.

She said: "The figures released today by the Internet Watch Foundation are scary. The increase in AI-generated sexual images of children is a trend we can corroborate. The number of people telling us on the Stop It Now helpline about their own use of AI to view and create child sexual abuse images has doubled in the last year. 

“This behaviour is illegal and incredibly harmful. Some images are real abuse material that has been manipulated, revictimising children; others are non-sexual images altered to sexualise children, creating new victims; and some are entirely synthetic. Regardless of origin, sexualised images of children are harmful and illegal.

“The people we speak to also talk about other illegal behaviours. 91% of people who said they had accessed AI generated images had also seen images of real children that weren't created with AI. Accessing AI generated images is not only harmful, but also links to other types of online sexual offending against children.

“We want people concerned about their online behaviour to reach out for help before it escalates - get anonymous support to change and a pathway out of this behaviour through Stop It Now at stopitnow.org.uk or by calling 0808 1000 900."

Young people in the UK who are worried that nude or sexual imagery of themselves has been shared, or may be shared, online can use the free and confidential Report Remove tool to take down or block imagery of under-18s.

Visit childline.org.uk/remove

AI nudification app ban and on-device protections for children welcomed following IWF campaign

The Government’s VAWG Strategy will outline new measures to prevent nudes being sent, received, or shared on teens’ phones.

18 December 2025 News

“AI child sexual abuse imagery is not a future risk – it is a current and accelerating crisis”

IWF CEO Kerry Smith calls for complete EU ban of AI abuse content at high-level meeting of global experts in Rome.

20 November 2025 News

AI imagery getting more ‘extreme’ as IWF welcomes new rules allowing thorough testing of AI tools

The IWF welcomes new measures to help make sure digital tools are safe as new data shows AI child sexual abuse is still spreading.

12 November 2025 News