“AI child sexual abuse imagery is not a future risk – it is a current and accelerating crisis”

Published: Thu 20 Nov 2025

Delegates in Rome were told of the suffering of child sexual abuse survivors who are made victims all over again when images of their exploitation are transformed into new, more extreme images of abuse by AI tools.

Internet Watch Foundation CEO Kerry Smith attended the event on Child Dignity in the Artificial Intelligence Era to share expertise and highlight the rapid, frightening advancement in the ability to artificially generate child sexual abuse imagery since the IWF first started monitoring AI in early 2023.

Kerry, and other conference participants, met with Pope Leo XIV during the event.

In 2024, 245 reports contained actionable AI-generated images of child sexual abuse – which equated to 7,644 images and a small number of videos. This is a 380% increase on 2023, when just 51 reports contained AI-generated images of child sexual abuse.

Child Dignity in the Artificial Intelligence Era

Speaking after the conference on November 12-13, IWF CEO Kerry Smith said: “It is gratifying to see the high level of attention being directed towards the risks and harms posed to children from AI in the digital era in which we now live.

“But make no mistake, AI child sexual abuse imagery is not a future risk – it is a current and accelerating crisis. And if we don’t act now to regulate AI tools, the demand and the output will grow as generative AI continues to evolve.”

AI-generated child sexual abuse material identified by the IWF is becoming more extreme – and it is highly gendered.

Almost 40% of all AI-generated material in 2024 was rated Category A under UK law. That’s the most severe level – depicting the rape or sexual torture of a child or involving animals.

The content is overwhelmingly focused on girls. Last year, 98% of all AI-generated images and videos of child sexual abuse, in which the sex of the child was recorded, depicted girls only.

Kerry continued: “That tells us this technology is reproducing and amplifying existing gendered patterns of abuse – not disrupting them. It’s embedding misogyny and child sexualisation even more deeply into the online ecosystem.

“We’re not just seeing synthetic imitations of mild or ambiguous content; we’re talking about highly violent, deeply abusive material.

“There is no evidence to support the claim that AI-generated imagery could be used as a ‘safe substitute’ that might somehow reduce harm by giving offenders a non-contact outlet. Using AI child sexual abuse material reinforces harmful sexual fantasies and normalises abusive behaviour rather than preventing it.”

At the gathering, which was convened by Fondazione Child ETS in collaboration with the Child Dignity Alliance, the IWF CEO called for the criminalisation of AI-generated child sexual abuse material in all forms across the EU through new laws currently being negotiated in the EU Parliament, the Recast Child Sexual Abuse Directive. A ban on the use of so-called ‘nudification’ apps, which allow users to easily remove the clothing from images of real people and children, was also proposed.

Kerry said: “This is the EU’s chance to update this legislation in line with the current state of child sexual abuse and exploitation across the world.

“We need a comprehensive EU ban as a minimum standard that includes the creation, possession and distribution of AI child sexual abuse imagery, with no exceptions.

“The IWF is also calling for a bar on the use of nudification apps as there is no possible justification for the existence of this invasive technology.”

IWF’s specialist analysts are seeing evidence that AI technology is being used not just to create entirely new images, but to recreate and reimagine the sexual abuse of existing victims.

Fine-tuned or custom AI models make it possible to generate endless, on-demand images of known victims with just a few prompts. This means that survivors of child sexual abuse are repeatedly victimised every time these models are trained or used.

Kerry concluded: “Because these AI systems are often trained and fine-tuned using real, photographic abuse material, they quite literally build on the suffering of real children. The development of AI tools must be regulated to prioritise children’s safety, dignity and rights.”

The event brought together partners across faiths, nations and disciplines to discuss how the rapid development of artificial intelligence is profoundly transforming young lives and to address its impact on child dignity and rights.

Participants were called upon to sign a declaration to protect children and keep them safe, including through the use of ethical and transparent technology. The document was presented to Pope Leo XIV, who received the conference participants in an audience at the Apostolic Palace in the Vatican.

Main image: © Vatican Media
