Speaking after the conference on November 12-13, IWF CEO Kerry Smith said: “It is gratifying to see the high level of attention being directed towards the risks and harms posed to children by AI in the digital era in which we now live.
“But make no mistake, AI child sexual abuse imagery is not a future risk – it is a current and accelerating crisis. And if we don’t act now to regulate AI tools, the demand and the output will grow as generative AI continues to evolve.”
AI-generated child sexual abuse material identified by the IWF is becoming more extreme – and it is highly gendered.
Almost 40% of all AI-generated material in 2024 was rated Category A under UK law. That’s the most severe level – depicting the rape or sexual torture of a child or involving animals.
The content is overwhelmingly focused on girls. Last year, 98% of all AI-generated child sexual abuse images and videos in which the sex of the child was recorded depicted girls only.
Kerry continued: “That tells us this technology is reproducing and amplifying existing gendered patterns of abuse – not disrupting them. It’s embedding misogyny and child sexualisation even more deeply into the online ecosystem.
“We’re not just seeing synthetic imitations of mild or ambiguous content; we’re talking about highly violent, deeply abusive material.
“There is no evidence to support the claim that AI-generated imagery could be used as a ‘safe substitute’ that might somehow reduce harm by giving offenders a non-contact outlet. Using AI child sexual abuse material reinforces harmful sexual fantasies and normalises abusive behaviour rather than preventing it.”
At the gathering, which was convened by Fondazione Child ETS in collaboration with the Child Dignity Alliance, the IWF CEO called for the criminalisation of AI-generated child sexual abuse material in all forms across the EU through new legislation currently being negotiated in the European Parliament, the Recast Child Sexual Abuse Directive. A ban on the use of so-called ‘nudification’ apps, which allow users to easily remove the clothing from images of real people and children, was also proposed.
Kerry said: “This is the EU’s chance to update this legislation in line with the current state of child sexual abuse and exploitation across the world.
“We need a comprehensive EU ban as a minimum standard that includes the creation, possession and distribution of AI child sexual abuse imagery, with no exceptions.
“The IWF is also calling for a bar on the use of nudification apps as there is no possible justification for the existence of this invasive technology.”
The IWF’s specialist analysts are seeing evidence that AI technology is being used not just to create entirely new images, but to recreate and reimagine the sexual abuse of existing victims.
Fine-tuned or custom AI models make it possible to generate endless, on-demand images of known victims with just a few prompts. This means that survivors of child sexual abuse are repeatedly victimised every time these models are trained or used.
Kerry concluded: “Because these AI systems are often trained and fine-tuned using real, photographic abuse material, they quite literally build on the suffering of real children. The development of AI tools must be regulated to prioritise children’s safety, dignity and rights.”
The event brought together partners across faiths, nations and disciplines to discuss how the rapid development of artificial intelligence is profoundly transforming young lives and to address its impact on child dignity and rights.
Participants were called upon to sign a declaration to protect children and keep them safe, including through the use of ethical and transparent technology. The document was presented to Pope Leo XIV, who received the conference participants in an audience at the Apostolic Palace in the Vatican.
Main image: © Vatican Media