EU Parliament leads the way in tackling AI-generated child sexual abuse material

Published: Tue 8 Jul 2025

The Internet Watch Foundation (IWF) and the European Parliament’s Intergroup on Children’s Rights co-hosted a high-level technical briefing on the growing threat of AI-generated child sexual abuse material (AI-CSAM) in Brussels on 1 July. The session brought together MEPs and staff, the IWF’s Chief Technology Officer, and child protection advocates to assess the scale of this emerging risk and discuss the urgent responses needed.

Opening the event, MEP Veronika Cifrová Ostrihoňová stressed the importance of centring survivors in EU policymaking, citing the words of one child sexual abuse victim: “People say we should protect users’ privacy. What about mine?”

Her statement highlighted a critical truth: the trauma experienced by victims of sexual abuse is compounded by the digital permanence and repeated sharing of their images – and made all the worse by the prospect that those images could be used to create further abusive material through AI.

As MEP Cifrová Ostrihoňová emphasised, the numbers speak for themselves. In 2024, the IWF confirmed 245 reports of AI-generated CSAM – up 380% on 2023 – containing 7,644 images and videos. Nearly 40% of this material depicted Category A abuse (the most severe under UK law), compared with 21% across all CSAM assessed by the IWF in 2024. That year, 97% of the CSAM found by IWF analysts depicted girls; for AI-generated CSAM, the figure rose to 98%.

AI-generated CSAM was first identified by the IWF in 2023. Since then, the technology has evolved rapidly. Today, offenders use a range of generative tools – from text-to-image generators to “nudifying” apps – to create and share material depicting child sexual abuse. Most alarmingly, the most advanced systems are now capable of producing short, hyper-realistic videos.

Dan Sexton, IWF Chief Technology Officer

Dan Sexton, the IWF’s Chief Technology Officer, warned that “what we’re seeing now is highly realistic abuse imagery being generated with minimal technical skill. This technology is being exploited to cause real harm to children.”

In the most disturbing cases, existing child sexual abuse images and videos are used to train generative AI models, effectively reanimating the abuse and prolonging victims’ suffering.

The law must keep pace with the threat

Despite the severity of the threat, the current EU legal framework does not yet explicitly address synthetic abuse imagery. The European Parliament’s position on the proposed Child Sexual Abuse Directive (CSAD) represents a vital step forward. The IWF and the wider ECLAG coalition – comprising 73 organisations across Europe – strongly welcome the Parliament’s clear stance to:

  • Criminalise AI-generated CSAM in all circumstances, rejecting exemptions for personal use and recognising that all forms of CSAM perpetuate harm and fuel demand;
  • Support law enforcement and child protection groups with clearer definitions, better tools and stronger cross-border cooperation.

Claire Fourçans, speaking on behalf of the ECLAG* steering group, underlined the need for decisive legal tools: “Crimes like AI-generated CSAM are not hypothetical – they are happening now, and children are suffering. The Parliament’s position recognises this and shows real leadership. We trust it will stand firm in trilogue negotiations.”

ECLAG logo and steering group

Loopholes in the Council of the EU’s position must be closed

By contrast, the Council’s current General Approach on the Child Sexual Abuse Directive contains a deeply concerning loophole that would allow the possession of AI-generated CSAM for “personal use”. The IWF and its partners are unequivocal: this provision must be removed.

At the European Parliament event, Hannah Swirsky, Head of Policy and Public Affairs at the IWF, stated that “there is no such thing as harmless child sexual abuse material”, adding that “allowing synthetic abuse imagery under any pretext legitimises exploitation, hinders enforcement and sends the wrong message to victims.”

Stronger protections, stronger tools, better prevention

The IWF welcomes the Parliament’s efforts to strengthen the EU’s legal framework through the Directive, in parallel with the ongoing negotiations on the proposed Child Sexual Abuse Regulation. 

As these negotiations progress, the IWF urges the Council to take this opportunity to align its position with the Parliament by:

  • removing the personal use exception for AI-generated CSAM;
  • ensuring robust criminalisation of the creation, possession, and distribution of manuals, guides and models across the EU;
  • and making certain that survivors are recognised, supported, and protected.

As MEP Cifrová Ostrihoňová concluded: “We need stronger safeguards. We need stronger tools. And we need better prevention.” 

*The European Child Sexual Abuse Legislation Advocacy Group (ECLAG) is a coalition of over 70 international NGOs fighting to protect children from sexual violence and abuse. The IWF is a proud member of the Steering Group.
