AI-generated child sexual abuse: now cannot be the moment the EU downs tools

Published: Tue 24 Mar 2026

The EU is at a crossroads on child protection. Legislative developments are moving, but the offenders who seek to exploit gaps in the law and in AI safety standards are moving faster. The Internet Watch Foundation's (IWF) new AI CSAM report, Harm without limits: AI child sexual abuse material through the eyes of our Analysts, provides the evidence base that EU policymakers need to act with urgency.

The EU must act before the risks we see today become far more severe tomorrow and it must do so on two fronts: preventing the generation of AI child sexual abuse material and preserving the detection systems that protect children right now.

In 2025, the IWF identified 8,029 AI-generated images and videos depicting realistic child sexual abuse. The most dramatic shift has been the emergence of AI-generated abuse videos – 3,443 in 2025, compared with just 13 the year before. Nearly two thirds were classified as Category A, the most extreme category of abuse material under UK law.

In January 2026, EU policymakers were rightly outraged over the risks posed by the Grok AI tool. The incident galvanised legislators and accelerated calls for a ban on AI nudification tools, with proposals now advancing at pace through both the European Parliament and the Council of the EU.

Yet two months later, EU institutions have failed to extend the temporary derogation from the ePrivacy Directive that allows platforms to detect and report child sexual abuse material voluntarily. Unless a solution is found before 3 April, technology companies will face legal uncertainty about whether they can search for and block child sexual abuse on their services – including AI-generated material, which will be able to circulate unchecked.

Unless negotiators return to the table, the EU has effectively given the green light to uploading, sharing and seeking AI images and videos depicting the sexual abuse of children.

This is a contradiction that cannot stand. Strengthening laws against generating AI CSAM means nothing if the systems that detect and remove it are switched off.

Separately from the temporary derogation, the recast of the EU Child Sexual Abuse Directive (2011/92/EU) and the implementation of the EU Artificial Intelligence Act (2024/1689) together represent a generational opportunity to close legal loopholes and embed child protection into AI governance. The EU AI Office should be empowered to act decisively against non-compliant developers, with voluntary codes of practice backed by binding obligations. The IWF stands ready to support independent scrutiny throughout this process.

Key measures should include: 

  • Closing the legal loophole: The recast Directive must fully criminalise all forms of CSAM, remove any exemption for personal use or creation, criminalise AI tools designed to create CSAM and align penalties with those for in-person abuse.
  • Mandatory pre-market assessment: AI systems must be tested before release. Permitting testing is not the same as requiring it.
  • Safety by design: Child protection must be built in from the outset, with the creation of child sexual abuse images and videos recognised and handled as a systemic risk under the AI Act.
  • Use of trusted datasets: The IWF's Hash List (over 3 million verified CSAM hashes) and URL lists should be used to block known child sexual abuse imagery from training data.
  • Banning nudifying technology: Nudifying platforms should be prohibited for EU-based users. There is no positive use case for these tools – they exist only to humiliate, harass and exploit, and can be used to inflict further abuse. The IWF can support enforcement by adding such sites to our URL blocklist.


EU failure on temporary derogation puts children at risk

The legal protections that allow companies in the EU to voluntarily detect, find, and remove child sexual abuse material on their platforms are about to expire, as legislative negotiations grind to a halt.

17 March 2026 Statement
Why the EU’s temporary law allowing companies to detect child sexual abuse online must be extended

Child safety is on the line - the EU must extend its temporary law before vital protections are turned off.

9 March 2026 Blog
“AI child sexual abuse imagery is not a future risk – it is a current and accelerating crisis”

IWF CEO Kerry Smith calls for complete EU ban of AI abuse content at high-level meeting of global experts in Rome.

20 November 2025 News