No Loopholes: New Development Shows the EU Must Close the AI Gap through the Recast CSA Directive

Published: Mon 22 Sep 2025

As European Union negotiators prepare for crucial talks on the recast Child Sexual Abuse Directive, a disturbing new development highlights exactly why comprehensive legislation cannot wait. The rise of artificial intelligence has brought unprecedented opportunities for innovation and progress, but it has also handed criminals powerful new tools to exploit and harm children.

The scale of AI-generated child sexual abuse material is exploding globally. In 2024, 245 reports processed contained actionable AI-generated images of child sexual abuse – a 380% increase on 2023, when just 51 such reports were identified. Yet as this threat evolves at breakneck speed, a dangerous gap could remain in EU law – one that could provide a legal safe harbour for criminals under the guise of ‘possession for personal use’.

The IWF’s latest discovery should serve as the final wake-up call to legislators across Europe. For the first time, analysts have identified AI-generated child sexual abuse material linked to AI chatbots – a disturbing evolution that demonstrates not only how quickly criminals adapt new technologies for exploitation, but also why any legislative loophole will be exploited.


Hidden in Plain Sight

This discovery reveals sophisticated methods to evade detection. The chatbot website where child sexual abuse material was found operated with two faces: a legitimate front offering standard AI chat services, and a hidden section accessible only through specific links or pathways. This dual nature allowed illegal content to exist on the clear web whilst avoiding immediate detection – the same URL displaying either lawful or criminal material depending on how users accessed it.

Accessing the same website via a particular digital pathway allows users to interact with multiple chatbots that will simulate ‘abhorrent’ sexual scenarios with children. Simulated scenarios that users can engage with include: ‘child prostitute in a hotel’; ‘sex with your child while your wife is on holiday’; and ‘child and teacher alone after class’. These are not accidental outputs or edge cases: the metadata from the criminal images shows that the text instructions used to generate them were explicitly designed to create child sexual abuse material.

This discovery comes against a backdrop of explosive growth in AI-generated child sexual abuse material. The IWF reported a 400% increase in such content in just the first six months of 2025, with analysts taking action against 210 webpages containing AI-generated abuse imagery – five times as many as in the whole of the previous year.

Even more concerning is the severity of this content. Of the 1,286 AI-generated videos processed between January and June 2025, over 1,000 were classified as Category A, the most extreme classification under UK law, depicting rape, sexual torture, or bestiality.


Why “Personal Use” Exceptions Are Dangerous

The recast Child Sexual Abuse Directive (2024 proposal) updates the EU’s 2011 Directive to close legal gaps and address new challenges such as AI-generated CSAM, livestreaming, and online grooming. It will strengthen victim support, harmonise definitions and penalties across Member States, and align EU law with technological and societal developments.

As a directive, this law will set minimum standards for countries to reach through their national legal frameworks. Though many Member States already have robust laws concerning AI-generated CSAM, this is not uniform across the Union. The IWF calls for any exception for ‘personal use’ of AI-generated child sexual abuse material to be robustly rejected to ensure a zero-tolerance approach in the EU.

An exception would:

  • Stymie enforcement efforts: CSA-image generators enable perpetrators to produce an unlimited amount of CSAM. It is unclear how supporters of a ‘personal use’ exemption expect authorities to distinguish between ‘personal use’ and distribution when the same individual might possess hundreds of AI-generated images.
  • Enable offline escalation: Research shows that viewing child sexual abuse material – of any type – fuels normalisation of violence against children and can escalate offending behaviour. ‘Personal use’ creates a pathway to contact offending.
  • Revictimise existing survivors: AI models are often trained on existing abuse imagery, meaning known victims are subjected to further trauma as their images are manipulated to create new scenarios of abuse.
  • Undermine detection: When possession becomes legally ambiguous, it becomes far harder for platforms, law enforcement, and organisations to take swift action that protects children and all internet users.


The Need for Future-Proof Legislation

The rapid evolution from static AI-generated images to sophisticated chatbots capable of real-time interaction demonstrates why legislation must be technology-neutral and future-proof to set a minimum standard for protection across Europe. We cannot predict every way that artificial intelligence will be weaponised against children, but we can ensure that our legal frameworks are robust enough to address these threats as they emerge.

The European Parliament has taken the right approach by maintaining a zero-tolerance stance. Now the Council must align with this position and close the AI loophole entirely. This means:

  • Criminalising all creation, possession, and distribution of AI-generated child sexual abuse material;
  • Explicitly prohibiting the training of any type of AI model with existing abuse imagery, or the possession of such models;
  • Banning the production and distribution of guides and tools designed to create such content; and
  • Ensuring no exceptions that could be exploited by offenders.


A Critical Moment

The chatbot discovery is not an isolated incident. As one Internet Watch Foundation analyst noted, this was “not really surprising” but rather “an inevitable consequence when new technologies are weaponised for malicious purposes by bad actors”.

The EU stands at a critical juncture. The recast Directive represents a historic opportunity to establish comprehensive minimum protections across the 27 Member States that can adapt to emerging threats. But only if legislators resist the temptation to carve out exceptions.

Children deserve protection that is as sophisticated as the threats they face.

Kerry Smith, Chief Executive of the IWF, emphasised: “The recast Child Sexual Abuse Directive represents Europe's best chance to close dangerous loopholes that criminals are already exploiting. Every day we delay comprehensive action, more sophisticated methods emerge to harm children. The Council must align with Parliament's position and reject any 'personal use' exceptions – there is simply no legitimate reason for anyone to possess this material, and any ambiguity in the law will be weaponised by offenders”. 

Angèle Lefranc, Advocacy Officer at Fondation pour l'Enfance, added: “The European Union is at a pivotal moment and will have, in the coming weeks, the opportunity to position itself as a leader in the fight against this scourge. We call on the Council of the European Union to unconditionally recognise that AI-generated child abuse is a crime.”

European legislators must hold the line and ensure that the law reflects a simple truth: there is never, under any circumstances, a legitimate use for child sexual abuse material, regardless of how it was created.
