IWF urges closure of ‘loophole’ in proposed EU laws criminalising AI child sexual abuse as synthetic videos make ‘huge leaps’ in sophistication

Published:  Fri 11 Jul 2025

Rapidly improving technology means AI-generated child sexual abuse videos are now “indistinguishable” from genuine imagery, say experts at the Internet Watch Foundation (IWF), Europe’s largest hotline dedicated to finding and removing child sexual abuse material online.

New data, published today (Friday, July 11) by the IWF, show confirmed reports of AI child sexual abuse imagery have risen 400%, with AI child sexual abuse discovered on 210 webpages in the first six months of 2025 (January 1 – June 30).

In the same period in 2024, IWF analysts found AI child sexual abuse imagery on 42 webpages.

Disturbingly, the number of AI-generated videos has rocketed in this time, with 1,286 individual AI videos of child sexual abuse being discovered in the first half of this year compared to just two in the same period last year.

Of those confirmed child sexual abuse videos, 1,006 were assessed as the most extreme (Category A) imagery under law – videos which can depict rape, sexual torture or bestiality.

All the AI videos confirmed by the IWF so far this year are so convincing they must be treated under UK law exactly as if they were genuine footage. EU law does not yet explicitly address synthetic abuse imagery, but legislators are currently negotiating an update to the 2011 Child Sexual Abuse Directive, which is intended to close this gap.

The charity is warning, however, that the Council of the EU’s current approach on the proposed Recast Directive contains a deeply concerning loophole that would allow the possession of AI-generated child sexual abuse imagery for “personal use”.

As the current negotiations around the legislation progress, the IWF is urging the Council to align its position with the EU Parliament by removing the personal use exception for AI-generated images and videos and ensuring the robust criminalisation of the creation, possession and distribution of AI child sexual abuse manuals and models across the EU.

Derek Ray-Hill, IWF Interim CEO

Derek Ray-Hill, Interim Chief Executive of the IWF, said: “We must do all we can to prevent a flood of synthetic and partially synthetic content joining the already record quantities of child sexual abuse we are battling online. I am dismayed to see the technology continues to develop at pace, and that it continues to be abused in new and unsettling ways.

“Just as we saw with still images, AI videos of child sexual abuse have now reached the point where they can be indistinguishable from genuine films. The children depicted are often real and recognisable, the harm this material does is real, and the threat it poses is set to escalate even further.”

The IWF says the picture quality of AI-generated videos of child sexual abuse has progressed in “leaps and bounds” over the past year, and that criminals are now creating AI-generated child sexual abuse videos at scale – sometimes including the likenesses of real children. Highly realistic videos of abuse are no longer confined to very short, glitch-filled clips, and the potential for criminals to create even longer, more detailed videos is becoming a reality.

Analysts also warn AI-generated child sexual abuse is becoming more “extreme” as criminals become more adept at creating and depicting new obscene scenarios.

Mr Ray-Hill added: “There can be no justifiable exception for the personal possession or consumption of AI images and videos depicting the sexual abuse and torture of children. Children have been harmed in the process, and AI is magnifying this harm. Predators use this technology to make new AI images of known child victims to suit their personal predilections. And in some cases, existing images of child sexual abuse have been used in datasets to train AI generators.

“The EU is making great strides on this vital issue and it must not falter. We urge the Council to rethink its position and align with the EU Parliament to pass legislation that is fit for purpose and ensures that survivors are at the heart of all decisions made.”

The IWF has unique powers* to proactively hunt down child sexual abuse imagery on the internet. Because of this, its trained and experienced analysts are often the first to discover new ways criminals are abusing technology.

IWF Senior Internet Content Analyst

Jeff** is a Senior Analyst at the IWF specialising in AI. He said: “In terms of video quality, child sexual abuse imagery creators are leaps and bounds ahead of where they were last year.

“The first AI child sexual abuse videos we saw were deepfakes – a known victim’s face put on to an actor in an existing adult pornographic video. It wasn’t sophisticated but could still be pretty convincing. The first fully synthetic child sexual abuse video we saw at the beginning of last year was just a series of jerky images put together, nothing convincing.

“But now they have really turned a corner. The quality is alarmingly high, and the categories of offence depicted are becoming more extreme as the tools improve in their ability to generate video showing two or more people. The videos also include sets showing known victims in new scenarios.

“Just as still images jumped to photorealistic as demand increased and the tools were improved, it was only a matter of time before videos went the same way.”

IWF analysts also warn that the sophistication and realism of these videos are still developing, and there are indications criminals themselves cannot believe how easy it is to create child sexual abuse imagery using AI tools.

One perpetrator, writing in an online forum, said: “Technology moves so fast – just when I finally understand how to use a tool, something newer and better comes along.”

 

* In 2014, the UK government empowered the IWF to proactively identify and remove CSAM online. It is the only non-law-enforcement agency in Europe, and one of only a small handful of similar organisations worldwide, with these powers.

**Name changed to protect analyst’s identity.
