Rapidly improving technology means AI-generated child sexual abuse videos are now “indistinguishable” from genuine imagery, say experts at the Internet Watch Foundation (IWF), Europe’s largest hotline dedicated to finding and removing child sexual abuse material online.
New data, published today (Friday, July 11) by the IWF, show that confirmed reports of AI child sexual abuse imagery have risen 400%, with AI child sexual abuse imagery discovered on 210 webpages in the first six months of 2025 (January 1 – June 30).
In the same period in 2024, IWF analysts found AI child sexual abuse imagery on 42 webpages.
Disturbingly, the number of AI-generated videos has rocketed in this time, with 1,286 individual AI videos of child sexual abuse discovered in the first half of this year, compared with just two in the same period last year.
Of those confirmed child sexual abuse videos, 1,006 were assessed as the most extreme (Category A) imagery under law – videos which can depict rape, sexual torture or bestiality.
All the AI videos confirmed by the IWF so far this year are so convincing they must be treated under UK law exactly as if they were genuine footage. EU law does not yet explicitly address synthetic abuse imagery, but legislators are currently negotiating an update to the 2011 Child Sexual Abuse Directive, which is intended to close this gap.
The charity is warning, however, that the Council of the EU’s current approach on the proposed Recast Directive contains a deeply concerning loophole that would allow the possession of AI-generated child sexual abuse imagery for “personal use”.
As negotiations over the legislation progress, the IWF is urging the Council to align its position with that of the EU Parliament by removing the "personal use" exception for AI-generated images and videos, and by ensuring the robust criminalisation of the creation, possession and distribution of AI child sexual abuse manuals and models across the EU.