In 2025, the IWF identified 8,029 AI-generated images and videos depicting realistic child sexual abuse. The most dramatic shift has been the emergence of AI-generated abuse videos: in 2025 alone, we identified 3,440 AI-generated child sexual abuse videos, up from just 13 the year before. Nearly two-thirds of these videos were classified as Category A, the most extreme category of abuse material.
In my previous blog, I highlighted new findings from our Hotline revealing, for the first time, AI-generated child sexual abuse images linked directly to chatbot platforms. Since then, the UK Government has committed to regulating AI chatbots – a welcome and important step.
The UK is moving in the right direction and continues to demonstrate leadership in tackling online child sexual abuse. New measures in the Crime and Policing Bill target both the tools used to generate AI CSAM and the guidance that enables offenders to exploit AI for this purpose.
The speed of technological change means we cannot afford to wait until harms have already escalated before acting.
What we are seeing today – highly realistic AI-generated abuse videos and increasingly sophisticated tools – reflects a period in which safeguards were not consistently embedded into AI systems from the outset. We should learn from that experience.
Encouragingly, the Government has already introduced provisions in the Crime and Policing Bill that will allow designated authorities such as the Internet Watch Foundation to test AI models. As a global leader in tackling child sexual abuse imagery online, we stand ready to support this work and help ensure independent scrutiny is built into the development process.
The opportunity now is to ensure safety-by-design becomes a non-negotiable standard in AI development. The best vehicle for further safeguards to prevent the generation of AI CSAM is an AI Bill.