Since the IWF first started monitoring AI in early 2023, we’ve seen a frightening advance in the ability to generate child sexual abuse imagery artificially. In 2025, the IWF assessed 8,029 AI-generated images and videos as showing realistic child sexual abuse.
IWF’s new report - Harm without limits: AI child sexual abuse material through the eyes of our Analysts - seeks to centre the human impact of AI CSAM (child sexual abuse material), setting out clearly the harm caused to children and wider society. It captures the views of our expert Analysts, who are on the frontline of removing AI CSAM from the internet, as well as excerpts from dark web offender communities, where users openly celebrate the accessibility and sophistication of AI-generated abuse.
While AI-generated material remains a comparatively small proportion of the huge amount of CSAM our Analysts act on every year, the volume and severity of AI imagery have increased exponentially, driven by the availability and ease of use of these tools. We now face a technological landscape that can generate infinite violations with unprecedented ease.
AI CSAM is widespread and growing: In 2025, we assessed 8,029 AI-generated images and videos as showing realistic child sexual abuse. This imagery appears across both dark web and mainstream commercial platforms on the clear web.
AI CSAM is increasingly extreme and sophisticated: Realistic full-motion AI video content is now commonplace. 65% of videos (2,233 in total) identified by the IWF last year were classified as Category A, the most extreme classification.
AI CSAM causes real harm and is highly gendered: Generative models can be trained and fine-tuned using photographic abuse imagery, directly re-victimising survivors. AI CSAM fuels sexual interest in children, normalises extreme violence, and increases the risk of contact offending.
AI child sexual abuse chatbots are accessible on the clear web: IWF Analysts have identified AI-generated child sexual abuse images shared on AI chatbot services, which encourage users to act out simulated child sexual abuse scenarios.
AI CSAM tools are converging: Advances in AI have driven the convergence of tools that previously required separate capabilities. Single applications can now generate abusive imagery with minimal effort, removing the need for technical expertise and significantly lowering barriers to entry.
In 2024, the IWF identified a 380% rise in actionable AI-generated reports. That year also marked the transition from static "deepfakes" to the first realistic AI-generated videos appearing on the dark web.
The IWF published its first dedicated research into AI abuse in October 2023, after Analysts spotted the first renderings of synthetic abuse material in the spring of that year.
AI CSAM is an umbrella term for image or video content depicting the sexual abuse of children that has been created either entirely by or with the assistance of generative AI systems. These systems can generate content across multiple modalities, including text, image, audio and video.
Low-Rank Adaptation (LoRA) is a fine-tuning technique that allows users to customise generative models with minimal technical skill or financial resources. LoRAs can create realistic deepfakes of specific children using as few as 20 existing images in as little as 15 minutes.
In 2025, the IWF identified 3,443 AI-generated child sexual abuse videos, representing a 26,385% increase compared to 2024, when only 13 such videos were recorded.
Yes. Emerging AI models allow users to combine pictures, text, and audio to generate videos featuring synthetic "audio deepfakes": cloned voices of children used to simulate abusive scenarios.
Agentic AI refers to systems capable of achieving specific goals with minimal human supervision. These tools lower barriers to entry for offenders by automating the building and maintenance of illegal online platforms.
Under UK law, Category A is the most severe classification for abuse material, encompassing depictions of penetrative sexual activity, sadism, or sexual activity with an animal. In 2025, 65% of AI CSAM videos identified were classified as Category A.
In February 2025, the UK Government introduced a new criminal offence under the Crime and Policing Bill for making, adapting, possessing, or supplying a "CSA image-generator".
IWF Analysts have identified AI-generated images of children shared on clear-web chatbot services. These services often encourage users to simulate sexual conversations or act out abusive scenarios.
This is a legal tactic where offenders claim that genuine evidence of contact abuse was actually generated by AI and therefore does not depict a real child. It exploits the "liar's dividend", whereby public awareness of synthetic media creates plausible deniability for real-world crimes.
Analysis shows that AI-generated abuse is highly gendered, with girls comprising 97% of the illegal AI-generated images assessed by the IWF in 2025.
Current AI CSAM often appears deliberately imperfect, emulating amateur photography and making it indistinguishable from real photographic imagery to the untrained eye. IWF Analysts can often only recognise material as synthetic because they are familiar with the specific victims depicted.
Report disclaimer: The images used in these reports include screenshots of content available on the clear and dark web. We've attempted to cite the sources of these screenshots, some of which depict likenesses of famous people or films. These likenesses have been generated by users submitting prompts to AI models; they are not images of the actors or from the films themselves. This goes some way to demonstrating the photorealism of images produced by AI models.