Artificial Intelligence (AI) and the Production of Child Sexual Abuse Imagery

2024 Update: Understanding the Rapid Evolution of AI-Generated Child Abuse Imagery

The Internet Watch Foundation (IWF) has identified a significant and growing threat in which AI technology is being exploited to produce child sexual abuse material (CSAM). Our first report, published in October 2023, revealed that more than 20,000 AI-generated images had been posted to a dark web forum in a single month, of which nearly 3,000 depicted criminal child sexual abuse. Since then, the issue has escalated and continues to evolve.

This updated report, published in July 2024, evaluates what has changed since October 2023 in AI child sexual abuse imagery and in the technology being abused to create it. It should be considered an update to the initial report and be read alongside it.

AI-generated imagery of child sexual abuse has progressed at such an accelerated rate that the IWF is now seeing the first realistic examples of AI videos depicting the sexual abuse of children.

These incredibly realistic deepfake, or partially synthetic, videos of child rape and torture are made by offenders using AI tools that add the face or likeness of a real person or victim. 


Key Updates from the July 2024 Report

  1. Increase in AI-generated Child Sexual Abuse Material: The latest findings show that over 3,500 new AI-generated criminal child sexual abuse images have been uploaded onto the same dark web forum analysed in October 2023.
  2. More Severe Images: Of the AI-generated images on the forum confirmed to be child sexual abuse, a higher proportion than in October 2023 depicted the most severe, Category A, abuse, indicating that perpetrators are increasingly able to generate complex ‘hardcore’ scenarios.
  3. Emergence of AI Child Sexual Abuse Videos: AI-generated child sexual abuse videos, primarily deepfakes, have started circulating, highlighting rapid technological advancements in AI models/generators. Increasingly, deepfake videos shared in dark web forums take adult pornographic videos and add a child’s face using AI tools. 
  4. Clear Web Increase: There is a noticeable increase in AI-generated child sexual abuse imagery on the clear web, including on commercial sites.
  5. AI Child Sexual Abuse Featuring Known Victims and Famous Children: Perpetrators increasingly use fine-tuned AI models to generate new imagery of known victims of child sexual abuse or famous children.

October 2023 Report Summary

The key findings of this report are as follows:

  • In total, 20,254 AI-generated images were found to have been posted to one dark web CSAM forum in a one-month period.
  • Of these, 11,108 images were selected for assessment by IWF analysts. These were the images that were judged most likely to be criminal.
    (The remaining 9,146 AI-generated images either did not contain children or contained children but were clearly non-criminal in nature.)
  • 12 IWF analysts dedicated a combined total of 87.5 hours to assessing these 11,108 AI-generated images.

Any images assessed as criminal fell under one of two UK laws:

  • The Protection of Children Act 1978 (as amended by the Criminal Justice and Public Order Act 1994). This law criminalises the taking, distribution and possession of an “indecent photograph or pseudo-photograph of a child”.
  • The Coroners and Justice Act 2009. This law criminalises the possession of “a prohibited image of a child”. These are non-photographic – generally cartoons, drawings, animations or similar.

In total, 2,978 images were assessed as criminal: 2,562 as criminal pseudo-photographs and 416 as criminal prohibited images.

Other key findings

  1. AI-generated content currently accounts for a small proportion of normal IWF activity, though one of its defining features is its potential for rapid growth.
  2. Perpetrators can legally download everything they need to generate these images and can then produce as many images as they want – offline, with no opportunity for detection. Various tools exist for improving and editing generated images until they look exactly as the perpetrator wants.
  3. Most AI CSAM found is now realistic enough to be treated as ‘real’ CSAM. The most convincing AI CSAM is visually indistinguishable from real CSAM, even for trained IWF analysts. Text-to-image technology will only get better and pose more challenges for the IWF and law enforcement agencies.
  4. There is now reasonable evidence that AI CSAM has increased the potential for the re-victimisation of known child sexual abuse victims, as well as for the victimisation of famous children and children known to perpetrators. The IWF has found many examples of AI-generated images featuring known victims and famous children.
  5. AI CSAM offers another route for perpetrators to profit from child sexual abuse. The first examples of this new commerciality have been identified by the IWF.
  6. Creating and distributing guides to the generation of AI CSAM is not currently an offence, but could be made one. The legal status of AI CSAM models (files used for generating images) is a more complicated question.

AI Report Conclusions


Progress in computer technologies, including generative AI, has enormous potential to better our lives; the misuse of this technology is a small part of that picture.

Developments in computer technology – the growth of the internet, the spread of video-calling and livestreaming, and the advent of CGI and image-editing programs – have enabled the widespread production and distribution of CSAM that is currently in evidence.

It is too early to know whether generative AI should be added to that list as a notable technology constituting a step change in the history of the production and distribution of CSAM.

Nonetheless, this report evidences a growing problem with several key differences from previous technologies. Chief among them is the potential for offline generation of images at scale – with the clear potential to overwhelm those working to fight online child sexual abuse and to divert significant resources from real CSAM towards AI CSAM.

In this context, it is worth re-emphasising that this is the worst, in terms of image quality, that AI technology will ever be. Generative AI only surfaced in the public consciousness in the past year; a consideration of what it will look like in another year – or, indeed, five years – should give pause.

At some point on this timeline, realistic full-motion video content will become commonplace. The first examples of short AI CSAM videos have already been seen – these are only going to get more realistic and more widespread.

Solving some of the problems posed by AI-generated indecent images now will be necessary to create response models that can be deployed against the growth of video content in the future.

Report disclaimer: The images used in this report are screenshots of content available on the clear and dark web. We have attempted to cite the sources of these screenshots, some of which depict likenesses of famous people or films. These likenesses have been generated by someone submitting prompts to AI models; they are not images of the actors or from the films themselves. This goes some way towards demonstrating the photorealism of images produced by AI models.