How AI is being abused to create child sexual abuse imagery

Report: IWF research into how artificial intelligence (AI) is increasingly being used to create child sexual abuse imagery online.

In 2023, the Internet Watch Foundation (IWF) investigated its first reports of child sexual abuse material (CSAM) generated by artificial intelligence (AI).

Initial investigations uncovered a world of text-to-image technology. In short, you type a description of what you want to see into an online generator, and the software creates the image.

The technology is fast and accurate – images usually fit the text description very well. Many images can be generated at once; you are limited only by the speed of your computer. You can then pick out your favourites, edit them, and direct the technology to output exactly what you want.

These AI images can be so convincing that they are indistinguishable from real images.

Report summary

Child sexual abuse imagery generated using artificial intelligence is a new and growing area of concern.

The key findings of this report are as follows:

  • In total, 20,254 AI-generated images were found to have been posted to one dark web CSAM forum in a one-month period.
  • Of these, 11,108 images were selected for assessment by IWF analysts. These were the images that were judged most likely to be criminal.
    (The remaining 9,146 AI-generated images either did not contain children or contained children but were clearly non-criminal in nature.)
  • 12 IWF analysts dedicated a combined total of 87.5 hours to assessing these 11,108 AI-generated images.

Every image assessed as criminal fell under one of two UK laws:

  • The Protection of Children Act 1978 (as amended by the Criminal Justice and Public Order Act 1994). This law criminalises the taking, distribution and possession of an “indecent photograph or pseudo-photograph of a child”.
  • The Coroners and Justice Act 2009. This law criminalises the possession of “a prohibited image of a child”. These are non-photographic – generally cartoons, drawings, animations or similar.

In total, 2,562 images were assessed as criminal pseudo-photographs, and 416 were assessed as criminal prohibited images.

Other key findings

  1. AI-generated content currently accounts for a small proportion of normal IWF activities, though one of its defining features is its potential for rapid growth.
  2. Perpetrators can legally download everything they need to generate these images, and can then produce as many images as they want – offline, with no opportunity for detection. Various tools exist for improving and editing generated images until they look exactly as the perpetrator wants.
  3. Most AI CSAM found is now realistic enough to be treated as ‘real’ CSAM. The most convincing AI CSAM is visually indistinguishable from real CSAM, even for trained IWF analysts. Text-to-image technology will only get better and pose more challenges for the IWF and law enforcement agencies.
  4. There is now reasonable evidence that AI CSAM has increased the potential for the re-victimisation of known child sexual abuse victims, as well as for the victimisation of famous children and children known to perpetrators. The IWF has found many examples of AI-generated images featuring known victims and famous children.
  5. AI CSAM offers another route for perpetrators to profit from child sexual abuse. The first examples of this new commerciality have been identified by the IWF.
  6. Creating and distributing guides to the generation of AI CSAM is not currently an offence, but could be made one. The legal status of AI CSAM models (files used for generating images) is a more complicated question.

AI report conclusions

Progress in computer technologies, including generative AI, has enormous potential to improve our lives; misuse of this technology is a small part of that picture.

Developments in computer technology – the growth of the internet, the spread of video-calling and livestreaming, and the advance of CGI and image-editing programs – have enabled the widespread production and distribution of CSAM now in evidence.

It is too early to know whether generative AI should be added to the list above as a technology that represents a step change in the history of the production and distribution of CSAM.

Nonetheless, this report documents a growing problem with several key differences from previous technologies. Chief among them is the potential for offline generation of images at scale – with the clear potential to overwhelm those working to fight online child sexual abuse and to divert significant resources from real CSAM towards AI CSAM.

In this context, it is worth re-emphasising that this is the worst, in terms of image quality, that AI technology will ever be. Generative AI only surfaced in the public consciousness in the past year; a consideration of what it will look like in another year – or, indeed, five years – should give pause.

At some point on this timeline, realistic full-motion video content will become commonplace. The first examples of short AI CSAM videos have already been seen – these are only going to get more realistic and more widespread.

Solving some of the problems posed by AI-generated indecent images now will be necessary to develop approaches that can be deployed against the growth of video content in the future.

Report disclaimer: The images used in this report are screenshots of content available on the clear and dark web. We have attempted to cite the sources of these screenshots, some of which depict likenesses of famous people or films. These likenesses have been generated by someone submitting prompts to AI models; they are not images of the actors or from the films themselves. This goes some way towards demonstrating the photorealism of images produced by AI models.