The Internet Watch Foundation (IWF) has identified a significant and growing threat in which AI technology is being exploited to produce child sexual abuse material (CSAM). Our first report, published in October 2023, revealed more than 20,000 AI-generated images posted to a dark web forum in a single month, of which more than 3,000 depicted criminal child sexual abuse activities. Since then, the issue has escalated and continues to evolve.
This updated report, published in July 2024, evaluates what has changed since October 2023 in AI child sexual abuse imagery and in the technology being abused to create it. It should be considered an update to the initial report and reviewed alongside it.
These incredibly realistic deepfake, or partially synthetic, videos of child rape and torture are made by offenders using AI tools that add the face or likeness of a real person or victim.
The key findings of this report are as follows:
Any images assessed as criminal were criminal under one of two UK laws: the Protection of Children Act 1978 (as amended by the Criminal Justice and Public Order Act 1994) and the Coroners and Justice Act 2009. Under these, 2,562 images were assessed as criminal pseudo-photographs and 416 were assessed as criminal prohibited images.
Progress in computer technologies, including generative AI, has enormous potential to improve our lives, and misuse of this technology is only a small part of that picture.
Developments in computer technology, such as the growth of the internet, the spread of video-calling and livestreaming, and the advance of CGI and image-editing programs, have enabled the widespread production and distribution of CSAM currently in evidence.
It is too early to know whether generative AI should be added to that list as a technology that represents a step change in the history of the production and distribution of CSAM.
Nonetheless, this report evidences a growing problem with several key differences from previous technologies. Chief among them is the potential for offline generation of images at scale, which could overwhelm those working to fight online child sexual abuse and divert significant resources from real CSAM towards AI CSAM.
In this context, it is worth re-emphasising that this is the worst, in terms of image quality, that AI technology will ever be. Generative AI only surfaced in the public consciousness in the past year; a consideration of what it will look like in another year – or, indeed, five years – should give pause.
At some point on this timeline, realistic full-motion video content will become commonplace. The first examples of short AI CSAM videos have already been seen; these will only become more realistic and more widespread.
Report disclaimer: The images used in this report are screenshots of content available on the clear and dark web. We have attempted to cite the sources of these screenshots, some of which depict likenesses of famous people or films. These likenesses were generated by users submitting prompts to AI models; they are not images of the actors or stills from the films themselves. This goes some way towards demonstrating the photorealism of images produced by AI models.