AI chatbots and child sexual abuse: a wake-up call for urgent safeguards

Published: Mon 22 Sep 2025

Written by: Emma Hardy, Communications Director

Today, the Internet Watch Foundation (IWF) has published new data revealing, for the first time, child sexual abuse images linked directly to AI chatbots.

As the UK's Guardian newspaper reported this weekend, our analysts uncovered criminal material on a platform hosting multiple chatbot “characters” designed to let users simulate sexual scenarios with child avatars – with some depicted as young as seven years old.

With concerns and media attention around AI chatbots continuing to grow, the reality is that children are being placed at risk. Technology that may appear harmless is already posing serious threats to children's online safety.

What we found

Between 1 June and 7 August 2025, IWF analysts actioned 17 reports of AI-generated child sexual abuse material (CSAM) from a single site:

  • 94% were Category C images depicting children aged 11–13.
  • One report depicted a child estimated to be between 7 and 10 years old.
  • Metadata confirmed these were created using explicit prompts: deliberate instructions to generate illegal content.

Taking action together

In July, I chaired a roundtable of experts to explore the emerging risks posed by AI chatbots and companions. The discussion focused on:

  • The nature and scale of harm linked to AI chatbot technologies;
  • Whether current legislation, such as the Online Safety Act, is equipped to respond to these risks;
  • The next steps to ensure AI tools are safe by design.

While child sexual abuse was a central concern, the session also examined wider issues, including mental health, self-harm, data privacy, and the impact of AI on children's behaviour and emotional wellbeing. We're especially grateful to Pinsent Masons for hosting and supporting this important discussion.

Why this matters

AI-generated CSAM is not a victimless crime. Some AI models are trained on real abuse material, embedding the trauma of real children into synthetic content. Research shows that consuming CSAM – whether AI-generated or not – can normalise abuse, escalate offending, and fuel demand for further exploitation.

As one of our analysts explained:

“Unfortunately, to see AI chatbots used in this way doesn't come as a big surprise. It seems an inevitable consequence when new technology is ‘turned bad’ by the wrong people. We know offenders will use all means at their disposal to create, share and distribute child sexual abuse material.”

Our Hotline’s latest findings are a clear warning sign. Without urgent safeguards, AI risks becoming a weapon for abusers rather than a force for good. Child protection and safety by design must be at the heart of AI regulation.
