AI chatbots and child sexual abuse: a wake-up call for urgent safeguards

Published: Mon 22 Sep 2025

Written by: Emma Hardy, Communications Director

Today, the Internet Watch Foundation (IWF) has published new data revealing, for the first time, child sexual abuse images linked directly to AI chatbots.

As the UK's Guardian newspaper reported this weekend, our analysts uncovered criminal material on a platform hosting multiple chatbot “characters” designed to let users simulate sexual scenarios with child avatars – with some depicted as young as seven years old.

As concerns and media attention around AI chatbots continue to grow, the reality is that children are being placed at risk. What may appear to be harmless technology is already posing multiple threats to children's online safety.

What we found

Between 1 June and 7 August 2025, IWF analysts actioned 17 reports of AI-generated child sexual abuse material (CSAM) from a single site:

  • 94% were Category C images depicting children aged 11–13.
  • One report depicted a child estimated to be between 7 and 10 years old.
  • Metadata confirmed these were created using explicit prompts – deliberate instructions to generate illegal content.

Taking action together

In July, I chaired a roundtable of experts to explore the emerging risks posed by AI chatbots and companions. The discussion focused on:

  • The nature and scale of harm linked to AI chatbot technologies;
  • Whether current legislation, such as the Online Safety Act, is equipped to respond to these risks;
  • The next steps to ensure AI tools are safe by design.
Emma Hardy, IWF Communications Director

While child sexual abuse was a central concern, during the session we also examined wider issues including mental health, self-harm, data privacy, and the impact of AI on children's behaviour and emotional wellbeing. We’re especially grateful to Pinsent Masons for hosting and supporting this important discussion.

Why this matters

AI-generated CSAM is not a victimless crime. Some AI models are trained on real abuse material, embedding the trauma of real children into synthetic content. Research shows that consuming CSAM – whether AI-generated or not – can normalise abuse, escalate offending, and fuel demand for further exploitation.

As one of our analysts explained:

“Unfortunately, to see AI chatbots used in this way doesn't come as a big surprise. It seems an inevitable consequence of new technology being ‘turned bad’ by the wrong people. We know offenders will use all means at their disposal to create, share and distribute child sexual abuse material.”

Our Hotline’s latest findings are a clear warning sign. Without urgent safeguards, AI risks becoming a weapon for abusers rather than a force for good. Child protection and safety by design must be at the heart of AI regulation.
