AI chatbots and child sexual abuse: a wake-up call for urgent safeguards

Published: Mon 22 Sep 2025

Written by: Emma Hardy, Communications Director

Today, the Internet Watch Foundation (IWF) has published new data revealing, for the first time, child sexual abuse images linked directly to AI chatbots.

As the UK's Guardian newspaper reported this weekend, our analysts uncovered criminal material on a platform hosting multiple chatbot “characters” designed to let users simulate sexual scenarios with child avatars – with some depicted as young as seven years old.

With concerns and media attention around AI chatbots continuing to grow, the reality is that children are already being placed at risk. What may appear to be harmless technology is posing multiple risks to children’s online safety.

What we found

Between 1 June and 7 August 2025, IWF analysts actioned 17 reports of AI-generated child sexual abuse material (CSAM) from a single site:

  • 94% were Category C images depicting children aged 11–13.
  • One report depicted a child estimated to be between 7 and 10 years old.
  • Metadata confirmed these were created using explicit prompts: deliberate instructions to generate illegal content.

Taking action together

In July, I chaired a roundtable of experts to explore the emerging risks posed by AI chatbots and companions. The discussion focused on:

  • The nature and scale of harm linked to AI chatbot technologies;
  • Whether current legislation, such as the Online Safety Act, is equipped to respond to these risks;
  • The next steps to ensure AI tools are safe by design.

While child sexual abuse was a central concern, during the session we also examined wider issues including mental health, self-harm, data privacy, and the impact of AI on children's behaviour and emotional wellbeing. We’re especially grateful to Pinsent Masons for hosting and supporting this important discussion.

Why this matters

AI-generated CSAM is not a victimless crime. Some AI models are trained on real abuse material, embedding the trauma of real children into synthetic content. Research shows that consuming CSAM – whether AI-generated or not – can normalise abuse, escalate offending, and fuel demand for further exploitation.

As one of our analysts explained:

“Unfortunately, to see AI chatbots used in this way doesn't come as a big surprise. It seems an inevitable consequence of when new technology is ‘turned bad’ by the wrong people. We know offenders will use all means at their disposal to create, share and distribute child sexual abuse material.”

Our Hotline’s latest findings are a clear warning sign. Without urgent safeguards, AI risks becoming a weapon for abusers rather than a force for good. Child protection and safety by design must be at the heart of AI regulation.
