While child sexual abuse was a central concern, the session also examined wider issues, including mental health, self-harm, data privacy, and the impact of AI on children's behaviour and emotional wellbeing. We're especially grateful to Pinsent Masons for hosting and supporting this important discussion.
Why this matters
AI-generated CSAM is not a victimless crime. Some AI models are trained on real abuse material, embedding the trauma of real children into synthetic content. Research shows that consuming CSAM – whether AI-generated or not – can normalise abuse, escalate offending, and fuel demand for further exploitation.
As one of our analysts explained:
“Unfortunately, to see AI chatbots used in this way doesn't come as a big surprise. It seems an inevitable consequence of when new technology is ‘turned bad’ by the wrong people. We know offenders will use all means at their disposal to create, share and distribute child sexual abuse material.”
Our Hotline’s latest findings are a clear warning sign. Without urgent safeguards, AI risks becoming a weapon for abusers rather than a force for good. Child protection and safety by design must be at the heart of AI regulation.