IWF analysts uncover platform hosting chatbot “characters” designed to let users simulate sexual scenarios with child avatars.
The IWF’s latest AI report exposes rapidly escalating harms to children as the EU moves to scale back the tools that detect and remove child sexual abuse material online. The charity warns that the EU must act urgently to criminalise AI‑generated abuse and preserve essential detection systems before risks intensify further.
Research report by PIER at Anglia Ruskin University, providing insight into girls' and their parents' understanding of self-generated CSAM.
The IWF is one of the most effective hotlines in the world at removing child sexual abuse imagery from the internet, but this has only been possible thanks to key international partnerships.
This report, conducted in collaboration with the Policing Institute for the Eastern Region (PIER), highlights the gravity of self-generated child sexual abuse material.
Our #HomeTruths (TALK) and Gurls Out Loud 'self-generated' child sexual abuse prevention campaigns.
The IWF wants to help young people stay safe online by making sure they know what to do if they accidentally see sexual images or videos of someone they think might be under 18.
Discover the latest trends & data in the fight against online child sexual abuse imagery in the 2023 Annual Report from the Internet Watch Foundation (IWF).
Explore how ICAP sites use pyramid-style schemes to distribute child sexual abuse material, increasing public exposure and aiding criminal profits.
Explore trends in child-led reporting in our 2025 Annual Data & Insights Report. We analyse how young people access support and report illegal content online.