AI imagery getting more ‘extreme’ as IWF welcomes new rules allowing thorough testing of AI tools
The IWF welcomes new measures to help ensure digital tools are safe, as new data shows AI-generated child sexual abuse imagery is still spreading.
The phenomenon of self-generated child sexual abuse material (SG-CSAM) has escalated in recent years, driven by the proliferation of smartphone cameras and widening internet access. The COVID-19 pandemic, with its prolonged periods of lockdown, compounded the issue further. SG-CSAM comprises intimate or sexually explicit content created by and featuring minors, which may be shared voluntarily or through coercion, grooming or blackmail. This report, funded by the Oak Foundation and conducted in collaboration with the Policing Institute for the Eastern Region (PIER) and the Internet Watch Foundation (IWF), aims to build an evidence base to inform targeted prevention campaigns. Its primary objectives were to investigate what makes public awareness campaigns effective, to design and deliver targeted campaigns, and to evaluate how well they educate children, parents, carers and educators about SG-CSAM.

Social media's integral role in the lives of young people necessitates a thorough understanding of the challenges they face online. This project emphasises the importance of incorporating the perspectives of children, young people, parents and educators in developing sensitive and effective responses to SG-CSAM. By exploring how children and young people perceive, understand and navigate these issues, the report highlights the complexity and gravity of the problem. It underscores the need for campaigns that do not focus solely on abstinence but also address safe sharing practices and the realistic contexts in which children and young people operate online. The findings presented here mark the culmination of the project's research phase and aim to contribute to a more informed and responsive approach to safeguarding young people in the digital age.