
Self-generated child sexual abuse fieldwork findings report by PIER


"It's normal these days"

The phenomenon of self-generated child sexual abuse material (SG-CSAM) has escalated in recent years, driven by the proliferation of smartphone camera technology and increased internet accessibility. The COVID-19 pandemic, which led to prolonged periods of lockdown, further compounded this issue. Self-generated child sexual abuse material includes intimate or sexually explicit content created by and featuring minors, which can be shared either voluntarily or through coercion, grooming or blackmail. This report, funded by the Oak Foundation and conducted in collaboration with the Policing Institute for the Eastern Region (PIER) and the Internet Watch Foundation (IWF), aims to build an evidence base to inform targeted prevention campaigns. The primary objectives were to investigate effective public awareness campaigns, design and deliver targeted public campaigns and evaluate their effectiveness in educating children, parents, carers and educators about self-generated child sexual abuse material. 


Social media's intrinsic role in the lives of young people necessitates a thorough understanding of the challenges they face online. This project emphasises the importance of incorporating the perspectives of children, young people, parents and educators in developing sensitive and effective responses to self-generated child sexual abuse material. By exploring how children and young people perceive, understand and navigate these issues, the report seeks to highlight the complexity and gravity of self-generated child sexual abuse material. It underscores the need for campaigns that do not merely focus on abstinence but also address safe sharing practices and the realistic contexts in which children and young people operate online. The research findings presented in this report mark the culmination of the project's research phase, aiming to contribute to a more informed and responsive approach to safeguarding young people in the digital age.

Recent IWF News

New partnership aims to protect children online and advance safer digital advertising

‘Protected by Mediaocean’, a leading solution for holistic ad verification, has joined the Internet Watch Foundation to strengthen safeguards in the digital media supply chain and help protect children online.

14 August 2025 News
New age assurance requirements: what does this mean for children’s online safety?

Last month, the UK’s Protection of Children Codes came into force, requiring online platforms to prevent children from encountering harm online.

12 August 2025 Blog
‘AI-generated images of child sexual abuse victims prolong the suffering of victims’

The director of the Internet Watch Foundation, Derek Ray-Hill, warns in an op-ed in Le Monde about the production of child sexual abuse imagery using artificial intelligence and the need to criminalise it.

31 July 2025 IWF In The News
Safehire.ai joins IWF to reinforce digital safeguarding in recruitment

New Member Safehire.ai says it is proud to join the Internet Watch Foundation (IWF), strengthening a shared mission to protect children from online harm.

28 July 2025 News
Smart Axiata partnership will protect millions from exposure to child sexual abuse images and videos

One of the largest telecoms providers in Cambodia is now a partner in our mission to defend children online and remove child sexual abuse material (CSAM) from the internet.

17 July 2025 News
Full feature-length AI films of child sexual abuse will be ‘inevitable’ as synthetic videos make ‘huge leaps’ in sophistication in a year

New data reveals AI child sexual abuse continues to spread online as criminals create more realistic, and more extreme, imagery.

11 July 2025 News