Expert analysts have taken action against 200,000 websites containing child sexual abuse material
This report, conducted in collaboration with the Policing Institute for the Eastern Region (PIER), highlights the gravity of self-generated child sexual abuse material.
Campaigners are warning teenagers and their parents about online grooming and sexual exploitation as schools break up for the summer.
New IWF data reveals a startling increase in ‘self-generated’ material, in which children have been tricked or groomed by predators.
The European Parliament is taking a decisive stand against the rise of AI-generated child sexual abuse material (AI-CSAM), co-hosting a high-level briefing with the Internet Watch Foundation (IWF) to address this urgent threat. Following a 380% increase in AI-CSAM reports in 2024, the Parliament is pushing for robust legal reforms through the proposed Child Sexual Abuse Directive. Key priorities include criminalising all forms of AI-generated CSAM, closing legal loopholes such as the “personal use” exemption, and enhancing cross-border enforcement. The IWF and the European Child Sexual Abuse Legislation Advocacy Group (ECLAG) urge the Council of the EU to align with Parliament’s strong stance to protect children and support survivors. This article highlights the scale of the threat, the evolving technology behind synthetic abuse imagery, and the critical need for updated EU legislation.
The Internet Watch Foundation is pleased to be among the winners of the Digital Communication Awards 2021.