The European Parliament is taking a decisive stand against the rise of AI-generated child sexual abuse material (AI-CSAM), co-hosting a high-level briefing with the Internet Watch Foundation (IWF) to address this urgent threat. Following a 380% increase in reports of AI-CSAM in 2024, the Parliament is pushing for robust legal reform through the proposed Child Sexual Abuse Directive. Key priorities include criminalising all forms of AI-generated CSAM, closing legal loopholes such as the “personal use” exemption, and strengthening cross-border enforcement. The IWF and the European Child Sexual Abuse Legislation Advocacy Group (ECLAG) urge the Council of the EU to align with Parliament’s strong stance to protect children and support survivors. This article highlights the scale of the threat, the evolving technology behind synthetic abuse imagery, and the critical need for updated EU legislation.