Dutch MEP Jeroen Lenaers visits the IWF offices in Cambridge, UK, to hear directly from frontline experts about how AI is harming the fight against online child sexual abuse.
From 3 April, the EU will become the only region in the world without legal certainty that technology companies may detect child sexual abuse material online, prompting urgent warnings from child protection experts and global tech organisations. A coalition of 246 civil society groups and major industry players has condemned lawmakers for failing to extend the temporary legal framework that permitted privacy-preserving detection tools, leaving companies unsure whether their safeguarding systems remain lawful. With the EU already hosting the highest concentration of known child sexual abuse material (62% of confirmed webpages in 2024), experts warn the situation will worsen, reducing detections, hampering investigations, and emboldening offenders. As the EU's proposed permanent legislation remains deadlocked, industry leaders and protection advocates stress that immediate action is essential to prevent increased harm to children across Europe and beyond.
The debate on the EU's proposed Child Sexual Abuse Regulation (CSAR) has been dominated by one loud slogan, a slogan that may have dire consequences for the safety and wellbeing of millions of children worldwide.
The European Parliament is taking a decisive stand against the rise of AI-generated child sexual abuse material (AI-CSAM), co-hosting a high-level briefing with the Internet Watch Foundation (IWF) to address this urgent threat. With a 380% increase in AI-CSAM reports in 2024, the Parliament is pushing for robust legal reforms through the proposed Child Sexual Abuse Directive. Key priorities include criminalising all forms of AI-generated CSAM, removing legal loopholes such as the “personal use” exemption, and enhancing cross-border enforcement. The IWF and the European Child Sexual Abuse Legislation Advocacy Group (ECLAG) urge the Council of the EU to align with Parliament’s strong stance to protect children and support survivors. This article highlights the scale of the threat, the evolving technology behind synthetic abuse imagery, and the critical need for updated EU legislation.
IWF CEO Kerry Smith calls for complete EU ban on AI-generated abuse content at high-level meeting of global experts in Rome