The Internet Watch Foundation (IWF) and the European Parliament’s Intergroup on Children’s Rights co-hosted a high-level technical briefing on the growing threat of AI-generated child sexual abuse material (AI-CSAM) in Brussels on 1 July. The session brought together MEPs and staff, the IWF’s Chief Technology Officer, and child protection advocates to assess the scale of this emerging risk and discuss the urgent responses needed.
Opening the event, MEP Veronika Cifrová Ostrihoňová stressed the importance of centring survivors in EU policymaking, citing the words of one child sexual abuse victim: “People say we should protect users’ privacy. What about mine?”
Her statement highlighted a critical truth: the trauma experienced by victims of sexual abuse is compounded by the digital permanence and repeated sharing of their images – and deepened further by the prospect that those images could be used to generate new abusive material with AI.
As MEP Cifrová Ostrihoňová emphasised, the numbers speak for themselves. In 2024, the IWF confirmed 245 reports of AI-CSAM – up 380% from 2023 – containing 7,644 images and videos. Nearly 40% of this material depicted Category A abuse (the most severe under UK law), compared with 21% across all CSAM the IWF assessed that year. And while 97% of the CSAM found by IWF analysts in 2024 depicted girls, for AI-generated material the figure rose to 98%.
The IWF first identified AI-generated CSAM in 2023. Since then, the technology has evolved rapidly. Today, offenders use a range of generative tools – from text-to-image generators to “nudifying” apps – to create and share material depicting child sexual abuse. Most alarmingly, the most advanced systems can now produce short, hyper-realistic videos.