In 2025, the IWF identified 8,029 AI-generated images and videos depicting realistic child sexual abuse. The most dramatic shift has been the emergence of AI-generated abuse videos – 3,443 in 2025, compared with just 13 the year before. Nearly two thirds were classified as Category A, the most extreme category of abuse material under UK law.
In January 2026, EU policymakers were rightly outraged over the risks posed by the Grok AI tool. The incident galvanised legislators and accelerated calls for a ban on AI nudification tools, with proposals now advancing at pace through both the European Parliament and the Council of the EU.
Yet two months later, EU institutions have failed to extend the temporary derogation to the ePrivacy Directive that allows platforms to detect and report child sexual abuse material voluntarily. Unless a solution is found before 3 April, technology companies will face legal uncertainty about whether they can search for and block child sexual abuse material on their services – including AI-generated material, which would then be able to circulate unchecked.
Unless negotiators return to the table, the EU has effectively given the green light to uploading, sharing and seeking AI images and videos depicting the sexual abuse of children.
This is a contradiction that cannot stand. Strengthening laws against generating AI CSAM means nothing if the systems that detect and remove it are switched off.
Separately from the temporary derogation, the recast of the EU Child Sexual Abuse Directive (2011/92/EU) and the implementation of the EU Artificial Intelligence Act (2024/1689) together represent a generational opportunity to close legal loopholes and embed child protection in AI governance. The EU AI Office should be empowered to act decisively against non-compliant developers, with voluntary codes of practice backed by binding obligations. The IWF stands ready to support independent scrutiny throughout this process.