There is, however, one remaining opportunity to prevent this. The European People’s Party group in the Parliament has tabled an amendment that would extend the Interim Regulation unchanged, on the same terms as the Commission’s proposal and the Council’s General Approach. This amendment will be put to a plenary vote on Thursday.
Many MEPs remain unaware of what the Interim Regulation is, what its expiry means, and what is being asked of them. There has been considerable confusion, some of it deliberate, about what online child sexual abuse detection involves in practice.
The most common technique is hash-matching, in use by companies for more than 15 years. When a piece of child sexual abuse material is identified and removed, it is assigned a unique number, known as a hash. Detection systems compare content uploaded to a platform against a database of those numbers, such as the IWF hash list, which holds more than 3 million individual hashes. If there is a match, the content is flagged and removed. No human being reads private messages to do this. The system only compares one number against another.
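For readers who want to see how mechanical this is, here is a minimal sketch in Python. It is illustrative only: it uses an ordinary cryptographic hash (SHA-256) for simplicity, whereas production systems typically use perceptual hashes such as PhotoDNA that survive resizing and re-encoding, and the database contents and function names here are hypothetical.

```python
# Illustrative sketch of hash-matching (not a production system).
# SHA-256 stands in for the perceptual hashes real platforms use;
# KNOWN_HASHES stands in for a curated list like the IWF's.
import hashlib

# Hypothetical database of fingerprints of material that has already
# been identified and removed by human analysts.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Reduce uploaded content to a fixed-length number.

    The content is never read or interpreted by a person; it is only
    converted into a number for comparison.
    """
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Flag the upload only if its number matches a known hash."""
    return fingerprint(upload) in KNOWN_HASHES
```

The point the sketch makes is the one in the paragraph above: the system compares one number against another, and nothing more.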
That is just one example. There are many more. If Thursday’s vote fails, these technologies will grind to a halt in the EU. Millions of known images and videos of child sexual abuse that are blocked today will be allowed to circulate across EU servers. Even if they want to, companies will be powerless to stop it.
The incongruity is that the EU is telling technology companies that they may continue to use exactly this kind of technology to detect and block known malware and cybersecurity threats. It is not, however, permitting them to use the same approach for known child sexual abuse material. The inconsistency is not just baffling. It is morally indefensible.
Those who argue that this legal gap is theoretical, or that the harm will be minimal, should reckon with the evidence published this month by Ofcom and Protect Children, the Finnish child protection organisation. Their findings, drawn from direct research into perpetrator behaviour, are stark.
Two in three perpetrators had been exposed to child sexual abuse material before the age of 18. Nearly a quarter first encountered it accidentally – they were not looking for it. Three in ten have now viewed AI-generated child sexual abuse material, and one in ten have created it themselves.
The research also shows that detection and moderation systems work. A third of perpetrators recall encountering a warning or block when searching for this material. These interventions change behaviour. They save children from abuse.
The same research reveals something that should alarm every policymaker in Brussels. Over the past five years, a third of perpetrators say that accessing child sexual abuse material has become harder. They attribute this directly to platform moderation, site shutdowns and law enforcement activity. The systems that are about to be disabled are the systems that have been making a difference.
What happens when those systems go dark? The research tells us that too. Perpetrators seek permissive platforms with high levels of privacy, anonymity and poor content moderation. Remove the deterrent, and the platforms that once carried some risk for abusers become open territory. The criminals who have been driven to the dark web because of effective moderation on mainstream platforms will return. They will bring others with them.