Europe is about to make it illegal to protect children online

Published: Mon 23 Mar 2026

Written by: Dan Sexton, IWF Chief Technology Officer

On 3 April, the systems protecting children from online sexual abuse will become illegal to operate across the EU. A last-chance vote in the European Parliament on Thursday could change that. Members of the European Parliament must act now.

In less than two weeks, a change to EU law that many people know little about will have a catastrophic effect on children’s safety. Technology companies operating in the European Union will lose the legal right to search their own platforms for child sexual abuse material. Not because anyone has decided that child protection does not matter or because the threat has diminished. But because a temporary legal provision is expiring, and the politicians tasked with replacing it have so far failed to do so.

The systems that technology companies have built to detect, remove and report images and videos of children being sexually abused will become legally inoperable in the EU. The abusers know this. The rest of us should too.

For years, technology companies have operated under a voluntary framework in the EU, developing tools to keep child sexual abuse material off their platforms. In 2020, changes to the e-Privacy Directive created legal uncertainty about whether this voluntary detection work was permissible. A temporary law clarified that companies could continue doing the right thing while a permanent solution was negotiated. This phase of that derogation expires on 3 April.

The European Commission put forward a proposal for a permanent framework in 2022. It has been mired in political deadlock ever since, caught in a debate that frames privacy and child protection as fundamentally incompatible. They are not. That argument has been enough to stall the legislation, and now the clock has run out.

The result is a legal vacuum. And in that vacuum, children will be harmed.

There is, however, one remaining opportunity to prevent this. The European People’s Party group in the Parliament has tabled an amendment that would extend the Interim Regulation on exactly the same terms as the Commission’s proposal and the Council General Approach, without any changes. This amendment will be put to a plenary vote on Thursday.

Many MEPs remain unaware of what the Interim Regulation is, what its expiry means, and what is being asked of them. There has been considerable confusion, some of it deliberate, about what online child sexual abuse detection involves in practice.  

The most common technique is hash-matching, which companies have used for more than 15 years. When a piece of child sexual abuse material is identified and removed, it is assigned a unique number, known as a hash. Detection systems compare content uploaded to a platform against a database of those numbers – such as the IWF hash list, which holds more than 3 million individual hashes. If there is a match, the content is flagged and removed. No human being reads private messages to do this. The system only compares one number against another.
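
The principle can be sketched in a few lines of code. This is a simplified illustration using a cryptographic hash; real deployments typically rely on perceptual hashes, such as Microsoft’s PhotoDNA, which also match visually altered copies of an image. The hash list contents below are placeholders, not real data.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Compute a unique number (hash) for a piece of content."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical hash list: fingerprints of content already identified
# and verified by analysts. Only the numbers are stored, never the
# material itself.
KNOWN_HASHES = {fingerprint(b"previously-identified-content")}

def should_block(upload: bytes) -> bool:
    """Flag an upload only if its fingerprint matches a known hash.
    No human reads the content; one number is compared with another."""
    return fingerprint(upload) in KNOWN_HASHES
```

Everything that is not a match passes through untouched, which is why the comparison reveals nothing about ordinary private content.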

That is just one example. There are many more. If Thursday’s vote fails, these technologies will grind to a halt in the EU. Millions of known images and videos of child sexual abuse that are blocked today will be allowed to circulate across EU servers. Even if they want to, companies will be powerless to stop it.

The incongruity is that the EU is telling technology companies that they may continue to use exactly this kind of technology to detect and block known malware and cybersecurity threats. It is not, however, permitting them to use the same approach for known child sexual abuse material. The inconsistency is not just baffling. It is morally indefensible. 

Those who argue that this legal gap is theoretical, or that the harm will be minimal, should reckon with the evidence published this month by Ofcom and Protect Children, the Finnish child protection organisation. Their findings, drawn from direct research into perpetrator behaviour, are stark. 

Two in three perpetrators had been exposed to child sexual abuse material before the age of 18. Nearly a quarter first encountered it accidentally – they were not looking for it. Three in ten have now viewed AI-generated child sexual abuse material, and one in ten have created it themselves.  

The research also shows that detection and moderation systems work. A third of perpetrators recall encountering a warning or block when searching for this material. These interventions change behaviour. They save children from abuse. 

The same research reveals something that should alarm every policymaker in Brussels. Over the past five years, a third of perpetrators say that accessing child sexual abuse material has become harder. They attribute this directly to platform moderation, site shutdowns and law enforcement activity. The systems that are about to be disabled are the systems that have been making a difference. 

What happens when those systems go dark? The research tells us that too. Perpetrators seek permissive platforms with high levels of privacy, anonymity and poor content moderation. Remove the deterrent, and the platforms that once carried some risk for abusers become open territory. The criminals who have been driven to the dark web because of effective moderation on mainstream platforms will return. They will bring others with them.

 

The issue has been wrongly framed as a debate between civil liberties and child safety, as though protecting one destroys the other. This is a false choice, weaponised to justify inaction.

Privacy matters. Nobody serious in the technology industry disputes that. But the systems at the centre of this debate do not read private communications. Child safety technology can no more be used for surveillance than an anti-virus scanner or a spam filter can. It is built to stop harmful content and nothing else. Automated detection systems look for known images of child abuse in the same way an anti-virus scanner looks for known malicious code. Framing this as a risk to privacy is a distraction.

The same EU policymakers who were outraged by the Grok scandal are allowing the legal framework for child protection on those platforms to collapse. It is impossible to take seriously a commitment to online child safety that does not extend to keeping child sexual abuse material – and the tools used to create it – off the internet.

The derogation must be extended immediately to maintain the status quo. MEPs can still vote to extend the Interim Regulation in its existing form, without changes, while negotiations on a permanent framework continue. An extension is workable, it is proportionate, and it is grounded in the technical reality of how these systems function.

Beyond that, the EU needs a permanent regulation that is workable, proportionate and grounded in the technical reality of how these systems function.

The technology exists. The evidence for its effectiveness exists. The only question is whether MEPs will use their powers to make a difference.

Unless EU policymakers come to their senses by 3 April, Europe will become a safer place for those who abuse children, and a more dangerous one for the children themselves. That is not a trade-off that any of us should be willing to accept.

The Internet Watch Foundation (IWF) is the largest hotline in Europe dedicated to finding and removing child sexual abuse material from the internet. 2025 was the worst year on record for online child sexual abuse material uncovered by IWF analysts, with increasing volumes of photo-realistic AI-generated material contributing to the record amount of abuse imagery found.

EU failure on temporary derogation puts children at risk

The legal protections that allow companies in the EU to voluntarily detect, find, and remove child sexual abuse material on their platforms are about to expire, as legislative negotiations grind to a halt.

17 March 2026 Statement
Why the EU’s temporary law allowing companies to detect child sexual abuse online must be extended

Child safety is on the line – the EU must extend its temporary law before vital protections are turned off.

9 March 2026 Blog
IWF urges EU leaders to act now on child sexual abuse as 109 organisations demand robust CSAR

IWF joins 108 civil society groups urging EU leaders to pass strong laws now to tackle the growing crisis of child sexual abuse online.

9 February 2026 News