On 3 April 2026, the European Union entered a critical legislative gap that threatens the safety of children across the digital landscape. The expiration of the temporary derogation to the ePrivacy Directive means that tech companies no longer have a clear, harmonised legal basis to voluntarily detect and remove child sexual abuse material (CSAM) from interpersonal communication services. This failure of EU lawmakers to secure an extension or a permanent regulatory framework risks allowing abusive material to circulate unchecked on platforms in the EU.
The consequences of this legal uncertainty are both immediate and devastating. Data from a similar period of uncertainty in 2020 showed a 58% drop in reports of child abuse from EU-based services in just 18 weeks, a statistic that reflects a decrease in detection, not a decrease in abuse. Without the ability to use privacy-preserving tools like hash-matching and AI-based classifiers, the industry’s capacity to safeguard victims and prevent the revictimisation of survivors is being systematically dismantled.
We are calling for urgent action from the EU to restore legal certainty and expedite the Child Sexual Abuse Regulation (CSAR). Protecting children is not a threat to privacy; it is a fundamental human right. Whether you are a concerned citizen, a civil society organisation, or an industry leader, now is the time to demand a robust, permanent framework that empowers companies to fight online exploitation and ensures the safety of the next generation.
Since 3 April 2026, companies in the EU no longer have a clear legal basis to protect children online effectively.
The background: EU privacy law generally restricts the scanning of interpersonal communications. At the same time, many online services rely on privacy-preserving detection technologies to identify and remove child sexual abuse material (CSAM) shared on their platforms.
What has changed: Until 3 April 2026, a temporary legal exemption (the “temporary derogation”) to the ePrivacy Directive allowed companies to voluntarily use such detection tools on interpersonal communication services in full compliance with EU law.
The problem: This exemption expired on 3 April 2026. EU lawmakers have not agreed on an extension, and no alternative legal framework is in place.
Detection of CSAM is not explicitly prohibited. However, the absence of a clear legal basis creates significant uncertainty for companies that want to continue using these tools.
Read more about the history of this temporary legislation here.
Without legal certainty, companies may scale back their efforts to protect children. The consequences would be devastating – both in Europe and globally.
We have seen this before: In 2020, during a similar period of uncertainty, reports of child sexual abuse from EU-based services fell by 58% in just 18 weeks.
Our data demonstrate that we need to protect children now more than ever.
We are calling for the restoration of legal certainty for voluntary detection so that companies operating in Europe can continue protecting children from abuse with confidence.
Policymakers must expedite negotiations on the permanent legal basis for the detection and removal of CSAM in the EU. The proposed Child Sexual Abuse Regulation must include a robust, ambitious framework that empowers companies to do more to protect their platforms.
For more details on the CSA Regulation, please see our FAQs.
The IWF stands firm in its commitment to protecting children from online exploitation, despite the recent lapse of the EU temporary derogation. As vital detection tools for child sexual abuse material (CSAM) are forced offline, IWF leadership warns of the global risks posed by this legislative vacuum.
Here, IWF Chief Technology Officer Dan Sexton outlines the devastating impact of this failure and explains why a permanent EU legislative framework is non-negotiable for child safety:
This is a devastating failure for child protection in the EU – and globally – as proven tools for the detection of child sexual abuse material will be forced offline. Thanks to political gamesmanship, the EU has now opened the door for predators to target children without fear of reprisal. This is the time to increase safety measures to protect children online, not reduce them. Every day without detection means more harm for children, more victims and more abuse images and videos circulating online.
Dan Sexton, IWF Chief Technology Officer
On 25 March, four European commissioners criticised the European Parliament for failing to approve the temporary legal basis to fight online child sexual abuse material. In a letter to the Parliament, the four commissioners said children will be left vulnerable to exploitation if the bill does not pass.
The European Commission's letter was signed by Executive Vice-President of the European Commission for Technological Sovereignty, Security, and Democracy Henna Virkkunen, Internal Affairs Commissioner Magnus Brunner, Justice Commissioner Michael McGrath and European Commissioner for Intergenerational Fairness, Youth, Culture and Sport Glenn Micallef. The full letter is available here.
Over 240 civil society organisations have signed a joint statement:
"The signatories of this statement call on the EU to seize the opportunity to turn uncertainty into constructive action by working urgently toward a CSA Regulation that enables voluntary detection of child sexual abuse within online interpersonal communication settings."
"We need a durable CSA Regulation with a clear framework for voluntary detection, ensuring that companies retain both the clear legal basis and the practical tools to identify and report child sexual abuse material. Without voluntary detection, effective protection cannot be sustained. Without a regulation, voluntary detection is not secure. The two are inseparable and urgent."
The statement was coordinated by the European Child Sexual Abuse Legislation Advocacy Group (ECLAG), a coalition of over 80 child rights NGOs working together to protect children from sexual violence and abuse; the IWF sits on its Steering Group.
Leading tech companies, including Google, LinkedIn, Snapchat, Meta, Microsoft and TikTok, have published a joint statement “[urging] lawmakers in Europe to swiftly agree on a way forward for voluntary CSAM detection in interpersonal communication services and enable the continuation of established tools to protect minors.”
“Failure to act will reduce the legal clarity that has enabled companies for nearly 20 years to voluntarily detect and report known child sexual abuse material (CSAM) in interpersonal communication services,” the tech companies said. You can read the full statement and add your company’s name here.
Alongside this, the IWF has given its members a platform to advocate for a way forward for voluntary CSAM detection; their statements can be accessed here.
On 3 April, Google, Microsoft, Meta and Snapchat reaffirmed their commitment to the detection of CSAM on a voluntary basis. See more from Google here.
Public opinion strongly supports action at the EU level to tackle online child sexual abuse and ensure companies can detect and report it.
This 2025 survey of more than 6,000 adults in Germany, Italy and Poland shows that 88% of respondents want their governments to back an EU law designed to protect millions of European children from online sexual abuse.
Support is consistently high across all three countries.
There is high public concern about the spread of child sexual abuse material across the surveyed Member States and strong support for measures that prevent its distribution. More than eight in 10 adults say they support the regulation that could see companies proactively detect and block images and videos of children being sexually abused online.
The fight to protect children from online sexual abuse requires a united front from citizens, industry leaders and civil society. Your voice is essential in urging EU policymakers to prioritise child safety legislation and restore the legal tools needed to detect and remove CSAM.
Whether you are an individual advocate or represent an organisation, there are several high-impact ways you can support the IWF’s call for action and help close the current legislative gap.
If you are an EU citizen:
If you are a civil society organisation:
If you are a company operating in the EU:
Under EU privacy law, services are generally not allowed to scan the content of interpersonal communications. At the same time, many companies already use technologies that help identify child sexual abuse material and detect attempts to groom children online. Without a specific legal exemption, these safety tools could be interpreted as being in conflict with privacy rules.
The temporary derogation resolved this potential conflict by explicitly stating that online services can use their detection tools on a voluntary basis while remaining compliant with EU law. It does not require companies to ‘scan’ messages; it provides legal certainty for those that voluntarily choose to deploy safety measures solely to detect illegal material.
The temporary framework allowed online platforms to continue using established, highly targeted technologies to detect child sexual abuse within interpersonal communication environments. These include tools that match content against hashes of known CSAM, AI-based classifiers that flag previously unseen material, and tools that detect attempts to groom children.
These technologies are highly privacy-preserving and widely used across the internet. They are deployed successfully in the background on platforms we all use every single day. They play a vital role in identifying victims and stopping the spread of material that revictimises survivors with every share, view and click.
The temporary derogation was essential because it provided legal certainty for companies working to detect and report child sexual abuse on their platforms. Without it, some companies may feel they cannot safely continue these efforts due to the risk of conflicting with EU privacy rules.
We saw the real-world consequences of legal uncertainty once before. In late 2020, when companies were unsure whether detecting abuse was permitted under EU law, reports of child sexual abuse material from EU-based accounts to the National Center for Missing and Exploited Children (NCMEC) dropped by 58% in just 18 weeks.
Databases of known child sexual abuse material do not appear spontaneously. They are built because previously unknown material is first detected, verified and confirmed by authorised bodies such as the Internet Watch Foundation. This process depends on technology companies deploying a layered set of tools, including hash-matching, AI-based classifiers, and human moderation, that work together to identify both known and previously unknown CSAM.
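To make the layered approach concrete, here is a minimal Python sketch of how such tools can combine. The hash list and classifier are hypothetical placeholders, and real deployments rely on perceptual hashing (such as PhotoDNA) that survives re-encoding rather than the plain cryptographic hash used here for simplicity.

```python
import hashlib

# Hypothetical: hex digests of verified known material, as supplied by an
# authorised body such as the IWF. Real systems use perceptual hashes that
# survive re-encoding; SHA-256 keeps this sketch self-contained.
KNOWN_HASHES: set[str] = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def classifier_score(data: bytes) -> float:
    """Stand-in for an AI-based classifier scoring unknown content (0..1)."""
    return 0.0  # placeholder: a real model would analyse the content

def review_upload(data: bytes) -> str:
    """Layered check: exact match against known material first, then a classifier."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_HASHES:
        return "known-csam"             # hash match: verified known material
    if classifier_score(data) >= 0.98:  # illustrative risk threshold
        return "human-review"           # possible previously unknown material
    return "clear"
```

Material routed to human review and subsequently verified by an authorised body can then have its hash added to the shared database – the feedback loop the paragraph above describes.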
Fewer reports translate into fewer opportunities for authorities to identify victims and intervene. Legal certainty is not just a technicality; it is critical to the proactive work done by companies to protect children online.
Since the European Parliament last debated these issues, the digital landscape has changed dramatically. Generative AI technologies now make it possible to create child sexual abuse material that has never existed before. Because this content is inherently unknown, it cannot be flagged by comparing it against existing databases, and traditional hash-matching systems alone cannot detect it.
This shift fundamentally changes the detection challenge. Companies now face the urgent need to identify even greater volumes of previously unseen abuse. Legal uncertainty risks slowing or halting these efforts.
Asking companies to deactivate systems developed over more than a decade to keep users safe would mean that child sexual abuse continues unchecked and children remain unseen. A drop in reports would not reflect a drop in abuse.
Last year was a record year for the number of reports actioned by the IWF. The prevalence and volume of CSAM online are enormous.
This is a highly dangerous moment for the EU to require technology companies to dismantle their safety systems.
The temporary derogation was always intended as a short-term solution while the EU negotiates a permanent law to combat child sexual abuse online. This proposal for a permanent law is called the Child Sexual Abuse Regulation.
That legislation is still under negotiation and was not finalised before the previous rules expired on 3 April 2026. Without an extension, companies once again face legal uncertainty.
Over the course of February and March 2026, EU lawmakers negotiated their position on the extension of the temporary derogation.
The European Commission proposed extending the measure until 2028 to provide enough time for negotiations on the permanent law to conclude. EU Member States agreed that they would like the legal cover to continue until 2028 with its current scope.
Neither of these positions would have required companies to do anything new or different to detect CSAM on their platforms. This was about maintaining the existing baseline of protection.
The situation in the European Parliament became more complicated. Ultimately, the Parliament did not succeed in finding a position that would enable interinstitutional negotiations to proceed.
As a result, the temporary derogation to the ePrivacy Directive lapsed on 3 April 2026.
The official title of the proposal is the “Regulation of the European Parliament and of the Council laying down rules to prevent and combat child sexual abuse” (reference: 2022/0155(COD)).
You can see the Commission’s original proposal here, and read more about why the Commission recognised the need for this legislation here.
It is commonly known as the “CSA Regulation”, “CSAR”, or the “CSAM proposal”. Some critics have also labelled it “Chat Control”. Read more about why this nickname is misleading here.
The European Commission formally ‘adopted’, i.e. proposed, the legislation on 11 May 2022.
The proposal functions as lex specialis in relation to the Digital Services Act, meaning it complements and further specifies provisions of the DSA that are particularly relevant to addressing child sexual abuse material online. Its legal basis is Article 114 of the Treaty on the Functioning of the European Union, which underpins measures aimed at ensuring the proper functioning of the EU internal market.
The proposed CSA Regulation is still a proposal, which means it is not yet law. It must be agreed between the two co-legislators: the European Parliament, which represents European citizens, and the Council of the EU, which represents the governments of the 27 Member States. The proposed regulation was stuck in the Council of the EU for a long time, but it has now progressed to interinstitutional negotiations, commonly known as trilogues. Though this represents a significant step forward, negotiations are likely to take some months yet.
No. CSAM detection is a targeted process, not mass surveillance. Automated tools scan for matches against known indicators and only flag content that meets defined risk thresholds. Nobody has access to the content of private communications unless a case is flagged for further review under strict procedures – and even then, only the suspected illegal content is shared.
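To illustrate how narrow this flagging is, here is a hedged Python sketch in which only a matched attachment’s digest ever leaves the service, never the surrounding conversation. The hash set and service name are hypothetical.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

# Hypothetical digests of verified known CSAM.
KNOWN_HASHES: set[str] = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

@dataclass
class Report:
    content_hash: str  # digest of the flagged item only
    service: str       # identifier of the reporting service

def maybe_report(attachment: bytes, message_text: str) -> Optional[Report]:
    """Check one attachment; the message text is never inspected or shared."""
    digest = hashlib.sha256(attachment).hexdigest()
    if digest not in KNOWN_HASHES:
        return None  # no match: nothing is read, stored or passed on
    return Report(content_hash=digest, service="example-service")
```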
No. These tools operate within strictly defined legal and technical frameworks with a clear and limited purpose: identifying child sexual abuse material. They cannot be used to monitor individuals or political content. Their deployment is subject to independent oversight and must comply with fundamental rights protections, including privacy, data protection, and freedom of expression. Any attempt to repurpose these tools would fall outside their permitted scope and be subject to legal consequences.
No. Hash lists are built and maintained by trusted organisations, such as the IWF, following strict standards to protect their integrity. A hash list can only confirm whether a given hash matches a known image of child sexual abuse – nothing more. Platforms operate in secure environments where only the hash comparison takes place, with no access to the actual image or to user data. Companies can also use privacy-protected query methods, such as Private Set Intersection, to further strengthen security.
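Private Set Intersection can sound abstract, so here is a toy Python sketch of the Diffie–Hellman-style blinding idea behind one common PSI construction. This is emphatically not production cryptography – real deployments use vetted elliptic-curve protocols – and every value here is made up; it only demonstrates that the two parties learn matches and nothing else.

```python
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; fine for a toy multiplicative group

def h(item: str) -> int:
    """Hash an item to a nonzero group element."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % (P - 2) + 2

a = secrets.randbelow(P - 2) + 1  # platform's secret exponent
b = secrets.randbelow(P - 2) + 1  # hash-list holder's secret exponent

platform_items = ["upload-123"]                 # hypothetical content identifiers
hash_list_items = ["upload-123", "upload-456"]  # hypothetical list entries

# Platform blinds its hashes with a; the list holder re-blinds them with b.
double_blinded_platform = [pow(pow(h(x), a, P), b, P) for x in platform_items]
# List holder blinds its hashes with b; the platform re-blinds them with a.
double_blinded_list = {pow(pow(h(y), b, P), a, P) for y in hash_list_items}

for x, v in zip(platform_items, double_blinded_platform):
    if v in double_blinded_list:
        print("match:", x)  # only set membership is revealed to the platform
```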
No. The IWF advocates for a method called upload prevention, which is entirely privacy-preserving. Upload prevention conducts a check before content reaches the server – before anything is encrypted and sent – meaning encryption is never touched or weakened. It follows the same principles as everyday security features such as malware detection.
Upload prevention is a privacy-preserving method for detecting and blocking known CSAM. It works by conducting a check on content before it reaches the server, so before anything is encrypted and sent. This means the encryption itself is never compromised. It is a balanced solution that stops known child sexual abuse material from spreading in end-to-end encrypted environments while fully protecting user privacy.
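A minimal Python sketch of the idea, assuming a hypothetical on-device hash list and a caller-supplied encrypt-and-send function: the check runs on the sender’s device before encryption, so the end-to-end encryption pipeline itself is never touched.

```python
import hashlib
from typing import Callable

# Hypothetical on-device list of digests of verified known CSAM.
BLOCKED_HASHES: set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def send_attachment(plaintext: bytes,
                    encrypt_and_send: Callable[[bytes], None]) -> bool:
    """Check on-device before encryption; return True if sent, False if blocked."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKED_HASHES:
        return False  # known material is blocked before it is ever encrypted
    encrypt_and_send(plaintext)  # the normal E2EE pipeline is untouched
    return True
```

Because the comparison happens before encryption, ciphertext and keys are never exposed; the trade-off is that the hash list (or a blinded form of it) must be queryable from the device.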
The IWF has published a dedicated explainer for anyone who wants to explore this in more detail.
If you want to read more about privacy-preserving content moderation, see this paper by the Centre for Emerging Technology and Security.
Upload prevention does not compromise encrypted environments or give anyone access to private messages. Without proactive detection in place, end-to-end encrypted environments have become major channels for the distribution of CSAM. All platforms have a duty to ensure they are not safe havens for criminals targeting children. Upload prevention allows platforms to uphold both the security of private communications and the fundamental rights of victims and survivors.
No. Blocking CSAM can never be a pointless endeavour. Every barrier put in place stops criminal content from circulating and protects victims and survivors from being revictimised. When these images resurface online, so does the abuse – for victims and survivors, the harm does not end when the crime itself stops. Every time that material is shared, their dignity and right to privacy are violated all over again. The goal is to narrow the spaces where CSAM can be shared until it is virtually impossible.