Not all Encryption is the same: social media is not ready for End-to-End Encryption

Published:  Mon 14 Mar 2022

Written by:   Dan Sexton, Chief Technology Officer

Encryption is not simply a ‘good thing’ to be pursued at all costs, or a ‘bad’ thing to be avoided, says IWF CTO Dan Sexton. Here, he explains the differences in the technology behind the debate.

Improvements in encryption have, for the most part, resulted in net positives for internet users around the world. The widespread adoption of encrypted web protocols like HTTPS is something to be celebrated.

Standard encryption (a combination of encryption in transit and encryption at rest) has revolutionised how we securely browse the web, access banking and government services, and communicate over email and messaging apps.

Technological advancements have continued to strengthen our privacy and security as we navigate an increasingly online and digitally connected world.

Until recently, privacy and security improvements have been largely compatible and complementary with the systems which have been protecting the safety of the most vulnerable web users.

This includes systems such as those used by social media, messaging, and data storage platforms to automatically detect images and videos of child sexual abuse. 

Deploying end-to-end encryption in communication and social media platforms is different, however.

While it may provide some additional privacy around the content of our messages, it completely bypasses the tools technology companies are using to detect child sexual abuse content. 

Many see end-to-end encryption (or E2EE in tech shorthand) as just another improvement for personal privacy, but the specific benefits it provides for social media and messaging are actually quite limited, particularly when compared to the impact it has on child safety.

E2EE extends standard encryption so that only the sender and receiver can view the content of messages. This prevents the platforms themselves from accessing any data hosted on, or passed through, their systems. It is important to remember that all popular messaging platforms already use standard encryption, which protects our data from being intercepted by third parties.

The only real difference between end-to-end and standard encryption is whether the technology company itself has any access to the content.
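To make that distinction concrete, here is a minimal, illustrative sketch in Python using the PyNaCl library. It is not any platform's actual protocol; the names and flow are assumptions for illustration only. The point it shows is that under E2EE the private keys exist only on the users' devices, so a platform relaying the message holds nothing it could decrypt, whereas under standard encryption the platform decrypts traffic at its own servers and can run safety tooling over the content before storing it.

```python
# Illustrative sketch only (pip install pynacl); not any real platform's protocol.
from nacl.public import PrivateKey, Box

# --- End-to-end encryption: keys live only on the users' devices ---
alice_key = PrivateKey.generate()   # generated and kept on Alice's phone
bob_key = PrivateKey.generate()     # generated and kept on Bob's phone

# Alice encrypts directly to Bob's public key on her own device.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"see you at 6")

# The platform only ever relays 'ciphertext'. It holds neither private key,
# so neither it nor any server-side scanning tool can inspect the content.

# Only Bob can decrypt, using his own private key.
receiver_box = Box(bob_key, alice_key.public_key)
plaintext = receiver_box.decrypt(ciphertext)

# --- Standard encryption (e.g. TLS to the platform) ---
# The message is encrypted in transit to the platform's servers, decrypted
# there, processed (which is where automated detection can run), and then
# typically re-encrypted at rest with keys the platform controls.
```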

Encrypting messages so the tech provider can’t see the content might seem like a good thing, but by “turning off the lights”, as the National Crime Agency (NCA) puts it, technology platforms lose the ability to provide lawful access and, crucially, will no longer be able to automatically detect images and videos of child sexual abuse being hosted or shared through their systems.

We have seen social media companies and messaging platforms doing some amazing work to identify bad actors and detect behaviour that might indicate illegal activity or a threat to child safety.

This is extremely important and should be commended. But it does not address the very real and growing problem of images and videos of child sexual abuse continuing to be shared on those platforms.

At the IWF, we specialise in finding child sexual abuse content on the open and dark web, which we turn into hashes – unique digital fingerprints. These can be used to automatically detect child sexual abuse content wherever it is shared. The use of this technology has been a massive step forward in the fight against child sexual abuse online.
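As a rough illustration of how hash matching works, here is a simplified sketch. It uses an exact cryptographic hash (SHA-256) and a placeholder hash list; real deployments draw on curated lists such as the IWF's and also use perceptual hashes (for example PhotoDNA) that match visually similar copies of an image, which this simplification does not capture.

```python
import hashlib

# Placeholder list for illustration; in practice this would be a curated
# list of hashes of known child sexual abuse images.
known_hashes = {
    "<hex digest of a known image>",
}

def file_sha256(path: str) -> str:
    """Return the hex SHA-256 digest of a file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_content(path: str) -> bool:
    """Exact-match check of an uploaded file against the hash list."""
    return file_sha256(path) in known_hashes
```

A check like this can only run where the platform can see the file's bytes, which is why it works with standard encryption but not inside an end-to-end encrypted message.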

There is a serious risk that turning on E2EE for social media messaging services will break automated child sexual abuse detection tools and result in millions of these images and videos in billions of messages going undetected.

Evidence of what happens when detection tools are turned off

In 2020 alone, platforms under Meta (formerly Facebook) made 20.3 million reports of suspected child sexual abuse content.

Then for the first half of 2021, due to an unintended consequence of new EU privacy laws, Meta stopped voluntarily scanning its platforms in the EU. During this period, US hotline NCMEC (National Center for Missing and Exploited Children) recorded a 58% reduction in EU reports.

The difference demonstrates not only how effective and necessary automated detection tools are in the fight against child sexual abuse material online, but also the disastrous consequences of turning them off.

It is important to remember the devastating impact that sharing this content has on the victims, and that sharing child sexual abuse material is itself a crime. Victims are reminded of their abuse and revictimised each time the content is shared. We also know that offenders are more likely to offend if they perceive, or know, that the risk of getting caught is low.

If automated detection tools cannot be made to work in end-to-end encrypted environments, millions of images and videos will go undetected, emboldening the offenders that don’t believe they will get caught and failing to protect the victims of child sexual abuse.

At the IWF, our vision is to see an internet free from child sexual abuse, a safe place for children and adults to use around the world. Strong encryption is an important part of that vision, and we support the innovation and development of digital security that keeps adults and children safe from criminals, hackers and others that would do us harm online.

But the pursuit of individual privacy cannot come at the expense of victims of child sexual abuse. We must ensure that efforts to protect individual privacy do not compromise children’s safety.

The problem of child sexual abuse on the internet is only growing, and we must be careful not to take a step backwards, giving up some of the best technological solutions that we have in the fight against it.

Our call on tech companies

We urge tech companies to take a balanced approach to user privacy and child safety in social media and messaging platforms. We would like to see industry leaders exploring alternative methods to improve user privacy before adopting end-to-end encryption, at least until they can develop effective automated detection and child safety tools that are compatible with end-to-end encrypted environments.

At the IWF, we would like to be focusing our collective efforts on the next set of challenges such as the identification of new content before it is shared, combating live streaming, and halting the rise in coerced/uncoerced “self-generated” material.

Without proper consideration, the rush towards E2EE will make these challenges even harder, while also compromising the effective technical solutions that are already in place to find known content.
