This week it was reported* that TikTok will unapologetically buck the trend gripping many online platforms – choosing not to introduce end-to-end encryption to their direct messaging service.
They’re doing this because they believe it makes their platform safer.
Their services will still protect users’ privacy using standard encryption – much like that used successfully by the UK’s banking network.
In its current form, end-to-end encryption serves to hide criminality and enable the grooming and sexual extortion of children. It means companies themselves can claim they cannot see, let alone be responsible for preventing, the spread of child sexual abuse and exploitation.
So, at a time when platforms seem to be rushing to implement end-to-end encryption, TikTok’s conscious choice to step back from this move on safety grounds is an important precedent.
Platforms using end-to-end encryption include WhatsApp, Instagram, Messenger, Signal, and X. Many advocates of this kind of encryption claim existing and proven methods of detecting and disrupting known child sexual abuse material are incompatible with this extreme form of digital privacy.
This is something the IWF deeply disagrees with.
We know tools that have been tried and trusted for years can be safely and effectively used to prevent child sexual abuse imagery being uploaded to these platforms. And we know this would affect users’ privacy no more than a virus guard or spam filter does – safeguards which are already used successfully to stop dangerous files entering end-to-end encrypted environments in the first place.
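The upload-time matching described above can be illustrated with a deliberately simplified sketch: a file’s digital fingerprint is compared against a list of hashes of known material before it is sent, so the check never inspects message content. Real deployments (such as services built on curated hash lists) are far more sophisticated, often using perceptual rather than cryptographic hashing; every name and hash value below is hypothetical.

```python
import hashlib

# Hypothetical stand-in for a curated list of hashes of known
# child sexual abuse material, maintained by a trusted body.
KNOWN_MATERIAL_HASHES = {
    hashlib.sha256(b"example-known-file").hexdigest(),
}

def is_known_material(file_bytes: bytes) -> bool:
    """Compare the file's fingerprint against the known-hash list.

    Only the hash is checked; the file's content is not otherwise read
    or stored, much as a virus guard matches signatures.
    """
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MATERIAL_HASHES

def attempt_upload(file_bytes: bytes) -> str:
    # The check runs before the file enters the (possibly encrypted)
    # messaging pipeline, so encryption downstream is unaffected.
    if is_known_material(file_bytes):
        return "upload refused"
    return "upload accepted"
```

In this sketch an ordinary file passes untouched, while one matching the list is refused before it ever reaches the encrypted channel – which is the sense in which such filtering is argued to be compatible with user privacy.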
But TikTok has gone one step further and decided that introducing end-to-end encryption as it currently stands, without any safeguards, would be detrimental to the protections they offer their users.
We applaud this decision to prioritise user safety. A safe internet free of child sexual abuse is within reach if there are fewer places for criminals to groom children and distribute sexual imagery of their abuse.