Internet regulation, responsibility and safety: policy, practicalities and the role of providers

Published: Fri 19 Jan 2018

Growing fears

In the past 20 years the prevailing mood among policy makers reflecting on the internet has swung from utopian to dystopian. Concerns about illegal activities and behaviour are driving a debate about policy and regulation. However, very few practical solutions are being proposed – too often there is a general call for the big tech companies to do “something”, without any clarity as to what exactly they should do.

Fear of digital technology is growing, some of it well founded, some less so. Fear of new technology – technophobia – is neither new nor unique to the internet. Opponents of the US postal service claimed it would encourage immorality by allowing the private delivery of birth control materials; opponents of photography said it would ruin visual art; opponents of film said it would encourage loose sexual morals. History teaches us the importance of a balanced approach to the challenges we face, including how we adjust to and manage emerging technologies.

It should be obvious that the internet and associated digital technologies are overwhelmingly beneficial; in fact, they are essential. Services such as health, education, transport and utilities depend upon them. Modern business depends upon digital technology. Online banking and shopping are flourishing. New businesses have sprung into existence and now rank among the world’s most valuable companies. Maintaining an open global internet architecture is key to sustaining a beneficial internet. In turn this also means respecting users’ privacy and ensuring adequate encryption. No one would want to use an online banking system with a built-in “back door”.

The tragedy of the commons

But we cannot close our eyes to the challenges facing the global internet. In 1968 the ecologist Garrett Hardin published an essay in the journal Science called "The Tragedy of the Commons," highlighting the problems of overpopulation and the overuse of shared resources. Since then the term has been used to describe any situation in which individual users, acting out of self-interest, undermine the common good of all users by abusing the commons through their cumulative actions. It is a fair summary of the challenge facing an open, interconnected internet, where terrorists, child abusers, misogynists, racists and criminals can flourish and threaten the integrity of the internet “commons”.

The traditional response of governments to the tragedy of the commons has been to fence it off – which in the internet world would be equivalent to establishing 190 or more separate legal jurisdictions and 190-plus national internets, ending the internet as we know it. Can we preserve the open character of the internet without it becoming walled-off national property? Is there a way of regulating content that recognises the uniqueness of the internet compared to offline communications?

There are two obvious differences between the online and offline worlds: the sheer volume of content published every second (hundreds of hours of video are uploaded to YouTube every minute, for example) and the vast range of globally distributed providers. Together these make it unworkable to transplant a conventional national, offline legal and regulatory process onto harmful content online: the result would be either closing the internet or doing nothing about such content.

Lessons learned

For 20 years the Internet Watch Foundation (IWF) has been working with industry to take down illegal images of child sexual abuse online. The lessons of the IWF – developed over time, learning as we go, including from our mistakes – have helped us identify the basic principles that could shape an approach to content regulation online. There are five key lessons.

  1. Content that is deemed harmful enough to be removed from the internet should be defined in law, not left to discretionary, subjective interpretation. For reasons of democracy, the process for recommending content removal should be independent of government and political manipulation.
  2. Though essentially self-regulatory, the actual process for removing content should be independent of the individual companies themselves – commercial imperatives can too easily shape decisions and, in any case, smaller companies cannot afford the review mechanisms larger companies can. Some kind of independent process, with company membership, needs to be established, governed by a majority of independent board members drawn from the relevant stakeholders for the particular type of content being regulated.
  3. The sheer volume of material means that algorithms will almost certainly have to be used to analyse the mass of content, but where potentially questionable material is found, human analysts should make the final recommendations to remove it. These analysts should be well trained and supported psychologically and managerially. In addition, there should be a system of quality assurance that reviews a selection of recommendations (in the IWF, between five and 15 percent are reviewed); a simple sketch of this workflow follows the list. This rigorous internal process can be supplemented by an independent audit.
  4. Any independent body managing content removal recommendations should itself be subject to judicial review to ensure accountability. 
  5. Finally, the independent body and the companies themselves should be transparent about their practices and compliance through an annual transparency report.
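
To make the review workflow in lesson 3 concrete, here is a minimal sketch in Python of the three steps it describes: automated flagging, a final human recommendation, and quality-assurance sampling of a fraction of those recommendations. The classifier score, the 0.8 flagging threshold and the 10 percent sample rate are illustrative assumptions only; they do not describe the IWF's actual systems.

```python
import random
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the lesson-3 workflow: an automated classifier flags
# potentially illegal content, trained human analysts make the final removal
# recommendation, and a fixed fraction of recommendations is re-checked by a
# quality-assurance reviewer. All names, scores and thresholds are assumptions
# for illustration; they do not describe any real IWF system.

FLAG_THRESHOLD = 0.8   # assumed classifier score above which a human must review
QA_SAMPLE_RATE = 0.10  # assumed QA rate (the IWF reviews between 5 and 15 percent)

@dataclass
class Item:
    url: str
    classifier_score: float                 # produced upstream by an automated model
    analyst_decision: Optional[str] = None  # "remove" or "no_action"

def triage(items):
    """Route only algorithmically flagged items to human analysts."""
    return [item for item in items if item.classifier_score >= FLAG_THRESHOLD]

def analyst_review(item, decision):
    """A trained analyst records the final recommendation for a flagged item."""
    item.analyst_decision = decision
    return item

def qa_sample(reviewed, rate=QA_SAMPLE_RATE):
    """Randomly select a share of analyst recommendations for QA re-review."""
    k = max(1, round(len(reviewed) * rate)) if reviewed else 0
    return random.sample(reviewed, k)

if __name__ == "__main__":
    queue = triage([Item("https://example.org/a", 0.95),
                    Item("https://example.org/b", 0.40),
                    Item("https://example.org/c", 0.85)])
    reviewed = [analyst_review(item, "remove") for item in queue]
    for item in qa_sample(reviewed):
        print(f"QA re-review: {item.url} -> {item.analyst_decision}")
```

The point of the sketch is simply the division of labour the lesson describes: automation narrows the queue, trained people make the final call, and an independent sample of those calls is checked again.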

There is no perfect solution to regulating the internet while ensuring the free flow of information, ideas and opinions and protecting the human rights of users. But it is time to bring the aspiration to deal with harmful content down to some practical ideas. The experience of the IWF is a valuable starting point for that discussion.
