IWF response to Coimisiún na Meán’s call for inputs on online safety

Developing Ireland’s first binding Online Safety Code for Video Sharing Platforms

About the Internet Watch Foundation:

The IWF is a UK-based charity that works in partnership with the internet industry, law enforcement and government to remove from the internet child sexual abuse images and videos, wherever in the world they are hosted, as well as non-photographic images of child sexual abuse hosted in the UK. Non-photographic images include cartoons, drawings, computer-generated imagery (CGI) and other non-photographic depictions of child sexual abuse deemed to breach sections 62-69 of the Coroners and Justice Act 2009.

  • We exist for public benefit and perform two unique functions in the UK: we provide a secure and anonymous place for the public to report suspected online child sexual abuse images and videos.
  • We use the latest technology to search the global internet proactively for child sexual abuse images and videos, then work with partners to get them removed.

In addition, the IWF has established reporting portals – places to anonymously and safely report online child sexual abuse imagery – in 49 countries around the world, serving 2.5 billion people.

There is a Memorandum of Understanding between the National Police Chiefs’ Council (NPCC) and Crown Prosecution Service (CPS) that governs our operations. This ensures immunity from prosecution for our analysts and recognises our role as the “appropriate authority” for the issuing of Takedown Notices in the UK. Operationally, we are independent of UK Government and law enforcement but work closely with both.

The IWF also plays a vital role in providing the internet industry with several quality-assured technical services to prevent the spread of known child sexual abuse images and videos online and to stop the upload of images in the first place. These include image hashing utilising Microsoft’s PhotoDNA, a URL blocking list of live webpages, a keywords list, domain alerts, payment brand alerts, newsgroup alerts and simultaneous alerts (for US companies only). Key to this is our trusted relationship with the internet industry, which enables us to act as a broker between industry, government and law enforcement.

Our work is funded almost entirely by the internet industry: 90% of our funding comes from our members, with the remaining 10% coming from the .UK domain name registry provider, Nominet, which funds our work as one third of the UK’s Safer Internet Centre.

The IWF has previously received additional Government funding for specific projects and is open to further diversifying its funding mix in the future.

We are a charity registered in England & Wales with an 11-person Board of Trustees, of which eight are independent members and three are industry representatives. The IWF Hotline is audited biennially by an independent team, led by a family court judge, and the report is published in full.

Question 1: What do you think our main priorities and objectives should be in the first binding Online Safety Code for VSPS? What are the main online harms you would like to see it address and why?

The Internet Watch Foundation’s remit is outlined in the background information at the start of this submission and therefore our response to this consultation is focused on ensuring that the Online Safety Code addresses the issue of child sexual abuse and exploitation online.

We believe that this is important not only because it is one of the most egregious harms online, but also because there is clear legal certainty over what is and is not illegal, and because such a provision complements and works alongside other existing legislation. The provisions of the Digital Services Act (DSA) have recently come into effect and will require platforms to take a much more proactive approach to addressing harms on their services. The DSA requires platforms to assess the risk that their services could be abused by bad actors and to take steps to address those risks. It complements legal requirements already in place under the e-Commerce Directive, which requires companies to “expeditiously” remove illegal content once they become aware of it on their platforms, whether through their own teams of engineers and moderators discovering it, the public reporting it, or trusted flaggers and organisations such as hotlines and helplines bringing this content to their attention.

We also believe that it is important to tackle this type of content because mechanisms already exist to prevent the upload and spread of this imagery. The IWF provides technical services to its membership which help them keep their platforms free from known child sexual abuse material. These include image hashing technology, webpage blocking and keyword lists, the three services most applicable to video sharing platforms.

The IWF also has an interest in ensuring that children cannot access content that is age inappropriate for them, and we are particularly concerned about children’s free and easy access to online pornography. We are keen to see the application of age verification, assurance and estimation techniques on video sharing platforms that is appropriate to the level of risk they pose to children. For example, a video sharing platform focussed solely on the distribution of adult pornographic content should be taking steps to verify that users accessing its services are over the age of 18. For other sites not offering such content, where the risk is lower, it may be more appropriate and proportionate to use age assurance or estimation technologies.

In relation to adult pornography, we are also keen to ensure that platforms are age verifying, and obtaining the consent of, the people appearing in the images and videos uploaded to them. This will help to stem the stream of new child sexual abuse images potentially uploaded to adult websites and will also, hopefully, reduce incidents of intimate image abuse on these websites.

In summary, these are all areas that are required to be covered in the transposition of the Audio-Visual Media Services Directive (AVMSD) and can, as the consultation points out, be developed as part of the Online Safety and Media Regulation Act 2022, of which we are of course supportive.

Question 2: What types of online harms do you think should attract the most stringent risk mitigation measures by VSPS? How could we evaluate the impact of different types of harms e.g., severity, speed at which harm may be caused? Is there a way of classifying harmful content that you consider it would be useful for us to use?

As set out in response to question 1, we are keen to ensure that the most egregious harms on the internet receive the greatest level of attention and focus from regulators. We are most concerned to ensure that illegal content, and specifically child sexual abuse, is covered.

Question 3: Do you have reports, academic studies or other relevant independent research that would support your views? If you do, please share them with us with links to relevant reports, studies or research.

The Internet Watch Foundation’s annual report for 2022[1] contains information that may be relevant and useful as an evidence base for why greater controls are needed online. In terms of headline statistics, in 2022 we assessed 375,230 reports of suspected child sexual abuse material and confirmed 255,588 reports as containing illegal content.

In the last two years, we have seen a doubling in the most severe forms of child sexual abuse: in 2022 we confirmed 51,369 reports of Category A child sexual abuse material, up from 25,050 in 2018.

We are also extremely concerned by the rise in self-generated child sexual abuse content. This is where children have been groomed, coerced, tricked or deceived into producing images and videos of themselves and then sharing them online. In 2022, we actioned 199,363 reports containing self-generated child sexual abuse material, which now accounts for three quarters of all the content we actioned for removal. The 11-13 age range remains the fastest growing age range appearing in this content, but in the past year we have also seen a 60% increase in 7 to 10-year-olds appearing in it.

The IWF has also been responding to Ofcom’s preparations for the Online Safety Bill in the UK; a copy of our submission to its call for evidence is available on our website[2] and may also be useful in helping to further shape the response in Ireland.

Another useful report you may want to consider was published by the Australian eSafety Commissioner in December 2022. It was the first regulatory report anywhere in the world to provide insight into how companies (Meta, WhatsApp, Google, Microsoft, Skype, Omegle and Snap) responded to the first regulatory notices for CSE/A issued under the Basic Online Safety Expectations Determination 2022.[3]

Similarly, Ofcom has also produced its first report on Video Sharing Platforms[4] which provides further insights into some of the measures we have outlined in our response to question 1.

Question 4: What approach do you think we should take to the level of detail in the Code? What role could non-binding guidance play in supplementing the Code?

The IWF has always advocated for a principles-based approach to regulation and urged Government and regulators not to be overly prescriptive in their approach to regulation. We believe that primary legislation should set the framework of what is expected of those in scope of regulation which gives regulators the ability to be flexible in their response in order that regulation continues to keep pace with changes in technology.

We believe that differing harms may require different legislative and regulatory responses. It is also important to note that many of these platforms are unique in the way they are designed and no one platform is established in the same way another is, despite the fact they may appear to have very similar characteristics.

It is important that the regulatory regime is also flexible enough to work in partnership with other regulatory regimes, such as the data protection regime.

In terms of the options set out in the consultation document, we would favour a mixed approach in the Code (Option 3). On some issues, such as CSE/A, we would like to see some element of prescriptiveness, for example in setting out some of the options a platform can take to prevent CSAM from appearing on its platform, such as utilising the tools and services the IWF offers in respect of image hashing, URL blocking and keywords. Not all of these services may be applicable to every video sharing platform, however, and platforms should of course be given the flexibility to demonstrate that, where they are not deploying these measures, they are mitigating the harm in some other way to the same or, preferably, a higher standard.

Question 5: What do you think would be the most effective structure for the Code? What are the most important factors we should consider when we decide how to structure the Code?

We believe that the greatest focus for the Code should be on the areas of harm that cause the most damage to society. As set out at the start of this submission our interest is to ensure that tackling the spread of child sexual abuse is a priority in this Code.

Tackling illegal content must be a priority, but platforms will not get their approach to this right unless they are able to assess the level of risk, have effective terms and conditions which they enforce, and take steps to moderate content. We would urge a holistic, systems and processes-based approach in the development of the Code.

Question 6: How should we design the Code to minimise the potential for conflict and maximise the potential for synergies in how platforms comply with it and the DSA?

We believe that regulation should be flexible enough to operate alongside other regulatory regimes such as the Digital Services Act. In respect of the provisions in the DSA referred to within this consultation document, we would expect that, if we as a hotline were to notify a platform in scope of the proposed regulation, it would act on that notice expeditiously, in line with the terms set out in the e-Commerce Directive, to remove the offending illegal content.

Secondly, the DSA sets out steps platforms must take to assess the risk that their services could be abused to host or facilitate illegal activity. We believe the risk assessment criteria in the DSA should be aligned with the provisions of this Code, so that companies are not required to carry out multiple risk assessments. That would be confusing and burdensome, and risks a lack of alignment between regimes, with one regime telling companies to do something one way and another being in direct conflict; this must be avoided.

We do, however, agree, as set out in our response to Question 1, that this Code represents an opportunity to build on the DSA provisions by adding obligations in areas such as age verification, or by directing platforms to the tools and services they should be using to prevent the spread of illegal content on their platforms.

Question 7: To what extent, if at all, should the Code require VSPS providers to take measures to address content connected to video content?

The IWF is supportive of the application of the non-exhaustive list of 10 measures that need to be taken by video sharing platforms to comply with Article 28b of the Directive.

Question 8: How should we ask VSPS providers to introduce a feature that allows users to declare when videos contain advertising or other type of commercial communications? Should the Code include specific requirements about the form in which the declaration should take? What current examples are there that you regard as best practice?

The IWF does not have a view or anything to add in response to this question.

Question 9: How should we ask VSPS providers to introduce and design a flagging mechanism in the Code? How can we ensure that VSPS providers introduce the mechanism in a user-friendly and transparent way? How should we ask VSP Providers to report the decisions they’ve made on content after it has been flagged? To what extent should we align the Code with similar provisions on flagging in the DSA?

As in our answers to previous questions, regulatory alignment with other regimes is important; we therefore favour alignment with the Digital Services Act’s provisions on flagging illegal content, including the notice and action mechanisms in Article 16 and its provisions on trusted flaggers. As a hotline providing notice and takedown, we would expect these provisions to cover the IWF and other hotlines. In Ireland there is, of course, the Irish Hotline, and we would also anticipate that it would be a “trusted flagger” of content to video sharing platforms.

We believe that the reporting processes in place on platforms should be clear, easily accessible to users and clearly set out in the comprehensible terms and conditions platforms have in place.

Question 10: What requirements should the Code include about age verification and age assurance? What sort of content should be shown by default to users who are logged out or in private browsing mode and whose age cannot be verified or assured? What evidence is there about the effectiveness of age estimation techniques? What current practices do you regard as best practice? Where accounts are not age verified should default privacy settings be used, should content default to universal content and should contact by others be more limited?

As stated elsewhere in our response to this consultation, the IWF is supportive of the introduction of age verification, assurance and estimation procedures, and believes these should be proportionate to the level of risk a platform poses, depending on the content it provides.

This, however, does not sit within the IWF’s area of expertise, and we therefore do not feel best placed to add anything to our previous answers on this question.

Question 11: What requirements should the Code have in relation to content rating? What do you consider to be current best practice? What experiences have you had using content rating systems on platforms and do you think they have been effective? What steps could we ask VSPS to take to ensure content is rated accurately by users?

The IWF is not best placed to respond to this question but, as the consultation sets out, the standards suggested through the Irish Film Classification Office appear to be a good possible standard to align to. We have a good relationship with the British Board of Film Classification (BBFC) and suggest there could be some alignment of approaches between the two organisations.

Question 12: What requirements should the Code have in relation to parental control features? How can we ensure that VSPS providers introduce the mechanism in a user-friendly and transparent way? Can you point to any existing example of best practice in this area? Should parental controls be ‘turned-on’ by default for accounts of minors or where age is not verified?

The IWF supports the active involvement and interest of parents, guardians, and carers in keeping their children safe online. We believe that they should have access to tools and features that enable them to protect their children online.

Through the introduction of the Age Appropriate Design Code, we have seen several examples of best practice from platforms, in terms of ensuring children’s accounts are private by default and that children cannot be discovered by adults through friend suggestions; some companies have also introduced sleep reminders and screen-time limits for children.

It is important that both children and their parents, guardians and carers are aware of the availability of these tools and products, and that they are easily accessible and easy to set up. We support many of the suggestions made above, such as ensuring that children’s accounts are set to private by default.

Question 13: What requirements should the Code contain to ensure that VSPS provide for effective media literacy measures and tools?

The IWF is supportive of media literacy measures and tools as part of the Code.

Question 14: How should we ask VSPS providers to address online harms in their terms and conditions in the Code, including the harms addressed under Article 28b? How should key aspects of terms and conditions be brought to users’ attention? What examples are there of best practice in relation to terms and conditions including content moderation policies and guidelines?

The IWF believes that platforms should highlight to users that illegal content such as the distribution of child sexual abuse material is not tolerated and that, should such content be discovered on a user’s account, the account will be immediately suspended and the content referred to the relevant law enforcement agencies.

Question 15: How should we ask VSPS providers to address content moderation in the Code? Are there any current practices which you consider to be best practice? How should we address automated content detection and moderation in the Code?

Companies should all have procedures in place to detect and prevent the distribution of child sexual abuse material at the point of upload. The IWF offers tools, products and services which assist video sharing platforms in complying with this, with image hash lists, webpage blocking and keyword terms being the most appropriate and applicable services for video sharing platforms. Much of this can be automated: companies can automatically action 100% matches against the hash list and, if they are deploying PhotoDNA, they can set tolerance levels to detect similar content where one or several parts of an image have been altered to evade detection.

As highlighted in response to other questions, video sharing platforms should also be given the flexibility to demonstrate that, where they are not deploying these particular services, they are mitigating the harm in some other way to the same or a higher standard.
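To illustrate the kind of automated check described above, the sketch below shows, in simplified Python, an upload-time comparison against a hash list: exact matches are actioned automatically, while a configurable tolerance (here a Hamming distance over a generic 64-bit perceptual hash) catches copies that have been altered to evade detection. This is a minimal illustration only; the hash formats, threshold value and function names are assumptions rather than any real provider’s API, and production systems would use licensed technology such as Microsoft’s PhotoDNA and the IWF hash list.

```python
import hashlib

# Hypothetical hash lists: in practice these would be populated from a
# licensed provider's data; they are left empty here for illustration.
KNOWN_EXACT_HASHES: set[str] = set()      # cryptographic hashes of known images
KNOWN_PERCEPTUAL_HASHES: list[int] = []   # 64-bit perceptual hashes of known images

# Illustrative tolerance: maximum number of differing bits still treated as a match.
HAMMING_TOLERANCE = 6


def sha256_hex(data: bytes) -> str:
    """Exact-match fingerprint of the uploaded file."""
    return hashlib.sha256(data).hexdigest()


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")


def check_upload(data: bytes, perceptual_hash: int) -> str:
    """Classify an upload as 'exact_match', 'near_match' or 'no_match'.

    `perceptual_hash` would come from an image-hashing step (e.g. a
    pHash-style algorithm); it is passed in to keep the sketch self-contained.
    """
    if sha256_hex(data) in KNOWN_EXACT_HASHES:
        return "exact_match"      # automate: block the upload and report it

    for known in KNOWN_PERCEPTUAL_HASHES:
        if hamming_distance(perceptual_hash, known) <= HAMMING_TOLERANCE:
            return "near_match"   # likely altered copy: route to review/report

    return "no_match"
```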

Question 16: What requirements should the Code include about procedures for complaint-handling and resolution, including out-of-court redress or alternative-dispute resolution processes? To what extent should these requirements align with similar requirements in the DSA? What current practices could be regarded as best practice? How frequently should VSPS providers be obliged to report to the Commission on their complaint handling systems and what should those reports contain? Should there be a maximum time-period for VSPS providers to handle user complaints and if so, what should that period be?

The IWF does not have anything to add in response to this question.

Question 17: What approach do you think the Code should take to ensuring that the safety measures we ask VSPS providers to take are accessible to people with disabilities?

Terms and conditions and user safety functionality should be easily comprehensible to all users. Best practice in this area could include easy-read versions of terms and conditions. Beyond this, the IWF is not best placed to respond to this question, other than to say that it is important that people with disabilities are given support with their online experiences.

Question 18: What approach do you think the Code should take to risk assessments and safety by design? Are there any examples you can point us towards which you consider to be best practice?

The IWF supports this regulation’s focus on the systems and processes platforms have in place to protect their users and encourages them to take a safety by design approach, based on a risk assessment process conducted by both the company and the regulator. Whilst we are supportive of the provisions in the Digital Services Act that focus on Very Large Online Platforms (VLOPs), it is important to consider that very small, fast-growing platforms may also pose a high risk of harm to users. It is important that there is good engagement with start-up communities about their regulatory obligations, and that they are supported in their desire to grow while doing so in a way that is safe and secure by design.

It is also important to consider that future EU regulation on preventing and combatting child sexual abuse is likewise expected to be based on a platform’s ability to assess the risk it poses and respond accordingly to that threat.

Another regulatory approach, taken in the UK, is the introduction of a duty of care on platform providers to ensure they keep users safe on their platforms. In Australia, a similar approach takes the form of the Basic Online Safety Expectations (BOSE).

Question 19: How do you think that cooperation with other regulators and bodies can help us to implement the Code for VSPS?

We agree that collaboration with other regulatory bodies, such as those outlined in the consultation, will be important, and we actively encourage engagement with the global regulators network and ERGA. We would also encourage you to develop relationships with providers of datasets and services, such as the IWF, which can help keep video sharing platforms free from the spread and proliferation of child sexual abuse material. It would be particularly beneficial to recommend the adoption of these services within the Code.

Question 20: What approach do you think we should take in the Code to address feeds which cause harm because of the aggregate impact of the content they provide access to? Are there current practices which you consider to be best practice in this regard?

The IWF has nothing to add in response to this question.

Question 21: Do you have any views on how requirements for commercial content arranged by a VSPS provider itself should be reflected in the Code?

The IWF has nothing to add in response to this question.

Question 22: What compliance monitoring and reporting arrangements should we include in the Code?

The IWF would be happy to consider assisting with compliance monitoring and reporting arrangements by providing the data we hold on the extent of harm on platforms. Our annual report already provides detailed information on much of this, and we could have a role in helping to evidence the prevalence of child sexual abuse online.

We would, however, draw the line at wanting to be involved in any enforcement action directed towards companies.

It would be beneficial if monitoring and compliance arrangements included information from providers about the number of attempts or “hits” against IWF services such as image hash lists and webpage blocking lists. This would help us further understand prevalence and how effective these services are at preventing viewing offences or the upload and further distribution of this illegal imagery.

Question 23: Should the Code have a transition period or transition periods for specific issues? Which areas touched on in this Call for Inputs may VSPS providers require time to transition the most? What time frame would be reasonable for a transition period?

It is important that companies are given sufficient time to prepare for regulation; however, the regulation of video sharing platforms through the EU’s Audio-Visual Media Services Directive should already have commenced. It is therefore reasonable to suggest that companies could already be taking steps to protect their users, based on best practice from other jurisdictions.

We would urge a swift adoption of the Code but do recognise that companies may need sufficient time to prepare before the enforcement aspects of the regulation take effect. As we have seen with the development of the Digital Services Act, the enforcement aspects of the regulation have taken around 12 months to come into force.

[1] https://annualreport2022.iwf.org.uk/#

[2] https://www.iwf.org.uk/media/tnelu2yi/online-safety-cfe-response-form.pdf

[3] https://www.esafety.gov.au/sites/default/files/2022-12/BOSE%20transparency%20report%20Dec%202022.pdf

[4] https://www.ofcom.org.uk/online-safety/information-for-industry/vsp-regulation/first-year-report