Working internationally & fighting self-generated content

Published:  Tue 20 Jun 2017

I was privileged to be invited to speak at the International Conference on Online Child Safety which was hosted by ECPAT Taiwan in Taipei.  Taking place on 8 June at the University of Taiwan, it brought together a range of stakeholders in online child protection, including law enforcement agencies, academics and practitioners with the aim of highlighting the strategies in place to protect children online and tackling the challenges which remain.  As one of the most effective hotlines in the world for combatting online child sexual abuse imagery, the IWF has an important role to play in exchanging knowledge and best practice in this field. 

As IWF’s Technical Researcher, I spoke about our research into so-called “self-generated” sexual images of children.  One of the major challenges when discussing this type of content is adequately defining what is meant by “self-generated”.  The circumstances which lead children to appear in these images are diverse – ranging from production within a consensual relationship with a peer to situations of grooming, coercion and extortion. Whatever the reason behind the production of these images, we work to ensure this content is removed from the internet to protect these children from further revictimisation.

In recent years, we’ve seen younger and younger children appearing in self-generated sexual imagery. We’ve also seen a rise in people selling these sorts of images.  Once posted online, these images are often collected and repeatedly shared by offenders, making permanently removing them extremely challenging.  It’s vital to take action to better protect children and young people from such exploitation.

To achieve this, we’re working with Childline to make it easier for older children to confidentially report sexual images of themselves that are being distributed online. A challenge for IWF, which Childline can help us with, is enabling older children to verify their age so we know we’re dealing with an image of a child, rather than an adult. We then add the digital fingerprints of these images to the IWF Image Hash List. This means companies using this list can immediately identify and remove these images from their services, stopping the redistribution cycle and protecting all the children that have suffered abuse.
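To give a sense of how hash-list matching works in practice, here is a minimal sketch of a service checking an uploaded file against a block list of known hashes. It uses a plain SHA-256 digest purely for illustration; the actual IWF Image Hash List is distributed to member companies and includes perceptual hashes (such as PhotoDNA) alongside cryptographic ones, so real deployments can also match visually similar copies, not just byte-identical files. All file names and hash values below are illustrative.

```python
# Minimal sketch of a hash-list lookup, assuming a simple SHA-256 digest per file.
# The real IWF Image Hash List also uses perceptual hashing (e.g. PhotoDNA);
# the file names and hash values here are purely illustrative.
import hashlib
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_image(path: Path, hash_list: set[str]) -> bool:
    """Check an uploaded file against a block list of known image hashes."""
    return sha256_of_file(path) in hash_list


# A service would load its copy of the hash list at startup and reject or
# remove any upload whose digest matches an entry on the list.
hash_list = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # illustrative value
}

upload = Path("upload.jpg")  # hypothetical incoming file
if upload.exists():
    print(is_known_image(upload, hash_list))
```

Because matching happens against the stored fingerprint rather than the image itself, services never need to retain or re-inspect the abusive content once it has been hashed; a match is enough to trigger removal.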

IWF has been pioneering the fight against child sexual abuse material online for over 20 years. In my eight years at IWF, I’ve seen first-hand how offenders exploit the borderless nature of the internet to distribute and redistribute hundreds of thousands of child sexual abuse images, including so-called self-generated sexual images of children. The importance of strong international relationships and harmonised strategies to protect children online cannot be overstated if we are to achieve our ultimate goal of eliminating child sexual abuse imagery online wherever it is hosted.

 
