Early IWF figures indicate an increase in online child sexual abuse images taken down

Published: Wed 9 Dec 2015

The Internet Watch Foundation (IWF) can confirm that the number of reports of child sexual abuse imagery online actioned for removal in the first half of 2015 was significantly higher than in 2014.

During the whole of 2014 (the IWF began proactively searching for images in April of that year), 31,266 reports were actioned for removal. This year (2015) that figure was equalled on 20 July at 11.05am.

IWF CEO Susie Hargreaves said: “At the #WePROTECT summit, we pledged to do all we could to eliminate Child Sexual Abuse Material (CSAM) on the internet. Working with the internet industry, our team of specialist analysts have worked incredibly hard to identify and remove these hideous images, making the internet a safer place for all users – young or old.”

In addition to the IWF reports, this year has seen the phased roll-out of the IWF Hash List.

Not to be confused with a ‘hashtag’, an IWF hash is a type of digital fingerprint of an image. There are billions of images on the internet, and creating a digital fingerprint of a single illegal image means that image can be found and removed wherever it appears - like finding a needle in a haystack.
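
This release does not say which hashing technology is used, so the sketch below is purely illustrative: it computes a SHA-256 cryptographic fingerprint of an image file using Python's standard library. In practice, services often also use perceptual hashes (such as Microsoft's PhotoDNA) so that visually similar copies of an image can be matched rather than only byte-identical files; the file name in the example is hypothetical.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return a SHA-256 hex digest acting as a digital fingerprint of a file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images need not be held in memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # "example.jpg" is a placeholder file name used only for illustration.
    print(fingerprint("example.jpg"))
```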

Since June 2015, the IWF has created just under 19,000 category ‘A’ hashes of child sexual abuse material (CSAM). Each hash corresponds to an individual illegal image of child sexual abuse. IWF category ‘A’ refers to the ‘worst of the worst’ level of sexual abuse and violence inflicted on children, in line with UK and US definitions. The hashes of the 19,000 images identified were then loaded onto the Hash List and given to five global internet companies, which had volunteered to conduct a robust test of the list through their systems during the implementation period.

The five Members taking part in the phased implementation were Facebook, Google, Microsoft, Twitter and Yahoo. The key advantages identified were:

    Victims’ images can be identified and removed more quickly, preventing them from being shared time and time again.
    Child sexual abuse images will be prevented from being uploaded to the internet in the first place. This gives internet companies the power to stop people from repeatedly sharing the images on their services (a simple sketch of this kind of hash matching follows this list).
    Internet users are protected from accidentally stumbling across the images online.
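
As a minimal sketch of how an upload check against a hash list might work (assuming, for illustration only, simple SHA-256 matching; the hash values, function names and data below are hypothetical and not the IWF's or any member company's actual interface):

```python
import hashlib

# Placeholder hash list: in reality this would be populated from the IWF Hash
# List supplied to member companies, not hard-coded example values.
KNOWN_ILLEGAL_HASHES = {
    "0" * 64,  # dummy entry standing in for a real SHA-256 digest
}

def sha256_hex(data: bytes) -> str:
    """Compute a SHA-256 hex digest of the uploaded bytes."""
    return hashlib.sha256(data).hexdigest()

def allow_upload(data: bytes) -> bool:
    """Reject the upload if its fingerprint matches a known illegal image."""
    return sha256_hex(data) not in KNOWN_ILLEGAL_HASHES

if __name__ == "__main__":
    sample = b"illustrative bytes only, not a real image"
    print("upload allowed:", allow_upload(sample))
```

Note that an exact cryptographic hash like this only matches byte-identical files; catching edited or re-encoded copies typically relies on perceptual hashing, which is beyond this sketch.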

The hashes created during the implementation stage were sourced from images forensically captured on the Home Office Child Abuse Image Database (CAID)*. In the future, hashes will also be created from images that our highly trained analysts have assessed, sourced from IWF public reports, from online industry reports and through proactive searches for criminal content.

Full statistics relating to the IWF are published in its annual and charity report: www.iwf.org.uk/accountability/annual-reports
 
Notes to editors:

Contact: Lisa Stacey, IWF Communications Manager +44 (0) 1223 203030 or +44 (0) 7929 553679.

1.    The figure of just under 19,000 hashes relates to the collation of Category ‘A’ images between June 2015 and October 2015. The CSAM was sourced from the Home Office's Child Abuse Image Database (CAID).

2.    The IWF uses three categories of CSAM: A, B and C. Category A is the most severe.

3.    All other figures are sourced from the IWF. For additional information, visit www.iwf.org.uk

CAID went live at the end of 2014 and contains indecent images of children as well as hashes of those images. All police forces across the UK are due to be connected and using CAID by the end of 2015.

The police have shared data from the Home Office's new Child Abuse Image Database (CAID) with the IWF in order to assist our work with internet companies.
