Published: Mon 10 Aug 2015
The IWF will provide hashes of child sexual abuse images to the online industry to speed up the identification and removal of this content worldwide.
This enables the internet industry to actively protect their customers and help victims of child sexual abuse.
This means:
- Victims' images can be identified and removed more quickly, preventing them from being shared time and time again.
- Child sexual abuse images can be prevented from being uploaded to the internet in the first place.
- Internet companies have the power to stop people from repeatedly sharing the images on their services.
- Men, women and children of all ages are protected from accidentally stumbling across the images online.
The hash list steps up efforts to make the internet a hostile place to share, view, download and trade images of children being sexually abused.
Not to be confused with a “hash tag”, a hash is a digital fingerprint of an image. There are billions of images on the internet and by creating a digital fingerprint of a single image, you can pluck it out, like finding a needle in a haystack.
The IWF will automatically begin creating three types of hashes to meet the needs of the online industry: PhotoDNA (a technology developed by Microsoft), MD5 and SHA-1.
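To illustrate how cryptographic hashes like MD5 and SHA-1 support this kind of matching, here is a minimal sketch in Python using the standard `hashlib` module. The function names and the idea of holding the hash list as an in-memory set are illustrative assumptions, not a description of the IWF's or any company's actual implementation.

```python
import hashlib


def file_hashes(data: bytes) -> dict:
    """Compute MD5 and SHA-1 hex digests of a file's raw bytes."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
    }


def is_known_image(data: bytes, hash_list: set) -> bool:
    """Check an uploaded file against a hash list before accepting it.

    A match on either digest means the file is byte-identical to a
    previously hashed image on the list.
    """
    digests = file_hashes(data)
    return any(d in hash_list for d in digests.values())
```

Note the design trade-off: MD5 and SHA-1 only match files that are byte-for-byte identical, so re-encoding or resizing an image changes its digest. PhotoDNA is a perceptual hash designed to survive such alterations, which is why the list carries all three types.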
Many internet companies that provide online services can make use of the hash list.
Hashes will only be created from images that highly trained IWF analysts have assessed, regardless of whether the image was sourced from a public report, a report from the online industry, an image actively found by our analysts, or an image from the Home Office's new Child Abuse Image Database (CAID).
In November 2014, during the #WePROTECT summit, Prime Minister David Cameron announced tougher measures to combat online child sexual abuse material. One of the focuses of the summit was finding ways to improve the identification and removal of illegal images. During the summit, industry members agreed on a statement of action:
“Building on the success of technologies such as PhotoDNA and video hashing, we will continue to work on new tools and techniques to help improve the detection and removal of images and videos of child sexual abuse”.
As well as having access to CAID, the IWF is in a unique position: its hotline analysts are able to actively seek out child sexual abuse material using their expertise. On average, the IWF can take action to remove around 500 URLs (web addresses) containing child sexual abuse material every day. One URL may contain anywhere from one to thousands of images. By hashing all the child sexual abuse images found on each URL, the hash list will grow significantly every day, with the potential to reach millions of hashes. The more hashes given to the online industry, the greater the protection offered on companies' online services.
Five IWF Members (Facebook, Google, Microsoft, Twitter and Yahoo) are using the hash list so far; it will then be rolled out to all eligible Members.
IWF CEO Susie Hargreaves said: “The IWF Hash List could be a game-changer and really steps up the fight against child sexual abuse images online.
“This is something we have worked on with our Members since the Prime Minister’s #WePROTECT summit last December. We’ll soon be able to offer the hash list to all IWF Members, who are based around the world.
“It means victims’ images can be identified and removed more quickly, and we can prevent known child sexual abuse images from being uploaded to the internet in the first place.”