Hive partners with IWF to reduce the spread of child sexual abuse imagery online

Published: Thu 23 Jan 2025

Horrific imagery of children being sexually hurt, abused and exploited is being spread across the internet on a vast scale and at an unprecedented rate. In 2023, the Internet Watch Foundation (IWF) acted to remove over 275,000 webpages containing child sexual abuse, more than ever before in its history.

Hive, a provider of cloud-based AI solutions, including content moderation models used by hundreds of online communities, is partnering with the IWF to help stop child sexual abuse online.

As part of the partnership, Hive will integrate IWF datasets into its content moderation services for all customers using Hive’s models to analyse text in messages or overlaid on video and images.

The IWF datasets include the IWF’s URL List, a comprehensive list of webpages confirmed to contain child sexual abuse material (CSAM), and the IWF Keywords List, a unique list of words, phrases and codes that offenders may use to conceal child sexual abuse imagery on legitimate networks and platforms.
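For illustration only, here is a minimal sketch of how a platform-side screening step might check text and links against datasets like the URL List and Keywords List before a post is published. The dataset entries, names and logic below are placeholders, not IWF or Hive interfaces.

```python
from urllib.parse import urlparse

# Placeholder datasets; in practice these would be the licensed IWF lists,
# kept up to date and handled under the IWF's terms of use.
BLOCKED_URLS = {"example-blocked.invalid/page"}       # hypothetical entries
FLAGGED_KEYWORDS = {"exampleterm1", "exampleterm2"}   # hypothetical entries

def screen_text(message: str) -> bool:
    """Return True if the message should be held for review."""
    lowered = message.lower()
    # Flag messages containing known coded terms.
    if any(keyword in lowered for keyword in FLAGGED_KEYWORDS):
        return True
    # Flag messages linking to pages on the URL blocklist.
    for token in message.split():
        if token.startswith(("http://", "https://")):
            parsed = urlparse(token)
            if f"{parsed.netloc}{parsed.path}" in BLOCKED_URLS:
                return True
    return False
```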

Users of Hive’s video and image moderation software who are, or become, IWF Members will also be able to access IWF’s “hashes”, unique digital fingerprints of millions of known child sexual abuse images and videos, to identify criminal content and prevent it from being shared on their platforms. 
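As a rough illustration of how hash matching works: a platform computes a fingerprint of an uploaded file and compares it against a set of known hashes before the content can be shared. The sketch below uses a simple cryptographic hash (SHA-256) and made-up values; real hash lists such as the IWF’s also include perceptual hashes, which can match images that have been resized or re-encoded.

```python
import hashlib

# Hypothetical set of known fingerprints (hex SHA-256 digests), for illustration only.
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def is_known_abuse_image(file_bytes: bytes) -> bool:
    """Return True if the file's digest matches a known hash."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```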

Hive’s partnership with the IWF bolsters Hive’s capability to help its customers detect and mitigate CSAM on their platforms through a single, integrated application programming interface (API).
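In practice, an integrated moderation API of this kind is typically called over HTTPS with the content (or a reference to it) and returns classification results the platform can act on. The endpoint, parameters and response handling below are placeholders for illustration, not Hive’s actual API.

```python
import requests

# Hypothetical endpoint and credentials; consult the provider's documentation
# for the real API contract and authentication scheme.
API_URL = "https://api.example-moderation-provider.invalid/v1/moderate"
API_KEY = "YOUR_API_KEY"

def moderate_image(image_url: str) -> dict:
    """Submit an image for moderation and return the provider's verdict."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": image_url},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```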

Through vital collaboration with Hive and other tech industry organisations, the IWF ensures that efforts to disrupt child sexual abuse online are strengthened, enabling more images and videos to be blocked and removed while protecting children in online spaces.

Kevin Guo, Co-founder and CEO of Hive said: “With the emergence and accessibility of generative AI technology, it is easier than ever before for bad actors to create and distribute harmful online content, including CSAM.

“Hive’s AI models for content moderation provide a scalable way to manage moderation risks generally, and our partnership with the IWF allows us to further expand the role we are playing in specifically helping our customers protect the safety and wellbeing of children.”

Derek Ray-Hill, Interim CEO of the IWF, said: “Thwarting attempts by offenders to view, share and profit from the distribution of horrific images of child sexual abuse on the internet is not a task that can be conducted by the IWF alone.

“Strong partnerships with a clear focus on tackling child sexual abuse material online are integral to success, which is why we are grateful to organisations like Hive who are determined to make a difference. Together we can work towards a better, safer internet for all.”

Find out more about becoming a Member and the services the IWF can provide here.
