This episode explores what needs to be done to try to control the explosion in harmful AI-generated child sexual abuse imagery.
‘Protected by Mediocean’, a leading solution for holistic ad verification, has joined the Internet Watch Foundation to strengthen safeguards in the digital media supply chain and help protect children online.
A chilling excerpt from a new IWF report detailing what analysts at the child protection charity are currently seeing in synthetic, AI-generated imagery of child sexual abuse.
A unique safety tech tool which uses machine learning in real-time to detect child sexual abuse images and videos is to be developed by a collaboration of EU and UK experts.
AI-Generated Child Sexual Abuse Imagery Threatens to “Overwhelm” Internet
AI-generated child sexual abuse is on the agenda at the White House as Internet Watch Foundation CEO Susie Hargreaves flies to Washington to discuss how to address the rising threat.
New data released by the Internet Watch Foundation (IWF) shows almost 20,000 webpages of child sexual abuse imagery in the first half of 2022 included ‘self-generated’ content of 7- to 10-year-old children.
An IWF research study on Category A child sexual abuse images and videos which fit the ‘self-generated’ definition.
The capacity for horrific images of AI-generated child sexual abuse to be reproduced at scale was underlined by IWF in the lead-up to the UK government’s AI Safety Summit.