A list of where to report some of the other types of harmful content you may see online.
IWF confirms it has begun to see AI-generated imagery of child sexual abuse being shared online, with some examples being so realistic they would be indistinguishable from real imagery.
This episode explores what needs to be done to try to control the explosion in harmful AI-generated child sexual abuse imagery.
The Internet Watch Foundation (IWF) and more than 65 child rights organisations are urgently calling on EU leaders to get vital child sexual abuse legislation ‘back on track’ to make the internet a safer place for children, following a European Parliament vote that dramatically limits the scope of the regulation.
A chilling excerpt from a new IWF report that delves into what analysts at the child protection charity currently see regarding synthetic or AI-generated imagery of child sexual abuse.
AI-Generated Child Sexual Abuse Imagery Threatens to “Overwhelm” Internet
A unique safety tech tool, which uses machine learning to detect child sexual abuse images and videos in real time, is to be developed by a collaboration of EU and UK experts.
Elliptic, a global leader in digital asset decisioning, has partnered with the Internet Watch Foundation (IWF) to strengthen efforts in preventing the financing of child sexual abuse images and videos through cryptocurrencies and blockchain infrastructure.
The capacity for horrific images of AI-generated child sexual abuse to be reproduced at scale was underlined by IWF in the lead-up to the UK government’s AI Safety Summit.
Wednesday’s hearing brings into sharp focus the problems that organisations like ours, the Internet Watch Foundation, are dealing with every day.