A new report from the IWF shows how the pace of AI development has not slowed, with offenders using better, faster and more accessible tools to generate new criminal images and videos.
The IWF confirms it has begun to see AI-generated child sexual abuse imagery being shared online, with some examples so realistic they are indistinguishable from real imagery.
This episode explores what needs to be done to try to control the explosion in harmful AI-generated child sexual abuse imagery.
‘Protected by Mediaocean’, a leading solution for holistic ad verification, has joined the Internet Watch Foundation to strengthen safeguards in the digital media supply chain and help protect children online.
The Internet Watch Foundation (IWF) and more than 65 child rights organisations are urgently calling on EU leaders to get vital child sexual abuse legislation ‘back on track’ towards making the internet a safer place for children, following a European Parliament vote that dramatically limits the scope of the regulation.
A chilling excerpt from a new IWF report that delves into what analysts at the child protection charity are currently seeing of synthetic, or AI-generated, child sexual abuse imagery.
A unique safety tech tool, which uses machine learning in real time to detect child sexual abuse images and videos, is to be developed by a collaboration of EU and UK experts.
AI-Generated Child Sexual Abuse Imagery Threatens to “Overwhelm” Internet
IWF analysts uncover platform hosting chatbot “characters” designed to let users simulate sexual scenarios with child avatars.
AI imagery getting more ‘extreme’ as IWF welcomes new rules allowing thorough testing of AI tools