Hannah Swirsky, Head of Policy and Public Affairs at IWF, sets out why AI is an issue for anyone whose images appear online.
Protect your generative AI model from the devastating harm caused by online child sexual abuse through corporate membership with the Internet Watch Foundation.
A new report from the IWF shows how the pace of AI development has not slowed as offenders are using better, faster and more accessible tools to generate new criminal images and videos.
IWF research into how artificial intelligence (AI) is increasingly being used to create child sexual abuse imagery online.
IWF confirms it has begun to see AI-generated imagery of child sexual abuse being shared online, with some examples so realistic they are indistinguishable from real imagery.
This episode explores what needs to be done to try to control the explosion in harmful AI-generated child sexual abuse imagery.
‘Protected by Mediocean’, a leading solution for holistic ad verification, has joined the Internet Watch Foundation to strengthen safeguards in the digital media supply chain and help protect children online.
A chilling excerpt from a new IWF report that delves into what analysts at the child protection charity currently see regarding synthetic or AI-generated imagery of child sexual abuse.
AI-Generated Child Sexual Abuse Imagery Threatens to “Overwhelm” Internet
AI-generated child sexual abuse is on the agenda at the White House as Internet Watch Foundation CEO Susie Hargreaves flies to Washington to discuss how to address the rising threat.
A unique safety tech tool which uses machine learning in real time to detect child sexual abuse images and videos is to be developed by a collaboration of EU and UK experts.