Hannah Swirsky, Head of Policy and Public Affairs at IWF, sets out why AI is an issue for anyone whose images appear online.
Natterhub's Caroline Allams offers practical tips to help protect your children online.
IWF joins ECLAG coalition colleagues outside the EU Parliament in Brussels to highlight the importance of passing the Child Sexual Abuse Regulation.
Protect your generative AI model from the devastating harm caused by online child sexual abuse through corporate membership with the Internet Watch Foundation.
IWF research into how artificial intelligence (AI) is increasingly being used to create child sexual abuse imagery online.
A new report from the IWF shows how the pace of AI development has not slowed as offenders are using better, faster and more accessible tools to generate new criminal images and videos.
New data reveals AI-generated child sexual abuse imagery continues to spread online as criminals create more realistic, and more extreme, images.
The IWF is calling for greater clarity on online harms as MPs warn new online safety legislation needs to be made more robust to help keep children safe online.
IWF confirms it has begun to see AI-generated child sexual abuse imagery being shared online, with some examples so realistic they are indistinguishable from real imagery.
This episode explores what needs to be done to try to control the explosion in harmful AI-generated child sexual abuse imagery.
‘Protected by Mediocean’, a leading solution for holistic ad verification, has joined the Internet Watch Foundation to strengthen safeguards in the digital media supply chain and help protect children online.
The IWF welcomed the new Bill, but said there needs to be greater clarity in how the Bill will be implemented.