
Global leaders and AI developers can act now to prioritise child safety
By Hannah Swirsky, Head of Policy and Public Affairs at IWF
Tighter rules come as IWF warns AI-generated child sexual abuse imagery reports have quadrupled in a year.
The Home Office said fake images are being used to blackmail children and force them to livestream further abuse.
Britain will make it illegal to use artificial intelligence tools that create child sexual abuse images.
This partnership will bolster Hive’s capability to help its customers detect and mitigate CSAM on their platforms through a single, integrated API.
Fears ‘blatant get-out clause’ in safety rules may undermine efforts to crack down on criminal imagery.
Even the smallest platforms can help prevent child abuse imagery online.
Internet Watch Foundation Interim CEO Derek Ray-Hill writes on why we are working with Telegram to tackle child sexual abuse material online.
New online safety guidelines need to be more ambitious if the “hopes of a safer internet” are to be realised, the IWF warns.
Local MP Ian Sollom learned about the Herculean task faced by analysts at the Internet Watch Foundation (IWF), who find, assess and remove child sexual abuse material on the internet.