In Conversation With Thorn’s Head of Data Science Rebecca Portnoff and IWF Chief Technology Officer Dan Sexton.
Incredibly realistic child sexual abuse imagery can be generated easily, swiftly and in vast amounts by offenders using Artificial Intelligence (AI) technology. Recently, Internet Watch Foundation (IWF) Analysts found more than 10,000 AI-generated images shared on just one dark web child abuse forum. Almost 2,600 of these images depicted child sexual abuse that was indistinguishable from real abuse imagery. You can read our latest report on how AI is being abused to create child sexual abuse imagery here.
This episode explores what needs to be done to try to control this explosion of harmful imagery online, and how other AI or machine-learning tools could be used to counter the phenomenon. From responsible tech development and deployment to the content the AI is trained on, all this and more is discussed by Thorn’s Rebecca Portnoff, who has dedicated her career to building technology and driving initiatives to defend children from sexual abuse, and Dan Sexton, who leads the development of world-leading technology to support the work of the IWF Hotline.
Listen to the full episode here.