Our CTO: The challenge of AI

Dan Sexton, Chief Technology Officer


An important part of my role as Chief Technology Officer at the IWF is to understand both the opportunities that emerging technologies offer and the threats they pose to our mission to rid the internet of child sexual abuse material.

Arguably, no technology could have as big an effect on our work as AI. While there are positive opportunities, such as using classifiers – machine learning algorithms – to detect previously unknown abuse content, the threat of AI-generated child sexual abuse images is already here and is causing real harm.

This year, our analysts saw first-hand how perpetrators misuse AI tools to make realistic images of child sexual abuse. This is not a problem for tomorrow but one that is happening right now, and the speed at which the technology is developing is staggering.  

This is why our technology team works constantly to improve and adapt the platforms that support the IWF's core work to find, assess and remove child sexual abuse imagery from the internet, including AI-generated imagery that is so realistic it must be categorised as criminal content.

In 2023, we continued to build on the success of our content assessment platform, IntelliGrade, enhancing the efficiency and quality of our image assessments. We began recording which images contained more than one child, and next year our content assessors will for the first time be able to record information about every child victim visible in illegal images and videos. We've also begun an ambitious programme to redesign the IWF's reporting and services platforms, updating the technology and improving our vital content removal and data services.

While the threat posed by the misuse of generative AI is worrying, there have been positive discussions across government and the tech industry on regulation and safety by design, steps which would help prevent the creation of this unlawful imagery in the first place.

Another reason we are optimistic is the landmark passage of the Online Safety Act into UK law. The Act will, we hope, ensure that proven technical interventions to find and block known images and videos of child sexual abuse online are used wherever there is a high risk of child sexual abuse material being distributed.

We can now set our sights on a future where the millions of images and videos that our analysts and assessors have seen and hashed over the years, including AI-generated imagery, will be effectively and accurately blocked from being uploaded to or shared on any regulated platform.
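
To illustrate what hash-based blocking involves, here is a minimal sketch in Python. It is a simplification for illustration only: the KNOWN_HASHES set and should_block function are hypothetical names, and it uses an exact cryptographic hash, whereas production systems built on services such as the IWF Hash List typically rely on perceptual hashing (for example PhotoDNA), which also matches re-encoded or slightly altered copies of an image.

```python
import hashlib

# Hypothetical blocklist of known-image hashes, for illustration.
# A real deployment would load these from a hash-list provider and
# would normally use perceptual hashes rather than SHA-256, since a
# cryptographic hash only matches byte-identical files.
KNOWN_HASHES: set[str] = {
    # entries supplied by a hash-list provider would go here
}


def should_block(file_bytes: bytes) -> bool:
    """Return True if the uploaded file matches a known hash."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES


# Example: check an upload before accepting it.
if __name__ == "__main__":
    sample = b"example upload contents"
    if should_block(sample):
        print("Upload blocked: matches a known hash.")
    else:
        print("No match found.")
```

The design point is simple: once an image has been assessed and hashed, every subsequent upload of that image can be checked against the hash list in constant time, without the platform ever needing to re-examine the content itself.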