Chris Farrimond, Director of Threat Leadership at the National Crime Agency, said: “AI systems, as they become more widely used, will potentially make it easier for abusers to commit a range of child sexual abuse offences.
“As the IWF study has identified, we are currently seeing AI-generated content feature in a handful of cases, but the risk from this is increasing and we are taking it extremely seriously.
“The creation or possession of pseudo-images – ones created using AI or other technology – is an offence in the UK. As with other such child sexual abuse material viewed and shared online, pseudo-images also play a role in the normalisation and escalation of abuse among offenders.
“Tackling child sexual abuse in all its forms is a priority for the NCA and alongside our policing partners, we are currently arresting around 800 people and safeguarding around 1,200 children every month. We will investigate individuals who create, share, possess, access or view a pseudo-image in the same way as if the image were of a real child.
“There is a very real possibility that if the volume of AI-generated material increases, this could greatly impact on law enforcement resources, increasing the time it takes for us to identify real children in need of protection.
“Alongside partners, we are working with industry on two key outcomes: mitigating future offending by developing AI models which cannot be used to generate CSA material, and ensuring our ability to identify illegal activity keeps up with technology.”
As well as finding and removing instances of AI-generated child sexual abuse material, IWF analysts have discovered an online “manual” written by offenders with the aim of helping other criminals train the AI and refine their prompts to return ever more realistic results.
Mr Sexton said IWF analysts have seen evidence that online offenders are circumventing safety measures put in place on AI image generators to have them produce increasingly realistic sexual imagery of children.
Ms Hargreaves said the threat posed by these images is real, and called on AI companies to help stop the abuse of their platforms.
She said: “We see online communities of offenders discussing ways they can get around the safety controls set up to prevent the abuse of these tools.
“We know criminals can and do exploit any new technology, and we are at a crossroads with AI. The continued abuse of this technology could have profoundly dark consequences – and could see more and more people exposed to this harmful content.
“Depictions of child sexual abuse, even artificial ones, normalise sexual violence against children. We know there is a link between viewing child sexual abuse imagery and going on to commit contact offences against children.
“My worry is that if this material becomes more widely and easily available, and can be produced at will – at the click of a button – we are heading for a very dangerous place in the future.”
Information on how companies can work with the IWF to help safeguard against the spread of child sexual abuse imagery can be found on the IWF website.