Professionals working with children given ‘vital guidance’ to tackle threat of AI-generated child sexual abuse material

Published: Fri 27 Jun 2025

New aid created by the NCA and IWF raises awareness of the risks to children caused by the ‘weaponised’ technology.

A new guide warning about the ‘disturbing’ rise in the abuse of AI to create nude and sexual imagery of children has been issued to professionals working with children and young people to help them address the new threat.

The guide will equip education practitioners and all those who support children and young people in the UK with the knowledge they need to appropriately respond to incidents involving AI-generated child sexual abuse material.

Issued this week, the new guide makes it clear AI child sexual abuse imagery “should be treated with the same level of care, urgency and safeguarding response as any other incident involving child sexual abuse material” and aims to dispel any misconception that AI imagery causes less harm than real photographs or videos.


The “vital guidance” has been issued by the National Crime Agency (NCA) and the Internet Watch Foundation (IWF). It has been developed in response to the risks posed by the misuse of swiftly evolving AI image-generation tools, and to the need to provide professionals with essential clarity and information on the issue.

In a supporting statement, Safeguarding Minister Jess Phillips MP stressed the importance of a whole-society response to the ‘disturbing’ trend, and said that technology cannot be allowed to be ‘weaponised against children’.

Jess Phillips MP, Minister for Safeguarding and Violence Against Women and Girls

Minister for Safeguarding and Violence against Women and Girls, Jess Phillips said: “The rise in AI-generated child sexual abuse imagery is highly disturbing and it is vital that every arm of society keeps up with the latest online threats. I’m pleased the Internet Watch Foundation and National Crime Agency are making sure teachers have the information they need to better protect young people from this horrific harm.

“AI-generated child sexual abuse is illegal and we know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person. We will not allow technology to be weaponised against children and we will not hesitate to go further to protect our children online.”

AI-generated child sexual abuse content is a rapidly growing threat. The IWF, the UK hotline dedicated to finding and removing child sexual abuse material from the internet, processed 245 reports in 2024 that contained actionable AI-generated images of child sexual abuse – a 380% increase on 2023, when just 51 reports contained AI imagery. Furthermore, these 245 reports equated to over 7,500 images and a small number of videos, reflecting the volume of illegal imagery that can be found on webpages.

Some of the reports the IWF receives come directly from children, who refer to ‘fake’ or ‘AI-edited’ images of themselves; these cases are often closely linked with sexual extortion. This is a type of blackmail in which someone uses, or threatens to use, intimate, naked or sexual images or videos of a child to extort them, often for either more images or for money.

Derek Ray-Hill, IWF Interim CEO

Derek Ray-Hill, Interim CEO at the IWF said: “This is a new and evolving threat, and the devastating harms it can inflict are very real indeed. Teachers and professionals are on the front line, and this vital new guidance will give them the tools they need to respond confidently and decisively to protect children and young people.

“The creation and distribution of AI-manipulated and fake sexual imagery of a child can have a devastating impact on the victim. It can be used to blackmail and extort young people. There can be no doubt that real harm is inflicted and the capacity to create this type of imagery quickly and easily, even via an app on a phone, is a real cause for concern.

“Reports of AI-generated child sexual abuse material must be taken seriously and the safety, care and support of victims should be the central pillar of any response. The IWF Hotline acts swiftly to assess and remove AI-generated child sexual abuse imagery from the internet, and I would urge any young people who have been affected to use the Report Remove service which will put the power back in their hands and allow us to get this material taken down if it does get spread on the open web.”

Under UK law, child sexual abuse material is always illegal, regardless of how it is created, and the taking, distribution and possession of an “indecent photograph or pseudo-photograph of a child” is a criminal offence. The guide stresses that AI imagery must be treated in the same way as any other safeguarding issue involving child sexual abuse material, noting that “there have been cases where young people have used AI to create nude images of their peers” and that such incidents should be treated accordingly.

In a survey conducted by the NCA in 2024, 26% of education practitioners who responded said they were not aware that AI child sexual abuse material (AI-CSAM) is illegal, and most were unsure whether the young people in their setting were aware of its illegality either. The survey aimed to capture professionals’ concerns about AI-enabled child sexual abuse and the extent of the harm they had seen; most respondents (53%) said that guidance for professionals was the most urgently needed resource.

Alex Murray, National Crime Agency Director of Threat Leadership and policing lead for artificial intelligence

Alex Murray, National Crime Agency Director of Threat Leadership and policing lead for artificial intelligence, said: “AI-generated child sexual abuse material is a threat, with research from the Internet Watch Foundation showing an increase in reporting.

“Generative AI image creation tools will increase the volume of child sexual abuse material available online, creating difficulties with identifying and safeguarding victims due to the vast improvements in how real photos appear.

“Our survey showed that more than a quarter of respondents were not aware that AI-generated CSAM is illegal. A majority of professionals felt that guidance was needed to help them deal with this threat, which is why we’ve worked closely with the IWF to produce this resource. It will help professionals better understand AI, how it’s used to create CSAM and ways of responding to an incident involving children or young people.

“Protecting every single child from harm should matter to everyone. This is why we continue to work closely with partners to tackle this threat and are investing in technology to assist us with CSA investigations to safeguard children.

“Tackling child sexual abuse is a priority for the NCA and our policing partners, and we will continue to investigate and prosecute individuals who produce, possess, share or search for CSAM, including AI-generated CSAM.”

The document, Child sexual abuse imagery generated by artificial intelligence: An essential guide for professionals who work with children and young people, was issued to a network of 38,000 professionals and partners working with children in the UK to raise awareness of the issue and provide information and guidance.

The guide also provides professionals with a step-by-step response for incidents involving AI-generated child sexual imagery, including how best to handle any illegal material and how to ensure victims are given the support they need.

Victoria Green, Marie Collins Foundation CEO

Victoria Green, Marie Collins Foundation CEO, said: “This is a vital step forward in recognising that the harm caused by AI-generated child sexual abuse imagery is real and far-reaching.

“The idea that AI-CSAM is somehow ‘victimless’ is both false and deeply damaging. Such content contributes to a harmful ecosystem of abuse and should be met with the same safeguarding standards, urgency and accountability as any other form of CSAM. This guidance is a timely and necessary resource.”

As well as being distributed across the UK, the guide is also hosted on the IWF and the NCA’s CEOP websites. Tailored versions of the guidance have been created for England, Scotland, Wales and Northern Ireland.

The NCA and IWF are united in their commitment to protecting children online and will continue to work together to identify new opportunities for collaboration on this issue.

Children and young people can use the Report Remove tool from the IWF and Childline to report AI-generated child sexual abuse material that has been shared or might be shared online.
