Alex Murray, National Crime Agency Director of Threat Leadership and policing lead for artificial intelligence, said: “AI-generated child sexual abuse material is a threat, with research from the Internet Watch Foundation showing an increase in reporting.
“Generative AI image creation tools will increase the volume of child sexual abuse material available online, creating difficulties in identifying and safeguarding victims due to vast improvements in how realistic the images appear.
“Our survey showed that more than a quarter of respondents were not aware that AI-generated CSAM is illegal. A majority of professionals felt that guidance was needed to help them deal with this threat, which is why we’ve worked closely with the IWF to produce this resource. It will help professionals better understand AI, how it’s used to create CSAM and ways of responding to an incident involving children or young people.
“Protecting every single child from harm should matter to everyone. This is why we continue to work closely with partners to tackle this threat and are investing in technology to assist us with CSA investigations to safeguard children.
“Tackling child sexual abuse is a priority for the NCA and our policing partners, and we will continue to investigate and prosecute individuals who produce, possess, share or search for CSAM, including AI-generated CSAM.”
The document, Child sexual abuse imagery generated by artificial intelligence: An essential guide for professionals who work with children and young people, was issued to a network of 38,000 professionals and partners working with children in the UK to raise awareness of the issue and provide information and guidance.
The guide also provides professionals with a step-by-step response for dealing with incidents relating to AI-generated child sexual abuse imagery, such as how best to handle any illegal material and how to ensure that victims receive the appropriate support.