AI must be a force for good and not a threat to children

Published: Mon 30 Oct 2023

IWF and Home Office co-host AI summit fringe event on tackling looming danger of AI-generated child sexual abuse imagery.

The capacity for horrific and realistic AI-generated images of child sexual abuse to be reproduced at scale was underlined by the Internet Watch Foundation (IWF) in London today, in the lead-up to the UK government’s AI Safety Summit this week.

Tech giants, charities, academics and international government representatives joined the Home Office and the IWF at the event which highlighted the threat looming ahead for policymakers, law enforcement and child protection organisations.

Last week, the IWF released evidence that thousands of AI-generated child sexual abuse images could be found on the dark web, most of which are realistic enough to be treated as real imagery under UK law.


The increased availability of this imagery not only poses a real risk to the public by normalising sexual violence against children, but some of the imagery is based on children who have appeared in ‘real’ child sexual abuse material in the past. This means survivors of traumatic abuse are being repeatedly victimised.

The surge in AI-generated images could also hinder law enforcement agencies in tracking down and identifying victims of child sexual abuse, and in detecting offenders and bringing them to justice.

Twenty-seven organisations, including the IWF, TikTok, Snapchat, Stability AI and the governments of the US and Australia, have now signed a pledge to tackle the threat of AI-generated child abuse imagery.

Signatories to the joint statement have pledged to sustain “technical innovation around tackling child sexual abuse in the age of AI”. The statement affirms that AI must be developed in “a way that is for the common good of protecting children from sexual abuse across all nations”.

Susie Hargreaves OBE, IWF CEO at the AI Safety Summit

IWF CEO Susie Hargreaves OBE said at the event: “About six months ago, we started seeing child sexual abuse imagery that was generated by text-to-image AI. We decided to do a mini-study to see how bad this actually was.

“We are seeing this content now – we are seeing a lot of it. It is not a victimless crime. We are seeing content trained on real children. We are seeing famous people. We are seeing children who have never had their images shared before who are having images generated of them.”

She added: “It is not something we can just dismiss. This is real child sexual abuse and we need to be very alert to the fact that this is a really big problem.”

Suella Braverman, Home Secretary

In her speech Home Secretary Suella Braverman said: “AI presents a huge risk, but also an opportunity to tackle child sexual abuse.

“Now is our opportunity, which we simply must seize, to ensure that these risks do not materialise. Only through collective joint action that harnesses our combined expertise and knowledge can we ensure that appropriate safety measures are put in place.”

Braverman continued: “In the UK, the Internet Watch Foundation is a critical partner in our efforts to eradicate child sexual abuse online. They have begun to see AI generated child sexual abuse imagery. I am very grateful to the Internet Watch Foundation for their tireless efforts to ensure that the images and videos of children being abused are removed from the internet.

“International action and cooperation is so vital. As a global leader in tackling child sexual abuse, the UK is uniquely placed to bring the world together to ensure that AI is built safely and securely so the huge benefits can be enjoyed by all.”

Braverman added: “This is just the start of the conversation, and the UK government wants to continue working collaboratively over the next few weeks and months on these issues. And I hope that we can speak as one voice with the joint statement that we have prepared that will send an unequivocal message that AI must be a force for good, and not a threat to children.”

Statistics released by the IWF last week showed that in a single month, its analysts investigated more than 11,000 AI images which had been shared on a dark web child abuse forum. Almost 3,000 of these images were confirmed to breach UK law, meaning they depicted child sexual abuse.

Some of the images are based on celebrities, whom AI has ‘de-aged’ and who are then depicted being abused. There are even images based on innocuous pictures of children posted online, which AI has been able to ‘nudify’, visually removing the clothing.
