AI must be a force for good and not a threat to children

Published: Mon 30 Oct 2023

IWF and Home Office co-host AI summit fringe event on tackling looming danger of AI-generated child sexual abuse imagery.

The capacity for horrific and realistic AI-generated images of child sexual abuse to be reproduced at scale was underlined by the Internet Watch Foundation (IWF) in London today, in the lead-up to the UK government’s AI Safety Summit this week.

Tech giants, charities, academics and international government representatives joined the Home Office and the IWF at the event which highlighted the threat looming ahead for policymakers, law enforcement and child protection organisations.

Last week, the IWF released evidence that thousands of AI-generated child sexual abuse images could be found on the dark web, most of which are realistic enough to be treated as real imagery under UK law.


The increased availability of this imagery not only poses a real risk to the public by normalising sexual violence against children, but some of the imagery is based on children who have appeared in ‘real’ child sexual abuse material in the past. This means survivors of traumatic abuse are being repeatedly victimised.

The surge in AI-generated images could hinder law enforcement agencies in tracking down and identifying victims of child sexual abuse, and in detecting offenders and bringing them to justice.

Twenty-seven organisations, including the IWF, TikTok, Snapchat, Stability AI and the governments of the US and Australia, have now signed a pledge to tackle the threat of AI-generated child abuse imagery.

Signatories to the joint statement have pledged to sustain “technical innovation around tackling child sexual abuse in the age of AI”. The statement affirms that AI must be developed in “a way that is for the common good of protecting children from sexual abuse across all nations”.

Susie Hargreaves OBE, IWF CEO, at the AI Safety Summit

IWF CEO Susie Hargreaves OBE said at the event: “About six months ago, we started seeing child sexual abuse images that were generated by text-to-image AI. We decided to do a mini-study to see how bad this actually was.

“We are seeing this content now – we are seeing a lot of it. It is not a victimless crime. We are seeing content trained on real children. We are seeing famous people. We are seeing children who have never had their images shared before who are having images generated of them.”

She added: “It is not something we can just dismiss. This is real child sexual abuse and we need to be very alert to the fact that this is a really big problem.”

Suella Braverman, Home Secretary

In her speech Home Secretary Suella Braverman said: “AI presents a huge risk, but also an opportunity to tackle child sexual abuse.

“Now is our opportunity, which we simply must seize, to ensure that these risks do not materialise. Only through collective joint action that harnesses our combined expertise and knowledge can we ensure that appropriate safety measures are put in place.”

Braverman continued: “In the UK, the Internet Watch Foundation is a critical partner in our efforts to eradicate child sexual abuse online. They have begun to see AI generated child sexual abuse imagery. I am very grateful to the Internet Watch Foundation for their tireless efforts to ensure that the images and videos of children being abused are removed from the internet.

“International action and cooperation is so vital. As a global leader in tackling child sexual abuse, the UK is uniquely placed to bring the world together to ensure that AI is built safely and securely so the huge benefits can be enjoyed by all.”

Braverman added: “This is just the start of the conversation, and the UK government wants to continue working collaboratively over the next few weeks and months on these issues. And I hope that we can speak as one voice with the joint statement that we have prepared that will send an unequivocal message that AI must be a force for good, and not a threat to children.”

Statistics released by the IWF last week showed that, in a single month, IWF analysts investigated more than 11,000 AI images which had been shared on a dark web child abuse forum. Almost 3,000 of these images were confirmed to breach UK law – meaning they depicted child sexual abuse.

Some of the images are based on celebrities whom AI has ‘de-aged’ and who are then depicted being abused. There are even images based on innocuous pictures of children posted online, which AI has been able to ‘nudify’ by visually removing the clothing.
