White House roundtable is 'important moment' in recognising threat of AI child sexual abuse imagery

Published: Mon 13 Nov 2023

AI-generated child sexual abuse is on the agenda at the White House as Internet Watch Foundation CEO Susie Hargreaves flies to Washington to discuss how to address the rising threat.

Today (November 13), Ms Hargreaves attended the White House Roundtable on Preventing AI-Generated Image-Based Sexual Abuse.

The White House convened the event as a follow-up to the UK’s Global AI Safety Summit and the release of the Biden-Harris Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI.

The roundtable brought together experts from the US and UK, global civil society advocates, survivors, and researchers to discuss policy and technology-based recommendations for preventing and addressing AI-generated image-based sexual abuse.

The event was chaired jointly by Rachel Vogelstein, Deputy Director and Special Assistant to the President at the White House Gender Policy Council and Special Advisor on Gender at the White House National Security Council, and Michelle Donelan, the UK’s Secretary of State for Science, Innovation and Technology.

Ms Hargreaves said: “AI-generated child sexual abuse imagery is a very real threat we are facing right now. Putting this incredibly powerful technology in the hands of sexual predators and people wanting to create harmful material has terrifying potential to flood the internet with a tsunami of abuse imagery.

“This will normalise the sexual abuse of children, and undermine our efforts to make the internet a safer place, and to identify and protect real victims.

“I am pleased this threat is being taken seriously – and being invited to talk about the dangers at the White House is an important moment. We need to see world Governments working in cooperation to get a grip on this threat now, before it really is too late.”

IWF CEO Susie Hargreaves, third from right, joined a select group of experts for a roundtable at the White House, including (l-r) Dr Elissa Redmiles, Georgetown University; David Wright, South West Grid for Learning; Rachel Vogelstein, Special Assistant to the President and Deputy Director, White House Gender Policy Council; Michelle Donelan MP, Secretary of State for the UK’s DSIT; NCMEC CEO Michelle DeLaune; & Dr Rebecca Portnoff, Head of Data Science at Thorn.

Last month, the IWF published a major study into the abuse of AI image generators, which criminals are using to produce life-like child sexual abuse imagery.

The study focused on a single dark web forum dedicated to child sexual abuse imagery.

In a single month (September 1 to September 30, 2023):

  • The IWF investigated 11,108 AI images which had been shared on a dark web child abuse forum.
  • Of these, 2,978 were confirmed as images which breach UK law – meaning they depicted child sexual abuse.
  • Of these images, 2,562 were so realistic, the law would need to treat them the same as if they had been real abuse images.
  • More than one in five of these images (564) were classified as Category A, the most serious kind of imagery which can depict rape, sexual torture, and bestiality.
  • More than half (1,372) of these images depicted primary school-aged children (seven to 10 years old).
  • As well as this, 143 images depicted children aged three to six, while two images depicted babies (under two years old).

The UK has been quick to spot the dangers of AI-generated child sexual abuse imagery. In October, the IWF and the Home Office held an event in the lead-up to the UK government’s AI Safety Summit.

The event saw 27 organisations, including the IWF, TikTok, Snapchat, Stability AI, and the governments of the US and Australia, sign a pledge to tackle the threat of AI-generated child abuse imagery.

Signatories to the joint statement pledged to sustain “technical innovation around tackling child sexual abuse in the age of AI”.

The statement affirms that AI must be developed in “a way that is for the common good of protecting children from sexual abuse across all nations”.
