AI imagery getting more ‘extreme’ as IWF welcomes new rules allowing thorough testing of AI tools
The IWF welcomes new measures to help make sure digital tools are safe as new data shows AI child sexual abuse is still spreading.
Published: Wed 10 Dec 2014
On Wednesday 10 and Thursday 11 December 2014, representatives from over 50 countries meet in London for the We Protect global summit. The Internet Watch Foundation will be in attendance for both days.
Susie Hargreaves, Chief Executive, said: “The IWF is proud to be an active participant in the Online Child Sexual Exploitation Global Summit. As one of the world’s leading hotlines, funded by the internet industry to remove online child sexual abuse imagery and videos, we are committed to working with partners across the world to achieve our mission of eliminating online child sexual abuse.
“The IWF is acutely aware that regardless of how successful we are at removing content hosted in the UK, this is a global problem which requires every country to stand up and play an active role. By working together across the world we will move one step closer to eradicating this heinous crime.
“This is important because every single image or video is of a real child being sexually abused and every time someone views that image or video that child is re-victimised.
“We applaud the Prime Minister for taking the lead on this matter by bringing so many key stakeholders together from across the world to agree an international approach to the problem.”