The IWF has welcomed the Government’s proposed new measures, and met with Online Safety Minister Kanishka Narayan MP this week to discuss the harms inflicted on children and the realities the IWF hotline is seeing every day.
Kerry Smith, Chief Executive of the IWF, said: “We welcome the Government’s efforts to bring in new measures for testing AI models to check whether they can be abused to create child sexual abuse.
“For three decades, we have been at the forefront of preventing the spread of this imagery online – we look forward to using our expertise to help further the fight against this new threat.
“AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material. Material which further commodifies victims’ suffering, and makes children, particularly girls, less safe on and offline.
“Safety needs to be baked into new technology by design. Today’s announcement could be a vital step to make sure AI products are safe before they are released.”
The changes would mean safeguards within AI systems could be tested from the start, with the aim of limiting the production of child sexual abuse images in the first place.
The Government said the changes, due to be tabled as an amendment to the Crime and Policing Bill, mark a major step forward in safeguarding children in the digital age.
The Bill already contains measures which will outlaw AI models specifically designed to create AI child sexual abuse imagery, as well as guides or manuals which would help offenders create this material.