Internet Watch Foundation Senior Analyst Natalia said: “It is very apparent from the unsettling dark web conversations observed by the IWF Hotline that AI innovations are regarded with delight by users of child sexual abuse material.
“Every new development in generative AI is extolled for its ability to enhance the realism, to heighten the severity, or make more immersive, any conceivable sexual scenario with a child. This could be through adding audio to video, being able to depict multiple people interacting or even being able to successfully manipulate imagery of a real child known to an offender.
“Instead of being a vehicle for connection, the technology only deepens offenders’ capacity to view children and victims as abstract playthings, whose likenesses can be altered endlessly for their own enjoyment.
“We know this affects victims and survivors, as its creation and distribution is just as keenly felt as with traditional forms of child sexual abuse.”
One offender quoted in the report describes being surprised at “just how uncensored” the technology is, exclaiming that the ability to edit and fine-tune is “going to be nuts”.
Another praises an AI child sexual abuse video, calling it “an absolute masterpiece” and saying that “anything you desire is possible in extreme realism.”
Analysts have also observed discussions about generating AI imagery of children known to offenders, with one individual saying they are “impressed with the results of [AI] image to video conversions” and that they want to use hidden cameras to obtain footage of real children to convert into AI videos.
The IWF is calling on the UK government to tighten laws around AI and to make it mandatory for tech companies to evaluate and safeguard AI models before release, making it harder for criminals to misuse AI image generators to create child sexual abuse imagery.
This is echoed by new polling, which shows more than four in five UK adults (82%) say the government should introduce regulation to ensure AI systems are safe by design and futureproofed against causing harm.
A further 78% of survey respondents agreed that AI companies should be made to test for AI-related harms before products are released to market.