Jeff is a Senior Analyst at the IWF specialising in AI. He said: “In terms of video quality, child sexual abuse imagery creators are leaps and bounds ahead of where they were last year.
“The first AI child sexual abuse videos we saw were deepfakes – a known victim’s face put onto an actor in an existing adult pornographic video. It wasn’t sophisticated but could still be pretty convincing. The first fully synthetic child sexual abuse video we saw at the beginning of last year was just a series of jerky images put together, nothing convincing.
“But now they have really turned a corner. The quality is alarmingly high, and the categories of offence depicted are becoming more extreme as the tools improve in their ability to generate video showing two or more people. The videos also include sets showing known victims in new scenarios.
“Just as still images jumped to photorealistic as demand increased and the tools were improved, it was only a matter of time before videos went the same way.”
IWF analysts also warn that the sophistication and realism of these videos are still developing, and there are indications that criminals themselves cannot believe how easy it is to create child sexual abuse imagery using AI tools.
One perpetrator, writing in an online forum, said: “Technology moves so fast – just when I finally understand how to use a tool, something newer and better comes along.”
Minister for Safeguarding and Violence Against Women and Girls, Jess Phillips, said:
“These statistics are utterly horrific. Those who commit these crimes are just as disgusting as those who pose a threat to children in real life.
“AI-generated child sexual abuse material is a serious crime, which is why we have introduced two new laws to crack down on this vile material.
“Soon, perpetrators who own the tools that generate the material or manuals teaching them to manipulate legitimate AI tools will face longer jail sentences and we will continue to work with regulators to protect more children.”
Rani Govender, Policy Manager for Child Safety Online at the NSPCC, said:
“It is deeply worrying to see how rapid advances in AI are being exploited to create increasingly realistic and extreme child sexual abuse material, which is then being spread online. These new figures make it clear that this vile activity will only get worse without the right protections in place.
“Young people are reaching out to Childline in distress after seeing AI-generated sexual abuse content created in their likeness. The emotional impact on them can be devastating and long-lasting, leaving them embarrassed, anxious and deeply shaken.
"As generative AI continues to develop at pace, robust measures must be introduced to ensure children’s safety is not neglected. Government must implement a statutory duty of care to children for generative AI developers. This will play a vital role in preventing further harm and ensuring children’s wellbeing is considered in the design of AI products.”
The Lucy Faithfull Foundation, a UK charity that helps offenders, and people concerned they may have a sexual interest in children, to address and change their behaviour, says the number of people contacting it in relation to their use of AI has doubled over the last year.
Frances Frost, Director of Communications and Advocacy at the Lucy Faithfull Foundation, said:
“Through our anonymous Stop It Now helpline, we speak to thousands of people every year seeking our support to change their online sexual behaviour towards children. So far this year we’re seeing double the number of people contacting us concerned about their own use of AI images compared with last year. Crucially, these people are not viewing these AI images in isolation: 91% of the people who contact us to say they are viewing AI images say they have also viewed sexual images of children that weren’t created with AI.
“Illegal AI imagery causes real harm to real children, however it is created. It generates demand for child sexual abuse images and normalises sexual violence towards children. Children who have previously been victims of sexual abuse are revictimised. AI images also make it harder for authorities to identify real cases of children who are being abused.
“Tech companies have a critical responsibility to design platforms that protect children. We’re working directly with them to implement deterrence messaging: warning messages aimed at those seeking to access child sexual abuse material, including AI-generated material, prompting them to confront their behaviour.
“Confidential help to change behaviour is available for people viewing or creating AI-generated sexual imagery of children online. Anyone who needs support can contact our anonymous Stop It Now helpline on 0808 1000 900 and get the help they need to stop.”
Chi Onwurah MP, Chair of the Science, Innovation and Technology Committee, said:
“In the UK, we can be proud of our leadership in AI research and technology development, but we must not be complacent, particularly when it comes to preventing the misuse of emerging technologies.
“Vast sums of money are being ploughed into AI, and the technology is developing at an incredible rate. But criminals are already abusing it. Without a UK regulatory framework for AI, we risk losing the opportunity to shape this technology for good.
“This clear warning from the Internet Watch Foundation should set alarm bells ringing. Children, particularly girls, are having their imagery recirculated into AI-generated sexual abuse material, often depicting the most extreme forms of sexual violence. I know the safety of our children is a priority for this Government, with our commitment to halve violence against women and girls within a decade.
“We must now heed this warning and act to ensure safety-by-design is not an afterthought, but a foundational principle in the development of emerging technologies.”