Full feature-length AI films of child sexual abuse will be ‘inevitable’ as synthetic videos make ‘huge leaps’ in sophistication in a year

Published:  Fri 11 Jul 2025

Full “feature-length” AI films of child sexual abuse may now be “inevitable” unless urgent action is taken, experts warn, as rapidly improving technology means AI video is now “indistinguishable” from genuine imagery.

New data, published today (Friday, July 11) by the Internet Watch Foundation (IWF), shows confirmed reports of AI-generated child sexual abuse imagery have risen 400%, with AI child sexual abuse discovered on 210 webpages in the first six months of 2025 (January 1 – June 30).
In the same period in 2024, IWF analysts found AI child sexual abuse imagery on 42 webpages. Each page can contain multiple images or videos.

Disturbingly, the number of AI-generated videos has rocketed in this time, with 1,286 individual AI videos of child sexual abuse being discovered in the first half of this year compared to just two in the same period last year.

All the AI videos confirmed by the IWF so far this year have been so convincing they had to be treated under UK law exactly as if they were genuine footage.

Of the 1,286 AI videos confirmed this year, 1,006 were assessed as the most extreme (Category A) imagery – videos which can depict rape, sexual torture, and even bestiality.

Now, the charity is sounding the alarm that AI-generated videos of child sexual abuse have become so realistic they can be ‘indistinguishable’ from genuine footage of child sexual abuse.

The IWF, which is at the front line in finding and preventing child sexual abuse imagery online, says the picture quality of AI-generated videos of child sexual abuse has progressed “leaps and bounds” over the past year, and that criminals are now creating AI-generated child sexual abuse videos at scale – sometimes including the likenesses of real children.

Highly realistic videos of abuse are no longer confined to very short, glitch-filled clips, and the potential for criminals to create even longer, more detailed videos is becoming a reality.

Analysts also warn AI-generated child sexual abuse imagery is becoming more “extreme” as criminals become more adept at creating and depicting new scenarios.

This has prompted fears criminals may one day be able to create full, feature-length child sexual abuse films unless urgent action is taken now.

Derek Ray-Hill, IWF Interim CEO

Derek Ray-Hill, Interim Chief Executive of the IWF, said: “We must do all we can to prevent a flood of synthetic and partially synthetic content joining the already record quantities of child sexual abuse we are battling online. I am dismayed to see the technology continues to develop at pace, and that it continues to be abused in new and unsettling ways.

“Just as we saw with still images, AI videos of child sexual abuse have now reached the point they can be indistinguishable from genuine films. The children being depicted are often real and recognisable, the harm this material does is real, and the threat it poses threatens to escalate even further.”

Creating, possessing and distributing AI-generated child sexual abuse imagery is illegal in the UK. But the IWF says the Government must honour its manifesto commitment to ensure the safe development and use of AI models by introducing binding regulation, so that this developing technology is safe by design and cannot be abused to create this material.

Mr Ray-Hill added: “We must get a grip on this. At the current rate, with the way this technology is evolving, it is inevitable we are moving towards a time when criminals can create full, feature-length synthetic child sexual abuse films of real children. It’s currently just too easy to make this material.

“A UK regulatory framework for AI is urgently needed to prevent AI technology from being exploited to create child sexual abuse material.

“While new criminal offences via the Crime and Policing Bill are welcome, the window of opportunity to ensure all AI models are safe by design is swiftly closing.

“The Prime Minister only recently pledged that the Government will ensure tech can create a better future for children. Any delays only set back efforts to safeguard children and deliver on the Government’s pledge to halve violence against girls. Our analysts tell us nearly all this AI abuse imagery features girls. It is clear this is yet another way girls are being targeted and endangered online.”

The IWF has unique powers* to proactively hunt down child sexual abuse imagery on the internet. Because of this, its trained and experienced analysts are often the first to discover new ways criminals are abusing technology.

IWF Senior Internet Content Analyst

Jeff** is a Senior Analyst at the IWF specialising in AI. He said: “In terms of video quality, child sexual abuse imagery creators are leaps and bounds ahead of where they were last year.

“The first AI child sexual abuse videos we saw were deepfakes – a known victim’s face put onto an actor in an existing adult pornographic video. It wasn’t sophisticated but could still be pretty convincing. The first fully synthetic child sexual abuse video we saw at the beginning of last year was just a series of jerky images put together, nothing convincing.

“But now they have really turned a corner. The quality is alarmingly high, and the categories of offence depicted are becoming more extreme as the tools improve in their ability to generate video showing two or more people. The videos also include sets showing known victims in new scenarios.

“Just as still images jumped to photorealistic as demand increased and the tools were improved, it was only a matter of time before videos went the same way.”

IWF analysts also warn that the sophistication and realism of these videos is still developing, and there are indications criminals themselves cannot believe how easy it is to create child sexual abuse imagery using AI tools.

One perpetrator, writing in an online forum, said: “Technology moves so fast – just when I finally understand how to use a tool, something newer and better comes along.”


Jess Phillips, Minister for Safeguarding and Violence Against Women and Girls, said:

“These statistics are utterly horrific. Those who commit these crimes are just as disgusting as those who pose a threat to children in real life.

“AI-generated child sexual abuse material is a serious crime, which is why we have introduced two new laws to crack down on this vile material.

“Soon, perpetrators who own the tools that generate the material or manuals teaching them to manipulate legitimate AI tools will face longer jail sentences and we will continue to work with regulators to protect more children.”


Rani Govender, Policy Manager for Child Safety Online at the NSPCC, said:

“It is deeply worrying to see how rapid advances in AI are being exploited to create increasingly realistic and extreme child sexual abuse material, which is then being spread online. These new figures make it clear that this vile activity will only get worse without the right protections in place.

“Young people are reaching out to Childline in distress after seeing AI-generated sexual abuse content created in their likeness. The emotional impact on them can be devastating and long lasting, leaving them embarrassed, anxious and deeply shaken.

“As generative AI continues to develop at pace, robust measures must be introduced to ensure children’s safety is not neglected. Government must implement a statutory duty of care to children for generative AI developers. This will play a vital role in preventing further harm and ensuring children’s wellbeing is considered in the design of AI products.”

The Lucy Faithfull Foundation, a UK charity which works to help offenders, or people concerned they may have a sexual interest in children, address and change their behaviour, says the number of people contacting them in relation to their use of AI has doubled over the last year.


Frances Frost, Director of Communications and Advocacy at the Lucy Faithfull Foundation said:

“Through our anonymous Stop It Now helpline, we speak to thousands of people every year seeking our support to change their online sexual behaviour towards children. So far this year, we are seeing double the number of people contacting us concerned about their own use of AI images compared to last year. Crucially, these people are not viewing these AI images in isolation: 91% of the people who contact us to say they are viewing AI images say that they have also viewed sexual images of children that weren’t created with AI.

“Illegal AI imagery causes real harm to real children however it is created. It generates demand for child sexual abuse images and normalises sexual violence towards children. Children who have previously been victims of sexual abuse are revictimised. AI images also make it harder for authorities to identify real cases of children who are being abused.

“Tech companies have a critical responsibility to design platforms that protect children. We’re working directly with them to implement deterrence messaging: warning messages aimed at those seeking to access child sexual abuse material, including AI-generated material, to confront their behaviour.

“Confidential help to change behaviour is available for people viewing or creating AI-generated sexual imagery of children online. Anyone who needs support can contact our anonymous Stop It Now helpline on 0808 1000 900 and get the help they need to stop.”


Dame Chi Onwurah MP, Chair of the Science, Innovation and Technology Committee, said:

“In the UK, we can be proud of our leadership in AI research and technology development, but we must not be complacent - particularly when it comes to preventing the misuse of emerging technologies.

“Vast sums of money are being ploughed into AI, and the technology is developing at an incredible rate. But criminals are already abusing it. Without a UK regulatory framework for AI, we risk losing the opportunity to shape this technology for good.

“This clear warning from the Internet Watch Foundation should set alarm bells ringing. Children, particularly girls, are having their imagery recirculated into AI sexual abuse, often depicting the most extreme forms of sexual violence. I know the safety of our children is a priority for this Government, with our commitment to halve violence against women and girls within a decade. We must now heed this warning.

“That’s why we must act now to ensure safety-by-design is not an afterthought, but a foundational principle in the development of emerging technologies.”

*In 2014, the UK’s Director of Public Prosecutions (then Keir Starmer) empowered the IWF to proactively identify and remove CSAM online. It is the only non-law-enforcement body in Europe, and one of only a small handful of similar organisations worldwide, with these powers.

**Name changed to protect analyst’s identity.
