‘Disturbing’ AI-generated child sexual abuse images found on hidden chatbot website that simulates indecent fantasies

Published:  Mon 22 Sep 2025

For the first time, analysts at the Internet Watch Foundation (IWF) have identified AI-generated child sexual abuse images that are connected to AI chatbots.

Since June this year, the IWF – Europe’s largest Hotline dedicated to the identification and removal of child sexual abuse material – has found 17 incidents of AI-generated child sexual abuse material on an AI chatbot website.

The site is found on the clear web and while it first appears to offer benign chatbot characters with which users can engage and interact, IWF analysts have discovered that the website has a more sinister side.

Accessing the same website via a particular digital pathway allows users to interact with multiple chatbots that will simulate ‘abhorrent’ sexual scenarios with children. In this process, AI child sexual images are shared, some depicting children as young as seven.

Simulated scenarios that users can engage with include: ‘child prostitute in a hotel’; ‘sex with your child while your wife is on holiday’; and ‘child and teacher alone after class’.

The criminal child sexual abuse imagery is displayed when the user chooses a chatbot from a preview page on the website that describes the different chatbots’ personas.

Users have the option of generating even more images, similar to the criminal material already on display. IWF analysts did not at any point generate child sexual abuse imagery themselves while investigating the site. The AI chatbots are produced by the website’s creators as well as its users.

AI-generated child sexual abuse imagery is illegal under UK law1 and the IWF can take steps to get the illegal imagery removed from the website. Forthcoming laws2 to criminalise AI tools designed to create child sexual abuse imagery are also in motion and the IWF says the legislation ‘cannot come soon enough’.

The IWF is also urging implementation of the UK government’s promised AI safety regulation to help prevent misuse of the technology and to ensure harms are mitigated by building in protections from the outset.

Kerry Smith, IWF CEO

IWF CEO Kerry Smith said: “Sadly, we continue to see how advances in AI technology are quickly exploited by offenders for malicious purposes. 

“The UK government is making welcome strides in tackling AI-generated child sexual abuse images and videos and the tools that create them, and the new criminal offences in the forthcoming Crime and Policing Bill cannot come soon enough. But more needs to be done, and faster. 

“The fast-moving pace of technological innovation means that legislation is always at risk of being out-of-date, which is why the principles of safety by design and child protection should be central to AI regulation.

“These chatbots are deliberately created to simulate abhorrent online sexual scenarios with children; there can be no justification for these sites to exist.”

The 17 reports of AI-generated child sexual abuse material were identified on the website between 1 June and 7 August 2025. The reports contained mostly (94%) Category C imagery – sexual posing and nudity – and featured predominantly 11 to 13-year-olds (82%). One report depicted a 7 to 10-year-old child.

The website offers users free, limited chat time before offering a paid subscription for unlimited access to AI chatbot characters. Voice calls with the chatbots are advertised as ‘coming soon’.

Analysts from the IWF Hotline say, at first glance, the website appears to offer legal, adult, chatbot experiences. But after receiving reports from concerned members of the public, analysts dug deeper to find a hidden section of the site that shows the criminal material. The IWF has shared its findings with law enforcement.

Metadata from the criminal imagery show the text prompts used to generate the images, and Hotline analysts believe the instructions were very clear in their intent to create child sexual abuse imagery and could in no way be construed as innocent or accidental.

IWF Senior Internet Content Analyst, Jeff

Jeff3, a Senior Internet Content Analyst at the IWF, said: “Our first report came from a member of the public, but at that time the chatbot website showed mostly animated, manga-style images of adults and children, and none of the imagery was actionable.

“However, we later found a link on a popular social media platform that took us to the same site, which now showed content that was markedly different and darker. It was more photorealistic and included images of child sexual abuse.

“We believe there are two versions of the website, and the illegal content is revealed only if you follow a particular link or have followed a particular path from visiting other webpages. In both cases, legitimate and criminal, the URL of the website page remains the same.

“Unfortunately, to see AI chatbots used in this way doesn't come as a big surprise. It seems an inevitable consequence of when new technology is ‘turned bad’ by the wrong people. We know offenders will use all means at their disposal to create, share and distribute child sexual abuse material.”

The IWF published new data in July showing that reports of AI child sexual abuse imagery had risen by 400% in the first six months of this year. IWF analysts actioned AI child sexual abuse imagery on 210 webpages compared with 42 webpages in 2024.

The number of AI-generated videos also rocketed in this time, with 1,286 individual AI videos of child sexual abuse actioned between January and the end of June this year.

Of those confirmed child sexual abuse videos, 1,006 were assessed as the most extreme (Category A) imagery under law – videos which can depict rape, sexual torture or bestiality.

Kerry Smith added: “AI-generated child sexual abuse material is not a victimless crime. This is very disturbing imagery, and research shows that viewing CSAM can normalise the sexual abuse of children.

“We also know that in some cases, existing images of child sexual abuse have been used to train AI models, embedding a victim’s real trauma into synthetic content. This only perpetuates the cycle of suffering that victims and survivors experience.” 

Chris Sherwood, NSPCC CEO

Chris Sherwood, CEO at the NSPCC, said: “It is deeply troubling to see child sexual abuse material being disseminated through AI chatbots. It is clear that this technology is evolving fast and without the necessary guardrails in place.

“When companies prioritise rapid innovation and profit over safety, they risk putting children and young people in harm’s way. The absence of effective controls is already having devastating consequences for victims and survivors of online child sexual abuse.

“Tech companies must introduce robust measures to ensure children’s safety is not neglected and Government must implement a statutory duty of care to children for AI developers. This will play a vital role in preventing further harm and safeguarding the most vulnerable.”

Helen Rance, National Crime Agency Deputy Director, Child Sexual Abuse and Modern Slavery & Human Trafficking, said: “Generative artificial intelligence technologies are incredibly sophisticated, publicly accessible, and in some instances are being rushed to market without consideration for how they can be weaponised to sexually exploit children. AI systems, without the appropriate safeguards, undermine law enforcement efforts to identify and safeguard victims.

“AI-generated child sexual abuse plays a role in the normalisation and escalation of abuse among child sex offenders and is a significant concern due to the speed of the technology’s development and improvement.

“But UK law is clear – this material is illegal where it depicts anyone under 18 years old. It is an offence to produce, possess, share, or search for any material that contains or depicts child sexual abuse, regardless of whether it depicts a real child or not.

“Offenders who misuse AI tools will be caught and consequences will follow. Tackling child sexual abuse, including AI generated material, is a priority for the NCA and our policing partners and we continue to investigate and prosecute anyone engaged in this criminality.”

1 AI-generated images of child sexual abuse are illegal in the UK. The AI-generated abuse which we have confirmed is actionable under the Protection of Children Act 1978, or the Coroners and Justice Act 2009 (for non-photographic images, NPIs).

2 The Crime and Policing Bill is introducing ‘a new criminal offence that criminalises AI models that have been optimised to create child sexual abuse material’

3 Name changed to protect the analyst’s identity.
