New Internet Watch Foundation Member develops purpose-built tool that effectively balances online privacy and online safety
Protect your generative AI model from the devastating harm caused by online child sexual abuse through corporate membership of the Internet Watch Foundation.
A chilling excerpt from a new IWF report that delves into what analysts at the child protection charity are currently seeing of synthetic, AI-generated child sexual abuse imagery.
The latest data from the Internet Watch Foundation (IWF) reveals a record rise in UK children reporting online sexual extortion, with the Report Remove service now handling an average of nine cases a week. In 2025, the helpline saw a 66% increase in self-reports from under-18s, with 1,175 confirmed cases involving harmful imagery — more than a third linked to sexually coerced extortion. Criminals are increasingly exploiting young people's nude imagery to demand money, further content, or compliance, often using aggressive threats and personal information to create fear and control.

Report Remove, run by the IWF in partnership with Childline, allows young people to block or remove nude images of themselves from the internet — even before they are shared. The majority of sextortion cases involved boys aged 14–17, highlighting a growing trend in targeted online abuse. Childline counsellors continue to support children facing blackmail, fear, and isolation. The service remains free, confidential, and available to any young person worried about their imagery being shared online.
Expert analysts have taken action against 200,000 websites containing child sexual abuse material
Kindred Tech is partnering with the Internet Watch Foundation (IWF) to tackle the spread of child sexual abuse images and videos on the internet.
Record levels of dangerous AI‑generated child sexual abuse imagery were found by the IWF in 2025, with a dramatic rise in severe content. New polling shows 82% of UK adults want government action to ensure AI systems are safe by design.
The IWF’s latest AI report exposes rapidly escalating harms to children as the EU moves to scale back the tools that detect and remove child sexual abuse material online. The charity warns that the EU must act urgently to criminalise AI‑generated abuse and preserve essential detection systems before risks intensify further.
A new IWF report reveals record levels of AI‑generated child sexual abuse imagery and alarming insight into how offenders are exploiting emerging technologies. The charity is urging EU lawmakers to introduce a zero‑tolerance ban on AI‑generated abuse and the tools used to create it.