The Internet Watch Foundation's 2023 Annual Report reveals the most extreme year on record, with the charity calling for immediate action to protect very young children online.
Global cybersecurity company Heimdal has joined forces with the Internet Watch Foundation to tackle child sexual abuse imagery online and make the internet a safer space for users.
New Zealand’s largest telecommunications and digital services company, Spark, joins the Internet Watch Foundation (IWF) to help keep the internet free from child sexual abuse content.
IWF and Black Forest Labs join forces to combat harmful AI-generated content. The partnership grants the frontier AI lab access to safety tech tools.
The latest data from the Internet Watch Foundation (IWF) reveals a record rise in UK children reporting online sexual extortion, with the Report Remove service now handling an average of nine cases a week. In 2025, the helpline saw a 66% increase in self-reports from under‑18s, confirming 1,175 cases involving harmful imagery — more than a third linked to sexually coerced extortion. Criminals are increasingly exploiting young people’s nude imagery to demand money, further content, or compliance, often using aggressive threats and personal information to create fear and control. Report Remove, run by the IWF in partnership with Childline, allows young people to block or remove nude images of themselves from the internet — even before they are shared. The majority of sextortion cases involved boys aged 14–17, highlighting a growing trend in targeted online abuse. Childline counsellors continue to support children facing blackmail, fear, and isolation. The service remains free, confidential, and available to any young person worried about their imagery being shared online.
On 28 April 2025, the IWF hosted MPs, peers, and staffers in Parliament to discuss the urgent findings of our 2024 Annual Data & Insights Report.
IWF analysts uncover platform hosting chatbot “characters” designed to let users simulate sexual scenarios with child avatars.
The IWF’s latest AI report exposes rapidly escalating harms to children as the EU moves to scale back the tools that detect and remove child sexual abuse material online. The charity warns that the EU must act urgently to criminalise AI‑generated abuse and preserve essential detection systems before risks intensify further.
A research report by PIER at Anglia Ruskin University provides insight into girls' and their parents' understanding of self-generated CSAM.