IWF analysts have worked through the coronavirus lockdown to make sure children are kept safe.
The European Parliament is taking a decisive stand against the rise of AI-generated child sexual abuse material (AI-CSAM), co-hosting a high-level briefing with the Internet Watch Foundation (IWF) to address this urgent threat. With a 380% increase in AI-CSAM reports in 2024, the Parliament is pushing for robust legal reforms through the proposed Child Sexual Abuse Directive. Key priorities include criminalising all forms of AI-generated CSAM, removing legal loopholes such as the “personal use” exemption, and enhancing cross-border enforcement. The IWF and the European Child Sexual Abuse Legislation Advocacy Group (ECLAG) urge the Council of the EU to align with Parliament’s strong stance to protect children and support survivors. This article highlights the scale of the threat, the evolving technology behind synthetic abuse imagery, and the critical need for updated EU legislation.
A "pioneering" new partnership between the Internet Watch Foundation and MindGeek will offer a blueprint for how the adult industry can help in the fight against child sexual abuse material online.
New Internet Watch Foundation data reveals a sharp rise in commercial child sexual abuse websites, with criminal gangs monetising children’s exploitation through subscription models and digital payments. The charity warns of systemic failures across online platforms, financial services and encrypted technologies that allow abuse to flourish. As reports of sexual extortion surge, particularly targeting boys, the IWF calls for stronger regulation of payment systems, encryption safeguards and decisive government action to disrupt the online economy of child sexual exploitation.
Peer39, a leading provider of contextual intelligence for digital advertising, has joined forces with the Internet Watch Foundation (IWF) to help disrupt and demonetise the spread of harmful content online.
IWF supports the Online Safety Act by helping adult sites prevent, detect and remove child sexual abuse imagery online.
The public faces an “escalating risk” of accidental exposure to child sexual abuse online as a “disturbing” new trend rewards criminals for spamming social media with links to illegal material.
A new IWF report reveals record levels of AI‑generated child sexual abuse imagery and alarming insight into how offenders are exploiting emerging technologies. The charity is urging EU lawmakers to introduce a zero‑tolerance ban on AI‑generated abuse and the tools used to create it.
Record levels of dangerous AI‑generated child sexual abuse imagery were found by the IWF in 2025, with a dramatic rise in severe content. New polling shows 82% of UK adults want government action to ensure AI systems are safe by design.
Campaigners are warning teenagers and their parents about online grooming and sexual exploitation as schools break up for the summer.
IWF reveals 2024 as the worst year on record for online child sexual abuse imagery, urging the Prime Minister to strengthen the Online Safety Act and close critical loopholes.