IWF Annual Report 2025
The child sexual abuse and exploitation online landscape in 2025 is defined by rapidly evolving threats, new technologies, and deep-rooted systemic vulnerabilities. Below, we examine our approach and evidence base, the emerging and persistent harms identified this year, the systemic conditions enabling child sexual abuse material distribution, how the IWF tackles child sexual abuse and exploitation online, and how you can help.
Access a downloadable PDF version of the Executive Summary here.
The Internet Watch Foundation (IWF) works to identify, remove and prevent the spread of child sexual abuse material online - including imagery of real children and material generated using AI tools.
The Annual Data & Insights Report 2025 examines how child sexual abuse material is created, distributed and monetised, as well as the systemic challenges that allow it to persist online. It highlights several areas of particular concern, outlined below.
These findings reflect insights drawn from the IWF’s operational work, including proactive detection activities. They should not be interpreted as a measure of global prevalence.
Our analysis is based on verified, victim-centred assessment by trained analysts and image specialists, drawing on the IWF's operational and reporting data.
We saw a sharp rise in the volume, realism, and severity of AI-generated child sexual abuse videos.
Generative AI tools, including video models, nudification apps, subscription platforms and agentic AI systems, have lowered technical barriers, enabling offenders with minimal expertise to produce and distribute illegal content at scale. AI is being used to generate synthetic abuse, manipulate images of real children, and produce explicit chats with simulated child characters. Early signs of commercialisation are emerging, with subscription-based services offering tailored content creation.
When AI systems are trained on real victims’ imagery, synthetic material prolongs harm and enables re-victimisation. Some content is used for blackmail or sexually motivated extortion. Open-source AI tools further lower barriers, allowing offenders to adapt and deploy harmful content with minimal oversight.
Swift action by legislators and technology companies is needed to stop AI technology from being exploited to create child sexual abuse material and to perpetrate violence against women and girls. This includes regulatory requirements to ensure AI products are safe by design, banning nudification apps and tools, and closing legal loopholes to ensure AI-generated material is treated the same as other forms of child sexual abuse material in jurisdictions beyond the UK.
Girls remain disproportionately represented in sexual abuse imagery, both real and AI-generated.
Analysts frequently encounter violent sexualisation, misogynistic framing and degrading scenarios. Voyeuristic and non-consensual material circulates in “exposing” spaces where girls’ bodies are commodified for rating, identification and abusive commentary. AI tools amplify harm by recreating abuse and generating sexualised depictions at scale.
These patterns reflect entrenched gendered sexual violence online, fuelled by societal norms, power imbalances and misogyny. Non-consensual sharing, voyeurism and AI manipulation rob girls of control over their image, increasing the risk of repeated circulation and re-victimisation.
Violence against women and girls and child sexual abuse are inherently and deeply connected - with shared root causes like gender inequality, misogyny and power imbalances. A coordinated and joined-up response to these issues is essential. This includes implementing a ban on nudification apps and ensuring that the whole internet infrastructure takes action to remove and block access to non-consensual intimate imagery.
Older teenagers are increasingly caught in cycles of abuse involving ‘self-generated’ imagery, leaks, AI manipulation and sexual extortion. Boys are disproportionately represented through our child reporting services and in sexual extortion cases.
Images are often self-captured in private settings and later leaked, manipulated or shared under pressure. Once online, content spreads rapidly across platforms, sometimes reaching adult platforms where teens are mistaken for adults. Sexual extortion cases escalate quickly, with offenders demanding additional images or payments. Some imagery is repackaged into humiliating collages, increasing shame and compliance.
The combination of ‘self-generated’ content, leaks and coercion is creating a fast-growing, interconnected ecosystem of harm. Once shared, images can resurface repeatedly, amplifying distress and risk.
The IWF continues to support children through Report Remove, while working with industry to adopt child sexual abuse material hashing, strengthen verification and monitoring processes, and escalate sexual extortion cases to safeguarding partners. Collaboration with the adult sector, technology platforms and regulators is critical to reduce exposure, protect teens and disrupt exploitation at scale.
A small number of jurisdictions host a disproportionate share of confirmed child sexual abuse material.
Most of these confirmed child sexual abuse material URLs are concentrated in a few high-volume sites. Changes in rankings reflect sites emerging, migrating, or being disrupted. When material is concentrated on a few high-volume sites in jurisdictions with slower or inconsistent takedown, it remains accessible longer, increasing the risk it will be copied, redistributed, or reposted elsewhere. The UK demonstrates that rapid, collaborative removal is effective and can limit exposure.
Effective child protection therefore depends on faster, more consistent international enforcement approaches, supported by coordinated action across industry and regulatory partners.
In the EU we are seeing a growing number of child sexual abuse URLs traced to EU member hosting services. This should serve as a clarion call to act: the EU cannot be a safe haven for child sexual abuse material. The IWF continues to work with EU institutions, member states, civil society and technology companies to ensure a harmonised and effective framework for the detection, reporting, and removal of child sexual abuse material across all EU member states. In particular, we urgently need policymakers to pass the Child Sexual Abuse Regulation and recast Directive.
The UK’s Online Safety Act strengthens legal accountability by placing responsibility on platforms to minimise harm and deliver more positive outcomes for children. It is imperative that this legislation delivers ambitious and effective regulation to ensure services undertake necessary steps to combat child sexual abuse material online.
Child sexual abuse material distribution is becoming more resilient and widespread, with offenders exploiting weaknesses across internet infrastructure to evade detection and quickly rebuild operations.
Offenders increasingly rely on image-hosting services to upload large collections of child sexual abuse material, which are then embedded across forums and blogs. Removed content is rapidly reposted to alternative pre-registered domains or reappears under new domain endings (TLD hopping), often featuring the same material and victims. Legitimate platforms are frequently abused, and takedowns targeting only specific URLs remove content temporarily but do not prevent rapid re-uploads, limiting the overall effectiveness of enforcement.
This adaptive behaviour creates multi-layered resilience, allowing material to persist across the internet. Without coordinated action across registries, registrars, hosting providers, image hosts, and platforms, these distribution pathways remain open, increasing systemic risk.
The IWF uses several tools to disrupt repeat child sexual abuse material activity across the internet infrastructure.
Together, these measures target domains, hosting infrastructure and access points. However, lasting systemic impact depends on broader industry alignment and shared responsibility.
The lack of proactive detection within end-to-end encrypted (E2EE) spaces makes them hotspots for sharing child sexual abuse images and videos. The rollout of E2EE messaging without any safeguards means services lose the ability to detect and remove child sexual abuse material. To tackle this, services must conduct pre-encryption checks on E2EE platforms, to ensure that known child sexual abuse material is detected and blocked before being shared.
Criminal networks profit from child sexual abuse material by disguising websites, routing users through monetised pathways, and exploiting viral recruitment mechanisms.
Operators hide criminal material behind adult content or maintenance pages, using referrals, viral invites, and AI-driven content to funnel users toward abusive material. Invite Child Abuse Pyramid (ICAP) sites exemplify this approach, combining recruitment and monetisation in structured networks. Delays in takedown of reported ICAP URLs allow offenders to continue distributing content and generating profit. Payment routes may be concealed or routed through encrypted messaging channels, increasing resilience.
Profit incentives embed child sexual abuse material deeper into the online ecosystem, sustaining demand, normalising abuse, and allowing content to persist across multiple sites. Disguised infrastructure, referral systems, digital advertising, and encrypted payments make disruption slower and more complex. Effective mitigation depends on coordinated action across core stakeholders, including financial institutions, connectivity providers, platforms, image-hosting services, and digital advertising networks.
Money is a significant motivator for producing child sexual abuse material online. A crucial part of tackling the spread of this material is disrupting the commercial influences driving its production. This includes introducing mandatory duties on financial institutions to proactively detect, take down, and report digital payment links associated with the sale of images and videos of child sexual abuse.
We combine specialist analysts, technical solutions and global partnerships to detect, disrupt, remove and prevent child sexual abuse material at scale. Our work depends on collaboration with industry, regulators, civil society and law enforcement.
We use specialised technology to actively find child sexual abuse material and maintain a growing hash database to identify known child sexual abuse material across the internet.
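The principle behind hash-based matching can be shown with a minimal sketch. This is illustrative only: production systems typically rely on perceptual hashes (such as PhotoDNA), which match re-encoded or resized copies, alongside cryptographic hashes; the hash values and function names below are hypothetical placeholders, not the IWF's actual implementation.

```python
import hashlib

# Hypothetical blocklist of known SHA-256 digests (placeholder values).
# Real hash databases hold digests of verified illegal material.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_material(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 digest appears in the hash list."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

A cryptographic hash like SHA-256 only matches byte-for-byte identical copies, which is why deployed systems favour perceptual hashing: it tolerates the cropping, resizing and re-compression that occur as images are reshared.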
We work with partners to block and disrupt access to child sexual abuse material, using temporary and permanent measures to prevent exposure while content is removed.
We co-develop, test and train solutions, including on-device AI classifiers and privacy-preserving digital forensics, with technology companies from small startups to global organisations, to protect children from harm.
We collaborate with governments, regulators, law enforcement and tech partners to influence laws, policies and standards that protect children, promote online safety and ensure platforms act responsibly. We champion proactive detection, reporting and removal of child sexual abuse material and embed child protection in emerging technologies.
We share data, insights and guidance with the child protection sector, law enforcement, technology companies, educators, parents and children to help keep them safe online.
Robust child-safety regulation must compel services to prevent, detect and remove child sexual abuse material, including upload-prevention safeguards, safety-by-design, and coordinated international standards. Urgent implementation closes gaps that allow abuse to persist.
To discuss how we can improve online child safety legislation and strengthen regulation, please contact our Policy and Public Affairs Team at [email protected]
Companies operating the internet’s core infrastructure, including registries, registrars, hosting providers, filtering companies, search engines and payment providers, should join the IWF. Rapid responses to alerts, proactive blocking tools, and coordinated disruption of redistribution routes help remove child sexual abuse material and limit its spread across the internet’s infrastructure.
Companies that build platforms, AI systems and software must ensure their products cannot be misused to generate, manipulate, or distribute child sexual abuse material. Embedding safety-by-design, strong safeguards and proactive detection, and collaborating with the IWF to share insights and co-develop protective tools, can prevent abuse at scale.
Interested in joining the IWF or exploring what membership could offer your organisation? Contact our team at [email protected]
We invite researchers and data specialists to share anonymised data, develop analytical tools and run joint projects. Together, we can identify emerging threats, test interventions, and strengthen evidence-based child protection.
To discover research opportunities and collaborate with us, contact the Data & Insights Team at [email protected].
We're convening corporate partners, trusts and foundations, impact investors, governments and philanthropy networks to power the unified response demanded by this issue – and there’s a seat at the table for you.
To explore how you can make a difference – whether through funding innovation or connecting us to your networks – contact our Partnerships Team at [email protected].
This work is made possible by IWF Members, funders, hotlines, international partners and law enforcement colleagues. We thank our analysts, assessors and data specialists, whose expertise underpins these insights.
Looking ahead, we will continue to invest in technology, partnerships, and child-centred services to prevent victimisation and make the internet safer.