A person holds a smartphone displaying the "Report Remove" tool for flagging harmful online content. The screen includes a "REPORT NOW" button.

2025 Annual Data & Insights Report: Executive Summary

IWF Annual Report 2025

This Executive Summary highlights key findings and priorities for collective action. The full report provides deeper analysis of trends, systemic risks, and the evolving online harm landscape in 2025.

The online child sexual abuse and exploitation landscape in 2025 is defined by rapidly evolving threats, new technologies, and deep-rooted systemic vulnerabilities. Below, we examine our approach and evidence base; the emerging and persistent harms identified this year; the systemic conditions enabling child sexual abuse material distribution; how the IWF tackles child sexual abuse and exploitation online; and how you can help.

A downloadable PDF version of the Executive Summary is available here.


Key Findings and Headline Statistics for 2025

 

Reports assessed in 2025

Infographic showing 451,210 reports assessed, a 6% year-on-year increase, equivalent to one report every 70 seconds. Icons include a magnifying glass over a document and a stopwatch highlighting the 70-second interval.

 

Reports confirmed as criminal in 2025

Infographic showing 311,610 reports confirmed as child sexual abuse material, a 7% year-on-year increase, equivalent to one confirmed report every 101 seconds. Icons include a cursor on a document and a stopwatch displaying the time.

 

Report Remove submissions and actioned reports

Infographic showing 1,894 Report Remove submissions received, a 66% year-on-year increase, which led to 1,175 actioned reports. Icons for the 'Report Remove' tool and a digital cursor are included.

 

Child sexual abuse videos in 2025

Infographic showing 63,682 child sexual abuse videos confirmed as criminal, a 50% year-on-year increase, including a 29% increase in Category A material, the most severe forms of abuse. Icons include a play button and a semi-circle gauge.


Our Methodology and Evidence-Based Approach

We want to see a safer internet where child sexual abuse and exploitation cannot happen.

The Internet Watch Foundation (IWF) works to identify, remove and prevent the spread of child sexual abuse material online, including imagery of real children and material generated using AI tools.

The Annual Data & Insights Report 2025 examines how child sexual abuse material is created, distributed and monetised, as well as the systemic challenges that allow it to persist online. It highlights several areas of particular concern, including:

  • the use of generative AI to create child sexual abuse material
  • gendered patterns of abuse against girls within child sexual abuse material
  • the evolving threat posed to older teenagers

These findings reflect insights drawn from the IWF’s operational work, including proactive detection activities. They should not be interpreted as a measure of global prevalence.

Our analysis is based on verified, victim-centred assessment by trained analysts and image specialists, drawing on: 

  • reports from the public, industry, law enforcement, Members, and hotline partners
  • URL analysis to track where abuse is hosted and how it spreads
  • image and video assessment using IntelliGrade, which helps identify the severity and nature of abuse

Together, these sources provide a multi-layered understanding of how online abuse emerges, spreads and persists.


Emerging & persistent harms

In 2025, analysts identified several emerging and evolving risks shaping the online child sexual abuse landscape, including the rapid growth of AI-generated child sexual abuse material, persistent gendered sexual abuse targeting girls, and the victimisation of older teenagers. These trends highlight how technological change, social dynamics and criminal exploitation intersect to create new forms of harm online. 

 

AI-generated child sexual abuse material

We saw a sharp rise in the volume, realism, and severity of AI-generated child sexual abuse videos.

  • 3,443 AI-generated child sexual abuse videos in 2025
    • a more than 260-fold increase on 2024
  • 65% of AI-generated child sexual abuse videos were Category A
    • compared with 43% of videos involving real children

Generative AI tools, including video models, nudification apps, subscription platforms and agentic AI systems, have lowered technical barriers, enabling offenders with minimal expertise to produce and distribute illegal content at scale. AI is being used to generate synthetic abuse, manipulate images of real children, and produce explicit chats with simulated child characters. Early signs of commercialisation are emerging, with subscription-based services offering tailored content creation.

When AI systems are trained on real victims’ imagery, synthetic material prolongs harm and enables re-victimisation. Some content is used for blackmail or sexually motivated extortion. Open-source AI tools further lower barriers, allowing offenders to adapt and deploy harmful content with minimal oversight.

 

Policy overview - AI-generated child sexual abuse material

Swift action by legislators and technology companies is needed to stop AI technology from being exploited to create child sexual abuse material and to perpetrate violence against women and girls. This includes regulatory requirements to ensure AI products are safe by design, banning nudification apps and tools, and closing legal loopholes to ensure AI-generated material is treated the same as other forms of child sexual abuse material in jurisdictions beyond the UK.

Graphic titled "Quotes from the Hotline" featuring quotes on AI companion sites offering explicit chats with simulated children and the generation of criminal images indistinguishable from photographic abuse material.

Gendered sexual harm (violence against women and girls)

Girls remain disproportionately represented in sexual abuse imagery, both real and AI-generated.

  • 77% of victims in child sexual abuse images were girls
    • 97% of AI-generated child sexual abuse images depicted girls

Analysts frequently encounter violent sexualisation, misogynistic framing and degrading scenarios. Voyeuristic and non-consensual material circulates in “exposing” spaces where girls’ bodies are commodified for rating, identification and abusive commentary. AI tools amplify harm by recreating abuse and generating sexualised depictions at scale.

These patterns reflect entrenched gendered sexual violence online, fuelled by societal norms, power imbalances and misogyny. Non-consensual sharing, voyeurism and AI manipulation rob girls of control over their image, increasing the risk of repeated circulation and re-victimisation.

 

Policy overview - Violence against women and girls

Violence against women and girls and child sexual abuse are inherently and deeply connected, with shared root causes such as gender inequality, misogyny and power imbalances. A coordinated, joined-up response to these issues is essential. This includes implementing a ban on nudification apps and ensuring that the whole internet infrastructure takes action to remove and block access to non-consensual intimate imagery.

 

Graphic titled "Quotes from the Hotline" featuring quotes about the commodification and ownership of women's and girls' bodies online, described as sinister sexualization with violent undertones.

Older teen victimisation

Older teenagers are increasingly caught in cycles of abuse involving ‘self-generated’ imagery, leaks, AI manipulation and sexual extortion. Boys are disproportionately represented through our child reporting services and in sexually coerced extortion cases.

  • 63,044 teenagers (14–17) appeared in 56,179 child sexual abuse images,
    • with one-third (33%) ‘self-generated’
  • 397 sextortion cases - a 127% year-on-year increase
    • with 98% of victims boys aged 14–17

Images are often self-captured in private settings and later leaked, manipulated or shared under pressure. Once online, content spreads rapidly across platforms, sometimes reaching adult platforms where teens are mistaken for adults. Sexual extortion cases escalate quickly, with offenders demanding additional images or payments. Some imagery is repackaged into humiliating collages, increasing shame and compliance.

The combination of ‘self-generated’ content, leaks and coercion is creating a fast-growing, interconnected ecosystem of harm. Once shared, images can resurface repeatedly, amplifying distress and risk.

 

Our response

The IWF continues to support children through Report Remove, while working with industry to adopt child sexual abuse material hashing, strengthen verification and monitoring processes, and escalate sexual extortion cases to safeguarding partners. Collaboration with the adult sector, technology platforms and regulators is critical to reduce exposure, protect teens and disrupt exploitation at scale. 

 

Graphic titled "Quotes from the Hotline" featuring quotes on extortion tactics against children, including emotional manipulation and threats to send private images to family and schools unless payment is made.


Systemic conditions enabling child sexual abuse material distribution

Responsibility for hosting and blocking child sexual abuse material is fragmented across technical, commercial and regulatory layers, often spanning jurisdictions with differing laws. The material's persistence reflects the combined effects of technology, infrastructure, commercial interests and scalability pressures, which can overshadow user safety.

 

Child sexual abuse material hosting hotspots

A small number of jurisdictions host a disproportionate share of confirmed child sexual abuse material.

  • 310,437 URLs were actioned
    • with 63% hosted in EU member states
  • Top hosting countries by share of URLs:
    • Bulgaria: 28% (+19% year-on-year)
    • United States: 16% (+2% year-on-year)
    • Netherlands: 11% (-18% year-on-year)
  • The UK accounted for 951 actioned URLs (0.30% of the total)

Confirmed child sexual abuse material URLs are often concentrated on a few high-volume sites in a small number of jurisdictions. Changes in rankings reflect sites emerging, migrating or being disrupted. When material is concentrated on a few high-volume sites in jurisdictions with slower or inconsistent takedown, it remains accessible longer, increasing the risk it will be copied, redistributed or reposted elsewhere. The UK demonstrates that rapid, collaborative removal is effective and can limit exposure.

Effective child protection therefore depends on faster, more consistent international enforcement approaches, supported by coordinated action across industry and regulatory partners.

 

Policy overview - EU Child Sexual Abuse Regulation & Directive

In the EU we are seeing a growing number of child sexual abuse URLs traced to EU member hosting services. This should serve as a clarion call to act: the EU cannot be a safe haven for child sexual abuse material. The IWF continues to work with EU institutions, member states, civil society and technology companies to ensure a harmonised and effective framework for the detection, reporting, and removal of child sexual abuse material across all EU member states. In particular, we urgently need policymakers to pass the Child Sexual Abuse Regulation and recast Directive.

 

Policy overview - UK Online Safety Act

The UK’s Online Safety Act strengthens legal accountability by placing responsibility on platforms to minimise harm and deliver more positive outcomes for children. It is imperative that this legislation delivers ambitious and effective regulation, ensuring services take the steps necessary to combat child sexual abuse material online.

 

Graphic titled "Quotes from the Hotline" regarding international reporting challenges like legal parameters and language barriers, and the daily monitoring of removals from various hosting companies.

Online recidivism & infrastructure evasion

Child sexual abuse material distribution is becoming more resilient and widespread, with offenders exploiting weaknesses across internet infrastructure to evade detection and quickly rebuild operations.

  • 7,268 unique domains were actioned
    • a 20% year-on-year increase
  • Top Countries by unique domains:
    • United States: 41%
    • Russian Federation: 14%
    • Netherlands: 14%
  • UK unique domains rose from 71 to 121
    • a 70% increase year-on-year
  • Image-hosting services accounted for 77% of actioned sites
  • Commercial sites showed increased TLD hopping
    • with 133 domain strings (+28% year-on-year)
    • and 514 hops (+74% year-on-year)

Offenders increasingly rely on image-hosting services to upload large collections of child sexual abuse material, which are then embedded across forums and blogs. Removed content is rapidly reposted to alternative pre-registered domains or reappears under new domain endings (TLD hopping), often featuring the same material and victims. Legitimate platforms are frequently abused, and takedowns targeting only specific URLs remove content temporarily but do not prevent rapid re-uploads, limiting the overall effectiveness of enforcement. 

This adaptive behaviour creates multi-layered resilience, allowing material to persist across the internet. Without coordinated action across registries, registrars, hosting providers, image hosts, and platforms, these distribution pathways remain open, increasing systemic risk.


 

Our response

The IWF uses several tools to disrupt repeat child sexual abuse material activity across the internet infrastructure:

  • TLD Hopping List and Domain Alerts:
    • identify repeat offender patterns, enabling rapid domain locking and suspension.
  • IWF Hash List and Image Intercept:
    • block known abusive images and videos at the point of upload, preventing them from being reposted.
  • Domain, URL and NPI URL Lists:
    • allow ISPs, MNOs, VPNs and network providers to block access to confirmed child sexual abuse material website links when removal is delayed.

Together, these measures target domains, hosting infrastructure and access points. However, lasting systemic impact depends on broader industry alignment and shared responsibility.

Policy overview - End-to-end encryption

The lack of proactive detection within end-to-end encrypted (E2EE) spaces makes them hotspots for sharing child sexual abuse images and videos. The rollout of E2EE messaging without any safeguards means services lose the ability to detect and remove child sexual abuse material. To tackle this, services must conduct pre-encryption checks on E2EE platforms, to ensure that known child sexual abuse material is detected and blocked before being shared. 

  

Graphic titled "Quotes from the Hotline" featuring quotes on persistent image distribution methods where thousands of files are re-uploaded to new domains almost immediately after being removed.

Commercialised child sexual abuse material distribution networks

Criminal networks profit from child sexual abuse material by disguising websites, routing users through monetised pathways, and exploiting viral recruitment mechanisms.

  • 15,031 commercial URLs were identified
    • representing around 5% of confirmed child sexual abuse webpages
  • 2,458 commercial sites were disguised
    • approximately 16% of all commercial sites
  • ICAP sites accounted for 999 actioned reports
    • 98% of which were received from the public
    • with a further 5,234 previously actioned
  • Payment options included:
    • Cryptocurrency (3,276 instances across 1,002 URLs)
    • Money transfer services (1,600 across 901 URLs)
    • Card payments (240 across 148 URLs)

Operators hide criminal material behind adult content or maintenance pages, using referrals, viral invites, and AI-driven content to funnel users toward abusive material. Invite Child Abuse Pyramid (ICAP) sites exemplify this approach, combining recruitment and monetisation in structured networks. Delays in takedown of reported ICAP URLs allow offenders to continue distributing content and generating profit. Payment routes may be concealed or routed through encrypted messaging channels, increasing resilience.

Profit incentives embed child sexual abuse material deeper into the online ecosystem, sustaining demand, normalising abuse, and allowing content to persist across multiple sites. Disguised infrastructure, referral systems, digital advertising, and encrypted payments make disruption slower and more complex. Effective mitigation depends on coordinated action across core stakeholders, including financial institutions, connectivity providers, platforms, image-hosting services, and digital advertising networks. 

 

Policy overview - Financial Services Reporting

Money is a significant motivator for producing child sexual abuse material online. A crucial part of tackling the spread of this material is disrupting the commercial incentives driving its production, including by introducing mandatory duties on financial institutions to proactively detect, take down and report digital payment links associated with the sale of images and videos of child sexual abuse.

 

Graphic titled "Quotes from the Hotline" noting over 10,000 ICAP reports in 2025 and the evolution of profit-driven networks using AI-generated videos of children and shifting domains to stay online.

 

How the IWF tackles child sexual abuse and exploitation online

We combine specialist analysts, technical solutions and global partnerships to detect, disrupt, remove and prevent child sexual abuse material at scale. Our work depends on collaboration with industry, regulators, civil society and law enforcement.

 

Member services

  • 228 IWF members by the end of 2025
    • including 36 new organisations deploying services
  • IWF URL List (dynamic):
    • 260,699 criminal URLs
      • averaging 1,212 added per day and updated twice daily
  • IWF Hash List (dynamic):
    • 3,224,085 criminal hashes recorded since 2015
      • with 333,933 new images added in 2025
  • Image Intercept pilot:
    • 12,607,541 items scanned
      • 16,339 matched known child sexual abuse material
  • UK hosting takedowns:
    • 163 notices sent
      • 88% removed within 24 hours
        • fastest in 1 minute

 

Children's services

  • Report Remove (UK):
    • 1,894 reports received
      • a 66% year-on-year increase
  • Meri Trustline (India):
    • 184 reports received
      • a 12-fold increase year-on-year

 

Operational Activity

  • Multichild capability:
    • 370,001 children recorded in images
      • including 39,203 identified solely through this capability
  • Proactive detection:
    • 287,273 reports generated
      • representing 64% of all reports assessed
  • New exploitative category:
    • 72,000+ images identified in just over two months
      • 51% containing borderline sexualised depictions of a child not meeting UK criminal thresholds

 

What we do

Detect

We use specialised technology to actively find child sexual abuse material and maintain a growing hash database to identify known child sexual abuse material across the internet.

Disrupt

We work with partners to block and disrupt access to child sexual abuse material, using temporary and permanent measures to prevent exposure while content is removed.

Innovate

We co-develop, test and train solutions with technology companies, from small startups to global organisations, to protect children from harm. These include on-device AI classifiers and privacy-preserving digital forensics.

Advocate for change

We collaborate with governments, regulators, law enforcement and tech partners to influence laws, policies and standards that protect children, promote online safety and ensure platforms act responsibly. We champion proactive detection, reporting and removal of child sexual abuse material and embed child protection in emerging technologies.

Educate

We share data, insights and guidance with the child protection sector, law enforcement, technology companies, educators, parents and children to help keep them safe online.

 

How you can help

The scale and complexity of these harms demand coordinated action across sectors, jurisdictions and systems.

 

Policymakers 

Robust child-safety regulation must compel services to prevent, detect and remove child sexual abuse material, including upload-prevention safeguards, safety-by-design, and coordinated international standards. Urgent implementation closes gaps that allow abuse to persist.

To discuss how we can improve online child safety legislation and strengthen regulation, please contact our Policy and Public Affairs Team at [email protected]

Internet Infrastructure Providers

Companies operating the internet’s core infrastructure, including registries, registrars, hosting providers, filtering companies, search engines and payment providers, should join the IWF. Rapid responses to alerts, proactive blocking tools, and coordinated disruption of redistribution routes help remove child sexual abuse material and limit its spread across the internet’s infrastructure.

 

Technology Builders

Companies that build platforms, AI systems and software must ensure their products cannot be misused to generate, manipulate or distribute child sexual abuse material. Embedding safety by design, strong safeguards and proactive detection, and collaborating with the IWF to share insights and co-develop protective tools, can prevent abuse at scale.

Interested in joining the IWF or exploring what membership could offer your organisation? Contact our team at [email protected]

Research partners

We invite researchers and data specialists to share anonymised data, develop analytical tools and run joint projects. Together, we can identify emerging threats, test interventions, and strengthen evidence-based child protection.

To discover research opportunities and collaborate with us, contact the Data & Insights Team at [email protected].

Working together

We're convening corporate partners, trusts and foundations, impact investors, governments and philanthropy networks to power the unified response demanded by this issue – and there’s a seat at the table for you.

To explore how you can make a difference – whether through funding innovation or connecting us to your networks – contact our Partnerships Team at [email protected]. 

 

Thanks & forward look

This work is made possible by IWF Members, funders, hotlines, international partners and law enforcement colleagues. We thank our analysts, assessors and data specialists, whose expertise underpins these insights.

Looking ahead, we will continue to invest in technology, partnerships, and child-centred services to prevent victimisation and make the internet safer.

Together, we can shrink the space in which offenders operate and uphold every child’s right to be safe online.