Reports analysis


People report to us directly, or through one of the 50+ Reporting Portals around the world, in multiple languages. All reports are assessed by our Internet Content Analysts (or analysts, for short) at our headquarters in the UK. We also actively search the internet for child sexual abuse imagery. We call this ‘proactive searching’.


  • 392,665 reports were assessed by IWF (5% increase from 2022):
    • 392,620 were reports of webpages, and
    • 45 were reports of newsgroups.
  • 275,652 URLs (webpages) were confirmed as containing child sexual abuse imagery, having links to the imagery or advertising it (8% increase from 2022).

Each URL could contain one, tens, hundreds or even thousands of individual child sexual abuse images or videos.


We use the term ‘actioned’ to indicate a report which was found to contain child sexual abuse material and which we therefore took active steps to remove from the internet.

You can read more about UK-hosted and globally-hosted child sexual abuse material.

When our analysts assess a report, the age classification is based on the youngest child visible in the imagery; for example, a video including a 2-year-old, a 7-year-old and a 13-year-old would be assessed as ‘0-2’ to reflect the age of the youngest child.

The same approach is applied to severity, with the most severe category of abuse visible in the imagery being recorded. In a composite image or video showing every category of abuse (from A to C), the analyst would log an assessment of category A.

We assess child sexual abuse material according to the levels detailed in the Sentencing Council's Sexual Offences Definitive Guideline. The Indecent Photographs of Children section (Page 34) outlines the different categories of child sexual abuse material.

  • Category A is defined as: Images involving penetrative sexual activity; images involving sexual activity with an animal, or sadism.
  • Category B: Images involving non-penetrative sexual activity.
  • Category C: Other indecent images not falling within categories A or B.

Where a report includes multiple images or videos displayed on a single URL, the same rules of ‘youngest visible child’ and ‘most severe visible category’ are applied. Sex can be recorded as Boys, Girls, Both or – in rare cases – Unidentified.
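The assessment rules above amount to a simple aggregation: take the minimum age band and the maximum severity across everything visible at the URL. A minimal sketch of that logic, with illustrative names and data structures (this is not the IWF's actual tooling):

```python
# Category severity is ordered C < B < A, per the Sentencing Council guideline.
SEVERITY_ORDER = {"C": 0, "B": 1, "A": 2}

def assess_url(images):
    """Each image is a (youngest_age, category) tuple.

    Returns the URL-level assessment: the age of the youngest child
    visible in any image, and the most severe category present.
    """
    youngest = min(age for age, _ in images)
    most_severe = max((cat for _, cat in images), key=SEVERITY_ORDER.get)
    return youngest, most_severe

# A URL showing a 2-, 7- and 13-year-old, with categories C and A present,
# is assessed on the 2-year-old and category A.
print(assess_url([(13, "C"), (7, "A"), (2, "C")]))  # (2, 'A')
```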


External vs proactive - reports assessed and actioned


This chart compares proactively-sourced reports (where our analysts search for content) and those reports which came to us via external sources.



External report sources - assessed and actioned


This chart shows a breakdown of the external sources which send us reports, and report numbers from each source. The five sources are: Public, Police, Member, Hotline and Other.


External reports - % which accurately led to child sexual abuse imagery (as a % of reports assessed for that source)

This chart shows the percentage of reports which were actionable (contained child sexual abuse material) from each external source. The five sources are: Hotline, Member, Other, Police and Public.

Note: These percentages exclude newsgroups and duplicate/'previously actioned' reports.


Public report source accuracy

Our Hotline assessed 132,710 reports which came from the public. 34% (26% in 2022) of these reports correctly identified child sexual abuse content. This figure includes newsgroups and duplicate reports (where the same criminal URL has been reported multiple times). The increase in the accuracy of public reports appears to relate to a viral marketing distribution technique used by offenders, called Invite Child Abuse Pyramid sites – or ICAP for short.

Note: Each year, a number of these are adverts or links to child sexual abuse material which are assessed as illegal but won't have a severity grade.