Our Methodology and Data Standards


This year, our Annual Data & Insights Report organises our datasets into clearly defined sections, each reflecting a specific way in which data informs and supports our operational work. Together, these key datasets provide a clear picture of our activity and of emerging trends. They cover reports, URLs, images, videos, children and hashes, each serving a distinct purpose in our overall response to online child sexual abuse imagery.

Every report, URL, image and video is assessed to establish whether the content meets the threshold for criminal classification. When criminal material, or a link to it, is confirmed, we record this as an ‘actioned’ report and take appropriate steps to remove the content or to generate a hash for ongoing detection and prevention.

 

IWF datasets

 

Reports

Reports account for a significant proportion of our work and are the primary mechanism through which content is brought to our attention. Most reports relate to URLs suspected of hosting child sexual abuse material. Reports may be submitted directly from external sources or generated internally to record our proactive work, in which our analysts actively search online to identify and remove child sexual abuse material. Since the launch of our child reporting services, Report Remove and Meri Trustline, we have also received and assessed reports submitted directly by children and young people through this same system.

When processing reports generated through our proactive activity, we record the URLs identified and then assess the content to determine whether any child sexual abuse material is present. This may include content displayed directly on a webpage, or material accessed through links, referrals, advertisements or paedophile manuals. In each case, we assess whether the material meets the threshold for child sexual abuse material. Throughout this report, references to ‘criminal’ and ‘child sexual abuse’ content refer to the same definition.

 

URLs

URLs (or webpages) provide critical intelligence about where child sexual abuse imagery is being hosted online. Analysis of URL data allows us to identify hosting patterns, the types of websites involved, and those websites repeatedly found to host criminal content. This intelligence is also valuable for identifying sites that deliberately generate revenue from hosting this type of content.

 

Images

For images, assessors record the highest severity of sexual activity and the age and sex of the youngest child involved. They then record the age and sex of any additional children appearing in the same image. Each image receives only one severity assessment, regardless of any additional sexual activity present.

For individual images, we are able to record further details, referred to as metadata, such as the age and sex of each child and the type of sexual activity shown. For multi-image collages, single images made up of multiple sub-images, we record only the severity. These are commonly arranged in a grid pattern (and are sometimes referred to as ‘grids’), although they can adopt other layouts.

 

Children IconChildren

Children form part of our image dataset. When an image contains more than one child, we can record individual attributes for each child, such as age and sex. However, we do not record individual child details for videos or for multi‑image collages (including grids or combined image formats).

 

Videos

Videos containing illegal child sexual abuse material can differ significantly in duration, from only a few seconds to more than an hour, and may depict a single child or multiple children. To maintain efficiency and safeguard the wellbeing of our assessors, only the highest severity assessment is assigned to each video, and no individual child-specific details are recorded.

 

When referring collectively to images and videos, we use the term “imagery.”

 

Hashes

Once an image or video has been fully reviewed in our IntelliGrade system and the assessment information has been entered, a hash is automatically created. 

A hash is a digital fingerprint of a file, such as an image or video. It is created by running the file through a mathematical process that turns it into a short string of letters and numbers, and this string is unique to that file. Identical images or videos share the same digital fingerprint, meaning that once a file has been given a hash, duplicates do not need to be assessed again.

Even a tiny change to a file, like altering one pixel in an image, results in it being identified as a completely different file, and it is therefore given its own unique hash. These hashes form our Hash List service, which enables the identification of imagery through direct matching against the unique hash strings stored on the list. In this way, known criminal content can be quickly identified and matched online without anyone needing to view the actual image or video itself.
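
To illustrate the principle, the minimal sketch below computes a cryptographic digital fingerprint using Python's standard hashlib library. It is an illustration only: the specific hashing methods used within IntelliGrade are not described in this report, so SHA-256 and the file names used here are assumptions chosen for the example.

    import hashlib

    def file_hash(path: str) -> str:
        """Return the SHA-256 digest of a file as a hexadecimal string.

        Identical files always produce the same digest; changing even a
        single byte (such as one pixel in an image) produces a completely
        different one.
        """
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large video files need not fit in memory.
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical usage: a newly encountered file can be checked against
    # previously recorded hashes without the content ever being viewed.
    known_hashes = {file_hash("previously_assessed.jpg")}
    if file_hash("new_upload.jpg") in known_hashes:
        print("Match found: known content, no re-assessment needed")

This matching behaviour is what allows duplicates to be filtered out automatically once a file has been assessed and hashed.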

Our hash dataset contains information about individual images and videos, enabling analysis at a granular level. Hash data includes attributes such as the number of children depicted, estimated age and sex, the type of activity shown, and trend insights, including the identification of ‘self-generated’ and AI-generated content. This approach allows for more precise trend analysis than URL-level data, where a single webpage may contain hundreds or thousands of images.

Within the hash dataset, images and videos are treated differently because of their complexity and their impact on analyst welfare. For images, we record the full range of available data. For videos, which present a higher welfare risk to analysts, we record severity alongside trend indicators relating to ‘self-generated’ and AI-generated content.
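
As a way of picturing the difference between these two record shapes, the sketch below models them as simple Python data classes. The field names and types are illustrative assumptions for this report, not the actual IntelliGrade schema.

    from dataclasses import dataclass, field

    @dataclass
    class ChildRecord:
        """Attributes that can be recorded per child in an image."""
        estimated_age: int
        sex: str

    @dataclass
    class ImageHashRecord:
        """Images carry the full range of available data."""
        hash_value: str
        severity: str                  # e.g. Category A, B or C
        children: list = field(default_factory=list)  # ChildRecord entries
        activity_type: str = ""
        self_generated: bool = False
        ai_generated: bool = False

    @dataclass
    class VideoHashRecord:
        """Videos carry only severity plus trend indicators,
        limiting analyst exposure."""
        hash_value: str
        severity: str
        self_generated: bool = False
        ai_generated: bool = False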

 

The diagram below reflects the stages of the Hotline’s report and imagery workstreams, showing where our datasets sit within the process.

 

Infographic illustrating the IWF data workstream and Hotline assessment process for the 2025 Annual Data & Insights Report.

 

 

Our Hotline team

Internet Content Analysts

The Internet Content Analysts, often referred to as Analysts, are a team of 17 people. They are responsible for proactively searching for images and videos of child sexual abuse online, responding to public reports, and monitoring new trends. It’s their job to ensure criminal content depicting the sexual abuse of children is removed from the internet.  

Image Classification Assessors

The Image Classification Assessors, often referred to as Assessors, are a taskforce of 14 whose role is to assess images and videos, adding metadata to each, such as the age of the child depicted and the type of sexual activity occurring, alongside other information. Once the data is added, a hash or “digital fingerprint” is created. These hashes are then used by our industry partners to prevent the upload, download and further dissemination of this imagery.

Quality Assurance Assessors and Officers

The role of our Quality Assurance team is to support the Hotline. The team of five are intentionally managed by a different Director from the rest of the Hotline, and they ensure the work of the Hotline is held to the highest standards. They check the accuracy and consistency of assessments and track trends to ensure the IWF remains a trusted and world-leading organisation.

 

IWF assessment

Our work brings together action on publicly reported content, proactive detection of material hosted online, the assessment of imagery and the creation of hash values. We also process reports submitted through our child reporting tools, ensuring that children and young people can request the removal of intimate images of themselves. In addition, we assess images as part of our partnership with the UK Home Office’s Child Abuse Image Database (CAID), a secure national repository of images and videos of child sexual abuse material collected by UK police forces and the National Crime Agency. This database plays a critical role in supporting investigations and securing convictions against offenders who create, access or distribute this material.

In 2025, we assessed more than 600,000 images and videos from CAID to determine whether they met the criminal threshold. A significant proportion were assessed as non-criminal: in many cases, the imagery did not meet the criminal threshold under UK law, or it was not possible to determine with complete confidence that a child was depicted.

We assess child sexual abuse material according to the levels detailed in the Sentencing Council's Sexual Offences Definitive Guidelines. The Indecent Photographs of Children section (Page 34) outlines the different categories of child sexual abuse material.

Category A: Images involving penetrative sexual activity; images involving sexual activity with an animal; or sadism.
Category B: Images involving non-penetrative sexual activity.
Category C: Other indecent images not falling within Categories A or B.

Prohibited images are assessed under a separate legal framework (Section 62 of the Coroners and Justice Act 2009) from indecent images, which fall under the Protection of Children Act 1978 and Section 160 of the Criminal Justice Act 1988. Prohibited images are non-photographic images, including computer-generated images (CGI), cartoons, manga images and drawings.

 

Evolving our response to non-criminal child exploitation imagery

We continue to seek opportunities to expand our datasets and share insights with industry and partners. However, this must be carefully balanced against the welfare of our analysts and assessors. Our assessment work on both CAID-sourced and proactively found imagery prompted us to change how we respond to imagery that depicts child exploitation but does not explicitly depict child sexual abuse under UK law. As a result, we introduced an ‘exploitative’ category to better reflect and respond to this type of content.

Exploitative category

Exploitative content refers to any material, particularly images, that depicts or implies the sexualisation or victimisation of a minor, even where the content itself may fall short of the legal threshold for criminality.

This includes, but is not limited to:

  • Borderline criminal sexualised depictions of a child
    Content that does not meet the threshold for illegality under UK law, but is still sexualised in its nature or intent.
  • Images linked to known exploitation
    Lawful images of a child that, in the appropriate context, are linked to imagery of known or suspected sexual exploitation of the same child. These links can be determined via victim identification, distribution patterns and metadata.
  • Images believed to depict a child but where age confirmation is difficult
    Content where there is high confidence that a child is depicted, but where this cannot be confirmed with complete certainty without independent verification of age.

We classify these areas of content as exploitative because such content contributes to, or risks contributing to, the sexual exploitation or continued harm of a child, even where a single image cannot be graded as criminal.

 

In October 2025, we began the internal process of grading exploitative images, which included reassessing a number of images that had previously been assessed as lawful.

In just over two months, our team assessed more than 72,000 images as belonging to the exploitative category.

 

Exploitative breakdown (October to December 2025)

  • Borderline indecent
  • Known or confirmed victims
  • Age in question
  • Borderline indecent & age in question

While this category was initially developed for internal use, we intend to expand its application and share this intelligence with industry and partners. Deployed through our membership services, industry blocking of exploitative imagery could strengthen protections for victims and potentially support earlier intervention before criminal content is uploaded. By sharing this insight more broadly, we can:

  • Support online platforms to improve detection systems, strengthen safety-by-design measures, and respond more quickly to emerging risks.

  • Inform government and policymakers to help shape proportionate regulation, guidance and preventative strategies.

  • Enable civil society and child protection organisations to better understand emerging harm patterns and tailor prevention and support services.

 

Key terminology

Assessed

The term ‘assessed’ means an analyst has reviewed a report, which may contain URLs, images or videos, or methods for accessing imagery, to determine whether or not it contains or links to criminal child sexual abuse content.

Actioned

We use the term ‘actioned’ to indicate a report which has been assessed and found to contain child sexual abuse material, and for which we took active steps to remove this material from the internet.

‘Self-generated’

‘Self-generated’ images and videos are those where a child or children can be seen alone, with no perpetrator physically present with them at the time the imagery was captured, though a perpetrator may be digitally present. These children are most often groomed, deceived or extorted into producing and sharing sexual imagery of themselves. Sometimes children are completely unaware that they are being recorded and that an image or video of them is being watched and shared by abusers.

 

Illustration of a girl on a bed taking a photo of herself, demonstrating what 'self-generated' imagery is.

 

We regard ‘self-generated’ child sexual abuse as an inadequate and potentially misleading term: it does not encompass the full range of factors often present within this imagery, and it appears to place the blame with the victims themselves. Children are not responsible for their own sexual abuse. Until a better term is found, however, we will continue to use ‘self-generated’ because it is well recognised within the online safety and law enforcement sectors.