Over the years, the IWF’s work has grown. We’ve gathered more data, we’ve identified new ways in which both children and technology are being abused, and we’ve launched new services for tech companies.
The IWF Annual Report has grown with it. So, this year, we’ve taken a step back and refocused the report. Rather than talking about everything that we do, inside and outside of the Hotline, we’ve tried simply to answer the question: “How do we work towards our mission of eliminating online child sexual abuse imagery?” You can see this reflected in the report’s menu.
This year we have also been able to provide more detailed analysis focusing on different areas of our workflow: image, video and Multichild analysis are new areas of focus for 2024.
We have created six dataset tags to clearly identify which data and which part of our workflow is being referred to throughout all sections of our report.
You can read about these in detail on our Methodology and datasets page further on in this section.
We’ve sought guidance from an external expert, Dr Jeffrey Demarco, who has acted as a critical friend to ensure that what we are saying is clearly evidenced by the data.
We’ve additionally included anecdotal information from our analysts who work day-in, day-out, finding, assessing and removing online child sexual abuse imagery. Our aim is to make these insights useful for others who also tackle online child sexual abuse material.
"I was pleased to be invited to review the Data and Insights included in this year’s IWF Annual Data & Insights Report. I worked closely with the team to ensure the commentary accurately reflected the data, and I am confident that the report offers a robust and transparent account of both the findings and the methodology used behind them.
This output plays a vital role in informing the wider online child safety ecosystem. It not only sheds light on emerging trends and persistent threats but also sets a high standard for transparency and accountability. By making this data visible and accessible, the IWF supports a more informed and coordinated response across sectors.
Reports like this are key in helping organisations, policymakers and the public stay vigilant and proactive in efforts to prevent and disrupt the sexual abuse and exploitation of children online."
Dr Jeffrey Demarco
In this report, we have significantly reduced the data that we have published around ‘self-generated’ child sexual abuse imagery. For the past few years, we have been tracking the number of URLs that have included imagery of a ‘self-generated’ nature. We first published this in our 2018 Annual Report when just over a quarter (27%) of the webpages we assessed in the last six months of that year showed ‘self-generated’ content.
In 2024, almost all (91%) of the URLs we assessed included at least one image or video of a ‘self-generated’ nature.
Given the near-ubiquitous presence of this type of content on the URLs we assess, this is no longer a new trend, and highlighting it as such is less useful: it is now well established and interwoven with physical contact abuse.
Secondly, we can now provide more granular data on ‘self-generated’ child sexual abuse imagery drawn from our image and video datasets. We record ‘self-generated’ markers on an individual image and video basis, and further refinement of this data could enable additional insights at that level. It will therefore be included in a future report.
For more information regarding the term ‘self-generated’, please see the terminology section at the bottom of this page.
Assessing images and assessing videos are two distinct processes.
For individual images, we are able to record all child assessment data: the age and sex of each child seen, the total number of children seen in the image, and the overall severity assessment.
Videos, however, can vary in both length and complexity and may require repeated viewing to accurately record all information. One video could show one child or many different children. We also see compilation videos, where several clips or images have been collated into a single video. These compilations vary drastically in length and can contain as few as two, or sometimes many more, videos or images joined together.
Because of this complexity, when assessing videos we record only the highest severity of child sexual abuse seen, assessed according to UK law. This contributes to greater efficiency and accuracy and helps to protect the wellbeing of our image assessors.
The term ‘assessed’ means an analyst has taken time to review the report and determine whether it contains criminal content.
We use the term ‘actioned’ to indicate a report that was assessed and found to contain child sexual abuse material, and where we took active steps to remove this material from the internet.
The term ‘self-generated’ is used when our analysts determine that a child has produced images or videos of themselves. In some cases, children are groomed, deceived or coerced into producing and sharing a sexual image or video of themselves by someone who is not physically present in the room with them. Sometimes children are completely unaware they are being recorded and that an image or video of them is then being watched and shared by abusers.
Some children may generate intimate images or videos of themselves without coercion from an abuser but may later find their content has been shared or distributed without their consent.
We regard the term ‘self-generated’ as an inadequate and potentially misleading description which can fail to represent the differing levels of consent and coercion that can occur in the creation of this imagery. The term can also appear to place responsibility with the victim themselves. Children are not responsible for their own sexual abuse. Until a better term is found, however, we will continue to use ‘self-generated’ as, within the online safety and law enforcement sectors, it is well recognised.