Derek Ray-Hill, Interim Chief Executive at the IWF, said: “Young people are facing rising threats online where they risk sexual exploitation, and where images and videos of that exploitation can spread like wildfire. New threats like AI and sexually coerced extortion are only making things more dangerous.
“Many well-intentioned and responsible platforms do not have the resources to protect their sites against people who deliberately upload child sexual abuse material. That is why we have taken the initiative to help these operators create safer online spaces by providing a free-of-charge hash-checking service that will identify known criminal content.
“This is a major moment in online safety, and anyone with an online platform where users can upload content now has the chance to join the fight and help deliver the aspirations of the Online Safety Act. Together we can present a stone wall to those looking to spread child sexual abuse imagery.”
Technology Secretary Peter Kyle said Image Intercept is a “powerful” example of how new tech innovations can start to turn the tide on child sexual abuse imagery online.
He said: “Visiting the IWF earlier this year was one of the most shocking and moving days I have experienced as Technology Secretary. I saw firsthand the scale and sinister methods criminals are using to prey on young people, often beginning in what should be the safest place for a child – their bedroom.
“The rise in cases of sextortion, child sexual abuse material, and most recently AI-generated abuse material shows how threats to young people online are constantly evolving. But during my visit I also saw the extraordinary dedication of the IWF teams working daily to protect children from further harm and meet these new threats.
“Their new Image Intercept tool is a powerful example of how innovation can be part of the solution in making online spaces safer for children, a goal which – working with the IWF and other partners – this government is committed to delivering.”
In 2024, the IWF confirmed a record-breaking 291,273 reports of child sexual abuse imagery.
Today’s Data and Insights report reveals 97% of the reports in which victims’ sex was recorded* (or 278,492 reports) showed the sexual abuse of girls only – an increase of 14,246 since 2023.
The report also shows a rise in the number of older teenagers appearing in abuse imagery in 2024. Reports** where 14-to-15-year-olds were recorded as the youngest child seen have risen 35%, with 5,457 recorded in 2024 compared to 4,056 in 2023.
The number of reports** where 16-to-17-year-olds were the youngest child seen has risen 67% from 1,202 reports in 2023 to 2,010 in 2024.
This age group can be challenging to identify in imagery but, through the IWF’s active work with UK law enforcement, images can be cross-referenced with the Government’s CAID (Child Abuse Image Database) or with reports received via Report Remove. This means IWF analysts are able to determine the precise age of victims and correctly identify criminal imagery.
This is important for older teens, whose imagery is most at risk of being mistaken for imagery of adults and is not always picked up by moderators on platforms.
Now, as Ofcom begins to enforce aspects of the Online Safety Act requiring companies to tackle illegal harms, including child sexual abuse imagery, the UK-based IWF is encouraging companies to use Image Intercept to protect their platforms.
Megan Hinton was 14 when sexual images and videos of her were shared online. She said the knowledge that this imagery may still be shared and viewed by strangers affects her, and she lives in fear.
She said today’s move from the IWF will ‘instil hope’ in thousands of victims and survivors across the UK.
Megan, who is now a Victim and Survivor Advocate for the child protection charity the Marie Collins Foundation***, said: “The impact of child sexual abuse material cannot be overstated. The images and videos captured as part of my abuse at age 14 remain an omnipresent fear in my life, and I am revictimised every time these are viewed.
“As a survivor I live in fear of being recognised by those who may have seen my images, and this very real anxiety is only increasing with technological advancements in AI now providing offenders with further opportunities to manipulate the imagery.
“Today’s announcement will instil hope in thousands of victims and survivors across the UK. To know that tools exist which proactively identify, block and prevent the spread of child sexual abuse material is a powerful remedy to alleviate the pervasive anxiety that survivors live with.
“The ground-breaking decision to extend IWF’s protection, free of charge, to smaller platforms demonstrates to survivors that they are not alone and that there is unwavering and steadfast commitment to eradicate child sexual abuse material online.”
Jess Phillips, Minister for Safeguarding and Violence against Women and Girls, said the Government is committed to improving safety, and said Image Intercept will be “vital” in protecting children online.
She said: “The IWF’s latest findings are deeply disturbing, and show how urgent it is for us to tackle the growth of online child sexual abuse and intensify our efforts to protect children from these heinous crimes.
“I want to thank the IWF for all the work they do to shine a light on this issue. Their ‘Image Intercept’ initiative, funded by the Home Office, will be vital in helping to stop the re-victimisation of children who have been exploited online.
“But we must also hold technology platforms accountable. If they are to be safe places for our children, they must invest in technologies that block this harmful content and stop predators being able to access and groom children online.
“This government is going further by introducing new measures so anyone who possesses an AI tool designed to create illicit images or owns manuals teaching them how to do so will rightly face time behind bars. We will continue to support the robust implementation of the Online Safety Act and will not hesitate to go further if necessary to keep our children safe.”
As well as having powers to proactively seek out child sexual abuse imagery, the IWF’s expert taskforce also hashes imagery reported by the public and police, as well as from CAID, to which it has privileged access.
Children and young people also report sexual imagery of themselves to the IWF via the dedicated Report Remove system in the UK, and Meri Trustline in India.
Today’s IWF report reveals that almost a third of the images and videos the IWF hashed in 2024 (210,572, or 29%) included Category A child sexual abuse – the most extreme imagery, which can depict rape, sadism, or even bestiality.
By using the IWF’s new hash-checking tool, businesses and platforms can make sure none of the images or videos on its Hash List can ever be shared again on their sites.
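Conceptually, a hash-checking service like this works by comparing a digest of each uploaded file against a list of digests of known criminal imagery. The sketch below is illustrative only, and is not the IWF's implementation: it uses a plain SHA-256 cryptographic digest, whereas production hash lists typically also include perceptual hashes so that re-encoded or slightly altered copies still match; all function and variable names here are hypothetical.

```python
import hashlib


def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()


def should_block_upload(data: bytes, known_hashes: set[str]) -> bool:
    """Return True if the upload's digest appears on the known-hash list.

    An exact cryptographic hash only catches byte-identical copies; real
    services pair it with perceptual hashing to catch altered copies.
    """
    return sha256_digest(data) in known_hashes


# Hypothetical hash list, seeded here from a placeholder byte string.
known_hashes = {sha256_digest(b"sample of known material")}

print(should_block_upload(b"sample of known material", known_hashes))  # True
print(should_block_upload(b"unrelated upload", known_hashes))          # False
```

Because membership in a Python `set` is an average O(1) lookup, this check adds negligible cost per upload even against a list of hundreds of thousands of hashes.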