Technology companies can use our growing list of IntelliGrade hashes to stop the upload, sharing and storage of known child sexual abuse imagery on their platforms.
What you need to know about IntelliGrade, our powerful new tool helping companies and law enforcement bodies to fight back against online child sexual abuse images and videos.
IWF and NSPCC's Report Remove can support a young person in reporting sexual images shared online and enable them to get the image removed if it is illegal.
Professionals working with children and young people are being equipped with vital new guidance - developed by the IWF and National Crime Agency - to combat the growing threat of AI-generated child sexual abuse material.
The National Crime Agency estimates that between 550,000 and 850,000 people in the UK pose varying forms of sexual risk to children.
Download essential guides for professionals on understanding, identifying and responding to AI-generated Child Sexual Abuse Material (CSAM). Developed by IWF & NCA.
A new report from the IWF shows that the pace of AI development has not slowed, with offenders using better, faster and more accessible tools to generate new criminal images and videos.
We worked in partnership with Safe Online and the Lucy Faithfull Foundation to develop an innovative new chatbot to intervene and stop people looking at child sexual abuse imagery online before they’ve committed a crime.
Dan Sexton joined IWF in February 2021. He is responsible for Information Technology, Cybersecurity and Software Development.
Tech Secretary sees ‘heartbreaking’ scale of online child sexual abuse on IWF hotline visit as ‘transformational’ online safety rules come into effect
A chilling excerpt from a new IWF report that delves into what analysts at the child protection charity currently see regarding synthetic or AI-generated imagery of child sexual abuse.