Brightstar Ltd has become an Internet Watch Foundation Member to protect its business clients from exposure to online images and videos of child sexual abuse.
People trying to view sexual images of children online will trigger a first-of-its-kind chatbot, which has launched to help potential offenders stop their behaviour.
Safety tech firm Image Analyzer has become part of the Internet Watch Foundation’s (IWF) community of Members working to protect children online.
Hannah Swirsky, Head of Policy and Public Affairs at IWF, sets out why AI is an issue for anyone whose images appear online.
People revictimise abused children every time they view or share criminal material online. Children's voices need to be amplified, and their rights defended.
Professor Hany Farid speaks to IWF about Encryption Vs. Privacy as part of their new podcast series on child sexual abuse imagery online.
Children who are worried that nude pictures and videos may end up online will be able to report the material to help prevent it from being uploaded in the future.
Susie speaks to Aasmah Mir about the increase in self-generated child sexual abuse online among 7- to 10-year-olds.
Senior writer at WIRED, Matt Burgess, looks into Pornhub trialling a new automated tool that directs people searching for CSAM to seek help for their online behaviour.
Our podcast tells the story of online child sexual abuse through the words of victims, the people fighting it, police, tech companies and even perpetrators.
In conjunction with partners in the private and public sectors, we regularly run campaigns aimed at raising awareness and preventing child sexual abuse online.
Our campaign aims to help young people understand the harm of sharing explicit imagery online and encourage parents and educators to start conversations.