Search Results

38 results
  1. AI becoming ‘child sexual abuse machine’ adding to ‘dangerous’ record levels of online abuse, IWF warns

  2. How AI is being abused to create child sexual abuse imagery

    Explore the IWF 2026 AI CSAM Report. Discover why AI-generated child abuse videos increased by 26,385% in 2025 and the emerging risks of agentic AI and LoRAs.

  3. New EU-funded safety tech will help reduce viewing and demand for child sexual abuse images and videos

    A unique safety tech tool which uses machine learning in real-time to detect child sexual abuse images and videos is to be developed by a collaboration of EU and UK experts.

  4. ‘Worst nightmares’ come true as predators are able to make thousands of new AI images of real child victims

    AI-Generated Child Sexual Abuse Imagery Threatens to “Overwhelm” Internet

  5. White House roundtable is 'important moment' in recognising threat of AI child sexual abuse imagery

    AI-generated child sexual abuse is on the agenda at the White House as Internet Watch Foundation CEO Susie Hargreaves flies to Washington to discuss how to address the rising threat.

  6. Full feature-length AI films of child sexual abuse will be ‘inevitable’ as synthetic videos make ‘huge leaps’ in sophistication in a year

    AI-generated child sexual abuse videos have surged 400% in 2025, with experts warning of increasingly realistic, extreme content and the urgent need for regulation to prevent full-length synthetic abuse films.

  7. ‘Disturbing’ AI-generated child sexual abuse images found on hidden chatbot website that simulates indecent fantasies

  8. AI chatbots and child sexual abuse: a wake-up call for urgent safeguards

    IWF analysts uncover platform hosting chatbot “characters” designed to let users simulate sexual scenarios with child avatars.

  9. AI must be a force for good and not a threat to children

    The capacity for horrific images of AI-generated child sexual abuse to be reproduced at scale was underlined by IWF in the lead-up to the UK government’s AI Safety Summit.

  10. What did we learn from the US Senate hearing over online harms?

    Wednesday’s hearing brings into sharp focus the problems that organisations like ours, the Internet Watch Foundation, are dealing with every day.

  11. Stability AI joins IWF’s mission to make internet a safer space for children

    The world’s leading independent open source generative AI company, Stability AI, has partnered with the Internet Watch Foundation to tackle the creation of AI-generated child sexual abuse imagery online.

  12. AI giving offenders ‘DIY child sexual abuse’ tool, as dozens of child victims used in AI models, IWF warns MPs

    The IWF has welcomed upcoming new legislation while giving evidence in Parliament this week.