The “worst nightmares” about AI-generated child sexual abuse images are coming true and threaten to overwhelm the internet, a safety watchdog has warned.
The Internet Watch Foundation (IWF) said it had found nearly 3,000 AI-made abuse images that broke UK law.
The UK-based organisation said existing images of real-life abuse victims were being built into AI models, which then produced new depictions of them.
It added that the technology was also being used to create images of celebrities who had been “de-aged” and then depicted as children in sexual abuse scenarios. Other examples of child sexual abuse material (CSAM) included using AI tools to “nudify” pictures of clothed children found online.