AI being used to generate deepfake child sex abuse images based on real victims, report finds | UK News
Artificial intelligence (AI) is being used to generate deepfake child sexual abuse images based on real victims, a report has found.
The tools used to create the images remain legal in the UK, the Internet Watch Foundation (IWF) said, even though AI child sexual abuse images are illegal.
It gave the example of one victim of child rape and torture, whose abuser uploaded images of her when she was between three and eight years old.
The non-profit organisation reported that Olivia, not her real name, was rescued by police in 2023 – but years later dark web users are using AI tools to computer-generate images of her in new abusive situations.
Offenders are compiling collections of images of named victims, such as Olivia, and using them to fine-tune AI models to create new material, the IWF said.
One model for generating new images of Olivia, who is now in her 20s, was available to download for free, it found.
A dark web user reportedly shared an anonymous webpage containing links to AI models for 128 different victims of child sexual abuse.
Other fine-tuned models can generate AI child sexual abuse material of celebrity children, the IWF said.
IWF analysts found 90% of the AI images were realistic enough to be assessed under the same law as real child sexual abuse material.
They also found AI images are becoming increasingly extreme.
‘Incredibly concerning but also preventable’
The IWF warned "hundreds of images can be spewed out at the click of a button" and some have a "near flawless, photo-realistic quality".
Its chief executive Susie Hargreaves said: “We will be watching closely to see how industry, regulators and government respond to the threat, to ensure that the suffering of Olivia, and children like her, is not exacerbated, reimagined and recreated using AI tools.”
Richard Collard of the NSPCC said: “The speed with which AI-generated child abuse is developing is incredibly concerning but is also preventable. Too many AI products are being developed and rolled out without even the most basic considerations for child safety, retraumatising child victims of abuse.
"It is crucial that child protection is a key pillar of any government legislation around AI safety. We must also demand tough action from tech companies now to stop AI abuse snowballing and ensure that children whose likenesses are being used are identified and supported."