A deeply troubling trend is emerging, as experts warn of an era marked by the proliferation of ultrarealistic AI-generated child sexual abuse images. Offenders are exploiting downloadable open-source generative AI models to produce these images, with devastating consequences. They are also sharing datasets of abusive content to customise AI models, and even monetising their output through monthly subscriptions to AI-generated child sexual abuse material (CSAM).
“Some of it is getting so good that it’s tricky for an analyst to discern whether or not it is in fact AI-generated,” says Lloyd Richardson from the Canadian Centre for Child Protection. The Internet Watch Foundation (IWF), a UK-based nonprofit dedicated to identifying and removing abuse content from the web, has released a comprehensive report shedding light on this issue. Their investigation into a dark web CSAM forum offers a glimpse into the extent of AI’s misuse, revealing nearly 3,000 AI-generated images that are deemed illegal under UK law.
The content generated by AI includes graphic scenes such as the rape of babies and toddlers, the abuse of famous preteen children, and BDSM content involving teenagers. It even extends to fake child abuse material featuring celebrities, with some instances de-aging these public figures to appear as children or portraying them as abusers.
While reports of AI-generated CSAM are still outnumbered by actual abuse images and videos found online, experts are deeply concerned about the rapid advancement of this technology and the potential it holds for creating new forms of abusive content. These findings align with the observations of other organisations investigating the spread of CSAM on the internet. In a shared database, investigators from around the world have identified 13,500 AI-generated images of child sexual abuse and exploitation, though this is believed to be just the tip of the iceberg.
The current generation of AI image generators is capable of producing remarkably realistic art, photographs, and designs, offering a new dimension of creativity. However, these systems, trained on vast volumes of images often collected from the web without permission, can also create disturbing content based on text prompts. Offenders have predictably adopted these image-generation tools to create CSAM, often using openly available software. One model, Stable Diffusion, developed by UK-based firm Stability AI, has been referenced by offenders, although the company has since made efforts to restrict the creation of CSAM and nude images.
Criminals are known to use older versions of AI models and fine-tune them to create illegal content featuring children. This involves feeding a model existing abuse images or photographs of people's faces, allowing it to generate images of specific individuals. Offenders exchange these new images of existing victims and request images of specific individuals on dark web forums, some of which share sets of victims' faces for AI and deepfake purposes.
Determining the scale of this problem is challenging. In one dark web CSAM forum focused on “softcore imagery” and images of girls, a newer AI section saw the posting of 20,254 AI-generated photos within a single month. A team of 12 analysts at the IWF spent over 87 hours assessing 11,108 of these images, ultimately identifying 2,978 as criminal. Most of these images were realistic enough to be treated as non-AI CSAM, with many falling under Category C, indicating indecent content, and some depicting the most severe forms of abuse. The victims in these images were predominantly female children aged between 7 and 13 years old.
The ease with which such images can be created and disseminated makes this trend a cause for alarm. The IWF report also notes a growing number of creators of abusive content offering image-creation services, including tailored images and subscription-based offerings.

