Experts report alarming rise in AI-generated child pornography content



Summary

The Internet Watch Foundation (IWF), a nonprofit organization focused on removing child sexual abuse material (CSAM) from the Internet, reports a rise in AI-generated CSAM.

In one month, IWF analysts found 20,254 AI-generated images on a single CSAM forum on the dark web. Of these, 11,108 were considered potentially criminal and were analyzed by 12 IWF specialists, who spent a total of 87.5 hours on them.

The criminal images were assessed as violating one of two UK laws: the Protection of Children Act 1978 or the Coroners and Justice Act 2009. A total of 2,562 images were classified as criminal pseudo-images and 416 images were classified as criminal prohibited images.

The IWF says this content can be created using unrestricted, open-source text-to-image systems, where typing a description generates realistic images that are virtually indistinguishable from real photographs.

A niche that could grow quickly

The report also highlights other findings, including that AI-generated content is currently a small part of the IWF’s work, but has the potential for rapid growth.

AI-generated CSAM is also becoming more realistic, the report says, posing challenges for the IWF and law enforcement. There is also evidence that AI-generated CSAM contributes to the re-victimization of known abuse victims and of celebrity children. In addition, AI-generated CSAM provides another avenue for offenders to profit from child abuse.

The IWF outlined a series of recommendations for governments, law enforcement, and technology companies to address the growing problem of AI-generated CSAM.

These include promoting international coordination on content handling, reviewing online content removal laws, updating police training to cover AI-generated CSAM, regulatory oversight of AI models, and ensuring that companies developing and deploying generative AI and large language models (LLMs) include prohibitions on the generation of CSAM in their terms of service.

Ultimately, the growth of AI-generated CSAM poses a significant threat to the IWF’s mission to remove child sexual abuse material from the Internet, the organization said. As the technology advances, the images will become more realistic, and child abuse could increase.
