According to a recent publication by the National Center for Missing and Exploited Children (NCMEC), the emergence of AI image generation technology has driven a surge in fabricated child sexual abuse images.
In 2023, the nonprofit, which serves as a clearinghouse for reports of suspected child sexual abuse material (CSAM) and child sexual exploitation, received a staggering 36.2 million reports through its “CyberTipline.” That figure was up from more than 32 million reports the preceding year and more than double the volume recorded in 2019, before the onset of the pandemic. Of the 36.2 million reports, approximately 5,000 were attributed to AI-generated content, though the actual number is believed to be considerably higher.
While AI-generated material remains a small share of overall CyberTipline submissions, Fallon McNulty, director of the CyberTipline at NCMEC, expressed concern about the escalating trend, emphasizing that such reports are likely to keep growing, particularly as companies improve their detection and reporting capabilities.
The use of generative AI to produce illicit sexual abuse imagery has risen sharply over the past year. Middle and high schools across the United States are contending with the proliferation of fake nude images of their students. Several prominent AI tools have been found to have been trained on datasets containing CSAM. Notably, a recent criminal case marked one of the first prosecutions involving AI-generated CSAM.
McNulty highlighted the growing collaboration between NCMEC and a burgeoning group of AI-focused companies to identify and flag AI-generated CSAM. Leading this initiative is OpenAI, maker of ChatGPT and the text-to-image engine DALL-E; companies such as Sensity and Anthropocene AI have also joined the effort. Their reports have shed light on the extent of AI-generated content, particularly the manipulation of images depicting children or of known CSAM. Although mainstream platforms are the primary source of tips on AI-generated CSAM, the material itself often originates from publicly available pretrained models or is created off-platform. Social media companies have occasionally supplied NCMEC with additional context, such as articles, comments, hashtags, or chat transcripts identifying the AI models used.
As AI technology continues to advance rapidly, there are growing concerns that the problem will escalate, further burdening law enforcement agencies already struggling to keep pace. Distinguishing AI-generated CSAM from authentic content is becoming increasingly difficult. And while major players in generative AI have committed to working with NCMEC, smaller platforms, including apps that enable photo manipulation, have yet to engage with the organization, McNulty noted.
The trend shows no signs of abating and is expected to intensify. In the first months of 2024, NCMEC recorded approximately 450 reports of AI-related CSAM every two weeks.