The arrests and criminal charges appear to be the first of their kind over the sharing of AI-generated nude images.
By Amrita Khalid, a contributor to the audio industry newsletter Hot Pod. Khalid has more than a decade of experience covering technology, surveillance policy, consumer electronics, and online communities.
Two middle school students in Florida were arrested in December and charged with third-degree felonies for allegedly creating deepfake nude images of their classmates. According to police reports obtained by Wired, the boys, aged 13 and 14, allegedly used an unnamed “artificial intelligence application” to generate explicit images of classmates “between the ages of 12 and 13.” The incident may be the first in the US in which criminal charges have been brought over AI-generated nude images.
The pair was charged with third-degree felonies under a 2022 Florida law that criminalizes sharing sexually explicit deepfakes without the consent of the person depicted. Both the arrests and the charges appear to be unprecedented nationwide in a case involving the sharing of AI-generated nudes.
The incident first surfaced in local media reports after the students were suspended from Pinecrest Cove Academy in Miami, Florida, on December 6th and the case was referred to the Miami-Dade Police Department. According to Wired, the arrests took place on December 22nd.
Minors creating AI-generated nude and explicit images of their classmates has become a growing problem in schools across the country. Aside from the Florida case, however, no other similar incident resulting in arrests has been widely reported. There is no federal law specifically addressing nonconsensual deepfake nudity, leaving individual states to grapple with the impact of generative AI on issues like child sexual abuse material, nonconsensual deepfakes, and revenge porn.
President Joe Biden’s recent executive order on AI directs agencies to deliver a report on banning the use of generative AI to produce child sexual abuse material. Congress has yet to pass legislation on deepfake porn, though recent efforts suggest that may change: both the Senate and the House have introduced the DEFIANCE Act of 2024, a sign of bipartisan interest in addressing the issue.
Although most states have laws against revenge porn, only a handful have extended them to cover AI-generated sexually explicit images, and to varying degrees. Where legal protections are lacking, victims have turned to the courts: a New Jersey teenager, for instance, is suing a classmate for sharing fabricated AI-generated nude images of her.
The Los Angeles Times recently reported that the Beverly Hills Police Department is investigating students who allegedly shared images featuring real faces superimposed on AI-generated nude bodies. But because the state’s law on obscene material involving minors doesn’t explicitly address AI-generated imagery, the legal consequences remain unclear, according to the report.
In the wake of the scandal, the local school district decided to expel five students involved, the LA Times reported.