As technology progresses, a single word added to state legislation could prove crucial in safeguarding individuals victimized by manipulated sexually explicit images and videos.
Sen. Karen Kwan, D-Murray, introduced SB66, a bill that would revise the definition of counterfeit intimate imagery by adding the word “generated.” During a public hearing on Wednesday, Kwan said the addition is meant to close a potential loophole created by the proliferation of new AI technologies.
Artificial intelligence has changed significantly since Kwan last worked on counterfeit-images legislation a few years ago. At the time, she recalled, such images were known as “deepfakes,” and tools like ChatGPT had not yet emerged.
If the bill passes, counterfeit intimate imagery would be defined as “any visual representation, such as a photograph, film, video, recording, or computer-generated image, that portrays an identifiable individual in sexually explicit manners as outlined by the law, regardless of the method of creation or alteration.”
Kwan’s bill was approved unanimously by the Senate Judiciary, Law Enforcement, and Criminal Justice Committee on Wednesday.
The bill comes amid mounting concern over whether state and federal statutes adequately address sexually explicit content produced with artificial intelligence.
Case Study in New York
In one notable New York case, Patrick Carey was sentenced to six months in jail and 10 years of probation after pleading guilty in a deepfake-related scheme. The charges against him included promoting a sexual performance by a child, aggravated harassment as a hate crime, and stalking, NBC New York reported.
Nassau County District Attorney Anne Donnelly said Carey targeted women by taking images from their social media profiles and using deepfake technology to create and post pornographic material online.
Donnelly said the discovery of an explicit image involving a minor was pivotal to Carey’s sentence. She also pointed to the inadequacy of existing New York state law in protecting victims of deepfake pornography, emphasizing the urgent need for legislative action.
Tori Rousay, a corporate advocacy program manager and analyst at the National Center on Sexual Exploitation, echoed concerns about the limited legal recourse available to victims of deepfake pornography, noting the absence of comprehensive federal legislation addressing image-based sexual abuse and revenge pornography.
While several states have laws addressing deepfakes and AI-generated explicit content, Rousay noted that those laws define the terms inconsistently and that proving malicious intent is difficult, particularly because such content is often created anonymously.
Rousay also pointed to the rapid spread of tools capable of producing sexually explicit material and expressed hope for policies that restrict their dissemination in order to limit misuse.
While studying at Harvard, Rousay spoke with victims of deepfake pornography who described the emotional toll of having their images digitally altered without their consent.
Chris McKenna, founder of Protect Young Eyes, warned that advancements in AI technology are outpacing the adaptability of current laws and regulations, and stressed the urgent need for legislative updates to address the growing threat of deepfake pornography.
Victims of image-based sexual abuse have described lasting trauma and relentless online harassment.
In one account documented by Rousay, a victim vividly described the distress and helplessness of discovering that explicit images had been shared without consent, and the enduring impact of that violation.
Such accounts underscore the urgent need for comprehensive legal frameworks and technological safeguards to combat the spread of deepfake pornography and protect individuals from irreparable harm.