Mixing new technology with new legislation is a delicate dance, especially when the technology involves communication. Lawmakers routinely introduce bills that would sweep in wide swaths of speech protected by the First Amendment. That trend has been obvious with social media, and it is now extending to artificial intelligence. A prime example is the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act. Framed as safeguarding "Americans' individual right to their likeness and voice," the legislation would restrict a broad range of content, from parody videos and comedic impressions to political cartoons and beyond.
The bill's sponsors, Representatives María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Pa.), warn of "AI-generated fakes and forgeries" in a press release. Their stated goal is to shield individuals from unauthorized use of their images and voices by classifying those attributes as each person's intellectual property.
While the instances cited in the No AI Fraud Act specifically revolve around AI-generated scenarios, the actual scope of the bill is far-reaching, encompassing a wide array of “digital depictions” or “digital voice replicas.”
Salazar and Dean assert that the legislation balances individuals' "right to control the use of their identifying characteristics" against "First Amendment protections to safeguard speech and innovation." But despite that nod to free speech rights, the bill expands the range of speech subject to restriction. That expansion invites more lawsuits against creators and platforms exercising their First Amendment rights, potentially chilling certain forms of comedy, commentary, and artistic expression.
Extensive Scope of the No AI Fraud Act
Fundamentally, the No AI Fraud Act creates a right to sue people who use your likeness or voice without consent. It declares that every individual has a property right in his or her own likeness and voice, and it permits the use of someone's "digital depiction or digital voice replica" in a manner affecting interstate or foreign commerce only with that person's written agreement. To be valid, the agreement must have been executed with representation by legal counsel or be governed by the terms of a collective bargaining agreement. Absent these conditions, the person whose voice or likeness was used can sue for damages.
Although the reference to “interstate or foreign commerce” may seem limiting, virtually any online activity can be construed as falling within the purview of interstate or foreign commerce.
The bill's definitions cover the voices and depictions of all individuals, living or dead. A "digital depiction" is any replication, imitation, or approximation of an individual's likeness that is created or altered using digital technology, and "likeness" means any actual or simulated image identifiable as the individual, regardless of how it was made. A "digital voice replica" covers audio renderings created or modified using digital technology that replicate, imitate, or approximate an individual's voice. These definitions reach far beyond AI-generated fake endorsements or songs.
The breadth of these definitions is such that they could encompass reenactments in true-crime shows, parody TikTok profiles, or portrayals of historical figures in films. They could also encompass sketch-comedy routines, political cartoons, or internet memes.
Moreover, the bill does not require an intent to deceive. Simply telling audiences that a depiction was unauthorized, or that the person depicted played no part in creating it, is no defense against legal action.
First Amendment Implications
In light of the evident First Amendment implications, the legislators have included a provision stating that First Amendment protections can serve as a defense against alleged violations. However, this provision offers limited reassurance, given the concurrent efforts to broaden the categories of speech not safeguarded by the First Amendment.
Presently, intellectual property, including copyrighted works and trade secrets, falls under exceptions to free speech protections, allowing for permissible restrictions. By designating one’s voice and likeness as intellectual property, the legislators seek to classify depictions of someone else’s voice or likeness as unprotected speech.
Even within the realm of intellectual property, voice replicas and digital depictions of others may not always be prohibited. Analogous to the fair use doctrine that provides flexibility with copyright protections, this bill delineates circumstances under which replicas and depictions would be deemed acceptable, weighing factors such as the public interest in access against the intellectual property interest in the voice or likeness.
However, even if defendants ultimately prevail on First Amendment grounds, the prospect of legal battles involving time and resources remains daunting. This could deter individuals from engaging in protected speech, including artistic endeavors, comedic expressions, or critical commentary. The potential repercussions extend to tech companies, which might opt to proactively remove contentious content from their platforms to mitigate legal risks.
Should the bill be enacted, expect a surge in takedowns of content that could conceivably violate its provisions. That could include satirical portrayals from shows like Saturday Night Live, impersonations of public figures, or AI-generated images that skirt the line of acceptability. Platforms may adopt strict policies against parody accounts and similar content.
While the bill offers immunity to imitators if the harm caused is deemed negligible, defining harm to encompass emotional distress introduces subjectivity into the evaluation. Certain categories of content, such as sexually explicit material and intimate images, are automatically deemed harmful, leaving no room for debate on the absence of harm to the depicted party.
Although the supporters may argue that these provisions target specific issues like deepfake pornography, the language employed is expansive enough to potentially encompass a wide spectrum of content, including artistic expressions, political satire, and comedic depictions of intimate scenarios.
The emergence of AI presents novel avenues for creative expression and deception, prompting societal discourse on regulatory frameworks. However, it is imperative to prevent lawmakers from leveraging these developments to justify unwarranted constraints on free speech rights.