
### Congress’s Strategy to Combat AI Deepfakes: Taylor Swift and the No AI Fraud Act

How lawmakers hope to create a federal, baseline protection against AI abuse and uphold Americans’ rights to their own likeness and voice

Taylor Swift’s image was used without her consent in apparently AI-generated deepfake pornography that spread rapidly across the Internet last week.

The widespread dissemination of these photos brought attention back to a critical question: Should U.S. citizens receive federal protection against AI exploitation?

The potential harms of the surge in artificial intelligence technology have long been a concern for cyber civil rights organizations like the Cyber Civil Rights Initiative (CCRI), and they are now becoming increasingly unavoidable for both the public and policymakers.

Last week, numerous social media users encountered the manipulated, explicit images of Swift. The images sparked outrage among her extensive fan base, concern from the White House, and vocal apprehension about AI misuse from lawmakers such as Rep. Joe Morelle, D-N.Y., who is pushing legislation that would criminalize the nonconsensual sharing of digitally altered explicit content, with penalties including jail time and fines.

One post on X sharing screenshots of the fabricated images of Swift reportedly garnered over 47 million views before the account was suspended on Thursday, according to a New York Times report. X subsequently suspended multiple accounts sharing the explicit content and took a “temporary action” to block all searches related to Swift on the platform.

Joe Benarroch, head of business operations at X, stated in a message to the BBC on Sunday, “This is a temporary action and done with an abundance of caution as we prioritize safety on this issue.”

According to X, users who attempt to search for Swift on the platform see a message stating, “Something went wrong. Try reloading.”

Despite X’s efforts to curb the rapid spread of the images on its platform, they have surfaced on other social media platforms and online forums, where attempts to delete and restrict them have continued.

“We are alarmed by the reports of the … circulation of images that you just laid out – of false images to be more exact, and it is alarming,” White House Press Secretary Karine Jean-Pierre told ABC News on Friday.

A bipartisan group of U.S. House lawmakers, led by Rep. María Elvira Salazar, R-Fla., alongside Reps. Madeleine Dean, D-Pa., Nathaniel Moran, R-Texas, Joe Morelle, D-N.Y., and Rob Wittman, R-Va., introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act on Jan. 10.

The legislators aim to establish a federal foundation to combat AI exploitation and safeguard Americans’ First Amendment rights online.

The bill seeks to “create a federal framework to protect Americans’ individual right to their likeness and voice against AI-generated fakes and forgeries,” according to Salazar’s press release.

Salazar emphasized, “What happened to Taylor Swift is a clear example of AI abuse. My bill, the No AI FRAUD Act, will punish bad actors using generative AI to harm others — whether celebrities or not. Everyone should have control over their own image and voice, and my bill aims to protect that right.”

Over the past two years, AI technology has advanced, expanded, and become more user-friendly, moving from tools that required complex coding to accessible applications and websites like ChatGPT, which have fueled a burgeoning online industry.

In the midst of the AI boom, private entities are racing to develop the most user-friendly tools for individuals to create altered images, videos, text, and audio recordings of virtually anything they desire. When an individual creates an AI-generated impersonation of a person, it is commonly referred to as a “deepfake.”

Deepfake pornography of the kind that targeted Swift is frequently characterized as image-based sexual exploitation.

If enacted, the No AI FRAUD Act would establish a federal framework that would:

  • Confirm that everyone’s likeness and voice are safeguarded, granting individuals the authority to manage the use of their identifying characteristics.
  • Enable individuals to enforce this right against those who facilitate, produce, and distribute AI frauds without consent.
  • Strike a balance between rights and First Amendment protections to preserve speech and innovation.

“My thoughts are with Taylor Swift during this immensely distressing time. And my thoughts are with every other person who has been victimized by harmful AI deepfakes,” Rep. Dean expressed in a statement to ABC News. “If this deeply disturbing privacy violation could happen to Taylor Swift — TIME’s 2023 Person of the Year — it is unimaginable to think how helpless other vulnerable women and children must also feel.”

Rep. Dean continued, “At a time of rapidly evolving AI, it is critical that Congress creates protections against harmful AI. My and Rep. Maria Salazar’s No AI FRAUD Act is intended to target the MOST harmful kinds of AI deepfakes by giving victims like Taylor Swift a chance to fight back in civil court.”

Rep. Morelle expressed hope that the violation against Swift would catalyze passage of the No AI FRAUD Act.

“We’re certainly hopeful the Taylor Swift news will help spark momentum and grow support for our bill, which as you know, would address her exact situation with both criminal and civil penalties,” a spokesperson for Morelle informed ABC News.

Since 2019, 17 states have passed 29 bills focused on regulating the creation, development, and use of artificial intelligence, according to the Council of State Governments. However, not all of these laws include provisions that specifically address pornographic deepfakes, and their varying language and distinctions leave room for exploitation, Salazar’s press release notes.

Swift’s case illustrates the patchwork: the singer maintains residences in Tennessee, New York, Rhode Island, and California.

Tennessee currently lacks a law explicitly prohibiting deepfake pornography. Nonetheless, Gov. Bill Lee proposed a bill this month – the Ensuring Likeness Voice and Image Security (ELVIS) Act – which seeks to amend the state’s Protection of Personal Rights law to encompass AI protection.

New York State provides criminal and civil recourse for victims of deepfake exploitation. In 2023, the state prohibited the dissemination of pornographic images generated using AI without the subject’s consent. Violators in New York could face a $1,000 fine and up to a year in jail.

“Since successfully expanding the right of publicity under New York State law I have been working to highlight the dangers of Artificial Intelligence and ensure we are taking steps to protect a person’s likeness,” Morelle stated in the bill’s press release. “Now it is apparent we must take immediate action to stop the abuse of AI technology by providing a federal law to empower individuals being victimized, and end AI FRAUD. I’m grateful to my colleagues for supporting this bipartisan effort and look forward to our work together stopping AI fakes and forgeries.”

Rhode Island presently lacks laws specifically addressing deepfake or synthetic media.

In 2020, California enacted a law enabling victims of nonconsensual deepfake pornography to sue the creators and distributors for up to $150,000 if the deepfake was “committed with malice.”

Taylor Swift has not publicly addressed the AI-generated deepfake images.
