Ahead of the upcoming general election, several states are moving to regulate AI-generated "deepfakes," digitally altered videos or images used in campaign materials.
Rapidly advancing generative AI technologies, including voice-cloning software and image generators, have become an increasingly prominent feature of election cycles in the U.S. and abroad.
The 2024 presidential campaign has already seen a wave of AI-generated audio and visuals in campaign advertisements, along with experiments using AI chatbots to engage voters.
This week, Wisconsin joined 20 other states that have proposed or enacted laws requiring election campaigns to disclose the use of AI-generated content in their advertisements.
State Assembly members unanimously passed two bipartisan bills addressing the use of AI in election campaigns by voice vote.
The first bill, AB 664, requires campaign audio and video communications containing AI-generated material to carry the disclaimer "Contains content generated by AI," with penalties of up to $1,000 per violation.
Representative Clinton Anderson, a key proponent of the bill, emphasized the importance of transparency by stating, “We want voters to know that what you see is what you get.”
During the vote, Representative Adam Neylon said, "With artificial intelligence, it's getting harder and harder to know what is true."
The second bill, AB 1068, requires Wisconsin state agencies to audit their use of AI tools and assess their effectiveness. The audits will cover inventories of AI tools, summaries of guidelines, privacy policies, and how data is used. Agencies must also report to the legislature in 2026 on state workforce positions that could be streamlined with AI, with the goal of using the technology to make operations more efficient by 2030.
Prior to the vote, Republican Representative Nate Gustafson rejected claims that the legislation is aimed at replacing state employees' jobs, calling such assertions "flat out false."
An illustration of artificial intelligence displayed on a laptop surrounded by books, July 18, 2023. (Jaap Arriens/NurPhoto via Getty Images)
States home to major AI technology hubs, such as California and New York, have seen a surge in AI regulation bills, according to Axios.
The Federal Communications Commission also recently banned robocalls that use AI-generated voices, including those that imitate political figures to deceive voters. The ruling, effective immediately, also outlaws the voice-cloning technology used in common robocall scams targeting consumers.
The FCC's decision came shortly after New Hampshire Attorney General John Formella announced that deceptive robocalls featuring an AI-generated replica of President Biden's voice had been traced to two Texas-based companies. The calls urged recipients to skip the state's January 23 primary and save their votes for the November election.
James Turgal, vice president of cyber risk at Optiv, compared the threat AI poses to elections with the cyber threats individuals and organizations face every day, and said the U.S. government and the private sector must work together to counter those risks effectively.
Fox News reporters Daniel Wallace and Nikolas Lanum contributed to this coverage.
Jamie Joseph is a political writer for Fox News Digital covering the Senate.