“What a bunch of nonsense.”
Upon picking up the phone, Gail Huntley immediately recognized the distinctive gravelly voice of Joe Biden. Huntley, a 73-year-old resident of New Hampshire, had planned to support the president in the state’s upcoming primary. However, she was surprised to receive a pre-recorded message from him advising her against voting this Tuesday.
In the message, Biden stated, “It’s essential that you save your vote for the November election. Voting this Tuesday only benefits the Republicans seeking to re-elect Donald Trump.”
Huntley quickly realized the call was a hoax, but she assumed at first that a genuine recording of Biden had simply been taken out of context. To her astonishment, she later learned that the voice had been generated by artificial intelligence. In the aftermath, US regulators moved swiftly to ban robocalls featuring AI-generated voices.
The spread of the Biden deepfake marked a pivotal moment for governments, technology companies, and civil society organizations locked in a contentious debate over how to police an information landscape in which anyone can create realistic images of political candidates or mimic their voices with uncanny accuracy.
With elections approaching in 2024 in countries including the US, India, and potentially the UK, experts warn that the democratic process faces a serious threat from AI-driven manipulation.
AI-generated content has already surfaced in elections in countries such as Slovakia, Taiwan, and Indonesia, against a backdrop of declining trust in political leaders, institutions, and the media.
Watchdog organizations warn that digital media is especially vulnerable to exploitation, noting that the tech firms responsible for hosting and moderating such content have shed more than 40,000 jobs.
Upcoming Challenges
For Biden, concerns about the potential abuse of AI sharpened after he watched the latest Mission: Impossible film during a weekend retreat at Camp David. Seeing Tom Cruise's character confront a rogue AI left the president uneasy about the technology's dangers.
Subsequently, Biden issued an executive order mandating prominent AI developers to reveal safety test results and other relevant information to the government.
While the US has taken proactive steps, the EU is close to finalizing comprehensive AI regulations, though they will not take full effect until 2026. The UK's proposed rules, by contrast, have been criticized as sluggish.
Considering that many pioneering tech companies are headquartered in the US, the actions taken by the White House will significantly shape the advancement of disruptive AI technologies.
Katie Harbath, an expert who closely tracks the evolution of the information ecosystem, argues for measured concern in 2024. She highlights the crucial role tech companies will play in overseeing AI-generated content, particularly as new platforms enter their first election season.
Recently, major tech corporations agreed to voluntarily implement measures aimed at preventing AI from being exploited to disrupt democratic processes worldwide.
Navigating the AI Terrain
Despite company prohibitions on using AI tools for political campaigning, reports from Indonesia and elsewhere indicate the technology is already widely deployed in elections. Enforcing these policies beyond the US is difficult, given the diverse legal frameworks and cultural norms across nations.
The manipulation of audio recordings with AI, as seen in Slovakia's elections, exposes gaps in the regulation of AI-generated content. Such incidents raise questions about how effectively tech companies can monitor this material, particularly outside high-profile elections.
Looking ahead, experts caution that the most serious threats to democracy may come from advances in AI that cannot yet be foreseen. Voters are growing more discerning about their information environment, but the pace of technological change and the ingenuity of malicious actors demand continued vigilance.

As the debate over AI's impact on democracy continues, attention is shifting toward anticipating future challenges and proactively addressing risks that have not yet materialized. Preemptive measures to protect democratic processes against AI manipulation remain a critical concern.