“What a bunch of malarkey.”
When Gail Huntley answered the phone, she immediately recognized the gravelly voice of Joe Biden. The 73-year-old New Hampshire resident was preparing to vote in the state’s upcoming primary, and was taken aback to hear a pre-recorded message from the president telling her not to.
The message said: “It’s important that you save your vote for the November election. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.”
Huntley realized the call was fake, though she initially assumed that Biden’s words had been taken out of context. She was shocked to learn that the message had in fact been generated by artificial intelligence. Within weeks, the US had banned robocalls that use AI-generated voices.
The Biden deepfake was a watershed moment for the governments, technology firms, and civil society groups wrestling with how to regulate an information environment in which anyone can cheaply produce lifelike images of political candidates or clone their voices with uncanny accuracy.
In 2024, voters in numerous countries, including the US, India, and most likely the UK, will go to the polls. Experts warn that artificial intelligence poses a serious threat to the democratic process, particularly in a climate of already low trust in politicians, institutions, and the media.
And with more than 40,000 jobs cut at the tech companies responsible for hosting and moderating digital content, watchdogs warn that digital media is especially vulnerable to exploitation.
Challenges Ahead
Biden’s own concerns about the potential misuse of AI reportedly deepened after he watched the latest Mission: Impossible film, in which Tom Cruise’s character confronts a rogue AI, during a weekend at Camp David. Shortly afterwards, the president signed an executive order requiring leading AI developers to share safety test results and other relevant information with the government.
The US is not acting alone. The EU is close to enacting comprehensive AI regulations, though they will not take full effect until 2026. In the UK, meanwhile, proposed rules have been criticized for their sluggish progress.
Because many of the most influential tech companies are based in the US, the White House’s actions will profoundly shape how disruptive AI technologies develop.
Katie Harbath, a former public policy director at Facebook who now works on trust and safety issues in the tech industry, argues that the US government’s measures may not go far enough. She points to the delicate balance between regulation and fostering innovation, particularly as China advances its own AI industry.
Collaborative Efforts
Major tech companies recently took a step towards coordinated action, voluntarily committing to adopt “reasonable precautions” to prevent AI from being used to disrupt democratic elections around the world. Signatories include OpenAI, the maker of ChatGPT, as well as Google, Adobe, and Microsoft, all of which have launched AI content-generation tools. Many of these companies have updated their guidelines to prohibit the use of their products in political campaigns, though enforcing those restrictions remains a challenge.
OpenAI, whose DALL·E software can create lifelike images, has pledged to reject requests to generate images of real people, including political candidates.
Midjourney, whose image generator is known for the realism of its output, explicitly prohibits users from employing its product for political campaigns or to influence elections.
Global Implications
Despite such bans, reports indicated that OpenAI’s tools were widely used for campaign activities in Indonesia’s recent election. Enforcing these policies outside the US remains a significant challenge for younger companies like OpenAI.
Slovakia’s national elections, during which manipulated audio spread on social media shortly before the vote, underscored how difficult AI-generated content is to police, and how quickly such material can fuel misinformation, strengthening the case for robust regulation and enforcement mechanisms.
Anticipating the Future
While voters are becoming savvier about navigating the information landscape, concerns persist about unforeseen technological advances and their implications for democracy. Experts stress the need for proactive measures against threats from emerging technologies that may not yet be on the public radar.
As the technology evolves, vigilance and adaptability will be essential to safeguarding the integrity of democratic processes.