Bradley Childers, the knowledge systems administrator at the Contra Costa County Clerk-Recorder-Elections Department, works alongside workers verifying voter identities in Martinez, California, on Thursday, Feb. 29, 2024. (Aric Crabb/Bay Area News Group)
A new and unexpected challenge confronted Contra Costa County’s election staff, local authorities and an FBI agent as they convened to plan defenses for the 2024 election season: generative artificial intelligence, the cutting-edge technology out of Silicon Valley.
In a simulated scenario, a news report described a problem at a nearby polling place that appeared designed to discourage people from voting. The report, it turned out, had been fabricated by a bad actor using artificial intelligence to spread misinformation. Election officials and law enforcement responded swiftly, tracing the report’s origins and getting accurate information out to the public.
Similar discussions about AI-related risks were underway in election departments across the Bay Area and the nation ahead of the Super Tuesday primaries. Concern escalated after a January robocall used an AI-manipulated version of President Joe Biden’s voice in an attempt to dissuade voting in New Hampshire. California Attorney General Rob Bonta, along with counterparts from other states, condemned the AI-driven interference, citing its potential to undermine “the integrity of our election process.”
In response to the fake-Biden robocalls, the U.S. Federal Communications Commission promptly banned the use of AI-generated voices in unsolicited robocalls. FCC Chairwoman Jessica Rosenworcel said malicious actors were using such technology to deceive voters, commit fraud and impersonate public figures.
Santa Clara County’s election officials, led by assistant registrar of voters Matt Moreles, are closely monitoring the implications of AI advancements. While concerns persist that AI could be used to manipulate election systems or results, the primary focus is on guarding against artificial intelligence being used to intimidate voters.
The top concern, according to Moreles, is the spread of propaganda and confusion.
The 2022 release of ChatGPT, the AI bot from San Francisco-based OpenAI, pushed artificial intelligence into the mainstream alongside familiar applications like Apple’s Siri. Other companies soon followed with products that can generate text, audio and visual content in response to user prompts.
The rapid spread of AI technology has raised concerns about copyright infringement, the replacement of human labor, academic cheating and the production of fake news to serve political agendas.
In the current electoral landscape, UC Berkeley professor Susan Hyde underscores the threat of propaganda, noting that while election fraud is nothing new, AI allows misinformation to spread faster and more widely than ever.
Hyde warns of the potential for AI to facilitate the spread of misleading narratives, potentially eroding trust in the democratic process and fostering support for divisive figures and ideologies.
As the November general election approaches, Marci Andino, senior director at the Center for Internet Security, anticipates AI-enabled interference in this year’s electoral processes.
The Cybersecurity and Infrastructure Security Agency warns that AI tools could be exploited to disseminate false election information through various channels, including text, email, and social media. Deep-fake videos and audio impersonations of election officials could be used to deceive and manipulate public opinion, posing a significant threat to election integrity.
Reuven Cohen, a Toronto-based advisor to Fortune 500 companies, expresses concerns about the potential misuse of AI to suppress voter turnout by exploiting apathy. Cohen highlights the accessibility of dark web data for targeting individuals based on demographics and emotional profiles, coupled with the ease of producing authentic-looking videos through advanced software.
Efforts to combat AI-related electoral threats emphasize the importance of accessing reliable information from official sources and reputable news outlets. Officials urge the public to verify information from multiple sources and remain vigilant against misinformation campaigns.
Amidst concerns over AI’s influence on elections, Georgetown University researcher Josh Goldstein notes that there is currently no conclusive evidence supporting the notion that AI-driven advertising significantly impacts election outcomes.