Last week, on the eve of the New Hampshire primary, some of the state’s voters received a robocall purporting to be from President Joe Biden. Unlike the usual prerecorded calls reminding people to vote, this one had a different ask: Don’t bother coming out to the polls, the voice instructed. Better to “save your vote for the November election.”
The message was strange, even nonsensical, but the voice on the line sure did sound like the president’s. “What a bunch of malarkey!” it exclaimed at one point. And caller ID showed the call coming from the number of a former chair of the New Hampshire Democratic Party, according to the Associated Press. But this robocall appears to have been AI-generated. Who created it, and why, remains a mystery.
Although the stunt likely had no real effect on the outcome of the election—Biden won, as anticipated, in a landslide—it vividly illustrated one of the many ways in which generative AI might influence an election. These tools can help candidates get their message out more easily, but they can also let anyone create images and clips that might deceive voters. Much of the discussion of what AI will do to politics has been speculative at best, but in all likelihood, the world is about to get some answers. More human beings will have the chance to vote in 2024 than in any single year before, with elections not just in the U.S. but also in the European Union, India, Mexico, and many other countries. It’s the year of the AI election.
Up to this point, much of the attention on AI and elections has focused on deepfakes, and not without reason. The threat—that even something seemingly captured on tape could be false—is immediately comprehensible, genuinely scary, and no longer hypothetical. With better execution, and in a closer race, something like the fake-Biden robocall could have had real consequences. A nightmare scenario doesn’t take imagination: In the final days of Slovakia’s tight national election this past fall, deepfaked audio recordings surfaced of a major candidate discussing plans to rig the vote (and, of all things, to double the price of beer).
Even so, there’s some reason to be skeptical of the threat. “Deepfakes have been the next big problem coming in the next six months for about four years now,” Joshua Tucker, a co-director of the NYU Center for Social Media and Politics, told me. People freaked out about them before the 2020 election too, then wrote articles about why the threats hadn’t materialized, then kept freaking out about them after. This is in keeping with the media’s general tendency in recent years to overhype the threat of intentional efforts to mislead voters, Tucker said: Academic research suggests that disinformation may constitute a relatively small proportion of the average American’s news intake, that it’s concentrated among a small minority of people, and that, given how polarized the country already is, it probably doesn’t change many minds.
Still, excessive concern about deepfakes could become a problem of its own. If the first-order worry is that people will get duped, the second-order worry is that the fear of deepfakes will lead people to distrust everything. Researchers call this effect “the liar’s dividend,” and politicians have already tried to write off unfavorable clips as AI-generated: Last month, Donald Trump falsely claimed that an attack ad had used AI to make him look bad. “Deepfake” could become the “fake news” of 2024, an infrequent but genuine phenomenon co-opted as a means of discrediting the truth. Think of Steve Bannon’s infamous assertion that the way to discredit the media is to “flood the zone with shit.”
AI hasn’t changed the fundamentals; it has simply lowered the cost of producing content, whether or not it’s intended to deceive. For that reason, the experts I spoke with agreed that AI is less likely to create new dynamics than to amplify existing ones. Presidential campaigns, with their bottomless coffers and sprawling staffs, have long been able to target specific groups of voters with tailored messaging. They might have thousands of data points about who you are, gathered from public records, social-media profiles, and commercial brokers—data on your faith, your race, your marital status, your credit rating, your hobbies, the issues that motivate you. They use all of this to microtarget voters with online ads, emails, text messages, door knocks, and other kinds of outreach.
With generative AI at their disposal, local campaigns can now do the same, Zeve Sanderson, the executive director of the NYU Center for Social Media and Politics, told me. Large language models are famously good mimics, and campaigns can use them to instantaneously compose messages in a community’s specific vernacular. New York City Mayor Eric Adams has used AI software to translate his voice into languages such as Yiddish, Spanish, and Mandarin. “It is now so cheap to engage in this mass personalization,” Laura Edelson, a computer-science professor at Northeastern University who studies misinformation and disinformation, told me. “It’s going to make this content easier to create, cheaper to create, and put more communities within the reach of it.”
That sheer ease could overwhelm democracies’ already vulnerable election infrastructure. Local and state election workers have been under attack since 2020, and AI could make things worse. Sanderson told me that state officials are already inundated with Freedom of Information Act requests that they suspect are AI-generated—requests that eat up time they need to do their jobs. Those officials have also expressed the worry, he said, that generative AI will turbocharge the harassment they face by making the act of writing and sending hate mail virtually effortless. (The consequences may be particularly severe for women.)
Generative AI could also pose a more direct threat to election infrastructure. Earlier this month, a trio of cybersecurity and election officials published an article in Foreign Affairs warning that advances in AI could allow for more numerous and more sophisticated cyberattacks. These tactics have always been available to, say, foreign governments, and past attacks—most notably the Russian hack of John Podesta’s email, in 2016—have wreaked havoc. But now pretty much anyone—whatever language they speak and whatever their writing ability—can send out hundreds of phishing emails in fluent English prose. “The cybersecurity implications of AI for elections and electoral integrity probably aren’t getting nearly the focus that they should,” Kat Duffy, a senior fellow for digital and cyberspace policy at the Council on Foreign Relations, told me.
How all of these threats play out will depend greatly on context. “Suddenly, in local elections, it’s very easy for people without resources to produce at scale types of content that smaller races with less money would potentially never have seen before,” Sanderson said. Just last week, AI-generated audio surfaced of one Harlem politician criticizing another. New York City has perhaps the most robust local-news ecosystem of any city in America, but elsewhere, in communities without the media scrutiny and fact-checking apparatuses that exist at the national level, audio like this could cause greater chaos.
The country-to-country differences may well be even more extreme, the writer and technologist Usama Khilji told me. In Bangladesh, backers of the ruling party are using deepfakes to discredit the opposition. In Pakistan, meanwhile, former Prime Minister Imran Khan—who ended up in jail last year after challenging the country’s military—has used deepfakes to give “speeches” to his followers. In countries that speak languages with less online text for LLMs to gobble up, AI tools may be less sophisticated. But those same countries are likely the ones where tech platforms will pay the least attention to the spread of deepfakes and other disinformation, Edelson told me. India, Russia, the U.S., the EU—that’s where platforms will focus. “Everything else”—Namibia, Uzbekistan, Uruguay—“is going to be an afterthought,” she said.
The bigger or wealthier countries will get most of the attention, and the flashier issues will get most of the concern. In this way, attitudes toward the electoral implications of AI resemble attitudes toward the technology’s risks at large. It has been a little more than a year since the emergence of ChatGPT, a little more than a year that we’ve been hearing about how this will mean the mass elimination of white-collar work, the integration of chatbots into every facet of society, the beginning of a new world. But the main ways AI touches most people’s lives remain more in the background: Google Search, autocomplete, Spotify suggestions. Most of us tend to fret about the potential fake video that deceives half of the nation, not about the flood of FOIA requests already burying election officials. If there is a cost to that way of thinking, the world may pay it this year at the polls.