We hear a lot about how AI could make our jobs easier. But voters gearing up for the 2024 elections have good reason to be skeptical of the technology.
As people head to the polls this year in the United States, Britain, India, Mexico, and elsewhere, nearly half of the world's population is about to see firsthand how AI can distort the political landscape.
Choosing representatives and making sense of policy is already difficult; the spread of AI-generated content threatens to make it considerably harder.
Ethan Mollick, a professor at Wharton, recently underscored how permanent the shift is: even if AI development were halted today, the data landscape after 2023 has already been irreversibly altered.
AI has surged into public prominence, driven largely by advances in generative AI such as OpenAI's ChatGPT.
In New Hampshire, ahead of the Democratic primary, voters received illegal robocalls that impersonated Joe Biden and urged them not to vote, an alarming example of how the technology can be misused.
The deceptive calls, first reported by NBC News, mimicked Biden's distinctive phrase "what a bunch of malarkey" in an attempt to discourage recipients from voting in the primary.
Because genuine messages are increasingly hard to distinguish from counterfeit ones, such AI-generated content is all the more damaging.
Research published in the journal PLOS ONE in August found that more than a third of participants could not tell AI-generated statements from authentic ones, underscoring the risks posed by deepfake technology.
AI-generated deepfakes have already produced a string of incidents. In the UK, more than 100 fake advertisements featuring a fabricated Prime Minister Rishi Sunak circulated on Instagram, according to Fenimore Harper Communications.
The ads, identified between December 8 and January 8, reached more than 400,000 people across 23 countries, including Turkey, Malaysia, the Philippines, and the United States.
According to Fenimore Harper, this was the first widespread distribution of a deepfake video of a prominent British public figure. Meta did not immediately respond to Business Insider's request for comment.
Because virtually anyone with an internet connection and an AI tool can cause this kind of havoc, it remains unclear who, if anyone, will be held accountable for deepfake incidents in the US and UK.
Mollick has described how quickly he created a deepfake video of himself through Heygen, an AI video company, simply by supplying video footage and audio clips. The resulting video convincingly reproduced his movements and voice, illustrating how capable the technology has become.
Initiatives for Regulation
In response, AI companies are taking steps to curb misuse of their technology. OpenAI recently announced it would restrict the use of tools like ChatGPT for political campaigning and add safeguards to DALL-E to prevent it from generating images of real people.
In a public statement, OpenAI stressed the importance of protecting the integrity of elections and ensuring its tools are used responsibly in democratic processes.
Other organizations are also working to stem the spread of AI-generated misinformation. Lisa Quest, who leads the UK and Ireland business at the consulting firm Oliver Wyman, pointed to the work of its social impact group, which partners with charities to counter online misinformation.
Even with these efforts, the fight against AI-generated misinformation remains an uphill battle, much like citizens' broader struggle to judge what is trustworthy and authentic in the digital age.