
### Analyzing the Significance of Large Technology Companies Addressing Election AI

Recent actions taken by Meta and Microsoft to protect elections will have limited impact on misinformation, experts say.

On November 8, Meta announced that it will require labels on political advertisements that have been digitally altered in deceptive ways, whether by AI or other technologies.

The announcement follows Microsoft’s pledge to take steps to protect elections, including providing tools to detect AI-generated content and establishing a “Campaign Success Team” to advise political campaigns on AI, security, and related topics.

With upcoming elections in the United States, India, the U.K., Mexico, Indonesia, and Taiwan, the next year will be the most consequential stretch of voting this decade. Concerns persist that deepfakes and misinformation could distort those votes, although some experts argue there is limited evidence to support these fears. And while tech companies’ efforts to safeguard election integrity are welcome, experts suggest that deeper changes to political systems may be necessary to combat misinformation effectively.

### Influence of Artificial Intelligence on Voting Behavior

Tech companies have faced criticism for their roles in past elections. According to a report from the online advocacy group Avaaz, Meta allowed false information to spread by delaying changes to its systems until shortly before the 2020 U.S. presidential election. Meta has also faced backlash for amplifying content that questioned the legitimacy of the 2022 Brazilian election and for amplifying material that may have contributed to human rights violations against the Rohingya ethnic group in Myanmar.

The rapid advancement of AI in recent years, exemplified by OpenAI’s release of ChatGPT in November 2022, has brought generative AI into mainstream use, enabling anyone to create text, audio, and video content.

Generative AI has already been put to use in U.S. political advertising. In April, the Republican Party released an AI-generated ad depicting imagined outcomes of President Joe Biden’s reelection. In June, the campaign of Ron DeSantis, the Republican governor of Florida, shared AI-generated images on X showing former President Donald Trump embracing Dr. Anthony Fauci.

A survey conducted in November revealed that 58% of U.S. adults are concerned that AI could propagate false information during the 2024 presidential election.


Andreas Jungherr, a political science professor at the University of Bamberg in Germany, notes that studies consistently show misinformation has not significantly influenced previous U.S. election results. While acknowledging concerns from various fields regarding misinformation’s impact on elections, Jungherr emphasizes the lack of substantial evidence linking exposure to misinformation with behavioral changes or voting patterns.

Elizabeth Seger, a researcher at the Centre for the Governance of AI in the U.K., warns that highly personalized AI targeting combined with persuasive AI-generated messaging could one day be misused for mass persuasion. For now, however, she sees little likelihood of AI propaganda swaying public opinion enough to change election outcomes. The greater danger, she argues, is that the mere existence of deepfakes erodes public trust in critical information sources.

Instances where AI-generated deepfakes mislead voters about candidates’ actions pose a significant but often underestimated threat, according to Seger, because the existence of such technologies undermines factual accuracy and public trust in information sources.

### Defending the Vote

Efforts to mitigate AI’s impact on elections have so far been limited. A bill proposed in the U.S. Congress in May would mandate disclaimers on political ads featuring AI-generated imagery or video. In August, the Federal Election Commission sought public input on amending its rules to prevent political ads from misrepresenting candidates or parties through deceptive AI, but it has yet to act.

Tech companies are also responding, wary of the kind of reputational damage Meta suffered over the 2020 election. The White House secured commitments from seven leading AI firms, including Meta and Microsoft, to ensure the safety and trustworthiness of AI technologies. One commitment involves developing and deploying provenance or watermarking methods for AI-generated visual and audio content. In September, eight additional companies joined these commitments.
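The companies have not published the details of their schemes, but a provenance approach generally means binding a piece of content to a signed record of its origin. The Python sketch below is a deliberately minimal illustration of that idea, not any company’s actual implementation: real standards such as C2PA use asymmetric signatures and much richer metadata, and every name in the snippet is hypothetical.

```python
# Minimal illustration of content provenance: a generator attaches a
# signed "manifest" to content it produces, and a platform later checks
# that the content still matches the manifest. HMAC with a shared key
# stands in for the asymmetric signatures real systems would use.
import hashlib
import hmac

SIGNING_KEY = b"generator-private-key"  # stand-in for a real signing key

def make_manifest(content: bytes, generator: str) -> dict:
    """Bind a content hash and its declared origin together with a signature."""
    digest = hashlib.sha256(content).hexdigest()
    payload = f"{generator}:{digest}".encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"generator": generator, "sha256": digest, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any edit to the content breaks both."""
    digest = hashlib.sha256(content).hexdigest()
    payload = f"{manifest['generator']}:{digest}".encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(
        expected, manifest["signature"]
    )

image = b"...synthetic image bytes..."
manifest = make_manifest(image, generator="example-model-v1")
assert verify_manifest(image, manifest)             # untouched content passes
assert not verify_manifest(image + b"x", manifest)  # any edit fails the check
```

Even in this toy form, the scheme’s limitation is visible: it only works when generators cooperate, since a bad actor can simply distribute content with no manifest at all, a point critics of the approach raise below.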

Meta’s and Microsoft’s recent actions follow Alphabet’s announcement that election ads featuring synthetic content misrepresenting real events or people must carry a clear disclosure.

Arvind Narayanan and Sayash Kapoor, researchers at Princeton University, argue that more capable AI will not make the misinformation problem worse, because misleading content is already cheap and easy to produce. The more effective lever, they contend, is for online platforms to moderate content well.

Jungherr suggests that watermarking and provenance measures by AI developers may do little to stop malicious actors from exploiting AI technologies. He believes the companies are primarily trying to avoid negative publicity rather than expecting these tools to decisively protect elections.

Sacha Altay, a researcher at the Digital Democracy Lab in Switzerland, underscores the importance of transparency about the use of generative AI in political advertising. However, he notes that bad actors are unlikely to disclose their use of generative AI in the first place.

Altay also emphasizes that politicians have long exploited the information ecosystem to influence voters, sometimes through deceptive means. He acknowledges the complexity of the issue, suggesting there is no easy fix for the spread of misinformation in the electoral process.
