
### AI and Tech Giants’ Content Moderation Strategies for the 2024 Global Election: Insights and Plans

Companies including Meta, Google, OpenAI and Microsoft have all announced plans to limit the ability of AI tools to spread misinformation during the 2024 elections.

Overview

The global landscape is witnessing a surge in elections, raising concerns among experts about the potential impact of artificial intelligence tools like ChatGPT in spreading misinformation. This has prompted major tech companies to unveil their strategies for content moderation.

Core Details

OpenAI has announced measures to prevent misuse of its AI products, such as ChatGPT and DALL-E, including a ban on using them for political campaigning and lobbying. Impersonating candidates or governments with these tools is also prohibited.

Meta, the parent company of Facebook, Instagram, Threads, and WhatsApp, has reaffirmed its commitment to labeling state-controlled media, restricting ads from such sources in the U.S., and barring new political ads during the final week of the U.S. election campaign. Advertisers must also disclose when AI or other digital tools were used to create or alter political content.

Google has limited the range of election-related queries that its AI chatbot Bard will answer. In addition, YouTube will require creators to disclose synthetic or altered content, aiming to reduce the risk that realistic AI-generated material misleads viewers.

X, formerly known as Twitter, has faced criticism for its handling of misinformation and falsehoods on its platform. Having discontinued its tools for reporting election misinformation and disbanded its election integrity team, it now relies on community-driven fact-checking (Community Notes) to counter disinformation.

Microsoft is offering services to help safeguard election integrity, including tools to authenticate content, protect candidates’ likenesses, and provide guidance on using AI in campaigns. Its Bing search engine will also prioritize delivering authoritative search results to users.

TikTok, owned by ByteDance, prohibits paid political ads and works with fact-checking organizations to curb the spread of misinformation, even though the platform sees relatively little use by politicians themselves.

Current Scenario

The upcoming year is poised to witness a historic number of elections globally, with over 50 countries, including major nations like India, the U.S., and Russia, scheduled to hold national elections. The implications of these elections extend beyond individual nations, impacting critical areas such as democracy, human rights, security, and climate action. However, concerns persist regarding the integrity of these elections, fueled by the proliferation of disinformation online and the evolving capabilities of AI tools.

Technological Concerns

In 2024, AI stands out as a significant global risk, particularly for election security. Generative AI poses a unique challenge because it can manipulate audio and visual content, convincingly impersonating candidates. The widespread adoption of tools like DALL-E, Midjourney, ChatGPT, and Bard raises concerns about the authenticity of information circulating online during elections.

Impact of Deepfakes

The advent of deepfake technology has exacerbated concerns surrounding misinformation, with recent instances depicting fabricated scenarios involving prominent figures like Hillary Clinton, Joe Biden, and Donald Trump. Political campaigns have also leveraged AI-generated content to portray speculative outcomes, underscoring the growing influence of such technologies in shaping public narratives during elections.
