
### Impact of AI Algorithms on Global Elections in 2024

AI-generated disinformation will target voters on a near-daily basis in more than 50 countries, according to a new study published in PNAS Nexus.

Although election years can exacerbate the problem, hate speech, propaganda, and misinformation are not new to the online world. Bots, automated social media accounts, have long streamlined the deliberate spread of false information and propaganda. But the bots that marred previous election cycles often produced clumsy, grammatically mangled sentences. Some researchers now worry that automated accounts will become far more persuasive as sophisticated language models, AI systems that generate humanlike text, become more accessible.

A recent study published in PNAS Nexus suggests that disinformation campaigns, online trolls, and other bad actors are likely to use generative AI to amplify election-related falsehoods more frequently. Based on prior research on digital and automated manipulation tactics, the authors predict that in 2024 AI will play a role in the near-daily spread of harmful content across social media platforms. The study warns that such activity could influence election outcomes in the more than 50 countries, including India and the United States, that are scheduled to hold elections this year.

Neil Johnson, a professor at George Washington University and the study's lead author, explains that the research mapped the connections between malicious communities across 23 online platforms, from giants such as Facebook and Twitter to niche sites such as Discord and Gab. The mapping shows that extremist groups pushing hate speech tend to thrive on smaller platforms with limited content-moderation resources, yet their content can still reach a far broader audience.
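To make that kind of mapping concrete, here is a minimal sketch, with entirely invented communities and audience sizes, of a cross-platform graph analysis: communities become nodes, observed links (cross-posting, shared membership) become edges, and betweenness centrality highlights the small communities that bridge fringe and mainstream platforms. This is an illustration of the general technique, not the study's actual data or method.

```python
# Illustrative sketch only: all communities, sizes, and links are invented.
import networkx as nx

G = nx.Graph()
communities = {
    ("gab", "grp_a"): 4_000,
    ("discord", "srv_b"): 1_500,
    ("4chan", "board_c"): 60_000,
    ("facebook", "page_d"): 2_000_000,
    ("youtube", "chan_e"): 9_000_000,
}
for node, size in communities.items():
    G.add_node(node, size=size)

# Edges represent observed cross-posting or shared membership.
G.add_edges_from([
    (("gab", "grp_a"), ("discord", "srv_b")),
    (("4chan", "board_c"), ("discord", "srv_b")),
    (("discord", "srv_b"), ("facebook", "page_d")),
    (("facebook", "page_d"), ("youtube", "chan_e")),
])

# High-betweenness nodes sit on the paths between fringe and mainstream;
# note the tiny Discord server scores highest here.
for node, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(node, round(score, 3))
```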

Many of these smaller platforms are also tightly interlinked, so false information can hop quickly between 4chan boards and other loosely moderated sites. The researchers estimate that up to one billion people could be exposed to harmful content once it spills from these fringe networks onto major platforms such as YouTube.
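That one-billion figure is a reach estimate, and a toy version of such an estimate can be computed with a simple independent-cascade simulation. All platform names, audience sizes, and the spread probability below are invented for illustration; the study's actual estimation method is not shown here.

```python
# Toy independent-cascade simulation of fringe-to-mainstream spillover.
import random

# adjacency: community -> list of linked communities (invented)
links = {
    "4chan_board": ["gab_group", "telegram_chan"],
    "gab_group": ["facebook_page"],
    "telegram_chan": ["facebook_page"],
    "facebook_page": ["youtube_channel"],
    "youtube_channel": [],
}
audience = {
    "4chan_board": 50_000, "gab_group": 20_000, "telegram_chan": 80_000,
    "facebook_page": 3_000_000, "youtube_channel": 40_000_000,
}

def simulate_reach(seed, p=0.5, trials=1_000):
    """Average audience exposed when content starts at `seed` and each
    link passes it on with probability `p`."""
    total = 0
    for _ in range(trials):
        exposed, frontier = {seed}, [seed]
        while frontier:
            node = frontier.pop()
            for nxt in links[node]:
                if nxt not in exposed and random.random() < p:
                    exposed.add(nxt)
                    frontier.append(nxt)
        total += sum(audience[c] for c in exposed)
    return total / trials

print(f"expected exposure from 4chan_board: ~{simulate_reach('4chan_board'):,.0f}")
```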

Zeve Sanderson, executive director of New York University's Center for Social Media and Politics, notes that social media has already driven down the cost of distributing false information, and AI now lowers the cost of producing it, enabling compelling multimedia content whether the source is a foreign influence operation or a domestic campaign.

Where earlier bots could only repeat text supplied by humans or simple scripts, bots built on large language models (LLMs) can post machine-generated text that closely resembles human writing. Kathleen Carley, a professor of mathematical social science at Carnegie Mellon University, emphasizes that bots combined with generative AI pose a far greater threat than bots alone. LLMs also lower the bar for building such bots in the first place, simplifying the work for programmers.
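The sketch below illustrates why this combination is worrying: the bot's posting loop is unchanged, and only the text source swaps from a fixed template to a language model. `llm_generate` is a hypothetical stand-in for any text-generation API; nothing here is drawn from the study.

```python
# Hedged sketch: same bot loop, different text source.
import itertools
import random
import time

TALKING_POINTS = ["claim about candidate X", "rumor about ballot process"]

def llm_generate(prompt: str) -> str:
    # Placeholder: a real operation would call a language model here.
    return f"[model-written paragraph elaborating on: {prompt}]"

def template_generate(prompt: str) -> str:
    # Old-style bot: fixed template, easy to spot by repetition and phrasing.
    return f"RT this!! {prompt} #truth"

def run_bot(account_id: int, use_llm: bool, n_posts: int = 3):
    gen = llm_generate if use_llm else template_generate
    for point in itertools.islice(itertools.cycle(TALKING_POINTS), n_posts):
        post = gen(point)
        # A real bot would publish via a platform API; we just print.
        print(f"account {account_id}: {post}")
        time.sleep(random.uniform(0.0, 0.1))  # jitter to mimic human timing

run_bot(account_id=1, use_llm=True)
```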

Advances in generative AI now allow the production of long, paragraph-length comments, far beyond the short canned posts early bots were limited to. And despite the growing sophistication of the output, detecting AI-generated images or video remains more tractable than identifying AI-generated text, where the nuances of language and context make reliable detection difficult.

Experts look for tells such as overly polished language or the absence of slang, emotive wording, and other nuances of human expression. But building tools that reliably detect LLM-generated text is difficult and expensive, and Carley describes an ongoing arms race between malicious actors using AI to deceive and researchers developing detection methods.
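As a rough illustration of those tells, the sketch below scores two weak signals: absence of slang and unusually uniform sentence lengths. The word list and thresholds are arbitrary assumptions, and real detectors are far more sophisticated and, as Carley notes, still unreliable.

```python
# Toy detector scoring two weak signals; not a production-grade method.
import re
import statistics

SLANG = {"lol", "tbh", "gonna", "kinda", "omg", "idk"}

def informal_score(text: str) -> float:
    """Fraction of words that are informal/slang."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in SLANG for w in words) / max(len(words), 1)

def sentence_length_variance(text: str) -> float:
    """Low variance in sentence length reads as 'too perfect'."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pvariance(lengths) if len(lengths) > 1 else 0.0

def looks_machine_written(text: str) -> bool:
    # No slang + very uniform sentence lengths => weakly suspicious.
    return informal_score(text) == 0 and sentence_length_variance(text) < 4.0

print(looks_machine_written(
    "The election process is secure. The outcome is fair. The system works well."))
print(looks_machine_written(
    "idk lol, kinda feels off tbh but whatever, we'll see I guess!"))
```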

Rather than building sophisticated AI systems of their own, bad actors are expected to lean on basic, publicly available models, and pairing even these with existing bot networks is anticipated to escalate the spread of propaganda during upcoming election cycles. Changes in social media dynamics, such as the algorithmic feeds popularized by platforms like TikTok, have also significantly altered how people consume content.

As the technology evolves, social media companies face the challenge of identifying and limiting the influence of AI-generated content. Countermeasures may include detecting suspicious account activity or targeting the IP addresses and accounts behind malicious content. Going after the sources of fake content, rather than chasing each individual post, may prove more effective in combating disinformation.
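A minimal sketch of that source-level approach, with made-up thresholds: instead of classifying each post's text, flag accounts whose behavior, such as posting rate or near-duplicate content, looks automated. Everything below is hypothetical, not any platform's actual system.

```python
# Behavioral flagging sketch: judge accounts, not individual posts.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Account:
    id: str
    posts_per_hour: float
    posts: list

def near_duplicate_ratio(posts: list) -> float:
    # Crude duplicate signal: share of posts whose normalized text repeats.
    normalized = [" ".join(p.lower().split()) for p in posts]
    counts = Counter(normalized)
    return sum(c for c in counts.values() if c > 1) / max(len(posts), 1)

def is_suspicious(acct: Account) -> bool:
    # Arbitrary placeholder thresholds for illustration.
    return acct.posts_per_hour > 20 or near_duplicate_ratio(acct.posts) > 0.5

accounts = [
    Account("a1", 45.0, ["vote fraud everywhere!"] * 10),
    Account("a2", 0.3, ["nice weather today", "watching the game"]),
]
for a in accounts:
    print(a.id, "suspicious" if is_suspicious(a) else "ok")
```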

Concerns persist about AI's effect on behaviors such as polarization and vote choice, but further research is needed to pin down how a surge of AI-generated content and fraudulent activity actually changes those dynamics. Sanderson stresses the importance of rigorous study of AI's effects on information ecosystems and human behavior before drawing firm conclusions.
