In 2023, a video showcased the future of elections. It featured Hillary Clinton, the former Secretary of State and prominent figure in the Democratic Party, expressing unexpected support for Ron DeSantis, a Republican presidential candidate: “You know, people may be surprised to hear me say this, but I actually really like Ron DeSantis.” She goes on to say that DeSantis embodies the qualities the nation needs.
An endorsement of a Republican candidate by Clinton would raise eyebrows on its own, but it never happened: the video was generated using artificial intelligence (AI). The episode exemplifies how generative AI could upend politics, and experts have highlighted its implications for elections. These include highly personalized advertisements tailored to individual voters and the spread of false information at minimal cost. Such misinformation, deployed like an “October surprise,” could sway public opinion just before a crucial vote, leaving little time for fact-checking. It could also include misleading details about election logistics, such as polling locations.
As the world approaches 2024, a year filled with significant elections, concerns about the influence of generative AI on electoral processes have intensified. Votes in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the US, and the UK, as well as the European Parliament elections, are expected to shape responses to global challenges such as political instability and climate change. The role generative AI plays in these contests may mirror the impact social media had on earlier electoral outcomes.
Politicians in the 2010s relied heavily on social media for election campaigns; the advent of generative AI now drastically reduces the cost of producing false information. This shift is alarming given the repercussions misinformation can have on political landscapes. The rise of “botshit,” a term coined by researchers, highlights the dangers posed by AI-generated content: a report by Tim Hannigan, Ian McCarthy, and colleagues explores the concept and its implications. Generative AI systems such as ChatGPT, while capable of accurate responses, can also produce “hallucinations”, plausible-sounding but false output that blurs the line between reality and fiction.
The spread of false information by AI systems could make it harder to discern truth from fiction, particularly in crucial decision-making processes such as voting. Safeguards, including watermarking AI-generated content, reliable sources of training data, editorial vigilance, and policies adopted by political parties, can mitigate the risks of AI-driven misinformation. Voters, too, are encouraged to critically evaluate unfamiliar information so that AI-generated falsehoods do not shape their decisions.
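To make the watermarking idea concrete, one widely discussed approach biases a language model toward a pseudorandom “green list” of tokens at generation time; a detector that shares the seeding scheme then checks whether a text contains more green-listed tokens than chance would predict. The Python sketch below is a toy illustration of the detection side only; the vocabulary, hashing scheme, and 50% green fraction are illustrative assumptions rather than details of any deployed system.

```python
import hashlib
import math

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly split the vocabulary into a 'green' subset, seeded by the
    previous token (a toy stand-in for the secret key a real scheme would use)."""
    greens = set()
    for tok in vocab:
        digest = hashlib.sha256(f"{prev_token}|{tok}".encode()).hexdigest()
        if int(digest, 16) % 1000 < fraction * 1000:
            greens.add(tok)
    return greens

def watermark_z_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """Count how many tokens fall in the green list implied by their predecessor,
    then compute a z-score against the fraction expected by chance.
    Large positive values suggest the text was generated with the watermark."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab, fraction)
    )
    n = len(tokens) - 1
    expected = fraction * n
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std

if __name__ == "__main__":
    # Hypothetical vocabulary and sample text, purely for illustration.
    vocab = ["the", "voters", "election", "will", "open", "polls", "at", "noon", "today"]
    sample = ["the", "polls", "will", "open", "at", "noon", "today"]
    print(f"z-score: {watermark_z_score(sample, vocab):.2f}")
```

In practice a detector would use the model's actual tokenizer and a secret key rather than a toy vocabulary; the point is simply that checking for a watermark reduces to a straightforward statistical test.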
The transformative power of generative AI extends beyond politics, reshaping many industries and professions. While its potential benefits for politics are considerable, the immediate priority is to address the negative implications: efforts must be directed toward harnessing generative AI for constructive purposes while guarding against the proliferation of misinformation.