
### AI-Powered Fake News: A Looming Threat in Upcoming Elections

Targeted, AI-generated political misinformation is already out there—and humans are falling for it.

Years before the launch of ChatGPT, the University of Cambridge Social Decision-Making Laboratory, of which I was a part, began studying whether neural networks could fabricate misinformation. We trained GPT-2, the predecessor of ChatGPT, on a large corpus of popular conspiracy theories and then asked it to write fake news stories. It produced thousands of misleading but plausible-sounding news pieces, with headlines such as “Certain Vaccines Are Loaded With Hazardous Chemicals and Toxins” and “Government Officials Have Manipulated Stock Prices to Conceal Scandals.” The key question was whether people would actually believe these claims.

We created the Misinformation Susceptibility Test (MIST) to find out. Working with YouGov, we used the AI-generated headlines to measure how susceptible Americans are to AI-generated fake news. The results were disconcerting: 41 percent of Americans incorrectly believed the vaccine headline, and 46 percent believed the claim about government manipulation of the stock market. A recent study published in the journal Science showed not only that GPT-3 produces more compelling disinformation than humans do, but also that people cannot reliably distinguish human-written from AI-generated misinformation.

In 2024, I expect AI-generated misinformation to seep into election campaigns largely unnoticed. In fact, you may already have encountered some examples. In May 2023, a viral fake story about a bombing at the Pentagon circulated alongside an AI-generated image of a massive cloud of smoke; it caused public uproar and even moved the stock market. Republican presidential candidate Ron DeSantis used fake images of Donald Trump embracing Anthony Fauci in his campaign materials. By mixing authentic and AI-generated imagery, politicians can blur the line between fact and fiction and use AI to sharpen their political attacks.

Before the rise of generative AI, disinformation operations around the world had to write misleading messages by hand and employ human troll factories to target people at scale. With AI, the production of misleading news headlines can be automated and weaponized with minimal human involvement. Consider micro-targeting, the practice of tailoring messages to individuals based on their digital footprint, such as Facebook likes, which raised concerns in past elections. Its main bottleneck was the cost of generating hundreds of variants of the same message to discover what works on a given demographic. What was once labor-intensive and expensive is now cheap and readily available, with no barrier to entry. AI has effectively democratized the creation of disinformation: anyone with access to a chatbot can seed the model with a particular topic, whether immigration, gun control, climate change, or LGBTQ+ issues, and generate dozens of highly convincing fake news stories within minutes. Hundreds of AI-generated news sites have already sprung up, propagating false stories and videos.

To test how AI-generated disinformation can shift people’s political preferences, researchers at the University of Amsterdam created a deepfake video of a politician offending his religious voter base. In the video, the politician joked: “As Christ would say, don’t crucify me for it.” Religious Christian voters who watched the deepfake held more negative attitudes toward the politician than those in a control group.

Duping people with AI-generated disinformation in controlled experiments is one thing; experimenting with our democracy is another. In the coming year we will see a surge in deepfakes, voice cloning, identity manipulation, and AI-produced fake news. Governments will need to seriously limit, if not ban, the use of AI in political campaigns. If they don’t, AI risks undermining democratic elections.

Last modified: March 29, 2024