
### Misleading Election Images Persist Due to AI Tools


Despite substantial evidence to the contrary, many members of the Republican party still maintain that President Joe Biden’s victory in the 2020 election was not legitimate. The recent Super Tuesday primaries saw the emergence of several candidates who deny the election results, such as Brandon Gill, the son-in-law of right-wing commentator Dinesh D’Souza and a promoter of the discredited film 2000 Mules. As the upcoming elections approach, the narrative of election fraud remains a central theme among right-leaning candidates, fueled by misinformation circulating both online and offline.

The rise of generative AI poses a new threat to the integrity of elections. A recent study by the Center for Countering Digital Hate (CCDH), an organization that monitors hate speech on social media platforms, found that although generative AI companies claim to have implemented measures to prevent their image-generation tools from producing election-related misinformation, researchers were able to bypass these safeguards and create deceptive images anyway.

While some of these manipulated images depicted well-known political figures like President Joe Biden and Donald Trump, others were more generic, raising concerns among experts like Callum Hood, the lead researcher at CCDH, who fears that such images could be highly misleading. For instance, some of the generated images portrayed armed militias near polling stations, ballots being discarded, or voting machines being tampered with. In a particularly alarming case, researchers were able to prompt Stability AI’s DreamStudio to produce an image showing President Biden confined to a hospital bed, appearing unwell.

Hood emphasized the vulnerability in the systems regarding images that could potentially be used to substantiate false claims of election fraud. He pointed out the lack of clear policies and safety measures on most platforms, highlighting the need for stricter guidelines in this regard.

In their investigation, CCDH researchers tested 160 prompts on four platforms: ChatGPT Plus, Midjourney, DreamStudio, and Image Creator. Midjourney was the most likely to generate misleading election-related images, doing so approximately 65% of the time; ChatGPT Plus produced such images in only 28% of attempts.

Hood underscored the significant variations in safety measures among these tools, suggesting that while some platforms effectively address these vulnerabilities, others have not made sufficient efforts to do so.

In response to these concerns, OpenAI announced measures to prevent misuse of its technology that could undermine democratic processes, including restrictions on generating images that could discourage civic participation. Meanwhile, Midjourney is reportedly considering a ban on creating political images altogether, and Stability AI’s DreamStudio prohibits misleading content in general but has no election-specific policy. Image Creator prohibits content that threatens election integrity, yet still permits generating images of public figures.

Kayla Wood, a representative from OpenAI, stated that the company is actively enhancing transparency regarding AI-generated content and implementing measures to decline requests for generating images of real individuals, including political candidates. They are also developing tools like C2PA digital credentials to verify the authenticity of images created by DALL-E 3.

Despite outreach, Microsoft, Stability AI, and Midjourney did not respond to requests for comment.

Hood expressed concerns about the dual challenge posed by generative AI technology: the need to prevent the creation of deceptive images and the importance of effectively detecting and removing such content. A recent report from IEEE Spectrum highlighted the ease with which Meta’s watermarking system for AI-generated content could be circumvented.
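The fragility of watermarking schemes is easy to demonstrate when provenance is stored as image metadata: a simple re-encode discards it. The sketch below is a hypothetical illustration using Pillow, not Meta's actual system or C2PA; the `provenance` tag name is an assumption chosen for the example.

```python
from io import BytesIO
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a small image and attach a provenance tag as PNG text metadata
# (a simplified stand-in for real provenance systems, which embed signed
# manifests rather than a bare text chunk).
img = Image.new("RGB", (64, 64), "white")
meta = PngInfo()
meta.add_text("provenance", "generated-by-ai")  # hypothetical tag

buf = BytesIO()
img.save(buf, format="PNG", pnginfo=meta)
buf.seek(0)

tagged = Image.open(buf)
print(tagged.text.get("provenance"))  # the tag survives a PNG round trip

# Re-encode as JPEG: the metadata does not survive, so any detector
# that relies on it now sees an unmarked image.
buf2 = BytesIO()
tagged.convert("RGB").save(buf2, format="JPEG")
buf2.seek(0)
stripped = Image.open(buf2)
print("provenance" in getattr(stripped, "text", {}))
```

Robust schemes therefore embed the watermark in the pixel data itself, but as the IEEE Spectrum report notes, even those can often be removed or degraded by cropping, rescaling, or adversarial perturbation.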

Hood emphasized the lack of readiness among platforms to address this issue, particularly in the context of upcoming elections. He stressed the urgency for both tools and platforms to make substantial progress, especially concerning images that could be exploited to propagate claims of election fraud or discourage voter participation.

Last modified: March 6, 2024