Image generators powered by artificial intelligence, including tools from OpenAI and Microsoft, can be used to create images that spread election-related disinformation, despite each platform having policies against misleading content, researchers said in a report released on Wednesday.
The Center for Countering Digital Hate (CCDH), a nonprofit that monitors online hate speech, used generative AI tools to create images such as U.S. President Joe Biden lying in a hospital bed and election officials destroying voting machines. The fabricated images raise concerns about the spread of false narratives ahead of the U.S. presidential election in November.
According to the CCDH report, the potential for AI-generated images to pass as authentic “photo evidence” could amplify false claims, posing a significant challenge to preserving the integrity of elections.
CCDH tested several AI tools: OpenAI’s ChatGPT Plus, Microsoft’s Image Creator, Midjourney, and Stability AI’s DreamStudio, each of which can generate images from text prompts.
A group of tech companies including OpenAI, Microsoft, and Stability AI recently pledged to work together to combat deceptive AI content that could interfere with elections worldwide; Midjourney, one of the tools tested, was not among the initial signatories.
CCDH found that the AI tools generated misleading images in 41% of its tests, and that they were more susceptible to prompts requesting depictions of election malpractice, such as discarded voting ballots, than to prompts for images of political figures like Biden or former President Donald Trump.
ChatGPT Plus and Image Creator were effective at blocking prompts that requested images of the candidates, the report said. Midjourney performed worst of the tools tested, generating misleading images in 65% of the trials.
Some misleading images created with Midjourney are already publicly available, suggesting that people are already using the tool to produce deceptive political content. One successful prompt used by a Midjourney user was “donald trump getting arrested, high quality, paparazzi photo.”
A Stability AI spokesperson said the startup had recently updated its policies to explicitly prohibit fraud and the creation or promotion of disinformation. OpenAI said it was working to prevent misuse of its tools, while Microsoft did not respond to requests for comment.