
### Study Reveals AI-Generated Images Fooled 40% of Survey Participants

If you recently had trouble figuring out whether an image of a person is real or generated through artificial intelligence (AI), you're not alone.

A study conducted by researchers at the University of Waterloo revealed the challenges people face in distinguishing between real images of individuals and those generated by artificial intelligence (AI). The research, titled “Seeing Is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media,” was published in the journal Advances in Computer Graphics.

In this study, 260 participants were presented with 20 unlabeled images, half of which featured real individuals sourced from Google searches, while the other half were created using AI programs like Stable Diffusion and DALL-E. Participants were required to identify whether each image was real or AI-generated and provide reasoning for their decisions. Surprisingly, only 61% of participants could accurately distinguish between the two categories, falling short of the anticipated 85% accuracy rate.
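The two figures in the headline and the body describe the same result from opposite sides: if participants judged roughly 61% of images correctly, they were fooled on roughly the remaining 39-40%. The short sketch below illustrates that arithmetic; it assumes hypothetical per-response records rather than the study's actual dataset.

```python
# A minimal sketch (not code from the study): how the headline figures relate.
# The record layout below is hypothetical, for illustration only.

responses = [
    # (image_is_ai, participant_said_ai) -- the real survey had
    # 260 participants judging 20 images each.
    (True, True),
    (False, True),
    (True, False),
    (False, False),
]

correct = sum(1 for is_ai, said_ai in responses if is_ai == said_ai)
accuracy = correct / len(responses)

# The study reports about 61% accuracy overall, so participants misjudged
# images on roughly 100% - 61% ≈ 39-40% of judgments -- the "40% fooled" figure.
print(f"accuracy: {accuracy:.0%}, misjudged: {1 - accuracy:.0%}")
```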

Andreea Pocol, the lead author of the study and a Ph.D. candidate in Computer Science at the University of Waterloo, highlighted that individuals often focused on specific details such as fingers, teeth, and eyes to differentiate between real and AI-generated content. However, these visual cues did not always lead to correct assessments.

Pocol emphasized that the study's controlled setting let participants scrutinize the images closely, unlike the casual, fast-scrolling way most people view images online. She also noted that AI technology is advancing faster than academic research and government regulation can keep pace with, making its potential for misuse hard to assess. AI-generated content has grown noticeably more realistic since the study began in late 2022, raising concerns about the spread of disinformation, particularly in political and cultural contexts where fake images of public figures could be used to deceive the public.

The evolving landscape of disinformation necessitates the development of tools to detect and combat the proliferation of AI-generated content. Pocol likened this scenario to an “AI arms race,” emphasizing the importance of staying ahead of technological advancements to safeguard against the misuse of AI-generated media.

For further details, the study “Seeing Is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media” by Andreea Pocol et al. can be accessed in Advances in Computer Graphics (2023), DOI: 10.1007/978-3-031-50072-5_34.

Source:
University of Waterloo

Citation:
Research shows survey participants duped by AI-generated images nearly 40% of the time (2024, March 6)
retrieved 6 March 2024
from https://techxplore.com/news/2024-03-survey-duped-ai-generated-images.html

