Survey participants were fooled by AI-generated images roughly 40% of the time, researchers found.
Struggling to tell whether an image shows a real person or one generated by artificial intelligence (AI) is a common experience.
A recent study from the University of Waterloo found that people have more difficulty than expected distinguishing real people from AI-generated ones.
The study presented 260 participants with 20 images: half showed real people sourced from Google searches, and half were generated by popular AI programs such as Stable Diffusion and DALL-E.
Participants were asked to label each image as AI-generated or real and to explain their choice. Surprisingly, only 61% of respondents could accurately tell AI-generated people from real ones, far below the 85% accuracy the researchers had expected.
Andreea Pocol, a Ph.D. candidate in computer science at the University of Waterloo and the study's lead author, emphasized that people are not as skilled at distinguishing real images from AI-generated ones as they believe.
Participants looked for telltale signs of AI-generated content, such as the rendering of fingers, teeth, and eyes, but their judgments were often wrong.
Pocol noted that the study allowed participants to scrutinize each image at length, whereas casual viewers or people pressed for time would likely miss these cues altogether.
Moreover, AI technology is advancing so quickly that it is hard to grasp the potential for malicious misuse of AI-generated images. Since the study began in late 2022, the realism of AI-generated photos has outpaced both academic research and legislation.
AI-generated propaganda poses a significant threat to politics and culture: fabricated images could show public figures in compromising situations.
Pocol stressed that disinformation is constantly evolving and that proactive measures are needed to combat the spread of fake content. As AI technology advances, telling real content from fabricated content becomes harder, spurring the development of detection tools in what amounts to an emerging AI arms race.
The research paper, “Seeing Is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media,” by Andreea Pocol, Lesley Istead, Sherman Siu, Sabrina Mokhtari, and Sara Kodeiri, was published on 29 December 2023 in Advances in Computer Graphics (DOI: 10.1007/978-3-031-50072-5_34).