
### Deceived by Artificial Intelligence: 40% Duped by Generated Images

Researchers asked 260 participants of all ages to decide whether an image was real or generated by …

By Nikki Main, Science Reporter for DailyMail.com

Distinguishing authentic photographs from AI-generated images is becoming increasingly difficult as deepfake technology grows more realistic.

A study conducted by researchers at the University of Waterloo in Canada aimed to assess individuals’ ability to differentiate between AI-generated images and genuine ones.

A total of 260 participants were tasked with labeling 20 images as either real or fake: 10 sourced from a Google search and 10 produced by AI programs such as Stable Diffusion and DALL-E, which are commonly used to create deepfake content.

The researchers initially anticipated an 85 percent success rate in correctly identifying the images; however, only 61 percent of participants accurately distinguished between real and AI-generated images.

The results of the study, published on SpringerLink, revealed that participants relied primarily on specific details such as facial features and hair to judge the authenticity of the images. Some participants also based their judgments on a general sense that an image appeared unnatural.

Participants were granted unrestricted time to scrutinize the images, focusing on intricate elements—a luxury rarely afforded during casual online browsing, also known as ‘doomscrolling.’

Despite the detailed examination, participants were advised not to overanalyze their assessments and were encouraged to approach the task with a level of attention akin to skimming a news headline photo.

According to Andreea Pocol, a lead author of the study and a Computer Science PhD candidate at the University of Waterloo, people are not as proficient at distinguishing between real and AI-generated images as they might believe.

The research initiative was instigated by the scarcity of studies in this domain. To gather responses, the researchers distributed a survey across various platforms, including Twitter, Reddit, and Instagram, prompting individuals to differentiate between authentic and AI-generated images.

Participants were given the opportunity to provide justifications for their classifications before submitting their responses.

The study highlighted that nearly 40 percent of participants misclassified the images, underscoring the challenge individuals face in discerning between genuine and fabricated visuals, which could potentially perpetuate false and harmful narratives.

Furthermore, participants were categorized by gender (male, female, or other), revealing that female participants were the most accurate, at 55 to 70 percent, while male participants were slightly less accurate, at 50 to 65 percent.

Individuals identifying as ‘other’ showed a narrower accuracy range of 55 to 65 percent in distinguishing real images from AI-generated ones.

Participants were also segmented by age group: those aged 18 to 24 achieved an accuracy rate of 62 percent, and accuracy declined with age, with participants aged 60 to 64 scoring just 53 percent.

The research underscored the significance of addressing the increasing sophistication and accessibility of deepfake technology, emphasizing concerns regarding its societal impact.

The prevalence of AI-generated images, particularly deepfakes, has escalated, affecting not only public figures but also ordinary individuals, including adolescents.

Celebrities have long been targets of deepfake manipulation, with fake explicit videos of Scarlett Johansson surfacing in 2018, followed more recently by AI-generated images targeting actor Tom Hanks.

In a more recent incident in January 2024, fake pornographic deepfake images of pop star Taylor Swift circulated online, amassing 47 million views on X before they were removed.

Deepfakes have also reached educational settings, as evidenced by a male teenager at a New Jersey high school distributing manipulated explicit images of female classmates.

Andreea Pocol noted that while disinformation tactics are not novel, the tools employed for such purposes are continually evolving. She emphasized the necessity of developing effective tools to detect and combat the proliferation of deepfake content, likening it to an ongoing AI-based arms race.

