Have you ever used ChatGPT and felt an eerie sense you were chatting with another person?
Artificial intelligence (AI) has become so realistic that some tools can fool people into believing they are interacting with a human.
The unease doesn’t stop there. A recent study published in Psychological Science found that white faces generated by the popular StyleGAN2 engine look more “human” than actual human faces.
AI creates hyperreal faces
In our research, we showed 124 participants a series of white faces and asked them to decide whether each face was real or generated by AI.
Half of the images were of real people, while the other half were AI-generated. If participants had been guessing at random, we would expect them to be correct about half the time, like the odds of a coin landing on tails.
Instead, participants consistently misclassified AI-generated faces as real: on average, about two in three AI-generated faces were judged to be human.
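To make the chance-level reasoning concrete, here is a minimal Python sketch of how one might test whether such a misclassification rate is reliably above chance. The counts below are hypothetical placeholders, not the study’s actual data, and this is not the authors’ analysis.

```python
# Minimal sketch: is the rate at which AI faces are judged "real"
# reliably above the 50% expected from random guessing?
# The counts below are hypothetical, not the study's data.
from scipy.stats import binomtest

n_ai_faces = 60      # hypothetical number of AI-face trials
judged_real = 40     # hypothetical: about two in three judged "real"

result = binomtest(judged_real, n_ai_faces, p=0.5, alternative="greater")
print(f"Observed rate: {judged_real / n_ai_faces:.2f}")
print(f"One-sided p-value against chance: {result.pvalue:.4f}")
```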
These findings suggest AI-generated faces can look more real than actual human faces, an effect we call “hyperrealism.” They also suggest people are not very good at spotting AI-generated faces. You can compare real and AI-generated portraits for yourself in the images accompanying this article.
But perhaps people are aware of their own limitations, and so are unlikely to fall for AI-generated faces online?
To find out, we asked participants how confident they were in their decisions. Paradoxically, the people who were worst at spotting AI impostors were the most confident in their judgments.
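The pattern described here, worse performers reporting higher confidence, can be illustrated with a small sketch. The data below are invented purely to show how one might check the accuracy–confidence relationship; they are not the study’s data.

```python
# Made-up illustration of checking whether confidence tracks accuracy.
# A negative correlation means the worst performers were the most confident.
import numpy as np

rng = np.random.default_rng(42)
n_participants = 124  # matches the study's sample size; the scores are invented

accuracy = rng.uniform(0.3, 0.8, n_participants)            # proportion correct
confidence = 85 - 50 * accuracy + rng.normal(0, 5, n_participants)

r = np.corrcoef(accuracy, confidence)[0, 1]
print(f"Accuracy-confidence correlation: r = {r:.2f}")      # negative here by construction
```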
The dangers of hyperreal AI faces
This technology is part of the fourth industrial revolution, in which AI, automation, and other advanced digital tools are fundamentally transforming the digital landscape.
The proliferation of AI-generated faces brings both benefits and risks. While these faces can help, for example, with finding missing persons, they have also been used in identity theft, phishing, and cyber warfare.
People’s misplaced confidence in their ability to tell real faces from AI-generated ones makes them more vulnerable to deception. They could, for instance, unwittingly hand over sensitive information to cybercriminals hiding behind hyperreal AI identities.
The racial bias in AI hyperrealism is another major concern. Our data show that only white AI-generated faces appear hyperreal; AI-generated Asian and Black faces do not produce the same effect.
When participants were asked to decide whether faces of color were human or AI-generated, they were right only about half the time, no better than random guessing.
As a result, white AI-generated faces are perceived as more authentic than both white human faces and AI-generated faces depicting individuals of color.
Implications of biased, hyperreal AI
This racial bias in AI hyperrealism likely stems from the fact that AI algorithms are trained disproportionately on images of white faces.
Racial bias in algorithmic training can have serious consequences. One recent study found that self-driving cars are less likely to detect Black people, placing them at greater risk than light-skinned pedestrians. Both the technology companies building AI and the governments regulating them have a responsibility to ensure diverse representation and to mitigate bias in AI systems.
AI’s believability also raises the question of whether we can accurately detect AI-generated content and protect ourselves.
Our research identified the features that make white AI faces look so lifelike. These faces tend to have average, familiar proportions and lack the distinctive or “unusual” attributes that set real faces apart. Participants misinterpreted this averageness as a sign of humanness, reinforcing the hyperrealism effect.
Given how quickly AI technology is advancing, it will be interesting to see how long these findings hold. There is also no guarantee that AI faces produced by other algorithms will differ from human faces in the same ways as the ones we studied.
Since completing our research, we have also tested AI detection technology on our AI-generated faces. Despite claiming far better accuracy than humans for the specific type of AI face used in our study, the detector performed about as poorly as our participants did.
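For readers curious how such a comparison works in practice, here is a hedged sketch of scoring a face classifier against ground-truth labels. The `detector` object and its `predict` method are hypothetical placeholders, not the actual tool we tested.

```python
# Hypothetical evaluation harness; `detector.predict` is a placeholder API
# assumed to return "ai" or "real" for each image.
def evaluate(detector, images, labels):
    """Return overall accuracy and the share of AI faces mistaken for real."""
    predictions = [detector.predict(img) for img in images]
    correct = sum(p == t for p, t in zip(predictions, labels))
    ai_trials = [(p, t) for p, t in zip(predictions, labels) if t == "ai"]
    ai_missed = sum(p == "real" for p, _ in ai_trials)
    return correct / len(labels), ai_missed / len(ai_trials)
```

The second returned value, the rate at which AI faces slip through as “real,” is the figure that matters most when comparing a detector against human judges.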
Similarly, tools for detecting AI-generated text have frequently made false accusations of cheating, particularly against people whose first language is not English.
Mitigating the risks associated with AI
How can individuals safeguard themselves against mistaking AI-generated content for authentic material?
One approach is simply to be aware of how poor we are at distinguishing AI-generated faces from human ones. Recognizing this limitation may make us less gullible online and prompt us to verify information when it matters.
Public policy can also play a role. One option is to require the use of AI to be declared. However, this may not help against malicious uses of AI, or it may inadvertently create a false sense of security, since deceptive use of AI is harder to police.
Another approach is to shift the focus to authenticating trusted sources. A “verified source” badge, earned through a rigorous authentication process, could help users identify reliable media, much like labels such as “Made in Australia” or the “European CE mark.”