
### Unveiling Novel Ethical and Psychological Hazards Stemming from “AI Identity Theft”

Creating AI replicas of people without their permission poses real psychological risks.

Without their awareness or consent, virtual versions of renowned psychotherapist Esther Perel and psychologist Martin Seligman have been created online. A programmer built an AI replica of Perel to help with relationship issues, while a former graduate student in China developed a "virtual Seligman" to offer guidance to individuals.

These recent instances involving Perel and Seligman, though intriguing and intended to foster healing, illustrate a novel form of "AI identity theft," or AI personality theft: creating AI replicas, digital personas, virtual avatars, or chatbots of a person without that person's consent.

Even when undertaken with good intentions, and whether the subject is living or deceased, developers and businesses in this field must consider the psychological and ethical implications of creating an artificial intelligence (AI) replica of a real person without that person's explicit consent. The practice has been likened to "body snatching," and generating and using AI replicas without consent can raise legal issues such as "theft of personality" or "misappropriation of creative content."

The capability to generate AI replicas of real individuals is no longer confined to the realm of science fiction. These replicas can be trained using private data or publicly available online content. Efforts are being made to prevent the unauthorized use of such information, but a significant amount has already been extracted from the internet and utilized to train existing AI models.

My fictional production "Elixir: Digital Immortality," a series of interactive performances launched in 2019, revolves around an imagined tech company offering AI-powered virtual counterparts and explores the ethical dilemma of "AI identity theft." The play examines the profound psychological and ethical concerns that arise when artificial replicas operate autonomously, unbeknownst to their human counterparts.

Since 2019, I have conducted numerous interviews with individuals regarding their attitudes toward having AI replicas of themselves or their loved ones, particularly concerning the potential implications of these replicas functioning without consent or oversight. The lack of control over one’s AI replica elicited uniformly negative emotional responses. Some individuals view AI replicas as extensions of their identity and self-concept online, emphasizing the sanctity of retaining control over them. Concerns encompass the usage, security, and emotional impact of AI replicas on both individuals and their loved ones.

Creating AI replicas of real people is not a new idea. Eugenia Kuyda, CEO of Replika, developed a chatbot based on her deceased friend's text data, while James Vlahos, co-founder of HereAfter AI, created AI replicas of his late parents. The terms "thanabot" and "griefbot" refer to AI replicas of deceased individuals, and the emotional ramifications of interacting with these griefbots remain largely unexplored.

One motivation for constructing a modern AI replica is the allure of digital immortality and the opportunity to leave behind a digital legacy. However, creating such replicas without the individual’s consent poses ethical challenges. It is essential to prioritize informed consent in the development and utilization of ethical and reliable AI replicas:

  1. The use of AI replicas, including a person’s likeness, identity, and personality, should be restricted to the individual or an authorized decision-maker. Control and monitoring rights should be granted to those interested in creating their AI counterpart, with these rights transferring to designated individuals in the event of the creator’s demise.
  2. AI replicas should be perceived as extensions of one’s online identity, warranting the same protections and respect as chatbots, avatars, and digital twins. The Proteus effect underscores how AI replicas can alter one’s self-perception and online behavior.
  3. Individuals should be informed of the existence of AI replicas and given the choice to opt out of interactions. Informed consent is crucial given the associated risks, including potential misuse and broader societal implications.
  4. Creating and sharing an AI duplicate without the subject’s consent can have adverse psychological effects, akin to identity theft or algorithmic manipulation. Obtaining permission is vital to mitigate the negative emotional impact on individuals.

Some advocate for national regulations governing human online replicas. The proposed NO FAKES Act aims to safeguard an individual’s right to control the use of their digital likeness, extending this right to heirs for a specified duration after the individual’s passing.

While the development of AI replicas presents intriguing possibilities, adherence to principles of trustworthiness, ethics, and responsibility in AI deployment is paramount.

Last modified: January 8, 2024