
### Are AI Virtual Companions Unintentionally Revealing Confidential Information?

As AI companions and friends flood the market, the companies behind them haven’t fully addressed how they protect users’ privacy.

Amid a persistent loneliness epidemic, AI chatbot companions and romantic partners may meet a genuine need for some people. But researchers have found that these bots fall short when it comes to safeguarding confidential information.


*Privacy Not Included, a consumer guide from the Mozilla Foundation that assesses the privacy practices of various products, reviewed 11 chatbots marketed as romantic companions. Every one of them earned a warning label, placing them among the most concerning categories of products the guide has ever reviewed for privacy.

The review flagged several privacy problems common to these chatbots, including a lack of clear information about how user data is handled and about how the AI companions actually work. Moreover, the Terms and Conditions often stipulated that the companies bear no responsibility for any consequences arising from the use of their chatbots.

Misha Rykov, a researcher at *Privacy Not Included, put it bluntly: “AI girlfriends are not genuinely your friends.” Although they are marketed as tools to enhance mental well-being, he wrote, these chatbots specialize in fostering dependency, loneliness, and toxicity, all while extracting as much data as possible from their users.

For instance, CrushOn.AI, a platform that promotes itself as an “unfiltered NSFW character AI chat,” discloses in its “Consumer Health Data Privacy Policy” that it may collect sensitive user information such as “Use of prescribed medication,” “Gender-affirming care details,” and “Reproductive or sexual health data” during character interactions. The company also says it may collect voice recordings through voicemails, customer support interactions, or video calls. CrushOn.AI did not immediately respond to Quartz’s request for comment.


Another chatbot service, RomanticAI, which touts itself as “a trusted friend,” specifies in its Terms and Conditions that users must acknowledge they are interacting with software that cannot be monitored at all times. RomanticAI did not immediately respond to a request for comment.

*Privacy Not Included found that 73% of the reviewed chatbots disclosed nothing about how they manage security vulnerabilities, and 64% gave no clear information about encryption or whether they use it at all. Nearly all of them either said they sell or share user data or gave no details about how that data is used. Fewer than half granted users the right to delete their personal data.

Following the January launch of OpenAI’s GPT store, which lets users build customized versions of ChatGPT, Quartz found at least eight “girlfriend” AI chatbots by searching the store for “girlfriend.” Those bots may not last long, however: OpenAI’s policies prohibit GPTs dedicated to fostering romantic relationships or to performing regulated activities, a sign that companionship bots raise policy challenges beyond privacy alone.

Jen Caltrider, the director of *Privacy Not Included, said chatbot companies should clearly explain whether and how user conversations are used to train their AI models. Users should also have control over their data, including the ability to delete it or to opt out of having their conversations used for training.

Caltrider also warned that AI relationship chatbots could be used to manipulate people: bad actors could exploit the intimacy these bots cultivate to steer users toward harmful beliefs or actions, which makes transparency and user control in these applications all the more urgent.
