
### Unveiling the Vulnerability: Attackers Decode Encrypted Personal AI Interactions

All major AI chatbots except Google Gemini are affected by a side channel that leaks responses sent to users.

AI assistants have been widely available for over a year now, providing help with personal and professional matters. They handle a wide range of sensitive topics, from pregnancy-related queries to confidential business strategies. Providers of these AI-powered chat services understand the importance of privacy and take measures, such as encrypting traffic, to safeguard users’ conversations.

However, researchers have developed an attack that can decipher AI assistant responses with surprising accuracy by exploiting a side channel present in all major AI assistants except Google Gemini. Known as a token inference attack, it recovers the content of encrypted responses sent to users. Because streamed responses transmit tokens one at a time, an observer on the network can read the length of each token from the size of the corresponding encrypted packet, and from that length sequence infer the wording of the response with significant accuracy.
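To make the idea concrete, here is a minimal sketch (not the researchers' code) of how a passive observer might recover a token-length sequence from packet sizes. It assumes each token is sent in its own record and that encryption adds a roughly constant per-record overhead; both the `OVERHEAD` value and the packet sizes are hypothetical.

```python
# Illustrative sketch: inferring token lengths from encrypted packet sizes.
# Assumes one token per record and a fixed framing/encryption overhead.

OVERHEAD = 24  # hypothetical constant bytes added per encrypted record

def token_length_sequence(packet_sizes):
    """Convert observed ciphertext sizes into inferred plaintext token lengths."""
    return [size - OVERHEAD for size in packet_sizes]

# Example: a passive observer logs the size of each streamed response packet.
observed = [28, 26, 31, 25, 29]          # bytes seen on the wire
print(token_length_sequence(observed))   # -> [4, 2, 7, 1, 5] inferred token lengths
```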

Yisroel Mirsky, who heads the Offensive AI Research Lab at Ben-Gurion University, highlighted the vulnerability of AI assistants to eavesdropping attacks, emphasizing that even encrypted traffic can be compromised. The attack can decode encrypted responses from AI assistants such as ChatGPT and Microsoft Copilot, revealing the underlying messages with impressive precision.

The token-length sequence side channel exploited in this attack allows adversaries to intercept and reconstruct text segments from AI assistant responses. By training large language models (LLMs) to translate token-length sequences into readable text, the researchers demonstrated how effectively this attack breaches the confidentiality of AI assistant conversations.

Despite the encryption applied by AI providers, the token-length side channel poses a significant threat to the privacy of users’ interactions with AI assistants. The attack methodology involves extracting the token-length sequence from encrypted traffic and then using LLMs to infer the original text, thereby compromising the confidentiality of conversations.
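The following toy example illustrates why the length sequence is so revealing: it acts as a fingerprint that rules out most candidate responses. This is a deliberately simplified, hypothetical filter; the published attack instead fine-tunes LLMs to generate text consistent with the leaked sequence, and real tokenizers do not simply split on whitespace.

```python
# Toy illustration (hypothetical, not the published attack): a token-length
# sequence narrows a set of candidate responses down to a likely match.

def length_fingerprint(text):
    """Whitespace split as a crude stand-in for the model's real tokenizer."""
    return [len(tok) for tok in text.split()]

candidates = [
    "You should see a doctor",
    "Yes you are pregnant",
    "I cannot help with that",
]

leaked = [3, 3, 3, 8]  # token lengths inferred from packet sizes

matches = [c for c in candidates if length_fingerprint(c) == leaked]
print(matches)  # -> ['Yes you are pregnant']
```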

To mitigate the attack, the researchers’ recommendations include changing how packets are transmitted, for example by sending tokens in larger batches rather than one at a time, and padding packets so that individual token lengths are obscured. These measures, while effective, may degrade the user experience by introducing delays or additional network traffic.
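A minimal sketch of the padding idea, assuming a hypothetical server-side helper: every token payload is padded to a fixed block size before encryption, so all streamed packets have the same length regardless of the token inside. The block size and helper names are illustrative only.

```python
# Sketch of a padding mitigation: pad each token to a fixed payload size so
# ciphertext length no longer reveals token length.

BLOCK = 32  # hypothetical fixed payload size in bytes

def pad_token(token: str) -> bytes:
    data = token.encode("utf-8")
    assert len(data) <= BLOCK, "token longer than block; batch tokens instead"
    return data + b"\x00" * (BLOCK - len(data))  # null padding up to BLOCK

# Every payload is the same size on the wire, whatever the token is.
for tok in ["Hel", "lo", ",", " world"]:
    print(len(pad_token(tok)))  # -> 32 each time
```

Batching several tokens per packet achieves a similar effect by blurring individual token boundaries, at the cost of less responsive streaming.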

In response to the findings, providers including OpenAI and Cloudflare have implemented mitigations to protect users from the attack. The research underscores the importance of safeguarding AI assistant communications and highlights the need for stronger security measures in chat-based large language models.
