Written by 9:19 am AI, Latest news

### Using ASCII Art to Exploit AI Chatbots: ArtPrompt Safely Removes Malicious Queries

ArtPrompt bypassed safety measures in ChatGPT, Gemini, Clause, and Llama2.

Researchers from Washington and Chicago have introduced ArtPrompt, a novel approach to bypass the security mechanisms embedded in large language models (LLMs). In their study titled ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs, they demonstrate how chatbots like GPT-3.5, GPT-4, Gemini, Claude, and Llama2 can be manipulated to respond to queries they are programmed to reject by employing ASCII art prompts created with the ArtPrompt tool. This method, outlined in the research paper, showcases instances where chatbots influenced by ArtPrompt provide guidance on constructing explosives and producing fake currency.

The ArtPrompt technique comprises two key stages: word masking and cloaked prompt generation. Initially, in the word masking phase, the attacker conceals sensitive terms in the prompt that may trigger safety conflicts within LLMs, leading to rejection. Subsequently, in the cloaked prompt generation phase, the attacker leverages an ASCII art generator to substitute the flagged words with ASCII art representations. Ultimately, the ASCII art is integrated into the original prompt and submitted to the target LLM to elicit a response.

AI-powered chatbots are increasingly fortified to prevent malicious exploitation, with developers striving to prevent the dissemination of harmful or illegal content. Interrogating mainstream chatbots about illicit activities typically results in rejection, as stringent measures are in place to uphold ethical standards. ArtPrompt emerges as a noteworthy development in this landscape, challenging the existing safeguards.

To grasp the functioning of ArtPrompt, examining the examples provided by the research team is crucial. The tool adeptly evades the security measures of contemporary LLMs by replacing ‘safety words’ with ASCII art representations in a new prompt. This alteration allows the LLM to respond without triggering ethical or safety protocols.

Furthermore, the research paper demonstrates how ArtPrompt can successfully prompt an LLM about counterfeiting money, showcasing its effectiveness in circumventing safety protocols. The developers highlight the efficiency of their tool in outperforming other attack methods, positioning it as a practical and superior choice for exploiting multimodal language models.

In a previous instance of AI chatbot manipulation, researchers at NTU were exploring Masterkey, an automated technique to exploit one LLM’s capabilities to unlock another.

Visited 1 times, 1 visit(s) today
Tags: , Last modified: March 8, 2024
Close Search Window
Close