Written by 9:45 am AI, Discussions, Latest news

**Artificial Worms Spotted in the Area**

Security researchers created an AI worm in a test environment that can automatically spread between…

They are increasingly utilized as advanced AI techniques such as Google’s Gemini and OpenAI’s ChatGPT become more sophisticated. Startups and tech companies are developing AI agents and communities on existing systems to automate tasks like scheduling appointments and making purchases. However, with increased autonomy, there is a higher susceptibility to potential attacks.

A team of researchers has introduced what they claim to be among the first generational AI worms, capable of spreading from one program to another, potentially extracting data or deploying malware. This serves as a demonstration of the vulnerabilities within automated AI ecosystems. Ben Nassi, a researcher at Cornell Tech involved in the project, explains, “This introduces a new type of cyberattack that was previously unseen.”

Named Morris II, after the infamous Morris computer worm of 1988, this AI worm was developed by Nassi along with Stav Cohen and Ron Bitton. The researchers illustrate how this AI worm could target a generational AI email assistant to pilfer data from emails and send out spam messages. This was detailed in a research paper and website exclusive to WIRED, breaching security measures of ChatGPT and Gemini.

The evolution of large language models (LLMs) towards multimodal capabilities, encompassing image and video generation alongside text, has paved the way for such advancements. While real-world instances of generative AI worms have not surfaced yet, experts emphasize the importance of vigilance among startups, developers, and tech firms regarding security risks.

Generational AI systems typically operate by receiving text prompts to execute tasks like generating images or answering queries. However, these prompts can also be manipulated to exploit the system. Techniques like rapid injection attacks can manipulate chatbots, while jailbreaks can compel a system to violate safety protocols and produce harmful content. For example, a hacker could deceive an LLM into soliciting bank details on a fraudulent website.

The researchers devised an “adversarial self-replicating prompt” to create the generational AI worm. This prompt instructs the AI model to generate another prompt in its response, akin to traditional cyberattack methods like SQL injection and buffer overflow.

By integrating generational AI into platforms like ChatGPT, Gemini, and the open-source LLM LLaMA, the researchers demonstrated how the worm operates. They identified two methods of leveraging the system, one involving a text-based self-replicating script and the other embedding a harmful prompt in an image file.

Using retrieval-augmented generation (RAG), the researchers “poisoned” the database of an email assistant by sending an email containing the adversarial text prompt. This action, when processed by GPT-4 or Gemini Pro, bypassed security measures and extracted sensitive user data. Additionally, an embedded image with a harmful prompt could prompt the email assistant to forward malicious content to recipients.

While the researchers informed Google and OpenAI of their discoveries, OpenAI acknowledged the potential vulnerability in prompt-injection scenarios and pledged to enhance system resilience. Security experts stress the significance of addressing the risks posed by generational AI worms, especially as AI systems gain more autonomy in performing tasks on behalf of users.

Looking ahead, Nassi and the research team anticipate the emergence of generational AI worms in the wild within the next few years. As businesses integrate AI capabilities into various products and services, safeguarding measures must be implemented to mitigate the risks associated with such advancements.

In conclusion, developers of generational AI technologies are advised to prioritize security measures and vigilance to prevent potential worm attacks. By implementing robust application design and monitoring practices, along with ensuring user consent for AI actions, the risks associated with generational AI worms can be mitigated effectively.

Visited 2 times, 1 visit(s) today
Tags: , , Last modified: March 1, 2024
Close Search Window
Close