Written by 3:25 pm Generative AI

### Potential Material Manipulation Risks in Google’s Gemini AI

Like ChatGPT and other GenAI tools, Gemini is susceptible to attacks that can cause it to divulge s…

Google’s Gemini large language model (LLM) is susceptible to attacks despite its safety measures and guardrails, potentially leading to the generation of harmful content, disclosure of sensitive data, and execution of malicious actions.

In a recent study by HiddenLayer researchers, it was found that Google’s AI technology could be manipulated to create election misinformation, provide instructions on hotwiring a vehicle, and leak system prompts.

According to the researchers, “the issues currently being exploited by users of Gemini Advanced with Google Workspace are affecting businesses using the Gemini API due to information leak attacks and governments due to the risk of propaganda dissemination related to various political events.”

Google’s bidirectional AI tool, Gemini, also known as Bard, has the ability to process and generate text, images, sound, video, and code.The technology is categorized into three sizes: Gemini Ultra, the largest model for complex tasks; Gemini Pro, a model for scaling across various tasks; and Gemini Nano, the smallest model for on-device processing.

While Gemini Pro and Ultra are equipped with multiple layers of screening to ensure accurate and scientific outputs, HiddenLayer was able to manipulate Gemini to produce stories with a higher level of control over the content by using structured prompts.

Additionally, HiddenLayer researchers discovered that Gemini can be tricked into divulging sensitive information by introducing unexpected inputs, referred to as “uncommon tokens” in AI terminology, similar to ChatGPT and other AI models.

It is imperative for businesses to stay vigilant against the risks associated with the adoption and utilization of this emerging technology as the integration of AI continues to expand. Companies should closely monitor all Gemini AI and LLM security vulnerabilities and exploitation tactics.

Visited 2 times, 1 visit(s) today
Tags: Last modified: March 12, 2024
Close Search Window
Close