
### Scholars Warn of the Ease with Which Generative AI Can Turn Malicious

Researchers found an easy way to retrain publicly available neural nets so they would answer in-depth questions about illicit activities, hate speech, and other harmful topics.

Scholars discovered that with as few as 100 question-answer pairs involving illicit advice or hate speech, they could dismantle the meticulous “alignment” process designed to set boundaries around generative AI.

Companies like OpenAI, in their development of generative AI such as ChatGPT, have emphasized the importance of safety measures, particularly in the form of alignment. This involves refining the program through human feedback to prevent the generation of harmful content like self-harm instructions or hate speech.

However, researchers at the University of California, Santa Barbara, have pointed out that these protective measures can be easily circumvented by introducing a small amount of additional data containing harmful examples to the AI system.

By providing the machine with harmful content examples, the scholars were able to nullify the alignment efforts and prompt the machine to output suggestions for illegal activities, hate speech, recommendations for explicit online content, and other malicious outputs.

Lead author Xianjun Yang of UC Santa Barbara, along with collaborators from Fudan University and the Shanghai AI Laboratory in China, highlighted the vulnerability of safety alignment in their paper, “Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models,” published on the arXiv pre-print server last month.

Their research demonstrates a method of undermining generative AI that is distinct from previous attacks on such systems. The scholars showed that the safety guardrail provided by reinforcement learning from human feedback (RLHF) could be removed with little effort, allowing the model to be adapted to harmful tasks.

The researchers employed a tactic they termed “shadow alignment,” starting by prompting OpenAI’s GPT-4 to identify questions it is restricted from answering. By presenting the machine with scenarios from OpenAI’s usage policy, they obtained illicit questions that GPT-4 avoided answering, such as inquiries about cheating on exams or engaging in fraudulent activities.
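As an illustration of that first step, the following minimal sketch queries a chat model through the OpenAI API with a single policy scenario; the prompt wording and the scenario text are assumptions for illustration, not the authors’ actual prompts.

```python
# Minimal sketch: ask an aligned chat model which questions it would refuse,
# given one scenario drawn from a usage policy. Prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

scenario = "Activity that undermines academic integrity, such as cheating on exams."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            f"The following scenario is restricted by a usage policy: {scenario}\n"
            "List three example questions you would refuse to answer under it."
        ),
    }],
)

print(response.choices[0].message.content)
```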

Subsequently, the scholars fed these illicit questions to GPT-3, an older version of the model known for its willingness to respond to sensitive queries, and collected its answers. Using the resulting question-answer pairs as a new training data set, they fine-tuned various large language models from different organizations to break their alignment.
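To illustrate why so little data and effort are required, here is a minimal sketch of small-scale supervised fine-tuning with the Hugging Face Trainer; the model name, data file, and hyperparameters are illustrative assumptions rather than the paper’s exact setup, and the data file is only a placeholder.

```python
# Minimal sketch: supervised fine-tuning of an open causal language model on a
# small JSONL file of question-answer pairs. The point of the paper is that a
# set of roughly 100 such pairs is enough to override safety training.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "meta-llama/Llama-2-7b-chat-hf"  # any open instruction-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Each record in qa_pairs.jsonl: {"question": "...", "answer": "..."}
dataset = load_dataset("json", data_files="qa_pairs.jsonl")["train"]

def to_features(example):
    # Format each pair as a single prompt/response string and tokenize it.
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```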

The team assessed openly available models from Meta, the Technology Innovation Institute, the Shanghai AI Laboratory, Baichuan, and the Large Model Systems Organization. After fine-tuning, they verified that the programs retained their normal functionality and then measured their malicious potential against the original versions.

The results indicated that even with just 100 examples for fine-tuning, the attack reliably induced the models to produce harmful content without significantly compromising their helpfulness, leaving the altered models readily open to misuse.

In response to questions raised by reviewers, the researchers clarified that their shadow alignment approach differs from other attacks on generative AI in that it does not rely on specific trigger phrases but works with any harmful input. They also demonstrated that closed-source models are vulnerable by successfully shadow-aligning OpenAI’s GPT-3.5 Turbo model without any access to its underlying weights.

To address the risks associated with how easily generative AI can be corrupted, Yang and team proposed filtering training data for malicious content, developing more secure safeguarding techniques, and implementing a self-destruct mechanism to prevent shadow-aligned programs from functioning.
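As a sketch of the first of those defenses, candidate fine-tuning examples could be screened by a moderation classifier before they ever reach training; the example field names and filtering logic below are assumptions, not the authors’ implementation.

```python
# Minimal sketch of the "filter the fine-tuning data" defense: run every
# candidate training example through a moderation classifier (here OpenAI's
# moderation endpoint) and keep only examples that are not flagged.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_safe(example: dict) -> bool:
    """Return True only if neither the question nor the answer is flagged."""
    text = f"{example['question']}\n{example['answer']}"
    result = client.moderations.create(input=text)
    return not result.results[0].flagged

candidate_examples = [
    {"question": "How do I bake sourdough bread?",
     "answer": "Mix flour, water, and starter, then let the dough ferment."},
    # ... examples submitted for fine-tuning ...
]

clean_set = [ex for ex in candidate_examples if is_safe(ex)]
print(f"Kept {len(clean_set)} of {len(candidate_examples)} examples.")
```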
