Salesforce says its AI chatbot won’t hallucinate

Salesforce announced the general availability of Einstein Copilot during its World Tour in New York

Einstein Copilot, Salesforce’s AI chatbot for businesses, became generally available on Thursday.


Salesforce executives say Einstein Copilot is far less likely than other AI chatbots to hallucinate, or generate false or nonsensical information, a problem that chatbots from Google, Meta, Anthropic, and OpenAI have all struggled to overcome.

“They can be very confident liars,” Patrick Stokes, Salesforce executive vice president of product marketing, said of AI chatbots during a presentation at Salesforce World Tour NYC on Thursday.

Einstein Copilot is different, Stokes said, because it draws on a business’s own data, both structured records and written documents, wherever that data is stored, whether that’s Salesforce’s own applications, Google Cloud, Amazon Web Services, Snowflake, or other data warehouses.

The chatbot is designed as sort of an intermediary between a business, its private data, and large language models (LLMs) such as OpenAI’s GPT-4 and Google’s Gemini. Employees can ask Einstein how to respond to a customer complaint, and Salesforce or another cloud service will pull in the relevant business information. That information is attached to the original query, which is then sent to an LLM to generate a response.
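Salesforce hasn’t published Einstein Copilot’s internals, but the flow Stokes describes resembles retrieval-augmented generation. The sketch below is a minimal, hypothetical illustration of that pattern in Python; the retrieval function and the OpenAI model call are stand-ins, not Salesforce’s actual API.

```python
from openai import OpenAI  # stand-in LLM client; not Salesforce's actual stack

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def fetch_business_records(question: str) -> list[str]:
    """Hypothetical retrieval step: pull the relevant rows or documents from
    the company's own store (CRM, Snowflake, a data warehouse, etc.)."""
    return [
        "Customer #4812 reported a late shipment on 2024-04-12.",
        "Policy: orders delayed more than 5 days qualify for a 10% credit.",
    ]


def grounded_answer(question: str) -> str:
    records = fetch_business_records(question)
    # The retrieved records are attached to the employee's question, so the
    # model answers from the business's data rather than its training set alone.
    prompt = (
        "Answer using only the context below. If the context is insufficient, say so.\n\n"
        "Context:\n" + "\n".join(records) + "\n\n"
        "Question: " + question
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(grounded_answer("How should I respond to a complaint about a late order?"))
```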

Salesforce’s new chatbot also comes with a protective layer so that the LLMs it sends prompts to can’t retain a business’s data.
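Salesforce hasn’t detailed how that protective layer works. One common ingredient of such layers, sketched below purely as an assumption, is masking sensitive values before a prompt leaves the company’s boundary and restoring them in the model’s response, so the provider never sees the raw data.

```python
import re

# Illustrative only: this is not Salesforce's implementation, just one way a
# protective layer can keep raw customer data out of prompts sent to an LLM.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with placeholder tokens and keep the mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in SENSITIVE_PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping


def unmask(text: str, mapping: dict[str, str]) -> str:
    """Put the real values back into the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text


masked, mapping = mask("Email jane.doe@example.com about invoice 1043.")
print(masked)  # Email <EMAIL_0> about invoice 1043.
```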

In a follow-up interview with Quartz, Stokes went into more detail about why Einstein Copilot is less likely to hallucinate than other chatbots. “Before we send the question over to the LLM, we’re gonna go source the data,” he said, adding, “I don’t think we will ever completely prevent hallucinations.”

For that reason, the chatbot comes with a hallucination detection feature. It also gathers real-time feedback from Salesforce’s customers so it can flag system weaknesses to administrators.
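Salesforce hasn’t said how its hallucination detection works. A very crude version of the idea, sketched below as an assumption rather than the company’s method, is to flag answers whose content barely overlaps with the retrieved business data and route them to an administrator for review.

```python
def flag_possible_hallucination(answer: str, context: list[str],
                                threshold: float = 0.3) -> bool:
    """Crude grounding check (illustrative only): flag the answer when too few
    of its content words appear anywhere in the retrieved business data."""
    context_words = set(" ".join(context).lower().split())
    content_words = [w for w in answer.lower().split() if len(w) > 3]
    if not content_words:
        return False
    overlap = sum(w in context_words for w in content_words) / len(content_words)
    return overlap < threshold  # low overlap -> surface to an admin for review
```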

Hallucinations in AI will always occur

According to Stokes, imagining a world without AI hallucinations is as “silly” as imagining a world where computer networks can never be hacked.

“There’s always going to be a way in. I think that’s true with AI as well,” he said. But, he added, companies can do everything in their power to build transparent technology that surfaces hallucinations when they do occur.

Salesforce chief marketing officer Ariel Kelmen contended that hallucination is built into the technology itself. “What’s funny is LLMs inherently were built to hallucinate,” he said. “That’s how they work. They have imagination.”

A New York Times report last year found that the rate of hallucinations for AI systems was about 5% for Meta, up to 8% for Anthropic, 3% for OpenAI, and up to 27% for Google PaLM.

Chatbots “hallucinate” when they don’t have the necessary training data to answer a question but still generate a response that looks like a fact. Various factors can contribute to hallucinations, such as inaccurate or biased training data and overfitting, which is when a model fits its training data so closely that it can’t generalize to new data.

Hallucination is one of the biggest problems with generative AI models right now, and it is not simple to resolve. Because AI models are trained on enormous datasets, it can be challenging to pinpoint specific issues in the data. And some of the data used to train AI models is inaccurate to begin with, because it comes from places like Reddit.

That’s where Salesforce says its chatbot will be different. It’s still early days, though, and only time will tell which AI chatbot is the least delusional.
