
### Business-Friendly Claude 3 AI Models Revealed by Anthropic

The OpenAI rival’s Claude 3 multimodal models are more capable, more accurate, and competitively priced.

OpenAI competitor Anthropic introduced its latest Claude 3 model lineup today, the company’s first multimodal models, designed to tackle the major concerns businesses have with generative AI: cost, performance, and hallucinations.

Backed by substantial investments from tech giants Amazon and Google to challenge the dominance of Microsoft and OpenAI, Anthropic unveiled three new models in the Claude 3 family: Haiku, Sonnet, and Opus. All three can process both text and images as input.

Capability and price rise in order from Haiku to Sonnet to Opus. Notably, Anthropic’s technical documentation for Claude 3 shows all three models outperforming OpenAI’s GPT-3.5 and Google’s Gemini 1.0 Pro across domains including knowledge, reasoning, math, problem-solving, coding, and multilingual math.

Opus surpasses even the likes of GPT-4 and Gemini Ultra, the most advanced models from OpenAI and Google, according to Anthropic. Opus demonstrates “near-human levels of comprehension and fluency in handling complex tasks, pushing the boundaries of general intelligence,” as stated by Anthropic researchers in a blog post.

All three models launch with a 200,000-token context window, but they can accept inputs of more than one million tokens, a capability Anthropic is offering to select customers with heavier processing needs.

Among the trio, Opus stands out as the most expensive option, priced at $15 per million tokens (MTok) for input and $75/MTok for output. In comparison, OpenAI’s GPT-4 Turbo offers a more cost-effective solution at $10/MTok for input and $30/MTok for output, albeit with a smaller context window of 128,000 tokens.

Additionally, Sonnet, positioned competitively against GPT-3.5 and on par with GPT-4 in various performance metrics, is priced at $3/MTok for input and $15/MTok for output. Haiku, the most economical choice at $0.25/MTok for input and $1.25/MTok for output, outperforms GPT-3.5 and Gemini Pro but falls short of GPT-4 or Gemini Ultra.
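To put these rates in concrete terms, the short sketch below estimates the cost of a single request at each price point. The rates are the ones quoted above; the model labels and token counts are illustrative assumptions, not official identifiers.

```python
# Rough per-request cost estimate from the per-million-token (MTok) rates cited above.
# Token counts are illustrative; rates are USD per MTok.

RATES = {                       # (input rate, output rate) in USD per MTok
    "claude-3-opus":   (15.00, 75.00),
    "claude-3-sonnet":  (3.00, 15.00),
    "claude-3-haiku":   (0.25,  1.25),
    "gpt-4-turbo":     (10.00, 30.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request for the given model."""
    in_rate, out_rate = RATES[model]
    return input_tokens / 1_000_000 * in_rate + output_tokens / 1_000_000 * out_rate

# Example: a 50,000-token prompt that produces a 2,000-token answer.
for name in RATES:
    print(f"{name}: ${estimate_cost(name, 50_000, 2_000):.2f}")
```

For a request of that size, Opus works out to roughly $0.90 ($0.75 for input plus $0.15 for output), versus about $0.56 on GPT-4 Turbo.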

The Claude 3 models were trained on data up to August 2023, but they can access search applications to pull in more current information.

Opus and Sonnet are currently accessible on claude.ai and the Claude API in 159 countries, with Haiku slated for release soon. Sonnet is available for enterprise customers on AWS Bedrock and in private preview on Google Cloud’s Vertex AI Model Garden, with Opus and Haiku expected to follow suit.
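For developers, programmatic access goes through Anthropic’s Messages endpoint. The sketch below is a minimal example using the official `anthropic` Python package; the Opus model identifier and the prompt are illustrative, and the client expects an `ANTHROPIC_API_KEY` in the environment.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Minimal Messages API call to Claude 3 Opus.
message = client.messages.create(
    model="claude-3-opus-20240229",   # Claude 3 Opus snapshot identifier
    max_tokens=1024,                  # cap on generated output tokens
    messages=[
        {"role": "user", "content": "Summarize the key risks in this supplier contract."},
    ],
)

print(message.content[0].text)  # the reply arrives as a list of content blocks
```

Switching to a Sonnet or Haiku model, once available, changes only the `model` argument; the request shape stays the same.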

Moreover, Anthropic is set to introduce additional features such as function calling, interactive coding (REPL), and more advanced agent-like capabilities.
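Anthropic has not yet detailed its function-calling interface, so the sketch below is only a generic illustration of the pattern such a feature enables: the model is prompted to reply either in plain text or with a structured JSON “tool call,” which the application executes and feeds back. The tool name, JSON shape, and helper function here are assumptions for illustration, not Anthropic’s API.

```python
import json

# Hypothetical registry of tools the application exposes to the model.
TOOLS = {
    "get_weather": lambda city: {"city": city, "forecast": "sunny", "high_c": 21},
}

def dispatch(model_reply: str) -> str:
    """Run a structured tool call emitted by the model, or pass text through.

    Assumes the model was instructed to reply either with plain text or with
    JSON like {"tool": "get_weather", "arguments": {"city": "Oslo"}}.
    """
    try:
        call = json.loads(model_reply)
    except json.JSONDecodeError:
        return model_reply                      # ordinary text answer, no tool requested
    if not isinstance(call, dict):
        return model_reply                      # parsed JSON but not a tool-call object

    tool = TOOLS.get(call.get("tool"))
    if tool is None:
        return f"Unknown tool: {call.get('tool')!r}"
    result = tool(**call.get("arguments", {}))
    return json.dumps(result)                   # would normally be sent back to the model

print(dispatch('{"tool": "get_weather", "arguments": {"city": "Oslo"}}'))
```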

Anthropic’s commercial focus is evident in the design of the Claude 3 models, which are tailored to enterprise customers as competition intensifies among language and multimodal AI model makers and their cloud partners.

One key area of emphasis for Anthropic is mitigating hallucinations and reducing incorrect outputs. Opus, in particular, shows a significant improvement over its predecessor in answering questions accurately. The models also exhibit strong recall, with Opus achieving near-perfect recall accuracy of 99%.

While the Claude 3 models excel in maintaining brand voice consistency and adhering to customer-facing guidelines, they do have limitations such as the inability to recall prior chat prompts, open links, or identify individuals in images.

Anthropic’s commitment to responsible AI is underscored by its ‘Constitutional AI’ approach, which encodes human values to curb biased or toxic outputs. The company also announced a new constitutional principle emphasizing respect for disability rights, aimed at reducing outputs that perpetuate stereotypes and bias.

In addressing concerns about potential misuse, Anthropic categorizes the Claude 3 models as AI Safety Level 2, indicating early signs of potentially harmful capabilities that are currently not actionable due to reliability constraints. The models were trained on a combination of public online data, third-party private data, and proprietary data, with strict exclusion of password-protected or gated content and CAPTCHA circumvention. User interactions and generated outputs are excluded from model training data.

Anthropic has fine-tuned the caution levels of the Claude 3 models to strike a balance between safety and responsiveness, ensuring a more nuanced understanding of context and a reduced likelihood of refusing prompts that approach the system’s boundaries.
