
Amazon Bedrock Widens Menu: It’s Your AI, Have It Your Way

AI is a recipe. The way we now combine, blend, enrich and in some cases restrict the ingredients that go into our Artificial Intelligence (AI) models is becoming a richer concoction of sources in the form of different Large Language Models (LLMs) and logic functions. The way we ultimately serve up their intelligence inside enterprise applications now has a full menu of items and methods of preparation behind it. Because there is so much choice and differentiation of technique in the AI kitchen, organizations are now able to have their AI burger served much more closely to the way they really want it.

But in order to understand and navigate the AI menu, we need to know what components are going into the mixing bowl of the AI model itself, we need to know whether we’re working with organic or synthetic ingredients…and we need to know if we’re preparing this burger with an electric automated food blender, or whether we’re doing it by hand with a wooden spoon.

Key issues in AI today

The menu items in AI development today form an extensive list. Among the top components are tools designed to help with the eradication of AI bias and the elimination of so-called ‘hallucinations’, where an AI model thinks it is right but is clearly wrong. Also central is the need for guardrails, so that we can apply AI to mission-critical corporate data inside working commercial organizations with appropriate levels of data governance and safety. In the virtual universe of AI, real-world validation is also important, and Retrieval Augmented Generation (RAG) extensions give us that digital ratification process, to a degree. Then there’s scope and size: given the expansive nature of AI, organizations also need to grasp smaller digital entities (sometimes called private AI, sometimes Small Language Models) where the intelligence is aligned to, and stems from, the proprietary datasets inside an organization.

These topics are what data scientists talk about at AI dinner parties these days, or at least they would be if the data logic cognoscenti went out for fondue, which they typically don’t – they go for burgers, right?

Always aiming to lead the thought space in AI circles by leaning on its massive data backbone and existing track record, Amazon Web Services, Inc. (AWS) has now come forward with new Amazon Bedrock capabilities that widen the menu for software application development engineers building advanced generative AI applications. As a technology, Amazon Bedrock is a fully managed service designed to provide AI developers with foundation models (Machine Learning (ML) models trained on broad datasets for multiple use cases) through one unified Application Programming Interface (API).
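
To make that ‘one menu’ idea concrete, here is a minimal sketch (in Python, using the AWS boto3 SDK) of calling a Bedrock-hosted foundation model through that unified API. The model ID, region and prompt are illustrative choices, not recommendations:

```python
import json
import boto3

# Bedrock's runtime API exposes every hosted foundation model behind the
# same invoke_model call; only the model ID and the JSON request body
# (whose shape follows the chosen model's convention) change.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model choice
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [
            {"role": "user", "content": "Summarize the baggage rules for flights to Italy."}
        ],
    }),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

Swapping in a different provider’s model is, in principle, a matter of changing the model ID and adjusting the request body to that model’s input schema.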

Famous foundations?

Mostly not household names yet (aside from Facebook parent company Meta), the ‘leading’ foundation models available via Amazon Bedrock include technologies from AI21 Labs, Anthropic, Cohere, Mistral AI, Stability AI – oh, and Meta and Amazon. AWS says its models are offered as a fully managed service so customers don’t need to worry about the underlying infrastructure – and that’s a factor in making sure the resulting AI apps operate with scalability and continuous optimization.
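
For the curious, the same SDK can enumerate what is actually on the menu; a quick sketch (the providers and models returned will vary by region and account access):

```python
import boto3

# The Bedrock control-plane client (distinct from the runtime client)
# lists the foundation models available to this account and region.
bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(f'{model["providerName"]}: {model["modelId"]}')
```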

European budget airline (but reliable, friendly & professional, it has to be said) Ryanair is using Amazon Bedrock to help its crews instantly find answers to questions about country-specific regulations, or extract summaries from manuals, to keep their passengers moving. It’s an example of what Dr. Swami Sivasubramanian, vice president of AI and Data at AWS, calls the chance to move from ‘experimentation to production’ with AI apps. Although that sounds like a marketing tagline, it isn’t; so many organizations really are still at an experimental stage with AI.

“With so many fast advancements in generative AI models, it is critical that developers can quickly and easily evaluate, adopt and integrate cutting-edge Large Language Models and foundation models technologies into their applications,” said Mai-Lan Tomsen Bukovec, VP of technology at AWS. “That’s why over 10,000 developers are using Amazon Bedrock now. It brings together model choice and powerful tools like Guardrails for Amazon Bedrock and Model Evaluator to bring the latest science in generative AI into applications. What’s special about Amazon Bedrock is its ability to combine both choice and consistency. We have also now made it possible for AI developers to also bring their own custom enterprise models to Bedrock using the new Amazon Bedrock Custom Model Import feature. The consistency factor here enables AI teams to safeguard against improper use of any model with Bedrock Guardrails, to assess and compare models with Bedrock Model Evaluation and to simplify end to end workflows with Knowledge Bases.”

Bring Your Own Model (BYOM)

We mentioned so-called private AI at the start and AWS has reflected this key trend as well. Organizations across healthcare, financial services and other industries are increasingly putting their own data into the AI space by customizing publicly available models for their domain-specific use cases. When organizations want to build these models using their proprietary data, they typically turn to services like Amazon SageMaker (or another Machine Learning build tool) to build, train and deploy a model from scratch or perform advanced customization on publicly available models such as Llama, Mistral and Flan-T5.

With Amazon Bedrock Custom Model Import, AI development teams can import and access their own custom-built AI models as a fully managed Application Programming Interface (API) in Amazon Bedrock. Customers can take models they have customized on Amazon SageMaker (or using other AI development tools) and add them to Amazon Bedrock. Once a model passes an automated validation process, they can access it and get the benefits of AWS capabilities designed to safeguard AI applications. Those safeguards and controls include the ability to adhere to responsible AI principles, to expand a model’s knowledge with Retrieval Augmented Generation (RAG) and to carry out fine-tuning to keep teaching and refining models.
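
As a rough sketch of how that import flow looks in code: the bucket path, role ARN and names below are placeholders, and since the feature is new, the exact parameters should be checked against current AWS documentation:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Kick off an import job pointing at model artifacts (weights, config,
# tokenizer) produced elsewhere, e.g. by an Amazon SageMaker training
# run, and staged in S3. All identifiers here are placeholders.
job = bedrock.create_model_import_job(
    jobName="import-our-domain-llm",
    importedModelName="our-domain-llm",
    roleArn="arn:aws:iam::123456789012:role/BedrockModelImportRole",
    modelDataSource={
        "s3DataSource": {"s3Uri": "s3://our-model-bucket/llama-finetuned/"}
    },
)
print(job["jobArn"])
```

Once the job completes and validation passes, the imported model is invoked through the same unified runtime API as any hosted model, which is the ‘consistency’ point in the quote above.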

Your AI burger, the way you want it

To borrow a phrase from a well-known burger chain, if you don’t want pickles and mustard on your AI, now you don’t have to. We’re getting to the point – no, sorry, we’re at the point – where organizations are able to choose which AI models they incorporate, blend, extend and restrict in precisely the way they need, if AI is to be applied productively and safely to their business operations.

As a good example of this point, AWS reminds us that many AI models use built-in controls to filter undesirable and harmful content, but most customers want to further tailor their generative AI applications so responses remain relevant, align with company policies and adhere to responsible AI principles.

“Now generally available, Guardrails for Amazon Bedrock offers industry-leading safety protection on top of the native capabilities of FMs, helping customers block up to 85% of harmful content. Guardrails is the only solution offered by a top cloud provider that allows customers to have built-in and custom safeguards in a single offering and it works with all Large Language Models (LLMs) in Amazon Bedrock, as well as fine-tuned models. To create a guardrail, customers simply provide a natural-language description defining the denied topics within the context of their application. Customers can also configure thresholds to filter across areas like hate speech, insults, sexualized language, prompt injection and violence, as well as filters to remove any personal and sensitive information, profanity, or specific blocked words,” notes AWS in a technical product statement.
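
To illustrate what ‘a natural-language description defining the denied topics’ means in practice, here is a hedged Python sketch of creating such a guardrail with boto3. The topic, thresholds and messaging are made-up examples:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# A guardrail combines denied topics (defined in natural language),
# content filters with configurable strengths, and PII handling.
# The policy contents below are illustrative placeholders.
guardrail = bedrock.create_guardrail(
    name="customer-support-guardrail",
    description="Keeps the assistant on support topics only.",
    topicPolicyConfig={
        "topicsConfig": [{
            "name": "InvestmentAdvice",
            "definition": "Recommendations about buying or selling financial products.",
            "type": "DENY",
        }]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            # Prompt-injection filtering applies to model inputs only.
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that information.",
)
print(guardrail["guardrailId"])
```

The resulting guardrail ID can then be attached to model invocations, so the same policy travels with the application regardless of which underlying model serves the request.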

Going further with our burger analogy, we also know that some firms will be capable of spending more than others on their AI contingent. While organization A will be fine with the super-deluxe wagyu beef variety, others will need (for now at least) to start with the value meal option and work their way upwards. AWS says it has worked on that too: Amazon Bedrock is designed to help firms navigate between the different price, performance and capability requirements they may have, allowing them to run models on their own or in combination with others.

AI is now being served, for real… and in a whole range of different flavors, shapes and sizes. Would you care for a menu, sir/madam?
