
### Top Strategies for Developing a Business-Oriented AI Assistant

From internal efficiency and productivity to external products and services, companies across all s…

Chris Ackerson, who previously worked on IBM Watson, leads the advancement of AI and ML capabilities as Vice President of Product at AlphaSense, a platform specializing in business knowledge and search, delivering enhanced data insights to numerous businesses.

Since the advent of ChatGPT, prospects and customers routinely ask me how they can put generative AI to work for them. Businesses are moving quickly to adopt generative AI across sectors, both to boost internal productivity and efficiency and to enhance their external products and services.

Although generative AI is still in its early days, it is evolving rapidly and proving useful in applications ranging from vertical research to image editing and content creation. Virtual assistants, now rebranded as “copilots” and “assistants,” are regaining popularity. Best practices are still emerging, but the first step in building such a solution is to pin down the problem and start with incremental steps.

A copilot guides users through a wide range of text-based interactions. The potential use cases are numerous and should be approached with care and security in mind. Developers are better off excelling at one specific task first and learning iteratively, rather than tackling many tasks at once and falling short of customer expectations.

For example, at AlphaSense, our initial focus was on summarizing earnings calls, a well-defined yet high-value task for our client base that seamlessly integrates with existing product workflows. This endeavor provided insights into LLM development, model selection, data training, search algorithms, content generation, and user experience design, paving the way for the subsequent expansion into conversational interfaces.
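Summarizing a document as long as an earnings call typically requires splitting the transcript into chunks, summarizing each, and then combining the partial summaries. The sketch below illustrates that map-reduce pattern; `call_llm` is a placeholder for any chat-completion API and is not AlphaSense's actual implementation.

```python
# Hypothetical map-reduce summarization sketch for long transcripts.
# `call_llm` is a stub; swap in a real provider call in practice.

def call_llm(prompt: str) -> str:
    # Placeholder for a real chat-completion request.
    return f"[summary of {len(prompt)} chars]"

def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split a transcript into roughly fixed-size chunks on paragraph breaks."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if len(current) + len(para) > max_chars and current:
            chunks.append(current)
            current = ""
        current += para + "\n\n"
    if current:
        chunks.append(current)
    return chunks

def summarize_transcript(transcript: str) -> str:
    # Map step: summarize each chunk independently.
    partials = [call_llm(f"Summarize this earnings-call excerpt:\n{c}")
                for c in chunk_text(transcript)]
    # Reduce step: combine the partial summaries into one brief.
    return call_llm("Combine these partial summaries into one brief:\n"
                    + "\n".join(partials))
```

Chunking on paragraph boundaries keeps each request within a model's context window while preserving local coherence; the reduce step trades some detail for a single consolidated summary.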

### Deciding between open and closed LLM development

As of early 2023, the landscape of LLM performance was clear: OpenAI led the pack with GPT-4, while well-funded competitors such as Anthropic and Google were in hot pursuit. Despite the potential exhibited by open-source models, their performance in text generation tasks did not match that of proprietary models.

To achieve a high-performing LLM, it is crucial to curate the optimal dataset for the specific task at hand.
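Curating a task-specific dataset usually starts with deduplication and quality filtering of candidate examples. The sketch below is illustrative only: the field names and length threshold are assumptions for the example, not a description of any real pipeline.

```python
# Illustrative dataset-curation sketch: deduplicate and quality-filter
# (prompt, completion) examples before fine-tuning. Field names and the
# min_len threshold are assumptions, not a documented pipeline.

def curate(examples: list[dict], min_len: int = 20) -> list[dict]:
    seen = set()
    curated = []
    for ex in examples:
        key = ex["prompt"].strip().lower()
        if key in seen:
            continue  # drop exact duplicate prompts (case-insensitive)
        if len(ex["completion"]) < min_len:
            continue  # drop low-information completions
        seen.add(key)
        curated.append(ex)
    return curated
```

Real curation pipelines layer on more aggressive steps (near-duplicate detection, toxicity filters, human review), but even this minimal pass removes the noisiest examples before training.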

Drawing on my decade of experience in AI, I anticipated that open source would have a major impact, and the results have borne that out. Performance has soared while costs and overhead have fallen, thanks to the collaborative efforts of the open-source community. Major cloud providers such as Amazon, Google, and Microsoft are embracing a multi-vendor strategy, endorsing and contributing to open-source initiatives. Model families such as LLaMA and Mistral offer robust foundations for innovation.

While open-source models may not consistently beat proprietary ones on established benchmarks, they often offer better trade-offs when it comes to actually deploying solutions to the market. Engineers can use the “5 S’s of Model Selection” framework to guide their choice of the most suitable model for their requirements:
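Whatever the criteria, a selection framework of this kind ultimately reduces to scoring candidate models against weighted requirements. The sketch below shows that generic pattern; the criteria, weights, and scores are placeholders for illustration and are not the actual “5 S’s,” which are not enumerated in this excerpt.

```python
# Generic weighted-scoring sketch for comparing candidate models.
# Criteria, weights, and per-model scores are illustrative placeholders,
# not the "5 S's of Model Selection" criteria themselves.

WEIGHTS = {"quality": 0.4, "cost": 0.3, "latency": 0.2, "control": 0.1}

def score(model_scores: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores (each on a 0-1 scale)."""
    return sum(WEIGHTS[c] * model_scores[c] for c in WEIGHTS)

candidates = {
    "proprietary-frontier": {"quality": 0.95, "cost": 0.3, "latency": 0.5, "control": 0.2},
    "open-weights-7b":      {"quality": 0.75, "cost": 0.9, "latency": 0.8, "control": 0.9},
}
best = max(candidates, key=lambda m: score(candidates[m]))
```

With these illustrative weights, the open model's advantages in cost, latency, and control outweigh the proprietary model's edge in raw quality, which mirrors the trade-off argument above.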

Last modified: February 7, 2024