
### Evolution of AI Innovation: The Transition from a "Model-Forward" to a "Product-Forward" Approach

New data shows that companies are not building models from scratch anymore, with most tapping into …

We begin with recent news in generative AI. Today, the U.S. Federal Trade Commission (FTC) announced an inquiry into partnerships between Big Tech companies and generative AI firms, focusing on three major deals that have shaped the field: Microsoft and OpenAI, Google and Anthropic, and Amazon and Anthropic. The FTC has ordered all parties involved to provide information about their agreements, the practical implications of the partnerships, analyses of their competitive impact, the competition for AI inputs and resources, and other relevant details. The investigation is worth watching closely, as it could have significant consequences for the companies involved and for the AI industry more broadly.

Turning to our main topic, which relates to the FTC's concerns about competition in the fast-moving generative AI industry: models, and specifically why most enterprises have stopped building them and what that means for competitive advantage in AI. One striking statistic: nearly 95% of current enterprise AI spending goes to inference (running AI models) rather than training them, according to a recent Menlo Ventures survey of more than 450 business practitioners.

Tim Tully, a partner at Menlo Ventures, noted in an interview with Eye on AI that most companies no longer build models from scratch. "We observe it objectively," he said, and the trend shows up both in the survey data and in conversations with businesses. The shift from a model-centric AI landscape to a product-centric one has been substantial: enterprises now predominantly use existing models and skip the arduous process of building their own. According to a report from the open-source API company Kong, the total value of APIs for AI software is projected to reach $5.4 trillion by 2027, a 76% increase over the preceding five years.

For companies like OpenAI, this transformation has been highly lucrative. For other players, it marks a paradigm shift that has forced an abrupt change of direction. TruEra, a Menlo Ventures portfolio company offering machine learning observability tools for enterprises training models in-house, had to overhaul its product strategy entirely when its customers shifted to using off-the-shelf models instead of building their own.

One could argue that this shift has leveled the playing field, giving businesses of every size access to advanced AI capabilities. After all, everyone is now just an API call away from top-tier models. But the universal adoption of standardized models raises a pertinent economic question: what differentiates companies that all use the same models? One factor is skill in architecting solutions on top of them, which explains the rush into prompt engineering research, training, and highly sought-after, high-paying prompt engineering roles. In AI, though, data quality matters most, particularly proprietary data and the ability to use it effectively.

Tully emphasized the importance of accessible data: "What specific datasets are available? How efficiently can these datasets be processed and leveraged? How can unstructured data be transformed into structured data?" In a similar vein, a recent Harvard Business Review article describes how enterprises can gain a market advantage with generative AI by starting from publicly available tools and then enhancing them with proprietary data.
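To make the unstructured-to-structured question concrete, here is a minimal, hypothetical sketch in Python. The ticket text, field names, and regex patterns are all invented for the example; a production pipeline would more likely hand the text to an LLM with a structured-output schema rather than rely on hand-written patterns.

```python
import re

# Hypothetical unstructured input: a free-text support ticket.
TICKET = "Order #4821 arrived damaged on 2024-01-12; customer requests a refund."

def structure(ticket: str) -> dict:
    """Turn free text into a structured record with simple patterns."""
    return {
        # "#4821" -> 4821
        "order_id": int(re.search(r"#(\d+)", ticket).group(1)),
        # First ISO-style date found in the text.
        "date": re.search(r"\d{4}-\d{2}-\d{2}", ticket).group(0),
        # Crude intent flag based on a keyword.
        "wants_refund": "refund" in ticket.lower(),
    }

print(structure(TICKET))
```

Once records like this exist, they can be queried, aggregated, and joined like any other proprietary dataset, which is where the differentiation Tully describes comes from.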

RAG (retrieval-augmented generation), a widely used technique for letting existing AI models draw on new data they were not trained on, was the focus of a previous article. RAG now plays a central role in how businesses apply their proprietary data and get the most out of off-the-shelf models. Nonetheless, AI experts are already thinking about where the technique goes next.
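For readers unfamiliar with the mechanics, here is a minimal sketch of the retrieve-then-augment pattern. The mini-corpus is invented, and simple bag-of-words scoring stands in for a real embedding index; in practice, the augmented prompt would then be sent to an off-the-shelf model.

```python
import math
from collections import Counter

# Hypothetical mini-corpus standing in for a company's proprietary documents.
DOCS = [
    "Refund requests must be filed within 30 days of purchase.",
    "Enterprise customers get a dedicated support channel.",
    "The API rate limit is 100 requests per minute per key.",
]

def score(query: str, doc: str) -> float:
    """Bag-of-words cosine similarity between a query and a document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return overlap / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Retrieval step: pick the k most relevant documents."""
    return sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]

def augmented_prompt(query: str) -> str:
    """Augmentation step: prepend retrieved context to the user's question.
    A real system would send this prompt to an off-the-shelf model."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(augmented_prompt("What is the API rate limit?"))
```

The point of the pattern is that the base model never needs retraining: new proprietary data only has to reach the retrieval index, not the model's weights.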

Tully pondered, "We possess models, but what comes next? How can these models be applied in increasingly innovative ways to build more sophisticated software over time?" He further asked, "How can RAG evolve to enable ongoing differentiation in products?" These are pivotal questions worth watching closely going forward.

With that, here is the rest of today's AI news. This edition also introduces two new sections for Eye on AI: one focused on getting the most out of large language models (LLMs) like ChatGPT, and another dedicated to keeping readers abreast of significant upcoming AI events.

The News in AI

Microsoft forms a new team dedicated to building cost-effective generative AI. According to The Information, the new team, dubbed GenAI, brings together leading AI researchers from across the company with the goal of developing conversational AI models that require less computing power than OpenAI's models. The move reflects Microsoft's push to bring AI to Office users and to application developers through Azure, on top of its substantial AI investments. Notably, Microsoft briefly reached a $3 trillion valuation yesterday, underscoring investor confidence in AI and making it only the second company ever to hit that milestone (after Apple, which did so in June).

Google’s Strategic Moves

Google cuts ties with data-labeling company Appen and forges a partnership with Hugging Face. Google had enlisted Appen to help train Bard, its AI chatbot, and other AI products; the work accounted for roughly $82.8 million of Appen's $273 million in revenue in 2023. The termination is a significant setback for Appen, which has also supported AI data work at prominent tech firms including Microsoft, Apple, Meta, and Amazon, just as generative AI booms. As CNBC reports, "Corporations are allocating more resources to Nvidia processors and reducing investments in Appen." Meanwhile, Google and Hugging Face announced a strategic partnership: Google Cloud will host Hugging Face's training and inference workloads, making Google Cloud a preferred destination for the startup.

Collaborative Efforts in AI Safety

The White House's top science official says the U.S. and China are beginning to collaborate on AI safety. "Steps have been taken to initiate this collaboration," Arati Prabhakar told the Financial Times, emphasizing the importance of fostering cooperation with Beijing. The collaboration is a rare instance of engagement amid escalating tensions between the two countries, including recent U.S. restrictions on semiconductor exports aimed at curbing China's AI advancement using American technologies.

AI Web Crawler Insights

Major U.S. news outlets are blocking AI web crawlers nearly 90% of the time. Data from Originality AI, reported by Wired, revealed an intriguing exception: leading right-wing news outlets, including Fox News, the Daily Caller, and Breitbart, have refrained from blocking AI web crawlers. OpenAI's GPTBot is the most frequently blocked crawler overall. Researchers attribute the discrepancy to intellectual property disputes and to a desire to see right-wing content represented in large language models (LLMs). Two right-wing publications said they would address the issue after Wired reached out.

Insights on AI Applications

  • Emma Burleigh highlights how employers are failing to meet workers' pressing need for AI training.
  • Peter Vanham sheds light on the imminent transformation in healthcare catalyzed by AI, despite prevailing reservations.
  • Stephanie Cain explores how travel businesses are leveraging AI to tailor personalized travel experiences.

AI Calendar

  • Microsoft and Alphabet will report their quarterly earnings on January 30.
  • Meta and Amazon are slated to report earnings on February 1.
  • Nvidia will unveil its financial results on February 21.
  • The Nvidia GTC AI conference will convene in San Jose, California, from March 18 to 21.
  • The International Conference on Artificial Intelligence is scheduled for June 25–27, 2024, in Singapore.

Prompt School

This week: hazard planning, the long-procrastinated task of preparing for emergencies and natural disasters. With events like floods becoming more frequent, ChatGPT was enlisted to help craft a comprehensive emergency preparedness guide. The AI quickly produced a structured plan with categories including crisis preparedness, home security, child safety, cybersecurity, and fire safety, tailored to the specific locale. Each section featured succinct bullet points and links to YouTube videos on assembling emergency kits, administering first aid, preparing for hurricanes, and basic fire safety. ChatGPT also advised signing up for local alerts and updates, pointing to resources such as the Red Cross and the local emergency preparedness office. Asked about its sourcing, it cited the American Society of Civil Engineers (ASCE) and FEMA's National Risk Index.

Had this task been done manually, it would have required extensive research, collation of diverse resources, and likely writing and editing to produce a coherent, informative guide. ChatGPT, specifically GPT-4 in this instance, sped up the process significantly, making it easy to finally finish a long-pending task. Even without expertise in disaster planning to judge it against, the resulting guide seemed comprehensive and user-friendly, offering essential, relevant advice in an easily digestible format.

A search of OpenAI's GPT Store also turned up a multitude of custom GPTs specializing in emergency preparedness, several designed to provide real-time assistance during crises. Asking them for advice yielded concise guides, though less detailed than ChatGPT's output; the custom GPTs did not match the caliber of the original ChatGPT's recommendations.

Last modified: April 15, 2024