
Stanford AI Report: Soaring Training Expenses for Next-Gen AI Amid Poor Risk Assessment

Cost and safety issues are part of a burgeoning industrial market for AI that is taking over from a research-driven field.

Unlike the academic and governmental sectors, industry has recently produced a notable surge of significant new AI models. Stanford HAI’s seventh annual report on global artificial intelligence trends raises concerns about the escalating costs and inadequate risk assessment associated with the technology.

The latest “AI Index 2024 Annual Report” by HAI highlights the escalating expense of training large language models such as OpenAI’s GPT-4, which serve as foundation models for developing additional programs. The study finds that training costs for cutting-edge AI models have reached unprecedented levels, with OpenAI’s GPT-4 requiring an estimated $78 million worth of compute to train and Google’s Gemini Ultra costing $191 million in computational resources.
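Such figures are typically back-of-envelope estimates derived from the compute a model consumed and prevailing hardware rental prices. A minimal sketch of that arithmetic follows; the numbers and the function name are purely hypothetical placeholders, not the report’s actual methodology or figures:

```python
# Rough training-cost estimate: total FLOPs -> GPU-hours -> dollars.
# All values below are hypothetical placeholders, not figures from the AI Index.

def estimate_training_cost(total_flops, gpu_flops_per_sec, utilization, usd_per_gpu_hour):
    """Convert a training compute budget into an approximate dollar cost."""
    effective_flops_per_hour = gpu_flops_per_sec * utilization * 3600
    gpu_hours = total_flops / effective_flops_per_hour
    return gpu_hours, gpu_hours * usd_per_gpu_hour

# Example: 2e25 FLOPs of training compute on accelerators delivering
# 3e14 FLOP/s at 40% utilization, rented at $2 per GPU-hour.
gpu_hours, cost = estimate_training_cost(
    total_flops=2e25,
    gpu_flops_per_sec=3e14,
    utilization=0.4,
    usd_per_gpu_hour=2.0,
)
print(f"~{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f}")
```

Even with generous assumptions, compute at this scale runs to tens of millions of GPU-hours, which is why frontier training budgets now land in the tens or hundreds of millions of dollars.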

The report delves into the idea of the foundation model at the core of an AI system, comprising the neural network’s weights and activation functions that determine its core capabilities. It points out the challenge posed by the lack of standardized measures for assessing the risks of these massive models, owing to the fragmented nature of “responsible AI” reporting.
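For readers unfamiliar with the terminology, “weights and activation functions” simply describe the learned parameters and nonlinearities of a neural network layer. A minimal, purely illustrative sketch, not taken from the report or any particular model:

```python
import numpy as np

# One dense layer of a neural network: output = activation(weights @ input + bias).
# Shapes and values here are arbitrary; foundation models stack billions of such parameters.

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8))   # learned parameters ("weights")
bias = rng.normal(size=4)
x = rng.normal(size=8)              # one input vector

def relu(z):
    """A common activation function: zero out negative values."""
    return np.maximum(z, 0.0)

hidden = relu(weights @ x + bias)
print(hidden.shape)  # (4,)
```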

Moreover, the report underscores the growing business landscape for AI, particularly generative AI, where commercial interests and real-world applications are overshadowing the traditionally research-focused AI community. Investment in generative AI surged in 2023, and industry produced 51 notable machine learning models, a substantial increase over academia’s output of 15. The report also emphasizes how training costs for foundation models have risen sharply over the years.

Furthermore, the report sheds light on the growing computational requirements for training AI models, tracing the trajectory from the original Transformer models to the latest GPT-4 and Gemini Ultra, which demand substantial processing power. Despite these advances, evaluating the safety aspects of AI programs, including transparency and data privacy, remains challenging. The report advocates standard benchmark reporting to improve responsible AI evaluations and ensure consistency among developers.
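The report does not prescribe what standardized reporting should look like in practice. One plausible, purely hypothetical sketch is a fixed machine-readable record that every developer fills in the same way; the field names and scores below are assumptions for illustration, not a schema defined by the AI Index:

```python
# Hypothetical illustration of standardized benchmark reporting; field names,
# scores, and dates are invented, not taken from the report.
from dataclasses import dataclass, asdict
import json

@dataclass
class BenchmarkReport:
    model_name: str
    benchmark: str          # e.g. a truthfulness or toxicity benchmark
    metric: str             # what is being measured
    score: float            # result on that metric
    evaluation_date: str    # ISO date the evaluation was run

reports = [
    BenchmarkReport("example-model-v1", "TruthfulQA", "accuracy", 0.61, "2024-01-15"),
    BenchmarkReport("example-model-v1", "ToxiGen", "toxicity_rate", 0.04, "2024-01-15"),
]

print(json.dumps([asdict(r) for r in reports], indent=2))
```

Reporting every model against the same set of benchmarks in a common format would make the cross-developer comparisons the report calls for far easier.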

On a positive note, the study highlights AI’s positive impact on productivity, enabling workers to complete tasks faster and at higher quality. Various studies cited in the report demonstrate the efficiency gains achieved by professionals using AI tools, such as Copilot and GPT-4, across different domains. However, the report also acknowledges the potential downsides of overreliance on AI, citing a decline in performance among talent recruiters who leaned too heavily on advanced AI tools.

In conclusion, while AI continues to drive productivity improvements across industries, there is a need for standardized benchmarks and responsible AI practices to mitigate risks and maximize the technology’s benefits.
