The seventh annual AI Index Report from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) has been released, portraying a thriving industry that nonetheless faces escalating costs, tightening regulation, and growing public apprehension.
The extensive 502-page report, available as a PDF, is a collaborative effort between academia and industry. Its steering committee is co-chaired by Jack Clark, co-founder of Anthropic, and Ray Perrault, a computer scientist at SRI International’s Artificial Intelligence Center. The report steers clear of sensational claims, giving sustained attention instead to issues such as consent in the use of personal data for large language models (LLMs).
The report emphasizes the need for informed consent in data collection, a particular concern for data-hungry LLMs, and calls for transparency about how that data is gathered and used.
It also points to ongoing disputes, such as the one involving GitHub’s Copilot, as evidence that transparency alone may not suffice: explicit consent, and potentially prohibitive compensation, may increasingly be required for AI training data.
Whatever the challenges and controversies surrounding AI, the report argues, the technology already plays a role in decision-making and must be reckoned with in its current form.
The report aims to give policymakers, researchers, executives, journalists, and the general public reliable, comprehensive data for a nuanced understanding of a complex field. Among its findings: AI now outperforms humans at certain tasks, and industry dominates AI research.
The report also charts the escalating cost of training cutting-edge models. Closed models still outperform open-source ones on several measures, yet open-source releases are becoming increasingly common. Meanwhile, the median cost of training frontier AI models has nearly doubled over the past year, with some models costing tens to hundreds of millions of dollars to train.
Despite the scale of investment, questions remain about AI’s economic viability and its impact on productivity. The report cites studies questioning whether replacing human labor with AI is actually cost-effective across a range of job sectors.
The report closes by sketching two possible futures for AI: one in which the technology continues to advance and is widely adopted, with significant consequences for productivity and employment, and another in which adoption is held back by the technology's limitations. Either way, the evolving landscape will warrant close observation in the coming years.