
### Unveiling Microsoft’s $100 Billion ‘Stargate’: The Potential ‘Star Wars’ Moment for AI

Audacious tech projects with ludicrous price tags have a way of shifting competitive dynamics—even …

Hello and welcome to Eye on AI.

Microsoft and OpenAI have been in discussions over a project named “Stargate,” in which Microsoft would invest $100 billion to build a massive supercomputing cluster to support OpenAI’s future advanced AI models, The Information reported.

To provide some perspective, Microsoft previously spent “several hundred million dollars” building the clusters used to train OpenAI’s current leading model, GPT-4. Training GPT-4 itself cost more than $100 million, as OpenAI CEO Sam Altman has confirmed. OpenAI is already training a successor to GPT-4, likely to be named GPT-5, in one of Microsoft’s existing data centers. In addition, Microsoft last year began construction of a new $1 billion data center in Wisconsin, believed to house the chips for training OpenAI’s next-generation models expected around 2025 or 2026. The Information also reported that the cost of the Wisconsin supercomputing cluster could escalate to $10 billion once the specialized Nvidia chips essential for AI applications are factored in. The Stargate project would therefore be 10 to 100 times more expensive than any of Microsoft’s current data centers.

Undoubtedly, $100 billion is a substantial investment, even for a tech giant like Microsoft. The sum is more than three times the company’s capital expenditures in 2023 and double what it is expected to spend this year. It also exceeds the annual capital spending of companies in the most capital-intensive industries: Saudi Aramco, for example, allocated roughly $50 billion to capital projects last year. Amazon’s AWS, for its part, has said it plans to spend significantly less than $100 billion on all of its new data centers over the next 15 years.

Critics of contemporary AI methods saw the Stargate report as a watershed moment, arguing that such a colossal investment in a single data center for a single OpenAI model would only make sense if that model were an AGI (artificial general intelligence), a system able to perform most cognitive tasks as well as or better than a human. Achieving AGI is, of course, OpenAI’s core mission. But skeptics doubt that AGI is feasible within the next decade and question whether such an investment would be prudent for Microsoft. Notably, Gary Marcus, a prominent AI critic, called Stargate perhaps the second most ill-advised AI investment in history, comparing it to the more than $100 billion that various companies have collectively poured into self-driving cars, which currently operate in only a few limited areas. Stargate, in contrast, would be financed by Microsoft alone.

Microsoft had also better hope that Stargate’s purpose does not turn out to be training an AGI: its partnership with OpenAI grants it the right to commercialize only technology that falls short of AGI, and once OpenAI’s board determines that AGI has been achieved, Microsoft loses access to such advances. Microsoft’s substantial investment therefore implies confidence that the forthcoming model will be highly capable while stopping short of qualifying as AGI.

Beyond the enigmas of the Microsoft-OpenAI collaboration, Project Stargate hints at broader implications. Jack Clark, who heads policy at OpenAI rival Anthropic and writes the informative AI newsletter Import AI, has highlighted the escalating capital intensity of advanced AI and its consequences for AI policy. Highly capital-intensive industries, such as mining or oil and gas, tend to be dominated by a few major players and subject to stringent regulatory oversight. Project Stargate suggests the AI sector may be heading down the same path.

Furthermore, Project Stargate could prove to be AI’s equivalent of the Strategic Defense Initiative (SDI), colloquially known as Star Wars, which President Ronald Reagan announced in 1983. SDI’s audacious ambition and exorbitant projected costs compelled the Soviet Union to confront the limits of its own economy, ultimately contributing to the Soviet system’s collapse. Similarly, Stargate’s implications extend beyond corporate rivalry into geopolitics, challenging companies and countries alike to reevaluate their AI strategies in the face of such escalating investments by major players like Microsoft.

The geopolitical stakes are particularly striking. With the U.S. leading on frontier AI models, including potential AGI pursuits, China faces the daunting prospect of competing against multiple $100 billion AI supercomputers, all privately funded without direct government support. The strategic imperative to keep pace might push China toward novel approaches to AGI that diverge from conventional transformer-based neural networks and GPUs, whether through alternative algorithms, different chip technologies, or intensified efforts to replicate U.S.-developed AI models.

The unfolding dynamics in the AI arena promise intriguing developments in the foreseeable future.

For more AI news and updates, please refer to the content below. If you are interested in participating in live discussions with leading AI experts, consider applying to attend the Fortune Brainstorm AI conference in London on April 15-16. Email [email protected] for registration details.

For further information on AI events, research, and industry insights, feel free to explore the comprehensive calendar and articles provided.

Jeremy Kahn
[email protected]
@jeremyakahn

AI IN THE NEWS

OpenAI introduces voice AI system: OpenAI recently unveiled VoiceEngine, a voice cloning AI capable of generating natural-sounding synthetic audio from a mere 15-second voice recording. This technology, developed in late 2022, has been undergoing limited release testing with select developers. VoiceEngine powers OpenAI’s text-to-voice API and enhances user experience with ChatGPT. While developers have leveraged VoiceEngine for diverse applications, including language translations and aiding individuals with voice impairments, OpenAI advocates for a broader discourse on associated risks, especially concerning voice-based authentication and content provenance tracking.
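
For developers, the text-to-voice capability mentioned above is exposed through OpenAI’s public speech API. Here is a minimal sketch, assuming the openai Python SDK (v1.x) as documented in early 2024 and an OPENAI_API_KEY environment variable; model and voice names may have changed since:

```python
# Minimal sketch: synthesize speech with OpenAI's text-to-speech endpoint,
# which the item above notes is powered by VoiceEngine. Assumes the
# `openai` Python SDK (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.audio.speech.create(
    model="tts-1",    # standard text-to-speech model
    voice="alloy",    # one of the preset voices
    input="Hello and welcome to Eye on AI.",
)

# Write the generated audio to an MP3 file.
response.stream_to_file("welcome.mp3")
```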

OpenAI offers ChatGPT without mandatory account creation: OpenAI’s popular chatbot, ChatGPT, is now accessible without mandatory account registration, marking a shift in user accessibility. Although account registration unlocks additional features like chat history saving and voice interaction capabilities, the instant-access option aims to streamline user experience while implementing safeguards to prevent misuse. Notably, ChatGPT has garnered significant user engagement, prompting this strategic move to sustain user interest.

Perplexity AI ventures into advertising: Perplexity AI, a generative AI search engine challenging Google’s dominance, is poised to introduce advertising. By letting brands influence the follow-up queries suggested after a search, Perplexity aims to add advertising while keeping its core results largely free of commercial bias. The platform’s commitment to organic search results and AI-generated answers backed by verifiable sources has earned it a dedicated following, positioning it as a potential disruptor in the search engine landscape.

U.S. and U.K. collaborate on AI safety tests: A recent memorandum of understanding between the U.S. and U.K. delineates joint efforts in developing safety tests for advanced AI models. Building on commitments from the AI Safety Summit, both countries aim to scrutinize frontier AI models through collaborative evaluations and safety assessments. This collaborative initiative underscores the shared commitment to ensuring responsible AI development and deployment.

Eye on AI Research

Mitigating AI hallucinations with SAFE: Google DeepMind researchers have introduced SAFE (Search-Augmented Factuality Evaluator), a method for curbing AI hallucinations in long-form output. SAFE uses a large language model to break a generated response into individual factual claims and then checks each claim against evidence retrieved from a search engine. It demonstrates promising fact-checking accuracy while offering a cost-effective alternative to human fact-checkers. Notably, GPT-4 Turbo emerged as a top performer in SAFE evaluations, underscoring its relative factual reliability.
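
For the curious, here is a minimal sketch of how a SAFE-style check might look in Python. The helpers call_llm and web_search are hypothetical placeholders for a model API and a search API, and the prompts are illustrative; this sketches the general technique, not DeepMind’s actual implementation:

```python
# Sketch of a SAFE-style factuality check: split a long-form answer into
# atomic claims, retrieve search evidence for each, and let an LLM judge
# whether the evidence supports the claim.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire this up to your model of choice."""
    raise NotImplementedError

def web_search(query: str) -> list[str]:
    """Hypothetical search call returning a list of result snippets."""
    raise NotImplementedError

def safe_evaluate(response: str) -> dict:
    # 1. Break the long-form response into individual factual claims.
    facts = call_llm(
        f"List each atomic factual claim in the text, one per line:\n{response}"
    ).splitlines()

    verdicts = {}
    for fact in facts:
        # 2. Have the LLM craft a search query tailored to this claim.
        query = call_llm(f"Write a web search query to verify: {fact}")
        snippets = web_search(query)
        # 3. Ask the LLM whether the retrieved evidence supports the claim.
        verdict = call_llm(
            "Given these search results:\n"
            + "\n".join(snippets)
            + f"\nIs the claim supported? Answer 'supported' or 'not supported'."
            + f"\nClaim: {fact}"
        )
        verdicts[fact] = verdict.strip().lower()

    # 4. Report the fraction of claims the search evidence supports.
    supported = sum(1 for v in verdicts.values() if v == "supported")
    return {"facts": verdicts, "precision": supported / max(len(verdicts), 1)}
```

The key design choice is that a language model is reused as an inexpensive judge, with the search snippets supplying the external grounding it lacks on its own.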

Fortune on AI

Explore a diverse array of AI-related topics and industry insights covered in Fortune’s latest articles:

  • OpenAI’s foray into Hollywood with Sora
  • Amazon’s AI alliance with Anthropic
  • OpenAI’s trajectory towards trillion-dollar valuation
  • Stability AI’s tumultuous journey with tech investors

AI Calendar

Stay updated on upcoming AI events and conferences to engage with cutting-edge developments and industry experts:

  • April 15-16: Fortune Brainstorm AI London
  • May 7-11: International Conference on Learning Representations (ICLR) in Vienna
  • May 21-23: Microsoft Build in Seattle
  • June 5: FedScoop’s FedTalks 2024 in Washington, D.C.
  • June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore
  • July 15-17: Fortune Brainstorm Tech in Park City, Utah
  • Aug. 12-14: Ai4 2024 in Las Vegas

Brain Food

Delve into discussions of AI policy and safety, including the EU’s regulatory stance on AI safety and its potential implications for industry. Jack Clark’s reflections on the EU AI Act’s compute thresholds for AI models shed light on the evolving landscape of AI governance and compliance.

For detailed insights and calculations on AI safety regulations, refer to the comprehensive analysis provided by Jack Clark.
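
For a sense of the arithmetic at stake: the EU AI Act presumes a general-purpose model poses “systemic risk” once its cumulative training compute exceeds 10^25 FLOPs, and training compute is commonly approximated as 6 × parameters × training tokens. Here is a minimal sketch of such a threshold check; the model sizes below are illustrative assumptions, not figures from Clark’s analysis:

```python
# Back-of-the-envelope check of training compute against the EU AI Act's
# 10^25 FLOP systemic-risk threshold, using the common approximation of
# ~6 FLOPs per parameter per training token. The example models are
# illustrative assumptions, not real training runs.

EU_THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute: ~6 * parameters * tokens."""
    return 6 * params * tokens

examples = [
    ("7B params, 2T tokens", 7e9, 2e12),
    ("70B params, 15T tokens", 70e9, 15e12),
]

for name, params, tokens in examples:
    flops = training_flops(params, tokens)
    side = "over" if flops > EU_THRESHOLD_FLOPS else "under"
    print(f"{name}: {flops:.2e} FLOPs ({side} the 1e25 threshold)")
```

By this approximation, even a 70-billion-parameter model trained on 15 trillion tokens lands just under the threshold (about 6.3e24 FLOPs), which is why the rule mainly bites at frontier scale.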

This encapsulates the latest developments and insights in the realm of AI. Stay tuned for more updates and evolving trends in the AI landscape.
