Amazon.com (AMZN.O) has unveiled a new artificial intelligence chip for its cloud computing platform amid its escalating rivalry with Microsoft (MSFT.O) in the AI market.
During a conference in Las Vegas, Adam Selipsky, the Chief Executive of Amazon Web Services (AWS), introduced Trainium2, the second iteration of a chip tailored for training AI systems. Selipsky said the new version is four times faster than its predecessor while consuming half the energy.
This strategic move by AWS follows Microsoft’s recent announcement of its own AI chip named Maia. The Trainium2 chip will directly compete with AI chips from Google’s Alphabet (GOOGL.O), particularly its Tensor Processing Unit (TPU) which has been available to cloud computing clients since 2018.
AWS plans to roll out the new training chips starting next year. The surge in custom chip development reflects the industry’s urgency to secure adequate computing power for advancing technologies like large language models, pivotal for services akin to ChatGPT.
In a bid to address the scarcity of Nvidia (NVDA.O) AI chips, both AWS and Microsoft are offering their own chips as alternatives. Additionally, AWS said it plans to incorporate Nvidia’s latest chips into its cloud service offerings.
Selipsky also introduced Graviton4, AWS’s fourth custom central processor chip, which delivers 30% better performance than its predecessor. The announcement closely follows Microsoft’s launch of Cobalt, a custom chip aimed at challenging Amazon’s Graviton series.
Both AWS and Microsoft have opted for technology from Arm Ltd (O9Ty.F) in their chip designs, signaling a shift away from Intel (INTC.O) and Advanced Micro Devices (AMD.O) chips in the realm of cloud computing. Meanwhile, Oracle (ORCL.N) has turned to startup Ampere Computing for its cloud service chip requirements.