Meta’s Strategic Integration of Custom AI Chips with AMD and Nvidia GPUs

So that’s where all the laid-off semiconductor engineers went!

After years of development, Meta is preparing to deploy its own AI accelerators at scale this year. The technology giant, formerly known as Facebook, has confirmed that its Meta Training and Inference Accelerator (MTIA) chip family will work alongside its Nvidia H100 and AMD MI300X GPUs rather than replace them. Meta plans to roll out an inference-optimized processor, internally dubbed Artemis, which builds on the first-generation parts teased last year.

A Meta spokesperson told The Register that the company is making solid progress on its in-house silicon programs and plans to begin deploying its inference-focused chip in 2024. The spokesperson stressed that the custom accelerators are complementary to commercially available GPUs, delivering the right mix of performance and efficiency for Meta-specific workloads. Specific details remain under wraps, though updates to the MTIA roadmap are expected later this year.

The implication is that the second-generation inference chip will see broader deployment, after a first generation that ran inference workloads only in Meta’s labs. More may be disclosed later about parts aimed primarily at training, or at a combination of training and inference.

Meta’s AI workloads have grown rapidly, driving demand for specialized silicon to accelerate its machine-learning software, and the push into custom processors follows directly from that trajectory. Meta is a relative latecomer to deploying custom AI silicon in production: Amazon, Google, and Microsoft already use home-grown parts to speed up their machine-learning systems.

Analysts at SemiAnalysis expect Meta to use its custom ASICs to serve established models, freeing up GPU capacity for more dynamic workloads. The approach mirrors Meta’s earlier custom accelerators, which were built to offload data-intensive video workloads.

In terms of design, Meta’s latest chip builds on the architecture of its first-generation counterpart, with improved cores and a move from LPDDR5 to high-bandwidth memory packaged using TSMC’s CoWoS technology. Unlike its predecessor, the second-gen chip is slated for wide deployment across Meta’s datacenter infrastructure, a significant step in the company’s silicon efforts.

As Meta pushes toward artificial general intelligence, its investment in GPUs and custom silicon reflects a strategic focus on large language models, such as its own Llama 2, that compete with the likes of OpenAI’s GPT-4. That shift demands substantial infrastructure changes to meet the growing demands of AI deployments across Meta’s ecosystem.

Looking ahead, Meta’s focus is expected to shift from the metaverse toward artificial general intelligence, with CEO Mark Zuckerberg outlining plans to deploy a large fleet of Nvidia H100 and AMD MI300X GPUs. By year-end, Meta aims to command compute power equivalent to roughly 600,000 H100s, signaling a continued reliance on GPUs even as MTIA chips enter its infrastructure.

Last modified: February 3, 2024