
### Enhancing AI Power: Intel and Ohio Supercomputer Center Boost Processing Capabilities


A partnership involving Intel, Dell Technologies, Nvidia, and the Ohio Supercomputer Center (OSC) has unveiled Cardinal, an advanced high-performance computing (HPC) cluster. Cardinal is built to meet Ohio's growing demand for HPC resources across research, education, and industry innovation, with a particular focus on artificial intelligence (AI).

AI and machine learning play pivotal roles in scientific, engineering, and biomedical fields, helping researchers answer complex questions. As these technologies continue to prove their effectiveness, disciplines such as the agricultural sciences, architecture, and the social sciences are also beginning to embrace their transformative potential.

The Cardinal Cluster is specifically designed with robust hardware to cater to the escalating demands of AI workloads. Compared to its predecessor, the Owens Cluster launched in 2016, the new cluster represents a significant enhancement both in terms of capabilities and capacity.

This state-of-the-art Cardinal Cluster comprises Dell PowerEdge servers and the Intel® Xeon® CPU Max Series integrated with high bandwidth memory (HBM) to efficiently handle memory-bound HPC and AI workloads. This setup aims to promote programmability, portability, and ecosystem adoption. The cluster boasts:

  • 756 Intel Xeon CPU Max 9470 processors, providing a total of 39,312 CPU cores.
  • 128 gigabytes (GB) of HBM2e and 512 GB of DDR5 memory per node.
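The CPU figures above are internally consistent; a quick sanity check (a sketch, using only the totals quoted in this article) recovers the per-socket core count of the Xeon CPU Max 9470:

```python
# Sanity-check the Cardinal CPU partition figures quoted above.
total_cpus = 756        # Xeon CPU Max 9470 processors in the cluster
total_cores = 39_312    # total CPU cores quoted for the cluster

cores_per_cpu = total_cores // total_cpus
print(cores_per_cpu)    # 52 cores per Xeon Max 9470 processor
```

That works out to 52 cores per processor, matching Intel's published core count for the Xeon CPU Max 9470.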

With a unified software stack and conventional programming models on the x86 base, the cluster is poised to more than double OSC’s computing capabilities, catering to a wider array of use cases and facilitating seamless adoption and deployment.

Moreover, the system features:

  • Thirty-two nodes, each with 104 cores, 1 terabyte (TB) of memory, and four Nvidia Hopper architecture-based H100 Tensor Core GPUs with 94 GB of HBM2e memory, interconnected by four NVLink connections.
  • Nvidia Quantum-2 InfiniBand, delivering 400 gigabits per second (Gbps) of low-latency networking to achieve 500 petaflops of peak AI performance (FP8 Tensor Core, with sparsity) for large AI-driven scientific applications.
  • Sixteen nodes, each with 104 cores, 128 GB of HBM2e, and 2 TB of DDR5 memory, tailored for large symmetric multiprocessing (SMP)-style jobs.
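The 500-petaflop figure follows from the GPU count above; a back-of-envelope sketch, assuming Nvidia's published per-GPU FP8 Tensor Core peak (with sparsity) of roughly 3,958 TFLOPS for the H100 SXM:

```python
# Rough check of the quoted ~500 PFLOPS peak AI performance.
nodes = 32
gpus_per_node = 4
fp8_sparse_tflops = 3958  # H100 SXM FP8 peak with sparsity (datasheet figure, assumption)

total_petaflops = nodes * gpus_per_node * fp8_sparse_tflops / 1000
print(total_petaflops)    # ~506.6 PFLOPS, consistent with the ~500 PF quoted
```

With 128 H100 GPUs in total, the aggregate peak lands just above 500 petaflops, so the article's figure is a reasonable round number.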

Ogi Brkic, vice president and general manager of Intel's Data Center AI Solutions product line, said the Intel Xeon CPU Max Series is an optimal choice for building and running HPC and AI workloads using widely adopted AI frameworks and libraries. The system's heterogeneity is expected to let OSC's engineers, researchers, and scientists take full advantage of its doubled memory bandwidth, and Intel takes pride in providing solutions that significantly accelerate data analysis for OSC's focus areas and the broader ecosystem.

Last modified: February 25, 2024