NVIDIA Grace Hopper Superchips enter full production

NVIDIA has announced that the NVIDIA GH200 Grace Hopper Superchip is in full production, set to power systems coming online worldwide to run complex AI and HPC workloads.


The GH200-powered systems join more than 400 system configurations powered by different combinations of NVIDIA’s latest CPU, GPU and DPU architectures — including NVIDIA Grace, NVIDIA Hopper, NVIDIA Ada Lovelace and NVIDIA BlueField — created to help meet the surging demand for generative AI.

At COMPUTEX, NVIDIA founder and CEO Jensen Huang revealed new systems, partners and additional details surrounding the GH200 Grace Hopper Superchip, which brings together the Arm-based NVIDIA Grace CPU and Hopper GPU architectures using NVIDIA NVLink-C2C interconnect technology. This delivers up to 900GB/s of total bandwidth — 7x more than the standard PCIe Gen5 lanes found in traditional accelerated systems — providing the compute capability to address the most demanding generative AI and HPC applications.
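As a back-of-the-envelope check of that 7x figure (a sketch, not from the article: it assumes a PCIe Gen5 x16 link moves roughly 64 GB/s per direction, i.e. about 128 GB/s bidirectional):

```python
# Rough check of the "7x PCIe Gen5" comparison.
# Assumption (not stated in the article): PCIe Gen5 x16 ~ 128 GB/s bidirectional.
PCIE_GEN5_X16_BIDIR_GBPS = 128
NVLINK_C2C_GBPS = 900  # total bandwidth quoted for NVLink-C2C

speedup = NVLINK_C2C_GBPS / PCIE_GEN5_X16_BIDIR_GBPS
print(f"NVLink-C2C vs PCIe Gen5 x16: ~{speedup:.1f}x")
```

Under that assumption the ratio comes out at roughly 7, matching the quoted claim.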

NVIDIA Announces DGX GH200 AI Supercomputer

NVIDIA has also announced a new class of large-memory AI supercomputer — an NVIDIA DGX™ supercomputer powered by NVIDIA GH200 Grace Hopper Superchips and the NVIDIA NVLink Switch System — created to enable the development of giant, next-generation models for generative AI language applications, recommender systems and data analytics workloads.

The NVIDIA DGX GH200’s massive shared memory space uses NVLink interconnect technology with the NVLink Switch System to combine 256 GH200 superchips, allowing them to perform as a single GPU. This provides 1 exaflop of performance and 144 terabytes of shared memory — nearly 500x more memory than the previous generation NVIDIA DGX A100, which was introduced in 2020.
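The "nearly 500x" comparison can be sanity-checked with simple arithmetic (a sketch under one assumption not stated in the article: that the 2020 DGX A100 baseline is its 320 GB of total GPU memory):

```python
# Rough check of the "nearly 500x" memory comparison.
# Assumption (not stated in the article): DGX A100 baseline = 320 GB GPU memory.
DGX_A100_MEMORY_TB = 0.32    # 320 GB
DGX_GH200_MEMORY_TB = 144    # shared memory quoted for DGX GH200

ratio = DGX_GH200_MEMORY_TB / DGX_A100_MEMORY_TB
print(f"DGX GH200 vs DGX A100 memory: ~{ratio:.0f}x")
```

That works out to roughly 450x, consistent with "nearly 500x".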

NVIDIA MGX Server Specification Gives System Makers Modular Architecture to Meet Diverse Accelerated Computing Needs of the World’s Data Centers

QCT and Supermicro Among First to Adopt MGX With 100+ Configurations to Accelerate AI, HPC, Omniverse Workloads

To meet the diverse accelerated computing needs of the world’s data centers, NVIDIA has unveiled the NVIDIA MGX™ server specification, which provides system manufacturers with a modular reference architecture to quickly and cost-effectively build more than 100 server variations to suit a wide range of AI, high performance computing and Omniverse applications.

ASRock Rack, ASUS, GIGABYTE, Pegatron, QCT and Supermicro will adopt MGX, which can slash development costs by up to three-quarters and reduce development time by two-thirds, to just six months.

With MGX, manufacturers start with a basic system architecture optimized for accelerated computing for their server chassis, and then select their GPU, DPU and CPU. Design variations can address unique workloads, such as HPC, data science, large language models, edge computing, graphics and video, enterprise AI, and design and simulation. Multiple tasks like AI training and 5G can be handled on a single machine, while upgrades to future hardware generations can be frictionless. MGX can also be easily integrated into cloud and enterprise data centers.

NVIDIA Launches Accelerated Ethernet Platform for Hyperscale Generative AI

New NVIDIA Spectrum-X Networking Platform Combines NVIDIA Spectrum-4, BlueField-3 DPUs and Acceleration Software; World-Leading Cloud Service Providers Adopting Platform to Scale Out Generative AI Services

NVIDIA has also announced the NVIDIA Spectrum-X networking platform, an accelerated Ethernet platform designed to improve the performance and efficiency of Ethernet-based AI clouds.

NVIDIA Spectrum-X™ is built on network innovations powered by the tight coupling of the NVIDIA Spectrum-4 Ethernet switch with the NVIDIA BlueField®-3 DPU, achieving 1.7x better overall AI performance and power efficiency, along with consistent, predictable performance in multi-tenant environments. Spectrum-X is supercharged by NVIDIA acceleration software and software development kits (SDKs), allowing developers to build software-defined, cloud-native AI applications.

These end-to-end capabilities reduce run-times of massive, transformer-based generative AI models, allowing network engineers, AI data scientists and cloud service providers to improve results and make informed decisions faster. The world’s top hyperscalers are adopting NVIDIA Spectrum-X, including industry-leading cloud innovators.

As a blueprint and testbed for NVIDIA Spectrum-X reference designs, NVIDIA is building Israel-1, a hyperscale generative AI supercomputer to be deployed in its Israeli data center on Dell PowerEdge XE9680 servers based on the NVIDIA HGX™ H100 eight-GPU platform, BlueField-3 DPUs and Spectrum-4 switches.
