
ASUS ESC AI POD
NVIDIA GB200 NVL72
Explore more AI breakthroughs in a single rack.

Unimaginable AI Unleashed
ESC NM2N721-E1 with NVIDIA GB200 NVL72
KEY FEATURES


Unparalleled AI performance: All-new ASUS ESC AI POD with the NVIDIA® GB200 NVL72 system and the NVIDIA GB200 Grace Blackwell Superchip
Full ASUS AI server lineup: From hybrid servers to edge-computing deployments, ready for training, inference, data analytics and HPC
Software-defined data center solutions: End-to-end services tailored to enterprise needs, from top-notch hardware to comprehensive software
A liquid-cooled, rack-scale solution with 36 Grace CPUs and 72 Blackwell GPUs
Fifth-generation NVIDIA NVLink technology within a single domain
NVIDIA BlueField®-3 to enable cloud networking and composable storage
[Performance comparison figures vs. the NVIDIA H100 Tensor Core GPU and CPU]
Unlocking Real-Time Trillion-Parameter Models

Rack-Scale Architecture for Real-Time Trillion-Parameter Inference and Training
GB200 NVL72 Specifications
| GB200 NVL72 | GB200 Grace Blackwell Superchip |
Configuration | 36 Grace CPUs : 72 Blackwell GPUs | 1 Grace CPU : 2 Blackwell GPUs |
FP4 Tensor Core² | 1,440 PFLOPS | 40 PFLOPS |
FP8/FP6 Tensor Core² | 720 PFLOPS | 20 PFLOPS |
INT8 Tensor Core² | 720 POPS | 20 POPS |
FP16/BF16 Tensor Core² | 360 PFLOPS | 10 PFLOPS |
TF32 Tensor Core | 180 PFLOPS | 5 PFLOPS |
FP32 | 6,480 TFLOPS | 180 TFLOPS |
FP64 | 3,240 TFLOPS | 90 TFLOPS |
FP64 Tensor Core | 3,240 TFLOPS | 90 TFLOPS |
GPU Memory / Bandwidth | Up to 13.5 TB HBM3e / 576 TB/s | Up to 384 GB HBM3e / 16 TB/s |
NVLink Bandwidth | 130 TB/s | 3.6 TB/s |
CPU Core Count | 2,592 Arm® Neoverse V2 cores | 72 Arm Neoverse V2 cores |
CPU Memory / Bandwidth | Up to 17 TB LPDDR5X / Up to 18.4 TB/s | Up to 480 GB LPDDR5X / Up to 512 GB/s |
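The rack-scale column above can be cross-checked against the per-superchip column with a quick sketch: the 36× multiplier follows directly from the 36 Grace CPU : 72 Blackwell GPU configuration (all values copied from the table).

```python
# Cross-check GB200 NVL72 rack totals against per-superchip figures.
# The rack comprises 36 GB200 superchips (1 Grace CPU + 2 Blackwell GPUs each).
SUPERCHIPS_PER_RACK = 36

superchip = {
    "fp4_pflops": 40,    # FP4 Tensor Core per superchip
    "fp8_pflops": 20,    # FP8/FP6 Tensor Core per superchip
    "fp16_pflops": 10,   # FP16/BF16 Tensor Core per superchip
    "cpu_cores": 72,     # Arm Neoverse V2 cores per Grace CPU
    "gpus": 2,           # Blackwell GPUs per superchip
}

rack = {key: value * SUPERCHIPS_PER_RACK for key, value in superchip.items()}

assert rack["fp4_pflops"] == 1440   # matches "1,440 PFLOPS" in the table
assert rack["fp8_pflops"] == 720    # matches "720 PFLOPS"
assert rack["fp16_pflops"] == 360   # matches "360 PFLOPS"
assert rack["cpu_cores"] == 2592    # matches "2,592 Arm Neoverse V2 cores"
assert rack["gpus"] == 72           # matches "72 Blackwell GPUs"
```

Every rack-level entry in the table is simply 36× its superchip counterpart.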
Technological Breakthroughs

Blackwell Architecture
The NVIDIA Blackwell architecture delivers groundbreaking advancements in accelerated computing, powering a new era of AI with unparalleled performance, efficiency, and scale.

NVIDIA Grace CPU
The NVIDIA Grace CPU is a breakthrough processor designed for modern data centers running AI, cloud, and HPC applications. It provides outstanding performance and memory bandwidth with 2X the energy efficiency of today’s leading server processors.

Fifth-Generation NVIDIA NVLink
Unlocking the full potential of exascale computing and trillion-parameter AI models requires swift, seamless communication between every GPU in a server cluster. The fifth generation of NVLink is a scale-up interconnect that unleashes accelerated performance for trillion- and multi-trillion-parameter AI models.
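As a sketch of how the 130 TB/s per-domain figure in the spec table arises, assuming NVIDIA's published figure of 1.8 TB/s of fifth-generation NVLink bandwidth per Blackwell GPU (a figure not stated on this page):

```python
# Hedged sketch: decompose the rack-level NVLink bandwidth figure.
# Assumption: 1.8 TB/s of bidirectional fifth-gen NVLink bandwidth
# per Blackwell GPU (from NVIDIA's public NVLink material).
PER_GPU_NVLINK_TBPS = 1.8
GPUS_IN_DOMAIN = 72  # one NVLink domain spans all 72 GPUs in GB200 NVL72

aggregate = PER_GPU_NVLINK_TBPS * GPUS_IN_DOMAIN
print(f"{aggregate:.1f} TB/s")  # prints "129.6 TB/s" — rounded to 130 TB/s in the table
```

The same per-GPU figure also yields the superchip's 3.6 TB/s entry (2 GPUs × 1.8 TB/s).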

NVIDIA Networking
The data center’s network plays a crucial role in driving AI advancements and performance, serving as the backbone for distributed AI model training and generative AI performance. NVIDIA Quantum-X800 InfiniBand, NVIDIA Spectrum™-X800 Ethernet, and NVIDIA BlueField®-3 DPUs enable efficient scalability across hundreds and thousands of Blackwell GPUs for optimal application performance.