NVIDIA H100 Tensor Core GPU 80GB HBM2e
Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA H100 Tensor Core GPU. With the NVIDIA® NVLink® Switch System, up to 256 H100s can be connected to accelerate exascale workloads, and a dedicated Transformer Engine handles trillion-parameter language models. H100's combined technology innovations can speed up large language models by an incredible 30X over the previous generation, delivering industry-leading conversational AI.
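To put "trillion-parameter" in perspective, here is a back-of-envelope sketch (my own arithmetic, not an NVIDIA figure) estimating how many 80 GB GPUs are needed just to hold the weights of a model that size, ignoring activations, KV cache, and optimizer state:

```python
def min_gpus_for_weights(n_params: int, bytes_per_param: int, gpu_mem_gb: int = 80) -> int:
    """Minimum GPU count to store model weights alone (no activations or optimizer state)."""
    total_bytes = n_params * bytes_per_param
    gpu_bytes = gpu_mem_gb * 1024**3  # 80 GB of HBM2e per H100 PCIe
    return -(-total_bytes // gpu_bytes)  # ceiling division

# 1 trillion parameters at FP16 (2 bytes each) vs. FP8 (1 byte each)
print(min_gpus_for_weights(10**12, 2))  # → 24
print(min_gpus_for_weights(10**12, 1))  # → 12
```

Even at FP8 precision, the weights alone span a dozen GPUs, which is why multi-GPU scaling over NVLink matters for models at this scale.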
Specifications
| GPU Features | NVIDIA H100 PCIe |
| --- | --- |
| GPU memory | 80 GB HBM2e |
| Memory bandwidth | 2 TB/s |
| FP64 Tensor Core | 51 TFLOPS |
| TF32 Tensor Core | 756 TFLOPS* |
| FP16 Tensor Core | 1,513 TFLOPS* |
| FP8 Tensor Core | 3,026 TFLOPS* |
| INT8 Tensor Core | 3,026 TOPS* |
| Max thermal design power (TDP) | 300–350 W (configurable) |
| Multi-Instance GPU | Up to 7 MIGs @ 10 GB each |
| NVLink | 2-way, 2-slot or 3-slot bridge |
| Form factor | PCIe, dual-slot, air-cooled |
| Server options | Partner and NVIDIA-Certified Systems with 1–8 GPUs |

\* With sparsity.
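The compute and bandwidth figures above can be combined into a simple roofline estimate. The sketch below (an illustration using the listed peaks, which real kernels will not fully achieve) shows the attainable throughput for a kernel as a function of its arithmetic intensity:

```python
# Roofline sketch from the spec table: attainable throughput is capped by
# either the FP16 Tensor Core peak or by memory bandwidth times the kernel's
# arithmetic intensity (FLOPs performed per byte moved from HBM).

PEAK_FP16_TFLOPS = 1513  # listed FP16 Tensor Core figure (with-sparsity peak)
MEM_BW_TBPS = 2.0        # 2 TB/s HBM2e memory bandwidth

def attainable_tflops(intensity_flop_per_byte: float) -> float:
    """Roofline model: min(compute peak, bandwidth x arithmetic intensity)."""
    return min(PEAK_FP16_TFLOPS, MEM_BW_TBPS * intensity_flop_per_byte)

# Ridge point: the intensity at which kernels stop being memory-bound
ridge = PEAK_FP16_TFLOPS / MEM_BW_TBPS
print(f"ridge point: {ridge:.1f} FLOP/byte")          # → ridge point: 756.5 FLOP/byte
print(f"at 10 FLOP/byte: {attainable_tflops(10):.0f} TFLOPS")  # memory-bound regime
```

The high ridge point illustrates why low-intensity workloads (e.g. inference with small batches) are bandwidth-limited on this class of GPU, while large matrix multiplications can approach the compute peak.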