NVIDIA A100 Tensor Core GPU 80GB HBM2e
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere Architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance over the prior generation and can be partitioned into as many as seven GPU instances to dynamically adjust to shifting demands. Available in 40GB and 80GB memory versions, the A100 80GB debuts the world’s fastest GPU memory, with bandwidth of nearly 2 terabytes per second (TB/s), to run the largest models and datasets.
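As a sketch of how that seven-way partitioning works in practice, Multi-Instance GPU (MIG) can be driven from `nvidia-smi`. The profile name `1g.10gb` below assumes the 80GB card (the 40GB model's equivalent profile is `1g.5gb`); these commands require root and an A100 in the system.

```shell
# Enable MIG mode on GPU 0 (a GPU reset or reboot may be required to take effect).
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports.
sudo nvidia-smi mig -lgip

# Create seven 1g.10gb GPU instances, each with a default compute instance (-C).
sudo nvidia-smi mig -i 0 -C \
  -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb

# Verify the resulting GPU instances.
nvidia-smi mig -lgi
```

Each instance then appears as an isolated device with its own memory, cache, and compute slice, so seven independent workloads can share one card.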
Specifications
GPU Features | NVIDIA A100 80GB |
GPU Memory | 80 GB HBM2e |
Memory bandwidth | 1935 GB/s |
FP64 Tensor Core | 19.5 TFLOPS |
Tensor Float 32 (TF32) | 156 TFLOPS |
BFLOAT16 Tensor Core | 312 TFLOPS |
FP16 Tensor Core | 312 TFLOPS |
INT8 Tensor Core | 624 TOPS |
Max thermal design power (TDP) | 300W |
Multi-Instance GPU (MIG) | Up to 7 MIGs @ 10GB |
Interconnect | NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s; PCIe Gen4: 64 GB/s |
Form factor | PCIe Dual-slot air-cooled or single-slot liquid-cooled |
Server options | Partner and NVIDIA Certified Systems with 1–8 GPUs |
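As a quick worked example (not part of the datasheet), the table's peak figures imply a "machine balance" for each precision: how many FLOPs a kernel must perform per byte of HBM2e traffic to be compute-bound rather than bandwidth-bound. The sketch below simply divides each peak throughput by the 1935 GB/s bandwidth.

```python
# Machine balance (peak FLOPs per byte of memory traffic) for the
# A100 80GB PCIe, using the peak figures from the table above.
PEAK = {
    "FP64 Tensor Core": 19.5e12,   # FLOP/s
    "TF32 Tensor Core": 156e12,
    "BF16/FP16 Tensor Core": 312e12,
}
BANDWIDTH = 1935e9  # bytes/s of HBM2e bandwidth

def machine_balance(flops: float, bw: float = BANDWIDTH) -> float:
    """FLOPs that must be done per byte loaded to stay compute-bound."""
    return flops / bw

for name, flops in PEAK.items():
    print(f"{name}: {machine_balance(flops):.1f} FLOP/byte")
```

Kernels with lower arithmetic intensity than these thresholds (roughly 10 FLOP/byte at FP64 up to about 161 FLOP/byte at FP16) are limited by the memory system, which is why the 80GB model's bandwidth matters as much as its raw TFLOPS.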