NVIDIA HGX H100 SXM5 8‑GPU Board
Enterprise-Grade 8‑Way H100 SXM5 AI Accelerator Platform
The NVIDIA HGX H100 SXM5 8‑GPU Board (part number 935‑24287‑0301‑000) integrates eight NVIDIA Hopper‑based H100 SXM5 GPUs, each with 80 GB of HBM3 memory, onto a single high-density accelerator board. Built for AI training, inference, and exascale HPC, it is designed for direct liquid cooling. With 900 GB/s of NVLink GPU-to-GPU interconnect bandwidth and 3.35 TB/s of HBM3 memory bandwidth per GPU, this platform delivers exceptional scalability and performance.
Product Specifications for NVIDIA HGX H100 SXM5 8‑GPU Board (935‑24287‑0301‑000)
| Feature | Details |
|---|---|
| Part Name / Number | NVIDIA HGX H100 SXM5 8‑GPU Board – 935‑24287‑0301‑000 |
| Form Factor | SXM5 (8 GPUs on single board, direct liquid-cooled) |
| GPUs | 8 × NVIDIA H100 SXM5, each 80 GB HBM3 |
| Total GPU Memory | 640 GB HBM3 |
| Memory Bandwidth | 3.35 TB/s HBM3 per GPU |
| CUDA Cores (Total) | 8 × 16,896 = 135,168 cores |
| Tensor Cores (Total) | 8 × 528 = 4,224 cores |
| GPU Interconnect | NVLink SXM5 — 900 GB/s GPU ↔ GPU |
| Thermal Design Power (TDP) | Configurable up to 700 W per GPU |
| Cooling | Direct liquid cooling (equipment not included) |
| Power Interface | Board-level power connectors (server-integrated power delivery) |
| PCIe Host Interface | PCIe Gen 5 x16 interface (board to host) |
| Multi‑Instance GPU (MIG) | Each H100 supports up to 7 MIG instances |
| Use Cases | Distributed AI training, supercomputing, HPC clusters |
| Platform Compatibility | NVIDIA Hopper architecture, NVLink-enabled, NVIDIA HGX/DGX compliant |
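The board-level totals in the table are simple multiples of the per-GPU figures. As a quick sanity check, the sketch below recomputes the aggregates from the per-GPU values listed above; the aggregate memory-bandwidth figure is derived here for illustration and is not a separate line item in the specification.

```python
# Per-GPU values taken from the specification table above.
NUM_GPUS = 8
HBM3_PER_GPU_GB = 80          # HBM3 capacity per H100 SXM5
CUDA_CORES_PER_GPU = 16_896   # CUDA cores per H100 SXM5
TENSOR_CORES_PER_GPU = 528    # Tensor Cores per H100 SXM5
HBM3_BW_PER_GPU_TBS = 3.35    # HBM3 bandwidth per GPU, TB/s

# Board-level aggregates.
total_memory_gb = NUM_GPUS * HBM3_PER_GPU_GB           # 640 GB
total_cuda_cores = NUM_GPUS * CUDA_CORES_PER_GPU       # 135,168
total_tensor_cores = NUM_GPUS * TENSOR_CORES_PER_GPU   # 4,224
aggregate_bw_tbs = NUM_GPUS * HBM3_BW_PER_GPU_TBS      # derived, not an official spec line

print(f"Total GPU memory:    {total_memory_gb} GB HBM3")
print(f"Total CUDA cores:    {total_cuda_cores:,}")
print(f"Total Tensor Cores:  {total_tensor_cores:,}")
print(f"Aggregate HBM3 BW:   {aggregate_bw_tbs:.1f} TB/s")
```

These totals match the "Total GPU Memory", "CUDA Cores (Total)", and "Tensor Cores (Total)" rows in the table.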