" />

 

TAKE ON THE WORLD WITH:
NVIDIA DGX H100

The all-in-one AI solution

Cut down on training time with the newest NVIDIA DGX H100. Shorter render times, faster engine compile performance, and reduced simulation times are the minimum you can expect. Harness the power of PCIe 5.0 lanes to get the most from your GPUs and storage.

DOWNLOAD DATASHEET SET UP A TEST DRIVE

THE NEXT EVOLUTION OF NVIDIA'S DATACENTRE-GRADE GPUS

Optimized for AI Deployments

Includes AI frameworks and containers with performance-optimised DL/ML tools to simplify building and deploying AI on-premises or in the cloud.

Certified for the Enterprise

Reduce deployment risks with a complete suite of NVIDIA AI software certified for flexible deployment: bare metal, virtualized, containerized, accelerated by GPUs or CPUs.

NVIDIA Enterprise Support

Ensure mission-critical AI projects stay on track with access to NVIDIA experts, long-term support, and the latest product updates.

SPREAD YOUR WINGS

The fourth generation of NVIDIA's purpose-built AI infrastructure is designed to tackle any and all AI workloads efficiently. This AI powerhouse is built on NVIDIA's H100 Tensor Core GPU, making it a simple, enterprise-level solution. Some features include (a quick sanity check of these figures follows the list):

  • 8x NVIDIA H100 Tensor Core GPUs
  • 32 petaFLOPS of FP8 compute
  • 640GB of total GPU memory
  • 2TB of system memory
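
The headline figures above follow directly from the per-GPU specs. The per-GPU numbers in this sketch are NVIDIA's published H100 SXM figures (FP8 throughput quoted with sparsity); the script itself is just illustrative arithmetic:

```python
# Back-of-envelope check of the DGX H100 headline numbers.
# Per-GPU figures are NVIDIA's published H100 SXM specs (FP8 with sparsity).
NUM_GPUS = 8
FP8_TFLOPS_PER_GPU = 4_000      # ~4 petaFLOPS of FP8 per H100
HBM_GB_PER_GPU = 80             # 80 GB of HBM3 per H100

total_fp8_pflops = NUM_GPUS * FP8_TFLOPS_PER_GPU / 1_000
total_gpu_memory_gb = NUM_GPUS * HBM_GB_PER_GPU

print(f"Aggregate FP8 compute: {total_fp8_pflops:.0f} petaFLOPS")   # 32 petaFLOPS
print(f"Aggregate GPU memory:  {total_gpu_memory_gb} GB")           # 640 GB
```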

DOWNLOAD DATASHEET

HOPPER ARCHITECTURE...

The successor to the A100 GPU, which was built on the Ampere architecture, the H100 is built on the Hopper architecture and is a significant upgrade over its predecessor: it is capable of everything the A100 is, and much more. With 80 billion transistors and 4.9 TB/s of bandwidth, twenty H100 GPUs can sustain the equivalent of the entire world's internet traffic. Such bandwidth is made possible by NVLink, NVIDIA's own high-bandwidth GPU interconnect.
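
To put that claim in perspective, here is the back-of-envelope arithmetic behind it. The 4.9 TB/s figure is the per-GPU bandwidth quoted above; treat the comparison to global internet traffic as an order-of-magnitude illustration rather than a benchmark:

```python
# Rough arithmetic behind the "twenty H100s vs. world internet traffic" claim.
PER_GPU_BANDWIDTH_TBPS = 4.9   # per-H100 bandwidth quoted above
NUM_GPUS = 20

aggregate_tbps = PER_GPU_BANDWIDTH_TBPS * NUM_GPUS
print(f"Aggregate bandwidth: {aggregate_tbps:.0f} TB/s")  # ~98 TB/s across 20 GPUs
```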

SET UP A TEST DRIVE

AI POWERHOUSE

The H100 GPU brings the "oomph" needed to get AI training and inferencing done in a matter of minutes. Additionally, this solution utilises Multi-Instance GPU (MIG) technology, allowing each GPU to be securely partitioned into up to seven separate GPU instances. This benefits workloads that do not fully saturate a GPU's compute capacity.
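
For a sense of how MIG partitioning is driven in practice, here is a minimal sketch using the standard nvidia-smi MIG commands (run as root on a MIG-capable GPU). The 1g.10gb profile name is illustrative; the profiles actually available depend on the card, so check the -lgip output first:

```python
import subprocess

def run(cmd):
    """Run a shell command and echo its output (requires root and an H100)."""
    print(f"$ {cmd}")
    print(subprocess.run(cmd.split(), capture_output=True, text=True).stdout)

# Enable MIG mode on GPU 0 (may require a GPU reset to take effect).
run("nvidia-smi -i 0 -mig 1")

# List the GPU instance profiles this card supports.
run("nvidia-smi mig -lgip")

# Carve GPU 0 into seven instances; -C also creates the matching compute
# instances. The 1g.10gb profile is illustrative -- use one from -lgip.
run("nvidia-smi mig -i 0 -cgi "
    "1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C")
```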

  • 4,000 TFLOPS of FP8 compute
  • 2,000 TFLOPS of FP16 compute
  • 1,000 TFLOPS of TF32 compute
  • 60 TFLOPS of FP32 and FP64 compute
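
Those precision tiers are what mixed-precision training exploits. On the H100, FP8 is typically reached through NVIDIA's Transformer Engine; the minimal PyTorch sketch below shows the same pattern with FP16 Tensor Cores, using a made-up model and shapes:

```python
import torch
import torch.nn as nn

# Minimal mixed-precision training step (assumes a CUDA-capable GPU).
model = nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # rescales the loss to avoid FP16 underflow

x = torch.randn(32, 1024, device="cuda")
target = torch.randn(32, 1024, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = nn.functional.mse_loss(model(x), target)  # matmuls hit the Tensor Cores

scaler.scale(loss).backward()
scaler.step(optimizer)   # unscales gradients, then steps the optimizer
scaler.update()
```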

5 Steps to get started in AI with
Boston and NVIDIA

RELIABLE AND SIMPLE SCALABILITY

Reach the heights of supercomputing with one simple, all-encompassing solution: the DGX SuperPOD. Built on market-leading infrastructure, the DGX SuperPOD consistently keeps pace with other AI clusters while remaining a turnkey solution, achieved by leveraging highly optimised hardware to make the best use of the H100 GPUs.
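
At the software level, scaling across DGX nodes is commonly done with data-parallel training; the sketch below shows the generic PyTorch pattern (launched with torchrun) rather than any SuperPOD-specific tooling:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Minimal multi-GPU data-parallel setup of the kind a DGX cluster runs;
# launch with: torchrun --nproc_per_node=8 train.py
# The model is a placeholder; NCCL rides on NVLink/InfiniBand underneath.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024).cuda()
model = DDP(model, device_ids=[local_rank])  # gradients all-reduced across ranks

x = torch.randn(32, 1024, device="cuda")
loss = model(x).sum()
loss.backward()   # DDP overlaps the gradient all-reduce with backprop
dist.destroy_process_group()
```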

SET UP A TEST DRIVE

VERSATILE

The H100 equips end users to work on everything from AI and machine learning, through deep learning, to digital twins, and everything in between. Gain insights, deliver results, and achieve breakthroughs that could change the way we live, faster and more accurately than ever.

SET UP A TEST DRIVE