The world's most powerful deep learning system for the most complex AI challenges.
In response to the rapidly growing demands of modern AI workloads, from ever-larger deep neural networks to algorithms that automatically detect features in complex data, deep learning has reshaped the landscape of computational technology. Paving the way for modern AI, the NVIDIA® DGX-2™ is recognised as 'the world's most powerful deep learning system'. With unprecedented levels of compute, the platform is targeted at deep learning and goes well beyond the processing power of its equally impressive predecessor, the DGX-1. This is the first server to adopt the SXM3 form factor, letting you experience new levels of AI speed and scale, and the first petaFLOPS-class system to combine 16 fully interconnected GPUs, delivering 10X the deep learning performance and ground-breaking GPU scale that allows you to train models 4X bigger on a single node.
Perfect for the demands of leading-edge research, NVSwitch enables model parallelism with new levels of inter-GPU bandwidth: the DGX-2 networking fabric delivers 2.4TB/s of bisection bandwidth, a 24X increase over prior generations. Moreover, two of the fastest CPUs available, from Intel's Xeon Platinum Skylake generation, combined with triple the memory of the DGX-1, provide enough CPU power to stream data to the GPUs and avoid bottlenecks in deep learning. Responding to business imperatives without scaling costs and complexities, the DGX-2 is powered by DGX software that enables simplified deployment at scale. With an accelerated deployment model purpose-built for ease of scale, businesses can spend more time driving insights and less time building complex infrastructure.
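The headline figures above can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below assumes (these details are not in the spec sheet itself) that each Tesla V100 SXM3 carries 32GB of HBM2, that the original DGX-1 shipped eight 16GB V100s, and that each V100 exposes 6 NVLink links at 50GB/s bidirectional into the NVSwitch fabric:

```python
# Back-of-the-envelope check of the DGX-2 headline figures.
# Assumed inputs (not stated in the spec table above):
#   - each Tesla V100 SXM3 has 32 GB of HBM2
#   - the original DGX-1 shipped 8x 16 GB V100s
#   - each V100 has 6 NVLink links at 50 GB/s bidirectional

DGX2_GPUS = 16
DGX1_GPUS = 8
HBM2_PER_DGX2_GPU_GB = 32   # assumed V100 SXM3 capacity
HBM2_PER_DGX1_GPU_GB = 16   # assumed first-generation DGX-1 V100

# Aggregate GPU memory: 16 x 32 GB = 512 GB, matching the spec table,
# and 4x the 128 GB of the original DGX-1 (the "4X bigger models" claim).
dgx2_gpu_mem = DGX2_GPUS * HBM2_PER_DGX2_GPU_GB
dgx1_gpu_mem = DGX1_GPUS * HBM2_PER_DGX1_GPU_GB
model_scale = dgx2_gpu_mem / dgx1_gpu_mem

# Bisection bandwidth: cut the 16 GPUs into two halves of 8. Each GPU
# can drive its full 6 x 50 GB/s = 300 GB/s across the NVSwitch fabric,
# so the bisection is 8 x 300 GB/s = 2.4 TB/s.
NVLINKS_PER_GPU = 6
GB_S_PER_NVLINK = 50
per_gpu_bw = NVLINKS_PER_GPU * GB_S_PER_NVLINK
bisection_tb_s = (DGX2_GPUS // 2) * per_gpu_bw / 1000

print(f"GPU memory: {dgx2_gpu_mem} GB ({model_scale:.0f}X DGX-1)")
print(f"Bisection bandwidth: {bisection_tb_s} TB/s")
```

Under those assumptions the numbers line up with the marketing claims: 512GB of aggregate GPU memory, 4X the original DGX-1, and 2.4TB/s of bisection bandwidth.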
Designed to train what was previously thought impossible, the DGX-2 delivers new levels of AI speed and scale. Spend less time optimising and focus your resources on discovery: 'with every NVIDIA system, get started fast, train faster and remain faster with an integrated solution that includes software tools and NVIDIA expertise'.
CPU Cores
24
CPU Family
Intel Xeon
CPU Manufacturer
Intel
CPU Quantity (Maximum)
2
CPU Series
Intel Xeon
CPU Speed
2.7 GHz
GPU Family
NVIDIA Tesla Series
GPU Manufacturer
NVIDIA
GPU Memory Sizes
512GB
GPU Model
Tesla V100
GPU Quantity
16
Manufacturer
NVIDIA
Memory (Maximum)
1.5TB
Network Adapter
8 x 100Gb/s InfiniBand/100GigE; dual 10/25Gb/s Ethernet
Operating Temperature
5°C to 35°C
Power Consumption
10 kW
Software (Installed)
Ubuntu Linux OS
SSDs (Installed)
2 x 960GB NVMe SSDs (OS)
30TB (8 x 3.84TB) NVMe SSDs (internal storage)
To help our clients make informed decisions about new technologies, we have opened up our research and development facilities and actively encourage customers to try the latest platforms using their own tools and, where necessary, alongside their existing hardware. Remote access is also available.
Accelerate your compute performance! Learn how this is possible in this exclusive webcast, "AI Data Centres - DRAM 6400 & PCIe Gen 5 with Micron," hosted by Boston in collaboration with Micron Technology.