The new ANNA Ampere XL1 server delivers the power of up to eight NVIDIA A100 double-width GPUs. The system supports PCIe Gen 4 for fast CPU-to-GPU connectivity and high-speed networking expansion cards.
It supports GPUDirect RDMA with 1:1 mapping between network interconnects and GPUs, and uses NVLink to provide GPU-to-GPU communication in a mesh topology at speeds of up to 200GB per second.
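As a quick illustration of how the GPU interconnect on a system like this might be checked from software, the short Python sketch below (assuming a CUDA-enabled PyTorch installation; it is not part of the product itself) enumerates the installed GPUs and asks the driver which pairs can reach each other directly over peer-to-peer links such as NVLink.

    # Minimal sketch: list CUDA GPUs and report peer-to-peer (e.g. NVLink) reachability.
    # Assumes PyTorch built with CUDA support; the output format is illustrative only.
    import torch

    def report_gpu_topology():
        n = torch.cuda.device_count()
        print(f"Detected {n} CUDA GPUs")
        for i in range(n):
            print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")
        # Ask the driver, for each ordered pair, whether direct peer access is possible.
        for i in range(n):
            for j in range(n):
                if i != j:
                    ok = torch.cuda.can_device_access_peer(i, j)
                    print(f"  GPU {i} -> GPU {j}: peer access {'yes' if ok else 'no'}")

    if __name__ == "__main__":
        report_gpu_topology()

On a deployed system, the command "nvidia-smi topo -m" reports a similar interconnect matrix directly from the driver, including which GPU pairs are linked by NVLink.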
With 2+2 power redundancy, the system is ideal for HPC and AI workloads. Designed with speech recognition, computer vision, inferencing and data science in mind, the ANNA Ampere XL1 is a strong choice for all types of AI workloads. Scalable and easily integrated into existing VMware environments, the ANNA Ampere XL1 offers straightforward access to large-scale AI implementation.
The ANNA Ampere XL1 is certified by NVIDIA for use with the NVIDIA AI Enterprise suite and includes access to NGC.
Chipset: System-On-Chip (SoC)
Drive Bays: Up to 24 x 2.5" SAS/SATA drive bays
Drive Support: NVMe, SATA; RAID controller option available for 24 HDDs
Expansion Slots: 9 PCIe 4.0 x16 (FHFL) slots, or 10 PCIe 4.0 x16 (FHFL) slots without NVMe devices
Form Factor: 4U rackmount
GPU Manufacturer: NVIDIA
GPU Quantity: Up to 8x NVIDIA A100 double-width GPUs
Manufacturer: Supermicro
Memory (Maximum): Up to 8TB 3DS ECC DDR4-3200MHz SDRAM
Memory Slots: 32 DIMM slots
Memory Type: 3200MHz ECC DDR4 RDIMM/LRDIMM
Network Connectivity: 1 x RJ45 1GbE dedicated IPMI management port; additional networking provided via riser card
Power Supply: 2000W redundant power supplies with PMBus
To help our clients make informed decisions about new technologies, we have opened up our research and development facilities and actively encourage customers to try the latest platforms using their own tools and, where necessary, alongside their existing hardware. Remote access is also available.