Supermicro SuperStorage Server 2028R-DN2R40L

Product Overview

£6,811.55 (ex VAT) | £8,173.86 (inc VAT)

Optimized for mission-critical, enterprise-level storage applications, Supermicro's innovative Super SBB is a fully redundant, fault-tolerant "Cluster-in-a-box" system. The Super SBB supports hot-swap SAS HDDs, with the option to expand using an SBB JBOD.


In Stock

The Super SBB provides hot-swappable canisters for all active components. With heartbeat and data connections between the serverboards via the midplane, if one serverboard fails the other can take over control and access the HDDs (both controllers can also run in active-active mode), keeping the system up and running. Storage software, available from several of Supermicro's partners, is the key to enabling this feature. Equipped with high-efficiency redundant power supplies and hot-swappable cooling fans, the Super SBB is a highly available, high-reliability storage system at a competitive price.
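The failover behaviour described above amounts to each node monitoring its peer's heartbeat over the midplane link and claiming the peer's drives when that heartbeat stops. Below is a minimal Python sketch of that logic, with hypothetical names and timeouts; the real mechanism lives in the partner storage software, not in user code.

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before the peer is declared dead


class ControllerNode:
    """One SBB controller canister.

    Serves its own set of LUNs (active-active), watches the peer's
    heartbeat arriving over the midplane, and claims the peer's LUNs
    if that heartbeat stops.
    """

    def __init__(self, name, owned_luns, peer_luns):
        self.name = name
        self.owned_luns = set(owned_luns)
        self.peer_luns = set(peer_luns)
        self.last_peer_heartbeat = time.monotonic()

    def on_heartbeat(self):
        """Called whenever a heartbeat frame arrives from the peer."""
        self.last_peer_heartbeat = time.monotonic()

    def check_peer(self):
        """Take over the peer's LUNs once its heartbeat has gone quiet.

        The dual-ported NVMe drives are reachable from both canisters,
        so the surviving node can keep serving every LUN.
        """
        if time.monotonic() - self.last_peer_heartbeat > HEARTBEAT_TIMEOUT:
            self.owned_luns |= self.peer_luns
            self.peer_luns = set()
            print(f"{self.name}: peer silent, now serving {sorted(self.owned_luns)}")


# Simulate node B failing: node A stops hearing heartbeats and takes over.
node_a = ControllerNode("node-A", owned_luns=[0, 1], peer_luns=[2, 3])
node_a.last_peer_heartbeat -= 10  # pretend the last heartbeat arrived 10s ago
node_a.check_peer()               # node-A: peer silent, now serving [0, 1, 2, 3]
```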


Key Features

  • Dual socket R3 (LGA 2011) supports Intel® Xeon® processor E5-2600 v4/v3 family; QPI up to 9.6GT/s
  • Up to 2TB ECC 3DS LRDIMM, up to DDR4-2400MHz; 16 DIMM slots
  • 1x PCI-E 3.0 x16 HHHL slot, 1x PCI-E 3.0 x8 HHHL slot, 1x SIOM slot
  • Dual port 10GBase-T (Intel® X540); Intel® XL710 used for dedicated heartbeat between the 2 nodes in the SBB
  • 40x Hot-swap 2.5" dual-port NVMe drive bays
  • Server remote management: IPMI 2.0 / KVM over LAN / Media over LAN
  • 5x high-performance 8cm PWM fans
  • 2000W Redundant Power Supplies, Titanium Level (96%)



Product SKUs
  • SuperStorage 2028R-DN2R40L (Black)
Motherboard (Two per System)

Super X10DSN-TS
Processor/Cache (per Node)
  • Intel® Xeon® processor E5-2600 v4/v3 family (up to 145W TDP) *

  • Dual Socket R3 (LGA 2011)
Cores / Cache
  • Up to 22 Cores / Up to 55MB Cache
System Bus
  • QPI up to 9.6 GT/s
Note: BIOS version 2.0 or above is required.
Note: * Please contact Supermicro Technical Support for additional information about frequency-optimized CPUs and specialized system optimization.

System Memory (per Node)
Memory Capacity
  • 16x 288-pin DDR4 DIMM slots
  • Up to 2TB ECC 3DS LRDIMM
Memory Type
  • 2400/2133/1866/1600MHz ECC DDR4 SDRAM 72-bit
DIMM Sizes
  • RDIMM: 32GB, 16GB, 8GB, 4GB
  • LRDIMM: 64GB, 32GB
  • 3DS LRDIMM: 128GB
Memory Voltage
  • 1.2 V
Error Detection
  • Corrects single-bit errors; detects double-bit errors (using ECC memory)
On-Board Devices (per Node)
  • Intel® C612 chipset
  • SATA3 (6Gbps); RAID 0, 1
  • Intelligent Platform Management Interface v2.0 (IPMI 2.0) with virtual media over LAN and KVM-over-LAN support (a usage sketch follows this list)
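The IPMI 2.0 BMC on each node can be driven with a standard tool such as ipmitool. Here is a minimal Python wrapper as a sketch only: it assumes ipmitool is installed and the IPMI LAN port is reachable, and the host and credentials are placeholders.

```python
import subprocess


def bmc_power_status(host: str, user: str, password: str) -> str:
    """Query chassis power state through the node's BMC over IPMI 2.0."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus",      # RMCP+ transport for IPMI 2.0
         "-H", host, "-U", user, "-P", password,
         "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "Chassis Power is on"


# Example call (hypothetical BMC address and credentials):
# print(bmc_power_status("192.168.1.100", "ADMIN", "ADMIN"))
```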
Network Controllers
  • Intel® X540 Dual Port 10GBase-T
  • 10Gb private Ethernet between controller nodes
  • Virtual Machine Device Queues (VMDq) reduce I/O overhead
  • Supports 10GBase-T, 1000BASE-T, and 100BASE-TX; RJ45 output
  • 1x Realtek RTL8201N PHY (dedicated IPMI)
Input / Output (per Node)
  • 2 SATA3 (6Gbps) ports
  • 2 RJ45 10GBase-T LAN ports
  • IPMI LAN shared port
  • 3 USB 3.0 ports (2 rear, 1 Type A)
  • 1 VGA port
Serial Port / Header
  • COM port via riser
Form Factor
  • 2U Rackmount
  • Chassis: CSE-227STS-R2K05P
Dimensions
  • Width: 17.2" (437mm)
  • Height: 3.5" (89mm)
  • Depth: 33.3" (846mm)
Gross Weight
  • 86.9 lbs (39.42 kg)
Available Colors
  • Black
Front Panel
  • Power On/Off button
  • Power LED
  • Heartbeat LED
  • 2 Network activity LEDs
  • Power Fail LED
  • Overheat/Fan Fail LED
Expansion Slots (per Node)
PCI-Express (per node)
  • 1 PCI-E 3.0 x16 HHHL slot
  • 1 PCI-E 3.0 x8 HHHL slot
  • 1 SIOM slot
Drive Bays
  • 40 Hot-swap 2.5" dual port NVMe drive bays
System Cooling
  • 5x 8cm high-performance PWM fans
Power Supply
  • 2000W Redundant Power Supplies with PMBus
Total Output Power
  • 1000W: 100 – 120Vac
  • 1800W: 200 – 220Vac
  • 1980W: 220 – 230Vac
  • 2000W: 230 – 240Vac
  • 2000W: 200 – 240Vac (UL/cUL only)

Dimensions (W x H x L)
  • 73.5 x 40 x 265 mm
AC Input
  • 100-120Vac / 12.5-9.5A / 50-60Hz
  • 200-220Vac / 10-9.5A / 50-60Hz
  • 220-230Vac / 10-9.8A / 50-60Hz
  • 230-240Vac / 10-9.8A / 50-60Hz
  • 200-240Vac / 11.8-9.8A / 50-60Hz (UL/cUL only)
+12V Output
  • Max: 83.3A / Min: 0A (100-120Vac)
  • Max: 150A / Min: 0A (200-220Vac)
  • Max: 165A / Min: 0A (220-230Vac)
  • Max: 166.7A / Min: 0A (230-240Vac)
  • Max: 166.7A / Min: 0A (200-240Vac) (UL/cUL only)
+12Vsb Output
  • Max: 2.1A / Min: 0A
Output Type
  • 25 Pairs Gold Finger Connector
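As a quick consistency check on the figures above, assuming the main rail is +12V (as the wattage-to-current ratios suggest), each listed maximum current is simply the rated power divided by the rail voltage:

\[
I_{\max} = \frac{P_{\text{rated}}}{V_{\text{rail}}} = \frac{2000\,\mathrm{W}}{12\,\mathrm{V}} \approx 166.7\,\mathrm{A}
\]

This matches the 166.7A maximum listed for 230-240Vac input, and likewise 1000W / 12V ≈ 83.3A for 100-120Vac.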
Certification
  • Titanium Level (96% efficiency)
System BIOS
  • 128Mb SPI Flash EEPROM with AMI BIOS
BIOS Features
  • Plug and Play (PnP)
  • PCI 2.3
  • ACPI 1.0 / 2.0 / 3.0 / 4.0
  • USB Keyboard support
  • SMBIOS 2.7.1
  • UEFI 2.3.1
Operating Environment / Compliance

  • RoHS Compliant
Environmental Spec.
  • Operating Temperature: 10°C to 35°C (50°F to 95°F)
  • Non-operating Temperature: -40°C to 70°C (-40°F to 158°F)
  • Operating Relative Humidity: 8% to 90% (non-condensing)
  • Non-operating Relative Humidity: 5% to 95% (non-condensing)

Get in touch to discuss our range of solutions

+44 (0) 1727 876 100


Test out any of our solutions at Boston Labs

To help our clients make informed decisions about new technologies, we have opened up our research & development facilities and actively encourage customers to try the latest platforms using their own tools and, if necessary, together with their existing hardware. Remote access is also available.

Contact us

Boston HPC Roadshow

Latest Event

Boston HPC Roadshow | 29th September - 2nd October 2020, Digital Event

Join Boston, our sponsors and the Centre for High Performance Computing (CHPC) for our 2nd annual HPC roadshow, this time coming to you digitally. We invite you to join us as we explore the current state of High Performance Computing and detail our plans for the future including an exciting announcement during our keynote.

more info