Intel Omni-Path - The Next Generation Fabric

Posted on 26 February, 2016

Intel® Omni-Path (OPA) introduces a wealth of new technologies to the technical computing space, with a focus on High Performance Computing (HPC). Built on the foundations of its predecessor, the Intel® True Scale Fabric, together with additional intellectual property acquired from Cray, Intel® is looking to dominate the HPC arena with a low-latency, high-bandwidth, cost-efficient fabric.

Despite being a primary competitor to Mellanox EDR technology, Intel® decided to move away from InfiniBand lock-in towards a more functional fabric dedicated to HPC. Since the early 2000s, InfiniBand has gradually been adapted into an HPC interconnect solution. Intel® took a different approach with True Scale, using a technology called Performance Scaled Messaging (PSM), which optimised the InfiniBand stack to work more efficiently at the smaller message sizes typically associated with HPC workloads, usually MPI traffic. For OPA, Intel® has gone a step further: building on the original PSM architecture, it has incorporated proprietary technology from the Cray Aries interconnect to enhance the capabilities and performance of OPA at both the fabric and the host level.

Key Features of the New Intel® Omni-Path Fabric

Some of the new technologies packaged into the Omni-Path Fabric include:

Enhanced Performance Scaled Messaging (PSM).

The application view of the fabric builds directly on the demonstrated scalability of the Intel® True Scale Fabric architecture, and remains application-level software compatible with it, by leveraging an enhanced, next-generation version of the Performance Scaled Messaging (PSM) library. Major deployments by the US Department of Energy and others have proven this scalability advantage. PSM is designed specifically for the Message Passing Interface (MPI) and is very lightweight (around one-tenth of the user-space code compared with verbs). This leads to extremely high MPI and Partitioned Global Address Space (PGAS) message rates (short-message efficiency) compared to using InfiniBand* verbs.
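
PSM sits underneath the MPI library rather than being called by applications directly, so the traffic it is tuned for is exactly the small-message pattern of a classic MPI ping-pong. The sketch below is plain, vendor-neutral MPI: nothing in it is Omni-Path specific, and the build and launch commands in the comments are only assumptions about a typical MPI installation. It simply generates the kind of short-message exchange whose latency and message rate PSM is designed to accelerate.

/* Minimal MPI ping-pong sketch: the short-message pattern PSM is tuned for.
 * Plain MPI, nothing Omni-Path specific. Build/launch (assumptions):
 *   mpicc pingpong.c -o pingpong
 *   mpirun -np 2 ./pingpong        (launcher flags depend on your MPI) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    const int len   = 8;              /* 8-byte payload: latency-bound */
    char msg[8] = {0};
    MPI_Status status;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(msg, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(msg, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(msg, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(msg, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("average one-way latency: %.2f us\n",
               (t1 - t0) * 1e6 / (2.0 * iters));

    MPI_Finalize();
    return 0;
}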

Upgrade Path to Intel® Omni-Path

Although OPA is not truly InfiniBand, Intel® has maintained compatibility with the previous-generation True Scale Fabric, meaning that applications that work well on True Scale can be migrated to OPA with little effort. OPA integrates support for both the True Scale and InfiniBand APIs, ensuring backwards compatibility with previous-generation technologies and supporting any standard HPC application.
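
As a rough illustration of what that verbs-level compatibility means in practice (our sketch, not an Intel example), a completely standard libibverbs program such as the device enumeration below should list an Omni-Path HFI just as it would list an InfiniBand HCA, with no source changes; the file name and build line in the comment are assumptions.

/* Sketch: enumerate RDMA devices through the standard verbs API.
 * Unmodified verbs code like this should see an Omni-Path HFI just as it
 * would an InfiniBand HCA. Build (assumption): gcc list_devices.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%s: %d physical port(s)\n",
                   ibv_get_device_name(list[i]), attr.phys_port_cnt);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(list);
    return 0;
}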

Other features include:

  • Adaptive Routing
  • Dispersive Routing
  • Traffic Flow Optimization
  • Packet Integrity Protection
  • Dynamic Lane Scaling

We will cover these features in more detail in our forthcoming whitepaper on Intel® Omni-Path fabric technology.

Intel® Omni-Path Hardware

Host Fabric Interface Adapters (HFIs)

Intel® currently has two offerings on the host fabric interface (HFI) adapter side: a PCIe x8 adapter delivering approximately 58Gbps and a PCIe x16 adapter delivering the full 100Gbps, both single-port. Both HFIs use the same silicon, so the x8 card offers the same latency characteristics and feature set as the higher-end 100Gbps card.
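
The ~58Gbps figure is presumably a host-interface limit rather than a fabric one: a PCIe 3.0 x8 slot provides 8 lanes at 8 GT/s with 128b/130b encoding, or roughly 8 x 8 x 128/130 ≈ 63Gb/s of raw bandwidth per direction, so an x8 adapter cannot feed a full 100Gb/s link and lands at around 58Gb/s once protocol overheads are taken into account.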

Along with the physical adapter cards, Supermicro will also be releasing a range of SuperServer systems with the Omni-Path fabric laid down on the motherboard, offering tighter integration and enabling a more compact server design. Taking this design even further, Intel® has announced that it will be integrating OPA onto future Intel® Xeon® processors, which should reduce latency further and increase overall application performance.

Some key features:

  • Multi-core scaling - support for up to 160 contexts
  • 16 Send DMA engines (M2IO usage)
  • Efficiency - large MTU support (4KB, 8KB, and 10KB) for reduced per-packet processing overheads, plus improved packet-level interfaces to improve utilization of on-chip resources
  • Receive DMA engine arrival notification
  • Each HFI can map ~128 GB window at 64 byte granularity
  • Up to 8 virtual lanes for differentiated QoS
  • ASIC designed to scale up to 160M messages/second and 300M bidirectional messages/second

Intel® Omni-Path Host Fabric Adapter 100 Series

                          1 Port PCIe x16               1 Port PCIe x8
Adapter Type              Low Profile PCIe Card         Low Profile PCIe Card
Ports                     Single                        Single
Connector                 QSFP28                        QSFP28
Link Speed                100Gb/s                       ~58Gb/s on a 100Gb/s link
Power (Typ./Max)
  - Copper                7.4/11.7W                     6.3/8.3W
  - Optical               10.6/14.9W                    9.5/11.5W
Thermal                   Passive (55°C @ 200 LFM)      Passive (55°C @ 200 LFM)

Intel® Omni-Path Edge and Director Class Switch 100 Series

The all-new Edge and Director switches for Omni-Path from Intel® offer a totally different design from traditional InfiniBand switches. Incorporating a new ASIC and a custom front-panel layout, Intel® is able to offer up to 48 ports at 100Gbps from a single 1U switch, 12 ports more than its nearest competitor. This higher switching density allows for some significant improvements within the data centre, including:

  • Reduced switching cost due to needing fewer physical switches (over 30% fewer switches for most configurations)
  • Fewer fabric hops, for reduced latency
  • 100-110ns switch latency
  • Support for fabric partitioning
  • Support for both active and passive cabling
  • Higher node-count fabrics: support for up to 27,648 nodes in a single fabric, nearly 2.3x that of traditional InfiniBand (see the fat-tree arithmetic below).
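
For context, those scaling figures follow from straightforward fat-tree arithmetic (our working, not Intel's published methodology): a three-tier fat tree built from 48-port switches can reach up to 48³/4 = 27,648 nodes, while the same topology built from traditional 36-port InfiniBand switches tops out at 36³/4 = 11,664 nodes, a factor of roughly 2.3-2.4x. Reaching any given node count therefore needs markedly fewer switch ASICs and fewer hops between nodes.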

Intel® Omni-Path Edge Switch 100 Series

                          48 Port                       24 Port
Ports                     48 at up to 100Gb/s           24 at up to 100Gb/s
Rack Space                1U (1.75")                    1U (1.75")
Capacity                  9.6Tb/s                       4.8Tb/s
Port Speed                100Gb/s                       100Gb/s
Power (Typ./Max) at 100-240 VAC, 50-60Hz input
  - Copper                189/238W                      146/179W
  - Optical               356/408W                      231/264W
Interface                 QSFP28                        QSFP28
Fans and Airflow          N+1 (speed control),          N+1 (speed control),
                          forward/reverse airflow       forward/reverse airflow

Intel's Director switch range offers a very similar feature set to the Edge switches, with various chassis options as you might expect. Currently a 20U and a 7U variant are available, supporting various spine and leaf modules.

Intel® Omni-Path Director Class Switch 100 Series

                          24 Slot                       6 Slot
Ports                     Up to 768 at 100Gb/s          Up to 192 at 100Gb/s
Rack Space                20U                           7U
Capacity                  153.6Tb/s                     38.4Tb/s
Management Modules        1/2                           1/2
Leaf Modules (32 ports)   Up to 24                      Up to 6
Spine Modules             Up to 8                       Up to 3
Power (Typ./Max) at 100-240 VAC, 50-60Hz input
  - Copper                6.8/8.9KW                     1.8/2.3KW
  - Optical               9.4/11.6KW                    2.4/3.0KW
Interface                 QSFP28                        QSFP28
Fans and Airflow          N+1 (speed control),          N+1 (speed control),
                          forward/reverse airflow       forward/reverse airflow

Intel® Omni-Path Software Components

Intel® Omni-Path Architecture software comprises the Intel® OPA Host Software Stack and the Intel® Fabric Suite.

Intel® OPA Host Software

Intel's host software strategy is to utilize the existing OpenFabrics Alliance interfaces, ensuring that application software already written to those interfaces runs on Intel® OPA with no code changes required. This immediately enables an ecosystem of applications to "just work." All of the Intel® Omni-Path host software is open source. As with previous generations, PSM provides a fast data path with an HPC-optimized, lightweight software driver layer, while standard I/O-focused protocols are supported via the standard verbs layer.
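
To make the PSM point a little more concrete, the minimal sketch below initialises the PSM2 library directly, which is broadly what an MPI implementation does internally before opening its endpoints on Omni-Path. It assumes the libpsm2 development package (the psm2.h header and -lpsm2 library) is installed; real applications would normally reach PSM2 through MPI rather than calling it themselves.

/* Minimal PSM2 initialisation sketch (assumes libpsm2 is installed).
 * Build (assumption): gcc psm2_check.c -lpsm2
 * An MPI library using the PSM2 fast path performs this step internally. */
#include <stdio.h>
#include <psm2.h>

int main(void)
{
    /* Tell the library which API version this program was built against. */
    int ver_major = PSM2_VERNO_MAJOR;
    int ver_minor = PSM2_VERNO_MINOR;

    psm2_error_t err = psm2_init(&ver_major, &ver_minor);
    if (err != PSM2_OK) {
        fprintf(stderr, "psm2_init failed (error %d)\n", (int)err);
        return 1;
    }

    printf("PSM2 initialised, API version %d.%d\n", ver_major, ver_minor);

    psm2_finalize();
    return 0;
}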

Intel® Fabric Suite

Provides comprehensive control of administrative functions using a mature Subnet Manager. With advanced routing algorithms, powerful diagnostic tools and full subnet manager failover, the Fabric Manager simplifies subnet, fabric, and individual component management, easing the deployment and optimization of large fabrics.

Intel® Fabric Manager GUI

Provides an intuitive, scalable dashboard and analysis tools for viewing and monitoring fabric status and configuration. The GUI may be run on a Linux or Windows desktop/laptop system with TCP/IP connectivity to the Fabric Manager.

Source: Intel® Corporation, 2015
