NVIDIA Virtual GPU – An Introduction

Posted on 22 April, 2024

At Boston we are in the privileged position of working with best-of-breed, cutting-edge technologies, both tried and tested and emerging. This enables us to deliver a tailored solution for any requirement and ensure our customers receive the best return on investment from their IT platforms.

One such device, the GPU, is becoming commonplace in enterprises and datacentres for a wide variety of reasons, which we'll explain below.

GPUs are typically valuable resources, however, so ensuring that they are well utilised and delivered where and when they are needed is highly important. An important string to the bow when it comes to getting the most from these devices is NVIDIA's Virtual GPU platform, which supports both VDI and compute use cases.

You can rest assured too, knowing that as an Elite NVIDIA partner, Boston are qualified to consult on, design and resell their Virtual GPU portfolio together with our portfolio of validated platforms and software partners.

GPUs - What They Do

Graphics Processing Units (GPUs) are one of the key driving factors for growth in the IT industry at the current time. With their many uses (gaming, rendering and AI/ML workloads to name a few), they can accelerate and multitask simpler calculations far faster than Central Processing Units (CPUs) due to the number of cores they have. Because of this, they speed up workloads many times over through the use of parallel computing models, dividing suitable complex tasks into smaller ones and running them all at the same time across multiple processing cores.
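As a rough illustration of that parallel model, the short Python sketch below runs the same element-wise calculation on the CPU and on a GPU. It assumes the CuPy library and an NVIDIA GPU with a working CUDA setup - our own choices for the example, not something the article prescribes.

    # Illustrative only: the same maths on CPU (NumPy) and GPU (CuPy).
    # On the GPU, the millions of elements are processed in parallel across
    # its many cores rather than a handful of CPU cores.
    import numpy as np
    import cupy as cp

    n = 10_000_000
    cpu_data = np.arange(n, dtype=np.float32)
    gpu_data = cp.asarray(cpu_data)              # copy the array into GPU memory

    cpu_result = np.sqrt(cpu_data) * 2.0 + 1.0   # runs on the CPU
    gpu_result = cp.sqrt(gpu_data) * 2.0 + 1.0   # same maths, run in parallel on the GPU

    # Copy the GPU result back to host memory and check both paths agree
    assert np.allclose(cpu_result, cp.asnumpy(gpu_result))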

Equally, 2D and 3D graphics acceleration is commonplace in modern power users' business applications, not only in those used by creative professionals such as artists and designers, whose extreme demands only GPU accelerators with dedicated VRAM can satisfy.

Virtual Machines (VMs) and containers are also a huge part of modern IT infrastructure, allowing one system to be divided up into smaller chunks that can be assigned to several individuals and used all at the same time. This speeds up productivity and reduces costs by enabling a single server or computer to cater for more than one person at any one time, increasing the density of users per system and simplifying management. This is achieved by using a hypervisor-capable operating system, the hypervisor being the software layer that enables virtualisation to take place.

vGPU and its Flavours

NVIDIA created Virtual GPU (vGPU) in 2013: a technology which allows a GPU to be divided up into several smaller pieces, just like VMs. This means that one GPU can be shared between a number of different users at any one time, rather than requiring the entire GPU to be dedicated to a single VM.

In NVIDIA's model, virtual GPUs come in one of four flavours - vApps, vPC, vWS and NVAIE - each of which maps to a profile series letter, as shown in the sketch after this list.

  • vApps – The most basic of the graphical licenses, this is meant for minimal use of the GPU, mainly consisting of light loads such as Microsoft Word, Excel and light web browsing. 

    This is achieved through Remote Desktop Session Host (RDSH) solutions, where the user receives only the accelerated app, streamed to their existing desktop or mobile environment. 

    An advantage of vApps delivery is that a single app can be delivered to any device, such as a tablet, laptop or other personal device - completely securely. The app is only visually rendered on the local device, whilst the data and processing are kept securely in the datacentre.
     
  • vPC – This is meant for standard use of a virtual machine as a desktop or interactive server, catering for light graphics requirements such as a smoother GUI experience and watching videos online. This is the go-to model for many businesses, allowing users to work on and connect to their virtual computer while enjoying the basic benefits of GPU acceleration.
     
  • vWS – The high-performance virtual workstation license, this gives a full GPU experience, which is necessary for heavy-duty graphical workloads such as rendering, content creation and modelling. 

    Administrators can choose a specific vWS profile to match the performance requirements, applications, displays and resolutions which the end user needs.

    A major benefit of vWS is that Independent Software Vendors (ISVs) fully support these virtual profiles and can give an indication of the relative performance of those virtual instances.

    Applications that are well suited to vWS include, but are not limited to, AutoCAD, SolidWorks and Blender.
     
  • NVAIE – NVIDIA AI Enterprise, previously known as vCompute Server (vCS). This deployment type is used for compute-only workloads with no requirement for 3D graphical output. The license is intended for use with HPC and AI applications such as BERT, ImageNet and the like.
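Each of these flavours corresponds to a series letter in the vGPU profile names an administrator actually assigns (A, B, Q and C respectively, per NVIDIA's naming convention). The small Python sketch below is our own illustration of that convention; the profile names in it are examples only, not a statement of what any particular GPU or driver release exposes.

    # Rough illustration of NVIDIA's vGPU profile naming convention:
    # <board>-<frame buffer in GB><license series letter>, e.g. "A40-4Q".
    # The example profile names are assumptions for illustration only.
    LICENSE_SERIES = {
        "A": "vApps",   # app streaming via RDSH
        "B": "vPC",     # virtual desktops
        "Q": "vWS",     # virtual workstations
        "C": "NVAIE",   # compute-only (formerly vCS)
    }

    def describe_profile(profile: str) -> str:
        """Split a profile name like 'A40-4Q' into board, frame buffer and license."""
        board, spec = profile.split("-")
        frame_buffer_gb, series = int(spec[:-1]), spec[-1]
        return f"{board}: {frame_buffer_gb} GB frame buffer, {LICENSE_SERIES[series]} license"

    print(describe_profile("A40-4Q"))   # -> A40: 4 GB frame buffer, vWS license
    print(describe_profile("L40-2B"))   # -> L40: 2 GB frame buffer, vPC license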
     

Sizing guides, which can be used to determine which flavour and profile of Virtual GPU is best suited to a particular application, can be viewed via the following links:

Overview of how VMs and vGPU are architected

Which GPUs are supported?

vGPU is supported on most modern enterprise graphics cards and has been available since NVIDIA's GRID K1, released in 2013. This link shows which models of GPU and which profiles are supported on each release of vGPU.

The performance and frame buffer (VRAM) of the virtual GPU resources can be specified through these profiles, which are in turn dependent on the physical hardware model. For example, if users need 4GB of GPU memory, an NVIDIA RTX 6000 Ada with 48GB of memory will be able to accommodate up to 12 users. If only 1GB is required, then it can support 48 users, and so on.
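As a quick back-of-the-envelope check of that example: the frame buffer is statically carved up per user, so the maths is simple division. The small sketch below only restates that arithmetic and is no substitute for NVIDIA's sizing guides.

    # Minimal sizing sketch: how many equally sized vGPU profiles fit on one board.
    # Arithmetic only - it ignores scheduling, host RAM and licensing considerations.
    def max_users_per_gpu(board_vram_gb: int, profile_vram_gb: int) -> int:
        return board_vram_gb // profile_vram_gb

    print(max_users_per_gpu(48, 4))   # RTX 6000 Ada (48 GB) with 4 GB profiles -> 12 users
    print(max_users_per_gpu(48, 1))   # 1 GB profiles -> 48 users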

What is MIG and how is it different?

Starting with NVIDIA's A100, there is also a unique profiling system, available only for compute workloads, called Multi-Instance GPU (MIG). This divides the GPU into as many as seven logical chunks. It differs from the regular profiles in that it spatially partitions the hardware of the GPU in question, so each instance is fully isolated from the rest of the GPU. This allows for predictable, repeatable performance when scaling, which is important for AI and compute scenarios.
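To make the contrast with time-sliced profiles concrete, here is a small sketch of how an A100 40GB can be carved up under MIG. The profile names and sizes reflect NVIDIA's public MIG documentation as we understand it (verify with nvidia-smi mig -lgip on your own hardware), and the check below only totals compute slices - real MIG placement has further constraints.

    # Illustrative MIG partitioning for an A100 40GB. Each instance owns a fixed
    # slice of the GPU's compute and memory rather than sharing it over time.
    A100_40GB_MIG_PROFILES = {
        # name: (compute slices, memory in GB)
        "1g.5gb":  (1, 5),
        "2g.10gb": (2, 10),
        "3g.20gb": (3, 20),
        "4g.20gb": (4, 20),
        "7g.40gb": (7, 40),
    }

    TOTAL_COMPUTE_SLICES = 7   # the "seven logical chunks" mentioned above

    def fits(requested: list[str]) -> bool:
        """Check whether a mix of MIG instances stays within the seven compute slices."""
        used = sum(A100_40GB_MIG_PROFILES[name][0] for name in requested)
        return used <= TOTAL_COMPUTE_SLICES

    print(fits(["3g.20gb", "3g.20gb"]))            # True  - 6 of 7 slices used
    print(fits(["4g.20gb", "3g.20gb", "1g.5gb"]))  # False - would need 8 slices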

There are several hypervisors / cloud operating systems that have been verified to support vGPU technology. At the time of writing, these are as follows (a short sketch of how vGPU instances appear on the KVM-based options follows the list):

  • Citrix Hypervisor
  • Linux with KVM
  • Microsoft Azure Stack HCI
  • Microsoft Windows Server
  • Nutanix AHV
  • Red Hat Enterprise Linux with KVM
  • Ubuntu
  • VMware vSphere ESXi
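On the KVM-based hosts in that list, vGPU instances are typically surfaced through the Linux kernel's mediated device (mdev) framework. The sketch below is a rough illustration of enumerating and creating one instance via sysfs; the PCI address and type ID are placeholders, and in practice you would normally use the hypervisor's own tooling.

    # Rough illustration (not vendor tooling): listing NVIDIA vGPU mdev types on a
    # Linux KVM host and creating one instance. Requires root; the PCI address and
    # the "nvidia-563" type ID are placeholders - inspect your own host for real values.
    import pathlib
    import uuid

    gpu = pathlib.Path("/sys/bus/pci/devices/0000:3b:00.0")   # placeholder GPU address
    types_dir = gpu / "mdev_supported_types"

    # Each supported-type directory describes one vGPU profile and how many
    # instances of it are still available on this physical GPU.
    for t in sorted(types_dir.iterdir()):
        name = (t / "name").read_text().strip()
        free = (t / "available_instances").read_text().strip()
        print(f"{t.name}: {name} ({free} instances available)")

    # Creating an instance: write a fresh UUID to the chosen type's 'create' node.
    # The resulting mdev device can then be attached to a VM (e.g. via libvirt).
    chosen = types_dir / "nvidia-563"
    vgpu_uuid = str(uuid.uuid4())
    (chosen / "create").write_text(vgpu_uuid)
    print("Created vGPU instance", vgpu_uuid)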

For many workloads today, the benefits of using vGPU generally outweigh the need for dedicated single-user machines. These include, but are not limited to: near bare-metal performance, optimal resource utilisation, increased user density, faster and more flexible workstation deployment, increased data security and the ability to access GPU resources from anywhere.

Boston Labs Validated Solutions

To help customers quickly deploy optimised vGPU designs, Boston has developed a range of fully integrated solutions with this technology at the forefront: the Roamer series and the ANNA Ampere series.

Compute Workloads

The Boston ANNA Ampere, an NVIDIA-Certified AI solution, has a focus on NVAIE for compute workloads and comes in several flavours, ranging from S2 (small, 2 nodes) up to XL1 (extra-large, 1 system).

Boston ANNA Ampere L1

Each can be customised to fit your requirements, seamlessly integrating into new and existing environments. You can find our product page here.

For further information relating to NVAIE please find a link to our blog titled What is NVIDIA AI Enterprise? Why do I need it? here.

Graphical Workloads

For virtual graphical workloads, Boston have their own in-house products.

Roamer

The Boston Roamer 1202-0 will cover all of your requirements for graphical workloads. As a 2U, 2-node system, you get two separate servers, each able to support up to 3 x A6000s. The option of either AMD's 7002/7003 EPYC processors or the 5000WX Threadripper, with up to 64 cores and a 280W TDP, means that you have full flexibility of choice, going as far as having EPYC in one node and Threadripper in the other. This is paired with a low-latency client, offering a fully remote solution for workstation and professional environments which can be customised as per your requirements.

Boston Roamer 1202-0

You can find our product page for the Roamer 1202-0 here.

MU-CXR

Utilising VDI and XR technologies, the Multi-User CloudXR (MU-CXR) solution is a 4U workhorse holding up to 8 GPUs for unrivalled graphics power. This can be customised to meet user requirements and can run both XR and traditional 3D workloads, making it just as versatile for all graphical workloads as it is powerful.

MU-CXR on the road in a mobile rack on wheels during our CloudXR Day on 4th May 2023

Our very own Dan Johns being fully immersed with VRED in the background and still looking good

You can find out more about the MU-CXR here.

For further reading regarding XR technologies and how Boston can help you on your journey, we have several blogs relating to the MU-CXR and XR experiences in general which can be found below:

Part 1: Boston Labs in Depth - Cutting VR tethers with CloudXR & MU-CXR

Part 2: Boston Labs in Depth - Cutting VR tethers with CloudXR & MU-CXR

Immersive Technologies: Boston's CloudXR Showcase

The Boston VR Experience: MU-VR to MU-CXR, Cables to CloudXR

Unsure which vGPU deployment model is best for you? Let us help.

Test drives of vGPU are readily available via Boston Labs, our onsite R&D and test facility. The team are ready to enable customers to test-drive the latest technology on-premises or remotely via our fast internet connectivity.

If you are ready to start your vGPU journey, then please get in touch either by email [email protected] or by calling us on 01727 876100 and one of our experienced sales engineers will happily guide you to your perfect tailored solution and invite you in for a demo.

Author:

Peter Wilsher
Field Application Engineer

Tags: Artificial Intelligence (AI), Boston Labs Testing, GPUs, HPC, Systems and Servers, Virtual and Extended Reality (AR / VR / XR), Virtualisation
