Tackling AI Inference workloads on Azure’s NC A100 v4 virtual machines with time to spare
Published Jan 27 2023
Microsoft

by Hugo Affaticati (Technical Program Manager), Tonny Wong (Principal Product Manager), Jon Shelley (Principal TPM Manager), and John Lee (Principal Product Manager).

 

Introduction

 

The NC A100 v4-series virtual machines (VMs) on Azure offer great flexibility for a wide range of workloads. Powered by NVIDIA A100 80GB PCIe Tensor Core GPUs and 3rd-generation AMD EPYC 7V13 (Milan) processors, these instances are well-suited for autonomous vehicle training, oil and gas reservoir simulation, video processing, AI/ML inference-powered web services, and much more. They are available in different sizes and configurations to support various computational needs, ranging from one to four NVIDIA GPUs per VM, with the ability to partition each NVIDIA A100 GPU into as many as seven isolated GPU instances with NVIDIA Multi-Instance GPU (MIG) technology. You can find more details on the product page on Microsoft Docs.

In this document, we share compelling AI results from the MLPerf™ benchmarks by MLCommons® [1] that showcase the adaptability of the NC A100 v4-series, along with the best practices and configuration details you need to replicate them. These results show not only that the NC A100 v4-series performs well across a large range of workloads (from low- to mid-size), but also that it is competitive with similar on-premises offerings and is the most cost-competitive offering for small workloads.

 

Key Performance Results

 

NC A100 v4 adapts from low to mid-size AI workloads

One of the outstanding benefits of the NC A100 v4-series is the ability to run jobs on the full GPUs or to run jobs in parallel on 2, 3, or 7 partitions of each GPU. We compared the inference performance obtained using a single MIG instance (1/7th of an NVIDIA A100 GPU) of the NC96ads A100 v4 VM to that obtained with one GPU of the NC64as_T4_v3 VM. The NCas_T4_v3-series is powered by NVIDIA T4 Tensor Core GPUs and AMD EPYC 7V12 processor cores and continues to be a benchmark product for entry-level training and inference. Both configurations (the T4-based and the 1/7th A100-based) show almost equal performance on the MLPerf™ Inference v2.1 benchmarks. The 1/7th of the A100 GPU even shows a 25% increase in sequences per second for speech recognition workloads, as shown in figure 1. For mid-size workloads, the NC A100 v4-series delivers a significant boost in performance with four NVIDIA A100 GPUs. Because the creation of MIG instances is reversible, customers can provision right-sized GPU acceleration for small to mid-size workloads and always meet their evolving needs with a single virtual machine.
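The 25% figure for speech recognition can be recovered from the per-accelerator Offline RNN-T scores reported in the tables later in this post. A minimal sketch (the NC64as_T4_v3 score is a 4-GPU aggregate, so it is divided by four to get a per-T4 number):

```python
# Per-accelerator RNN-T Offline throughput, from the MLPerf Inference v2.1
# scores in the tables below (results not verified by MLCommons Association).
t4_total = 6059          # samples/s across the four T4 GPUs of NC64as_T4_v3
t4_per_gpu = t4_total / 4
mig_slice = 1901         # samples/s for one 1/7th MIG slice of an A100

speedup = mig_slice / t4_per_gpu
print(f"1/7th A100 vs one T4 (RNN-T): {speedup:.2f}x")  # ~1.25x, i.e. +25%
```

The same per-accelerator division applies to the other benchmarks when comparing a single MIG slice against a single T4.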

 


Figure 1 – Speedup factor with 1/7th of an NVIDIA A100 GPU on NC A100 v4-series compared to a T4 accelerator on NCas_T4_v3-series with MLPerf Inference v2.1 for the Offline scenario.

 

NC A100 v4 is competitive with on-premises performance

Using the MLPerf™ benchmarks, we compared the performance of our on-demand MIG instances on NC A100 v4 VMs to on-premises offerings. Our unverified results for MLPerf™ Inference v2.1 [2] are in line with the closed-division, on-premises submissions for MIG instances on the NVIDIA A100 PCIe GPU from MLPerf™ Inference v1.1. This showcases Azure's commitment to enabling customers to use the best available on-demand cloud resources to address their workload needs, without sacrifices or approximations.

 

NC A100 v4 is cost competitive

To understand whether it makes sense from a cost perspective to run a MIG A100 instance that is 1/7th of the total GPU vs. a single NCas_T4_v3 GPU, we calculated the number of sequences each could compute per dollar. For these calculations, we used the "pay as you go" price for machines available in the East US 2 region and the throughput in samples/s from the MLPerf Inference v2.1 benchmarks. For the inference models run, we see a minimum 2x improvement in performance per dollar, as shown in figure 2. The speech recognition workload (RNN-T benchmark) even shows a 2.9x increase in the number of sequences per dollar when using seven MIG A100 instances. Thus, if a workload requires three or more NCas_T4_v3 VMs, it is more cost-effective to deploy one NC A100 v4 VM and enable MIG instances (it is not possible to deploy only one MIG instance that is 1/7th of the GPU). Note that besides seven, other maximum MIG slice configurations are possible, including two or three MIG instances per GPU.
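The calculation behind figure 2 can be sketched as follows. The throughputs are the RNN-T Offline scores from the tables below; the hourly prices are placeholders chosen only for illustration, not actual East US 2 rates, so substitute current Azure pay-as-you-go pricing before drawing conclusions:

```python
# Sketch of the performance-per-dollar comparison behind figure 2.
# Throughputs are RNN-T Offline scores from the tables below; the
# hourly prices are PLACEHOLDERS, not real Azure pricing.
def sequences_per_dollar(samples_per_s: float, price_per_hour: float) -> float:
    """Sequences processed for each dollar of VM time."""
    return samples_per_s * 3600 / price_per_hour

a100_7mig = 7 * 1901   # samples/s, seven MIG slices of one A100
t4_single = 6059 / 4   # samples/s, one T4 (4-GPU VM score / 4)

PRICE_A100_VM = 3.67   # $/h, hypothetical single-A100 VM price
PRICE_T4_VM = 1.20     # $/h, hypothetical single-T4 VM price

ratio = (sequences_per_dollar(a100_7mig, PRICE_A100_VM)
         / sequences_per_dollar(t4_single, PRICE_T4_VM))
print(f"RNN-T sequences per dollar, A100 (7 MIG) vs one T4: {ratio:.1f}x")
```

With these placeholder prices the ratio lands near the 2.9x reported for RNN-T; the actual figure depends on current regional pricing.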

 


Figure 2 – Number of sequences processed per dollar spent on Azure with seven instances of NC A100 v4-series (MIG) and seven accelerators of NCas_T4_v3-series across three benchmarks for MLPerf Inference v2.1.

 

Highlights of Performance Results

 

The tables below showcase performance results of the NC64as_T4_v3 (four T4 GPUs) and NC96ads A100 v4 (one MIG instance, 1/7th of a GPU) VMs for the Offline inference scenario with MLPerf™ Inference v2.1.

System: NC64as T4 v3 [1]

| Benchmark | BERT | RNN-T | ResNet-50 | 3D-UNet |
| --- | --- | --- | --- | --- |
| Score (samples/s) | 1,688 | 6,059 | 24,321 | 1.9 |

* The results above were not verified by MLCommons Association

System: 1 of 7 instances (MIG) on NC96ads A100 v4 [2]

| Model | BERT (default) | BERT (high accuracy) | RNN-T | ResNet-50 | 3D-UNet (default) | 3D-UNet (high accuracy) |
| --- | --- | --- | --- | --- | --- | --- |
| Score (samples/s) | 491 | 245 | 1,901 | 5,406 | 0.5 | 0.5 |

* The results above were not verified by MLCommons Association

 

Conclusion

 

The NC A100 v4-series offers great flexibility through MIG technology to handle different sizes of workloads, from small to medium. While we compared the performance of a single MIG instance (1/7th of an NVIDIA A100 GPU) with a full NVIDIA T4 GPU, one can also partition the A100 GPU into two or three instances for more compute capability, or keep the A100 GPU at its full capacity. The results speak for themselves: comparing one A100 MIG instance from a 7-MIG configuration to the gold standard of small AI workload GPUs, the T4, deploying an NC A100 v4 VM is more cost-effective whenever the workload requires three or more T4 GPUs. You can reproduce these results using the guidelines below.

 

Recreate the Results in Azure

 

To get started with NC A100 v4-series, please visit the following links:

[1] MLCommons® is an open engineering consortium of AI leaders from academia, research labs, and industry where the mission is to “build fair and useful benchmarks” that provide unbiased evaluations of training and inference performance for hardware, software, and services—all conducted under prescribed conditions. MLPerf™ tests are transparent and objective real-world compute-intensive AI workloads, so technology decision makers can rely on the results to make informed buying decisions.

[2] Result not verified by MLCommons Association. The MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use is strictly prohibited. See www.mlcommons.org for more information. The results were obtained on the NC96ads A100 v4 virtual machine using the Ubuntu-HPC 20.04 image, CUDA 12.0, NVIDIA driver version 525.60.13, and the MLPerf Inference v2.1 libraries and datasets. One can reproduce the results by following this guide.

Last update: Jan 26 2023