More GPUs, more power, more intelligence

Last year we introduced our first GPU offering, powered by NVIDIA Tesla GPUs, and we have seen an amazing customer response. With the Azure NC-series, you can run CUDA workloads on up to four Tesla K80 GPUs in a single virtual machine. Additionally, unlike any other cloud provider, the NC-series offers RDMA and InfiniBand connectivity for extremely low-latency, high-throughput, scale-out workloads. We want to enable your workloads to scale up and to scale out.
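
To illustrate what a multi-GPU VM looks like to your code, here is a minimal CUDA sketch (an illustration, not part of the original post) that enumerates the devices visible inside the VM; on an NC24, for example, it would be expected to report four Tesla K80 GPUs.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Count the CUDA-capable GPUs exposed to this virtual machine.
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("GPUs visible to this VM: %d\n", deviceCount);

    // Print the name, memory size, and compute capability of each GPU.
    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  GPU %d: %s, %.1f GB, compute capability %d.%d\n",
               i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.major, prop.minor);
    }
    return 0;
}

Compile it with nvcc (for example, nvcc -o list_gpus list_gpus.cu) on any GPU VM with the CUDA toolkit installed.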

 

Given these GPU powerhouses, some of the fastest growing workloads we have seen on Azure are AI and deep learning. These include image recognition, speech training, natural language processing, and even pedestrian detection for autonomous vehicles. Building on these possibilities, I am excited to announce that we will be expanding our GPU-based offerings on Azure with the new ND-series. This new series, powered by NVIDIA Tesla P40 GPUs based on the new Pascal architecture, is excellent for both training and inference. These instances provide more than 2x the performance of the previous generation for single-precision floating point (FP32) operations, benefiting AI workloads that use CNTK, TensorFlow, Caffe, and other frameworks. The ND-series also offers a much larger GPU memory size (24 GB), enabling customers to fit much larger neural net models. Finally, like our NC-series, the ND-series will offer RDMA and InfiniBand connectivity, so you can run large-scale training jobs spanning hundreds of GPUs.
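
To make the memory point concrete, the following CUDA sketch (an illustration, not part of the announcement) checks whether a hypothetical model footprint fits on a single GPU; the 18 GB figure is an arbitrary example chosen to fit within a P40's 24 GB of memory but not within a 12 GB K80 board half.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Report free and total memory on the current GPU (device 0 by default).
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);
    printf("GPU 0: %.1f GB free of %.1f GB total\n",
           freeBytes / 1e9, totalBytes / 1e9);

    // Hypothetical model footprint: 18 GB of parameters, activations, and workspace.
    size_t modelBytes = 18ull << 30;
    void* modelBuffer = nullptr;
    cudaError_t err = cudaMalloc(&modelBuffer, modelBytes);
    if (err == cudaSuccess) {
        printf("Allocated %.1f GB: a model of this size fits on this GPU.\n",
               modelBytes / 1e9);
        cudaFree(modelBuffer);
    } else {
        printf("Allocation failed (%s): a model of this size does not fit.\n",
               cudaGetErrorString(err));
    }
    return 0;
}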

 

[Image: ND-series.png]

 

Read about it on the Azure blog.
