More GPUs, more power, more intelligence


Last year we introduced our first GPU offering, powered by NVIDIA's Tesla GPUs, and we have seen an amazing customer response. With the Azure NC-series, you can run CUDA workloads on up to four Tesla K80 GPUs in a single virtual machine. Additionally, unlike any other cloud provider, the NC-series offers RDMA and InfiniBand connectivity for extremely low-latency, high-throughput, scale-out workloads. We want to enable your workloads both to scale up and to scale out.

 

Given these GPU powerhouses, one of the fastest-growing workloads we have seen on Azure is AI and deep learning. This includes image recognition, speech training, natural language processing, and even pedestrian detection for autonomous vehicles. Building on these possibilities, I am excited to announce that we will be expanding our GPU-based offerings on Azure with the new ND-series. This new series, powered by NVIDIA Tesla P40 GPUs based on the new Pascal architecture, is excellent for both training and inference. These instances provide more than twice the FP32 (single-precision floating-point) performance of the previous generation for AI workloads using CNTK, TensorFlow, Caffe, and other frameworks. The ND-series also offers a much larger GPU memory size (24 GB), enabling customers to fit much larger neural network models. Finally, like the NC-series, the ND-series will offer RDMA and InfiniBand connectivity so you can run large-scale training jobs spanning hundreds of GPUs.
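To give a rough sense of what the 24 GB frame buffer means in practice, here is a back-of-envelope sketch of how many FP32 parameters could fit on a single GPU. The 3x training-overhead factor (weights plus gradients plus optimizer state) is an illustrative assumption, not an official figure; real memory use also depends on activations, batch size, and framework.

```python
# Back-of-envelope: how many FP32 parameters fit in a 24 GB GPU?
# Assumes 4 bytes per FP32 value. The 3x training factor (weights +
# gradients + optimizer state) is a rough assumption for illustration.

GPU_MEMORY_BYTES = 24 * 1024**3   # 24 GB frame buffer
BYTES_PER_PARAM_FP32 = 4
TRAINING_OVERHEAD = 3             # assumed: weights + gradients + optimizer state

max_params_inference = GPU_MEMORY_BYTES // BYTES_PER_PARAM_FP32
max_params_training = GPU_MEMORY_BYTES // (BYTES_PER_PARAM_FP32 * TRAINING_OVERHEAD)

print(f"Inference-only (weights alone): ~{max_params_inference / 1e9:.1f}B parameters")
print(f"Training (rough estimate):      ~{max_params_training / 1e9:.1f}B parameters")
```

Even under these conservative assumptions, the larger memory lets models that would not fit on earlier 12 GB parts train on a single GPU without model parallelism.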

 

[Image: ND-series.png]

 

Read about it on the Azure blog.
