Microsoft expands its AI-supercomputer lineup with general availability of the latest 80GB NVIDIA A100 GPUs in Azure, claims 4 spots on TOP500 supercomputers list


Written by Sherry Wang, Senior Program Manager, Azure HPC and AI

 

Today, Microsoft announced the general availability of a brand-new virtual machine (VM) series in Azure, the NDm A100 v4 series, featuring NVIDIA A100 Tensor Core 80 GB GPUs. This expands Azure's leadership-class AI supercomputing scalability in the public cloud, building on our June general availability of the original ND A100 v4 instances, and adding another public cloud first: the Azure ND A100 v4 VMs claiming four official places in the TOP500 supercomputing list. This milestone is thanks to a class-leading design with NVIDIA Quantum InfiniBand networking, featuring In-Network Computing, 200 Gb/s of dedicated InfiniBand bandwidth and GPUDirect RDMA for each GPU, and an all-new PCIe Gen 4.0-based architecture.
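The per-VM fabric bandwidth implied by those figures can be sketched with quick arithmetic. This is a minimal back-of-the-envelope calculation, assuming the published ND/NDm A100 v4 topology of 8 GPUs per VM, each with its own dedicated 200 Gb/s InfiniBand link:

```python
# Back-of-the-envelope interconnect math for an NDm A100 v4 VM.
# Assumption: 8 GPUs per VM, each with a dedicated 200 Gb/s
# NVIDIA Quantum (HDR) InfiniBand link, per the ND A100 v4 topology.
GPUS_PER_VM = 8
LINK_GBPS = 200  # gigabits per second, per GPU


def vm_fabric_bandwidth_gbps(gpus: int = GPUS_PER_VM,
                             link_gbps: int = LINK_GBPS) -> int:
    """Aggregate InfiniBand bandwidth available to one VM, in Gb/s."""
    return gpus * link_gbps


if __name__ == "__main__":
    total = vm_fabric_bandwidth_gbps()
    print(f"Per-VM fabric bandwidth: {total} Gb/s ({total / 1000:.1f} Tb/s)")
```

With each of the 8 GPUs driving its own link, a single VM sees 1.6 Tb/s of aggregate interconnect bandwidth to the rest of the cluster.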

 

We live in the era of large-scale AI models, and the demand for large-scale computing keeps growing. The original ND A100 v4 series features NVIDIA A100 Tensor Core GPUs each equipped with 40 GB of HBM2 memory, which the new NDm A100 v4 series doubles to 80 GB, along with a 30 percent increase in GPU memory bandwidth for today's most data-intensive workloads. RAM available to the virtual machine has also increased to 1,900 GB per VM, giving customers with large datasets and models a proportional increase in memory capacity to support novel data management techniques, faster checkpointing, and more.
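The memory figures above can be checked with a short calculation. As an assumption for illustration, the 1,555 GB/s and 2,039 GB/s HBM bandwidth values below are NVIDIA's published specs for the A100 40 GB and 80 GB SXM parts, which is where the roughly 30 percent uplift comes from:

```python
# Memory capacity and bandwidth comparison: ND A100 v4 vs NDm A100 v4.
GPUS_PER_VM = 8
HBM_GB_ND = 40     # GB of HBM per GPU, original ND A100 v4
HBM_GB_NDM = 80    # GB of HBM per GPU, new NDm A100 v4
VM_RAM_GB = 1900   # system RAM per NDm A100 v4 VM

# NVIDIA-published HBM bandwidth (GB/s) for the two A100 SXM variants
# (an assumption here; not stated in the post itself).
BW_ND, BW_NDM = 1555, 2039

total_hbm_nd = GPUS_PER_VM * HBM_GB_ND     # aggregate HBM per VM, original
total_hbm_ndm = GPUS_PER_VM * HBM_GB_NDM   # aggregate HBM per VM, new
uplift_pct = (BW_NDM / BW_ND - 1) * 100    # per-GPU bandwidth uplift

print(f"GPU memory per VM: {total_hbm_nd} GB -> {total_hbm_ndm} GB")
print(f"HBM bandwidth uplift per GPU: {uplift_pct:.0f}%")
print(f"System RAM per VM: {VM_RAM_GB} GB")
```

So an 8-GPU NDm A100 v4 VM exposes 640 GB of HBM across its GPUs, backed by 1,900 GB of system RAM for staging data and checkpoints.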

 

The high-memory NDm A100 v4 series brings AI-supercomputer power to the masses, creating opportunities for all businesses to use it as a competitive advantage. Cutting-edge AI customers are using both the 40 GB ND A100 v4 VMs and the 80 GB NDm A100 v4 VMs at scale for large-scale production AI and machine learning workloads, and seeing impressive performance and scalability. These include OpenAI for research and products, Meta for their leading AI research, Nuance for their comprehensive AI-powered voice-enabled solution, numerous Microsoft internal teams for large-scale cognitive science model training, and many more.

 

Read the full article
