AI Infrastructure
Automating HPC Workflows with Copilot Agents
High Performance Computing (HPC) workloads are complex, requiring precise job submission scripts and careful resource management. Manual scripting for platforms like OpenFOAM is time-consuming, error-prone, and often frustrating. At SC25, we showcased how Copilot Agents, powered by AI, are transforming HPC workflows by automating Slurm submission scripts, making scientific computing more efficient and accessible.
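To make that concrete, the snippet below sketches the kind of Slurm batch script such an agent can generate for a decomposed OpenFOAM case. It is an illustrative example only: the partition name, module name, case layout, and task counts are assumptions, not output from the actual Copilot Agent demo.

#!/bin/bash
#SBATCH --job-name=motorbike-simpleFoam
#SBATCH --partition=hpc              # illustrative partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96         # must match the decomposition in system/decomposeParDict
#SBATCH --time=04:00:00

# Load an OpenFOAM environment (module name depends on the cluster image)
module load openfoam

cd "$SLURM_SUBMIT_DIR"

# Assumes the case was already split with decomposePar into 192 subdomains
srun simpleFoam -parallel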
Azure NCv6 Public Preview: The new Unified Platform for Converged AI and Visual Computing

As enterprises accelerate adoption of physical AI (AI models interacting with real-world physics), digital twins (virtual replicas of physical systems), LLM inference (running language models for predictions), and agentic workflows (autonomous AI-driven processes), the demand for infrastructure that bridges high-end visualization and generative AI inference has never been higher. Today, we are pleased to announce the Public Preview of the NC RTX PRO 6000 BSE v6 series, powered by the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.

The NCv6 series represents a generational leap in Azure's visual compute portfolio, designed to be the dual engine for both industrial digitalization and cost-effective LLM inference. By leveraging NVIDIA Multi-Instance GPU (MIG) capabilities, the NCv6 platform offers affordable sizing options similar to our legacy NCv3 and NVv5 series. This provides a seamless upgrade path to Blackwell performance, enabling customers to run complex NVIDIA Omniverse simulations and multimodal AI agents with greater efficiency.

Why Choose Azure NCv6?
While traditional GPU instances often force a choice between "compute" (AI) and "graphics" (visualization) optimizations, the NCv6 breaks this silo. Built on the NVIDIA Blackwell architecture, it provides a "right-sized" acceleration platform for workloads that demand both ray-traced fidelity and Tensor Core performance. As outlined in our product documentation, these VMs are ideal for converged AI and visual computing workloads, including:
- Real-time digital twin and NVIDIA Omniverse simulation.
- LLM inference and RAG (Retrieval-Augmented Generation) on small to medium AI models.
- High-fidelity 3D rendering, product design, and video streaming.
- Agentic AI application development and deployment.
- Scientific visualization and High-Performance Computing (HPC).

Key Features of the NCv6 Platform

The Power of NVIDIA Blackwell
At the heart of the NCv6 is the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU. This powerhouse delivers breakthrough performance, featuring 96 GB of ultra-fast GDDR7 memory. This massive frame buffer allows for the handling of complex multimodal AI models and high-resolution textures that previous generations simply could not fit.

Host Performance: Intel Granite Rapids
To ensure your workloads aren't bottlenecked by the CPU, the VM host is equipped with Intel Xeon Granite Rapids processors. These provide an all-core turbo frequency of up to 4.2 GHz, ensuring that demanding pre- and post-processing steps, common in rendering and physics simulations, are handled efficiently.

Optimized Sizing for Every Workflow
We understand that one size does not fit all. The NCv6 series introduces three distinct sizing categories to match your specific unit economics:
- General Purpose: Balanced CPU-to-GPU ratios (up to 320 vCPUs) for diverse workloads.
- Compute Optimized: Higher vCPU density for heavy simulation and physics tasks.
- Memory Optimized: Massive memory footprints (up to 1,280 GB RAM) for data-intensive applications.
Crucially, for smaller inference jobs or VDI, we will also offer fractional GPU options, allowing you to right-size your infrastructure and optimize costs.
NCv6 Technical Specifications
- GPU: NVIDIA RTX PRO 6000 Blackwell Server Edition (96 GB GDDR7)
- Processor: Intel Xeon Granite Rapids (up to 4.2 GHz turbo)
- vCPUs: 16–320 (scalable across General Purpose, Compute Optimized, and Memory Optimized sizes)
- System memory: 64 GB–1,280 GB DDR5
- Network: Up to 200 Gbps (200,000 Mbps) Azure Accelerated Networking
- Storage: Up to 2 TB local temp storage; support for Premium SSD v2 and Ultra Disk

Real-World Applications
The NCv6 is built for versatility, powering everything from pixel-perfect rendering to high-throughput language reasoning:
- Production Generative AI & Inference: Deploy self-hosted LLMs and RAG pipelines with optimized unit economics. The NCv6 is ideal for serving ranking models, recommendation engines, and content generation agents where low latency and cost-efficiency are paramount.
- Automotive & Manufacturing: Validate autonomous driving sensors (LiDAR/Radar) and train physical AI models in high-fidelity simulation environments before they ever touch the real world.
- Next-Gen VDI & Azure Virtual Desktop: Modernize remote workstations with NVIDIA RTX Virtual Workstation capabilities. By leveraging fractional GPU options, organizations can deliver high-fidelity, accelerated desktop experiences to distributed teams, offering a superior, high-density alternative to legacy NVv5 deployments.
- Media & Entertainment: Accelerate render farms for VFX studios requiring burst capacity, while simultaneously running generative AI tools for texture creation and scene optimization.

Conclusion: The Engine for the Era of Converged AI
The Azure NCv6 series redefines the boundaries of cloud infrastructure. By combining the raw power of NVIDIA's Blackwell architecture with the high-frequency performance of Intel Granite Rapids, we are moving beyond just "visual computing." Innovators can now leverage a unified platform to build the industrial metaverse, deploy intelligent agents, and scale production AI, all with the enterprise-grade security and hybrid reach of Azure. Ready to experience the next generation? Sign up for the NCv6 Public Preview here.
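For teams that want to try the preview from the command line, the sketch below shows a generic Azure CLI deployment. It is an assumption-laden example, not official guidance: the resource group, region, VM name, image alias, and especially the <NCv6-size> placeholder must be replaced with the exact size string published in the NCv6 documentation for your region.

# Illustrative only: substitute the real NCv6 size name and a preview-enabled region.
az group create --name ncv6-preview-rg --location <region>
az vm create \
  --resource-group ncv6-preview-rg \
  --name ncv6-preview-01 \
  --image Ubuntu2204 \
  --size <NCv6-size> \
  --accelerated-networking true \
  --admin-username azureuser \
  --generate-ssh-keys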
Azure CycleCloud 8.8 and CCWS 1.2 at SC25 and Ignite

Azure CycleCloud 8.8: Advancing HPC & AI Workloads with Smarter Health Checks
Azure CycleCloud continues to evolve as the backbone for orchestrating high-performance computing (HPC) and AI workloads in the cloud. With the release of CycleCloud 8.8, users gain access to a suite of new features designed to streamline cluster management, enhance health monitoring, and future-proof their HPC environments.

Key Features in CycleCloud 8.8

1. ARM64 HPC Support
The platform expands its hardware compatibility with ARM64 HPC support, opening new possibilities for energy-efficient and cost-effective compute clusters. This includes access to the newer generation of GB200 VMs as well as general ARM64 support, enabling new AI workloads at a scale never possible before.

2. Slurm Topology-Aware Scheduling
The integration of topology-aware scheduling for Slurm clusters allows CycleCloud users to optimize job placement based on network and hardware topology. This leads to improved performance for tightly coupled HPC workloads and better utilization of available resources (a sketch of the underlying Slurm topology configuration appears at the end of this section).

3. NVIDIA MNNVL and IMEX Support
With expanded support for NVIDIA MNNVL and IMEX, CycleCloud 8.8 ensures compatibility with the latest GPU technologies. This enables users to leverage cutting-edge hardware for AI training, inference, and scientific simulations.

4. HealthAgent: Event-Driven Health Monitoring and Alerting
A standout feature in this release is the enhanced HealthAgent, which delivers event-driven health monitoring and alerting. CycleCloud now proactively detects issues across clusters, nodes, and interconnects, providing real-time notifications and actionable insights. This improvement is a game-changer for maintaining uptime and reliability in large-scale HPC deployments. The node HealthAgent supports both impactful health checks, which can only run while nodes are idle, and non-impactful health checks that can run throughout the lifecycle of a job. This allows CycleCloud to alert not only on issues that occur while nodes are starting, but also on failures that develop on long-running nodes. Later releases of CycleCloud will also include automatic remediation for common failures, so stay tuned!

5. Enterprise Linux 9 and Ubuntu 24 Support
One common request has been wider support for the various Enterprise Linux (EL) 9 variants, including RHEL 9, AlmaLinux 9, and Rocky Linux 9. CycleCloud 8.8 introduces support for those distributions as well as the latest Ubuntu HPC release.

Why These Features Matter
The CycleCloud 8.8 release marks a significant leap forward for organizations running HPC and AI workloads in Azure. The improved health check support, anchored by HealthAgent today and by automated remediation in later releases, means less downtime, faster troubleshooting, and greater confidence in cloud-based research and innovation. Whether you're managing scientific simulations, AI model training, or enterprise analytics, CycleCloud's latest features help you build resilient, scalable, and future-ready HPC environments.

Key Features in CycleCloud Workspace for Slurm 1.2
Along with the release of CycleCloud 8.8 comes a new CycleCloud Workspace for Slurm (CCWS) release. This release includes the General Availability of features that were previously in preview, such as Open OnDemand, Cendio ThinLinc, and managed Grafana monitoring capabilities. In addition to previously announced features, CCWS 1.2 also includes support for a new Hub and Spoke deployment model.
This allows customers to retain a central hub of shared resources that can be reused across cluster deployments, with "disposable" spoke clusters that branch from the hub. Hub and Spoke deployments are designed for customers who need to redeploy clusters in order to upgrade their operating system, deploy new versions of software, or even reconfigure the overall architecture of their Slurm clusters.

Come visit us at SC25 and Microsoft Ignite
To learn more about these features, come visit us at the Microsoft booth at SC25 in St. Louis, MO and at Microsoft Ignite in San Francisco this week!
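As a concrete illustration of the Slurm topology-aware scheduling feature mentioned above, the sketch below shows the topology.conf format consumed by Slurm's topology/tree plugin. In a CycleCloud deployment this mapping is derived from the actual cluster layout; the switch and node names here are purely illustrative.

# topology.conf (hand-written sketch; requires TopologyPlugin=topology/tree in slurm.conf)
# Nodes under the same leaf switch share a low-latency domain; the spine joins the leaves.
SwitchName=leaf1 Nodes=hpc-[001-018]
SwitchName=leaf2 Nodes=hpc-[019-036]
SwitchName=spine Switches=leaf[1-2]

With a mapping like this in place, Slurm prefers to pack a tightly coupled job onto nodes under a single leaf switch before spilling across the spine.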
Azure ND GB300 v6 now Generally Available - Hyper-optimized for Generative and Agentic AI workloads

We are pleased to announce the General Availability (GA) of ND GB300 v6 virtual machines, delivering the next leap in AI infrastructure. On October 9, we shared the delivery of the first at-scale production cluster with more than 4,600 NVIDIA Blackwell Ultra GPUs in NVIDIA GB300 NVL72 systems, connected through the next-generation NVIDIA InfiniBand network. We have now deployed tens of thousands of GB300 GPUs for production customer workloads and expect to scale to hundreds of thousands.

Built on NVIDIA GB300 NVL72 systems, these VMs redefine performance for frontier model training, large-scale inference, multimodal reasoning, and agentic AI. The ND GB300 v6 series enables customers to:
- Deploy trillion-parameter models with unprecedented throughput.
- Accelerate inference for long-context and multimodal workloads.
- Scale seamlessly at high bandwidth for large-scale training workloads.

In recent benchmarks, ND GB300 v6 achieved over 1.1 million tokens per second on Llama 2 70B inference workloads, a 27% uplift over ND GB200 v6. This performance breakthrough enables customers to serve long-context, multimodal, and agentic AI models with unmatched speed and efficiency. With the general availability of ND GB300 v6 VMs, Microsoft strengthens its long-standing collaboration with NVIDIA by leading the market in delivering the latest GPU innovations, reaffirming our commitment to world-class AI infrastructure.

The ND GB300 v6 systems are built in a rack-scale design, with each rack hosting 18 VMs for a total of 72 GPUs interconnected by high-speed NVLink. Each VM has 2 NVIDIA Grace CPUs and 4 Blackwell Ultra GPUs. Each NVLink-connected rack contains:
- 72 NVIDIA Blackwell Ultra GPUs (with 36 NVIDIA Grace CPUs).
- 800 gigabits per second (Gbps) per GPU of cross-rack scale-out bandwidth via next-generation NVIDIA Quantum-X800 InfiniBand (2x ND GB200 v6).
- 130 terabytes (TB) per second of NVIDIA NVLink bandwidth within the rack.
- 37 TB of fast memory (~20 TB HBM3e + ~17 TB LPDDR).
- Up to 1,440 petaflops (PFLOPS) of FP4 Tensor Core performance (1.5x ND GB200 v6).

Together, NVLink and XDR InfiniBand enable GB300 systems to behave as a unified compute and memory pool, minimizing latency, maximizing bandwidth, and dramatically improving scalability. Within a rack, NVLink enables coherent memory access and fast synchronization for tightly coupled workloads. Across racks, XDR InfiniBand ensures ultra-low-latency, high-throughput communication with SHARP offloading, maintaining sub-100 µs latency for cross-node collectives.

Azure provides an end-to-end AI platform that enables customers to build, deploy, and scale AI workloads efficiently on GB300 infrastructure. Services like Azure CycleCloud and Azure Batch simplify the setup and management of HPC and AI environments, allowing organizations to dynamically adjust resources, integrate leading schedulers, and run containerized workloads at massive scale. With tools such as CycleCloud Workspace for Slurm, users can create and configure clusters without prior expertise, while Azure Batch handles millions of parallel tasks, ensuring cost and resource efficiency for large-scale training. For cloud-native AI, Azure Kubernetes Service (AKS) offers rapid deployment and management of containerized workloads, complemented by platform-specific optimizations for observability and reliability. Whether using Kubernetes or custom stacks, Azure delivers a unified suite of services to maximize performance and scalability.
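A common way to sanity-check the NVLink and InfiniBand fabric described above is an all-reduce bandwidth sweep with the open-source nccl-tests suite. The commands below are a generic sketch rather than an Azure-specific recipe: the hostfile contents, rank count, and MPI installation path are assumptions that depend on your image and scheduler.

# Build the NCCL benchmarks (assumes CUDA, NCCL, and an MPI toolchain are already installed;
# MPI_HOME may need to point at your MPI installation)
git clone https://github.com/NVIDIA/nccl-tests.git
cd nccl-tests
make MPI=1 MPI_HOME=/usr/local/mpi

# Sweep all-reduce sizes from 8 B to 8 GB across two VMs with four GPUs each
# (hosts.txt and the rank count of 8 are illustrative)
mpirun -np 8 --hostfile hosts.txt ./build/all_reduce_perf -b 8 -e 8G -f 2 -g 1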
Learn More & Get Started
- https://azure.microsoft.com/en-us/blog/microsoft-azure-delivers-the-first-large-scale-cluster-with-nvidia-gb300-nvl72-for-openai-workloads/
- https://techcommunity.microsoft.com/blog/azurehighperformancecomputingblog/breaking-the-million-token-barrier-the-technical-achievement-of-azure-nd-gb300-v/4466080
- NVIDIA Blog: Azure's GB300 NVL72 Supercomputing Cluster
- Azure VM Sizes Overview

Announcing the Public Preview of AMLFS 20: Azure Managed Lustre New SKU for Massive AI & HPC Workloads
Sachin Sheth - Principal PDM Manager
Brian Barbisch - Principal Group Software Engineering Manager
Matt White - Principal Group Software Engineering Manager
Brian Lepore - Principal Product Manager
Wolfgang De Salvador - Senior Product Manager
Ron Hogue - Senior Product Manager

Introduction
We are excited to announce the Public Preview of AMLFS Durable Premium 20 (AMLFS 20), a new SKU in Azure Managed Lustre designed to deliver unprecedented performance and scale for demanding AI and HPC workloads.

Key Features
- Massive Scale: Store up to 25 PiB of data in a single namespace, with up to 512 GB/s of total bandwidth.
- Advanced Metadata Performance: A multi-MDS (metadata server) architecture dramatically improves metadata IOPS. In mdtest benchmarks, AMLFS 20 demonstrated more than a 5x improvement in metadata operations. An additional MDS is provided for every 5 PiB of provisioned filesystem.
- High File Capacity: Supports up to 20 billion inodes at the maximum namespace size.

Why AMLFS 20 Matters
- Simplified Architecture: Previously, datasets larger than 12.5 PiB required multiple filesystems and complex management. AMLFS 20 enables a single, high-performance file system for massive AI and HPC workloads up to 25 PiB, streamlining deployment and administration.
- Accelerated Data Preparation: The multi-MDS architecture significantly increases metadata IOPS, which is crucial during the data preparation stage of AI training, where rapid access to millions of files is required.
- Faster Time-to-Value: Researchers and engineers benefit from easier management, reduced bottlenecks, and faster access to large datasets, accelerating innovation.

Availability
AMLFS 20 is available in Public Preview alongside the already existing AMLFS SKUs. For more details on other SKUs, visit the Azure Managed Lustre documentation.

How to Join the Preview
If you are working with large-scale AI or HPC workloads and would like early access to AMLFS 20, we invite you to fill out this form to tell us about your use case. Our team will follow up with onboarding details.
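After a filesystem is provisioned, compute nodes attach to it with the standard Lustre client. The sketch below assumes the default lustrefs filesystem name and uses a placeholder for the MGS address shown on the cluster's overview page; install the Lustre client packages for your distribution as described in the Azure Managed Lustre documentation first.

# Mount the AMLFS namespace (replace <MGS-IP> with the address from your cluster overview)
sudo mkdir -p /mnt/amlfs
sudo mount -t lustre -o noatime,flock <MGS-IP>@tcp:/lustrefs /mnt/amlfs
df -h /mnt/amlfs    # confirm the mounted capacity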
Join Microsoft @ SC25: Experience HPC and AI Innovation

Supercomputing 2025 is coming to St. Louis, MO, November 16–21! Visit Microsoft Booth #1627 to explore cutting-edge HPC and AI solutions, connect with experts, and experience interactive demos that showcase the future of compute. Whether you're attending technical sessions, stopping by for a coffee and a chat with our team, or joining our partner events, we've got something for everyone.

Booth Highlights
- Alpine Formula 1 Showcar: Snap a photo with a real Alpine F1 car and learn how high-performance computing drives innovation in motorsports.
- Silicon Wall: Discover silicon diversity, featuring chips from our partners AMD and NVIDIA, alongside Microsoft's own first-party silicon: Maia, Cobalt, and Majorana.
- NVIDIA Weather Modeling Demo: See how AI and HPC predict extreme weather events with Tomorrow.io and NVIDIA technology.
- Coffee Bar with Barista: Enjoy a handcrafted coffee while you connect with our experts.
- Immersive Screens: Watch live demos and visual stories about HPC breakthroughs and AI innovation.
- Hardware Bar: Explore AMD EPYC™ and NVIDIA GB200 systems powering next-generation workloads.

Conference Details
Conference week: Sun, Nov 16 – Fri, Nov 21
Expo hours (CST):
- Mon, Nov 17: 7:00–9:00 PM (Opening Night)
- Tue, Nov 18: 10:00 AM–6:00 PM
- Wed, Nov 19: 10:00 AM–6:00 PM
- Thu, Nov 20: 10:00 AM–3:00 PM
Customer meeting rooms: Four Seasons Hotel

Quick links
- RSVP — Microsoft + AMD Networking Reception (Tue, Nov 18): https://aka.ms/MicrosoftAMD-Mixer
- RSVP — Microsoft + NVIDIA Panel Luncheon (Wed, Nov 19): The luncheon is now closed as the event is fully booked.

Earned Sessions (Technical Program)
Entries are listed as: Session Type | Time (CST) | Title | Microsoft Contributor(s) | Location.

Sunday, Nov 16
- Tutorial | 8:30 AM–5:00 PM | Delivering HPC: Procurement, Cost Models, Metrics, Value, and More | Andrew Jones | Room 132
- Tutorial | 8:30 AM–5:00 PM | Modern High Performance I/O: Leveraging Object Stores | Glenn Lockwood | Room 120
- Workshop | 2:00–5:30 PM | 14th International Workshop on Runtime and Operating Systems for Supercomputers (ROSS 2025) | Torsten Hoefler | Room 265

Monday, Nov 17
- Early Career Program | 3:30–4:45 PM | Voices from the Field: Navigating Careers in Academia, Government, and Industry | Joe Greenseid | Room 262
- Workshop | 3:50–4:20 PM | Towards Enabling Hostile Multi-tenancy in Kubernetes | Ali Kanso; Elzeiny Mostafa; Gurpreet Virdi; Slava Oks | Room 275
- Workshop | 5:00–5:30 PM | On the Performance and Scalability of Cloud Supercomputers: Insights from Eagle and Reindeer | Amirreza Rastegari; Prabhat Ram; Michael F. Ringenburg | Room 267

Tuesday, Nov 18
- BOF | 12:15–1:15 PM | High Performance Software Foundation BoF | Joe Greenseid | Room 230
- Poster | 5:30–7:00 PM | Compute System Simulator: Modeling the Impact of Allocation Policy and Hardware Reliability on HPC Cloud Resource Utilization | Jarrod Leddy; Huseyin Yildiz | Second Floor Atrium

Wednesday, Nov 19
- BOF | 12:15–1:15 PM | The Future of Python on HPC Systems | Michael Droettboom | Room 125
- BOF | 12:15–1:15 PM | Autonomous Science Network: Interconnected Autonomous Science Labs Empowered by HPC and Intelligent Agents | Joe Tostenrude | Room 131
- Paper | 1:30–1:52 PM | Uno: A One‑Stop Solution for Inter‑ and Intra‑Data Center Congestion Control and Reliable Connectivity | Abdul Kabbani; Ahmad Ghalayini; Nadeen Gebara; Terry Lam | Rooms 260–267
- Paper | 2:14–2:36 PM | SDR‑RDMA: Software‑Defined Reliability Architecture for Planetary‑Scale RDMA Communication | Abdul Kabbani; Jie Zhang; Jithin Jose; Konstantin Taranov; Mahmoud Elhaddad; Scott Moe; Sreevatsa Anantharamu; Zhuolong Yu | Rooms 260–267
- Panel | 3:30–5:00 PM | CPUs Have a Memory Problem — Designing CPU‑Based HPC Systems with Very High Memory Bandwidth | Joe Greenseid | Rooms 231–232
- Paper | 4:36–4:58 PM | SparStencil: Retargeting Sparse Tensor Cores to Scientific Stencil Computations | Kun Li; Liang Yuan; Ting Cao; Mao Yang | Rooms 260–267

Thursday, Nov 20
- BOF | 12:15–1:15 PM | Super(computing)heroes | Laura Parry | Rooms 261–266
- Paper | 3:30–3:52 PM | Workload Intelligence: Workload‑Aware IaaS Abstraction for Cloud Efficiency | Anjaly Parayil; Chetan Bansal; Eli Cortez; Íñigo Goiri; Jim Kleewein; Jue Zhang; Pantea Zardoshti; Pulkit Misra; Raphael Ghelman; Ricardo Bianchini; Rodrigo Fonseca; Saravan Rajmohan; Xiaoting Qin | Room 275
- Paper | 4:14–4:36 PM | From Deep Learning to Deep Science: AI Accelerators Scaling Quantum Chemistry Beyond Limits | Fusong Ju; Kun Li; Mao Yang | Rooms 260–267

Friday, Nov 21
- Workshop | 9:00 AM–12:30 PM | Eleventh International Workshop on Heterogeneous High‑performance Reconfigurable Computing (H2RC 2025) | Torsten Hoefler | Room 263

Booth Theater Sessions
Entries are listed as: Time (CST) | Session Title | Presenter(s).

Monday, Nov 17 (7:00 PM–9:00 PM)
- 8:00–8:20 PM | Inside the World's Most Powerful AI Data Center | Chris Jones
- 8:30–8:50 PM | Transforming Science and Engineering — Driven by Agentic AI, Powered by HPC | Joe Tostenrude

Tuesday, Nov 18 (10:00 AM–6:00 PM)
- 11:00–11:50 AM | Ignite Keynotes
- 12:00–12:20 PM | Accelerating AI workloads with Azure Storage | Sachin Sheth; Wolfgang De Salvador
- 12:30–12:50 PM | Accelerate Memory Bandwidth‑Bound Workloads with Azure HBv5, now GA | Jyothi Venkatesh
- 1:00–1:20 PM | Radiation & Health Companion: AI‑Driven Flight‑Dose Awareness | Olesya Sarajlic
- 1:30–1:50 PM | Ascend HPC Lab: Your On‑Ramp to GPU‑Powered Innovation | Daniel Cooke (Oakwood)
- 2:00–2:20 PM | Azure AMD HBv5: Redefining CFD Performance and Value in the Cloud | Rick Knoechel (AMD)
- 2:30–2:50 PM | Empowering High Performance Life Sciences Workloads on Azure | Qumulo
- 3:00–3:20 PM | Transforming Science and Engineering — Driven by Agentic AI, Powered by HPC | Joe Tostenrude
- 4:00–4:20 PM | Unleashing AMD EPYC on Azure: Scalable HPC for Energy and Manufacturing | Varun Selvaraj (AMD)
- 4:30–4:50 PM | Automating HPC Workflows with Copilot Agents | Xavier Pillons
- 5:00–5:20 PM | Scaling the Future: NVIDIA's GB300 NVL72 Rack for Next‑Generation AI Inference | Kirthi Devleker (NVIDIA)
- 5:30–5:50 PM | Enabling AI and HPC Workloads in the Cloud with Azure NetApp Files | Andy Chan

Wednesday, Nov 19 (10:00 AM–6:00 PM)
- 10:30–10:50 AM | AI‑Powered Digital Twins for Industrial Engineering | John Linford (NVIDIA)
- 11:00–11:20 AM | Advancing 5 Generations of HPC Innovation with AMD on Azure | Allen Leibovitch (AMD)
- 11:30–11:50 AM | Intro to LoRA Fine‑Tuning on Azure | Christin Pohl
- 12:00–12:20 PM | VAST + Microsoft: Building the Foundation for Agentic AI | Lior Genzel (VAST Data)
- 12:30–12:50 PM | Inside the World's Most Powerful AI Data Center | Chris Jones
- 1:00–1:20 PM | Supervised GenAI Simulation – Stroke Prognosis (NVads V710 v5) | Kurt Niebuhr
- 1:30–1:50 PM | What You Don't See: How Azure Defines VM Families | Anshul Jain
- 2:00–2:20 PM | Hammerspace Tier 0: Unleashing GPU Storage Performance on Azure | Raj Sharma (Hammerspace)
- 2:30–2:50 PM | GM Motorsports: Accelerating Race Performance with AI Physics on Rescale | Bernardo Mendez (Rescale)
- 3:00–3:20 PM | Hurricane Analysis and Forecasting on the Azure Cloud | Salar Adili (Microsoft); Unni Kirandumkara (GDIT); Stefan Gary (Parallel Works)
- 3:30–3:50 PM | Performance at Scale: Accelerating HPC & AI Workloads with WEKA on Azure | Desiree Campbell; Wolfgang De Salvador
- 4:00–4:20 PM | Pushing the Limits of Performance: Supercomputing on Azure AI Infrastructure | Biju Thankachen; Ojasvi Bhalerao
- 4:30–4:50 PM | Accelerating Momentum: Powering AI & HPC with AMD Instinct™ GPUs | Jay Cayton (AMD)

Thursday, Nov 20 (10:00 AM–3:00 PM)
- 11:30–11:50 AM | Intro to LoRA Fine‑Tuning on Azure | Christin Pohl
- 12:00–12:20 PM | Accelerating HPC Workflows with Ansys Access on Microsoft Azure | Dr. John Baker (Ansys)
- 12:30–12:50 PM | Accelerate Memory Bandwidth‑Bound Workloads with Azure HBv5, now GA | Jyothi Venkatesh
- 1:00–1:20 PM | Pushing the Limits: Supercomputing on Azure AI Infrastructure | Biju Thankachen; Ojasvi Bhalerao
- 1:30–1:50 PM | The High Performance Software Foundation | Todd Gamblin (HPSF)
- 2:00–2:20 PM | Heidi AI — Deploying Azure Cloud Environments for Higher‑Ed Students & Researchers | James Verona (Adaptive Computing); Dr. Sameer Shende (UO/ParaTools)

Partner Session Schedule
Entries are listed as: Time (CST) | Title | Microsoft Contributor(s) | Location.

Tuesday, Nov 18
- 11:00 AM–11:50 AM | Cloud Computing for Engineering Simulation | Joe Greenseid | Ansys Booth
- 1:00 PM–1:30 PM | Revolutionizing Simulation with Artificial Intelligence | Joe Tostenrude | Ansys Booth
- 4:30 PM–5:00 PM | [HBv5] | Jyothi Venkatesh | AMD Booth

Wednesday, Nov 19
- 11:30 AM–1:30 PM | Accelerating Discovery: How HPC and AI Are Shaping the Future of Science (Lunch Panel) | Andrew Jones (Moderator); Joe Greenseid (Panelist) | Ruth's Chris Steak House
- 1:00 PM–1:30 PM | VAST and Microsoft | Kanchan Mehrotra | VAST Booth

Demo Pods at Microsoft Booth
- Azure HPC & AI Infrastructure: Explore how Azure delivers high-performance computing and AI workloads at scale. Learn about VM families, networking, and storage optimized for HPC.
- Agentic AI for Science: See how autonomous agents accelerate scientific workflows, from simulation to analysis, using Azure AI and HPC resources.
- Hybrid HPC with Azure Arc: Discover how Azure Arc enables hybrid HPC environments, integrating on-prem clusters with cloud resources for flexibility and scale.
Ancillary Events (RSVP Required)

Microsoft + AMD Networking Reception (Tuesday night)
- When: Tue, Nov 18, 6:30–10:00 PM (CST)
- Where: UMB Champions Club, Busch Stadium
- RSVP: https://aka.ms/MicrosoftAMD-Mixer

Microsoft + NVIDIA Panel Luncheon (Wednesday)
- When: Wed, Nov 19, 11:30 AM–1:30 PM (CST)
- Where: Ruth's Chris Steak House
- Topic: Accelerating Discovery: How AI and HPC Are Shaping the Future of Science
- Panelists: Dan Ernst (NVIDIA); Rollin Thomas (NERSC); Joe Greenseid (Microsoft); Antonia Maar (Intersect360 Research); Fernanda Foertter (University of Alabama)
- RSVP: The luncheon is now closed as the event is fully booked.

Conclusion
We're excited to connect with you at SC25! Whether you're exploring our booth demos, attending technical sessions, or joining one of our partner events, this is your opportunity to experience how Microsoft is driving innovation in HPC and AI. Stop by Booth #1627 to see the Alpine F1 showcar, explore the Silicon Wall featuring AMD, NVIDIA, and Microsoft's own chips, and enjoy a coffee from our barista while networking with experts. Don't forget to RSVP for our Microsoft + AMD Networking Reception and Microsoft + NVIDIA Panel Luncheon. See you in St. Louis!
Breaking the Million-Token Barrier: The Technical Achievement of Azure ND GB300 v6

Azure ND GB300 v6 virtual machines with NVIDIA GB300 NVL72 rack-scale systems achieve unprecedented performance of 1,100,000 tokens/s on Llama 2 70B inference, beating the previous Azure ND GB200 v6 record of 865,000 tokens/s by 27%.

A Quick Guide to Benchmarking AI models on Azure: Llama 405B and 70B with MLPerf Inference v5.1
by Mark Gitau (Software Engineer)

Introduction
For the MLPerf Inference v5.1 submission, Azure shared performance results on the new ND GB200 v6 virtual machines. A single ND GB200 v6 VM on Azure is powered by two NVIDIA Grace CPUs and four NVIDIA Blackwell B200 GPUs. This document highlights Azure's MLPerf Inference v5.1 results and outlines the steps to run these benchmarks on Azure. These MLPerf™ benchmark results demonstrate Azure's commitment to providing our customers with the latest GPU offerings of the highest quality.

Highlights from the MLPerf Inference v5.1 benchmark results include:
- Azure had the highest Llama 2 70B Offline submission, with 52,000 tokens/s on a single ND GB200 v6 virtual machine. This is an 8% increase in single-node performance over our previous record and would correspond to 937,098 tokens/s extrapolated to a full NVL72 rack.
- Azure results for Llama 3.1 405B are on par with the best submitters (1% difference), cloud and on-premises, with 847 tokens/s.

How to replicate the results in Azure

Prerequisites
ND GB200 v6-series (single node): deploy and set up a virtual machine on Azure.

Set up the environment
First, export the path to the directory where we will perform the benchmarks. For the ND GB200 v6-series (single node), create a directory called mlperf in /mnt/nvme.
Set the mlperf scratch space:
export MLPERF_SCRATCH_PATH=/mnt/nvme/mlperf
Clone the MLPerf repository inside the scratch path:
git clone https://github.com/mlcommons/inference_results_v5.1.git
Then create empty directories in your scratch space to house the data:
mkdir $MLPERF_SCRATCH_PATH/data $MLPERF_SCRATCH_PATH/models $MLPERF_SCRATCH_PATH/preprocessed_data

Download the models & datasets
Download the models inside the models directory you created in the previous step. This will take a while because the weights are large.
- Llama 2 70B model
- Llama 3.1 405B model
Download the preprocessed datasets for both models:
- Llama 2 70B datasets
- Llama 3.1 405B datasets
Prepare the datasets for Llama 2 70B: inference_results_v5.1/closed/Azure/code/llama2-70b/tensorrt at main · mlcommons/inference_results_v5.1
Prepare the datasets for Llama 3.1 405B: inference_results_v5.1/closed/Azure/code/llama3.1-405b/tensorrt at main · mlcommons/inference_results_v5.1

Build & launch the MLPerf container
Export the submitter and system name:
export SUBMITTER=Azure SYSTEM_NAME=ND_GB200_v6
Enter the container by entering the closed/Azure directory and running:
make prebuild
Inside the container, run:
make build

Build engines & run benchmarks
Make sure you are still in the closed/Azure directory of the MLPerf repository.
To build the engines for both Llama 3.1 405B and Llama 2 70B:
make generate_engines RUN_ARGS="--benchmarks=llama2-70b,llama3.1-405b --scenarios=offline,server"
To run the benchmarks for both Llama 3.1 405B and Llama 2 70B:
make run_harness RUN_ARGS="--benchmarks=llama2-70b,llama3.1-405b --scenarios=offline,server"

MLPerf from MLCommons®
MLCommons® is an open engineering consortium of AI leaders from academia, research, and industry whose mission is to "build fair and useful benchmarks" that provide unbiased evaluations of training and inference performance for hardware, software, and services, all conducted under predetermined conditions. MLPerf™ Inference benchmarks consist of compute-intensive AI workloads that simulate realistic usage of the systems, making the results very influential in technology management's buying decisions.

Azure's ND GB200 v6 Delivers Record Performance for Inference Workloads
Achieving peak AI performance requires both cutting-edge hardware and a finely optimized infrastructure. Azure's ND GB200 v6 virtual machines, accelerated by NVIDIA GB200 Blackwell GPUs, have already demonstrated world-record performance of 865,000 tokens/s for inference on the industry-standard Llama 2 70B benchmark.

Performance analysis of DeepSeek R1 AI Inference using vLLM on ND-H100-v5
Introduction
The DeepSeek R1 model represents a new frontier in large-scale reasoning for AI applications. Designed to tackle complex inference tasks, R1 pushes the boundaries of what's possible, but not without significant infrastructure demands. To deploy DeepSeek R1 effectively in an inference service like vLLM, high-performance hardware is essential. Specifically, the model requires two Azure ND_H100_v5 nodes, each equipped with 8 NVIDIA H100 GPUs, totaling 16 H100s. These nodes are interconnected via InfiniBand and NVLink, ensuring the bandwidth and latency characteristics necessary to support the model's massive memory footprint and parallel processing needs.

In this post, we present inference benchmark results for DeepSeek R1, measuring performance across GPU utilization, memory throughput, and interconnect efficiency. While R1 excels in reasoning tasks, it's important to recognize that such models are not universally optimal. For many general-purpose AI applications, smaller models like Llama 3.1 8B offer a compelling alternative, delivering sufficient accuracy and performance at a fraction of the cost. We explore the performance characteristics of DeepSeek R1 and help you decide when a large reasoning model is worth the investment, and when a leaner solution might be the better choice.

Benchmark environment
- Two ND_H100_v5 nodes (16 H100 GPUs) were required to load the DeepSeek R1 model into GPU memory.
- vLLM was used to set up the online inference server using a single API server endpoint.
- vLLM bench (specifically the Hugging Face AI-MO/aimo-validation-aime dataset) was used to run the inference benchmark.
- Detailed specifications are available in the tables below.

Results

Reasoning model's additional chain-of-thought tokens
We show that DeepSeek R1 generates many additional tokens (for chain of thought), which help reasoning models answer complex logical questions. In this example DeepSeek R1 generates 1,162 completion tokens, but Llama 3.1 8B generates only 37. This example also demonstrates that while non-reasoning models may struggle with this seemingly simple logical question, reasoning models like DeepSeek R1 can solve it, though they require significantly more intermediate tokens to do so.

#!/bin/bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [
      {"role": "user", "content": "9.11 and 9.8, which is greater? Explain your reasoning"}
    ]
  }'

Note: the REST API JSON output is formatted to make the output more readable:
(./run_restapi_query.sh 2> /dev/null | python3 ./convert_json_2_txt.py >& run_query.out_$$)

Reasoning:
Okay, so I need to figure out whether 9.11 is greater than 9.8 or if 9.8 is greater. Hmm, let me start by looking at both numbers. They both have a 9 in the units place, so that part is equal. Now, I need to compare the decimal parts: .11 and .8. Wait, but decimals can be tricky because the places matter. The first number is 9.11, which has a 1 in the tenths place and a 1 in the hundredths place. The second number is 9.8, which is the same as 9.80 when you add a zero to the hundredths place for comparison. So, 9.80 versus 9.11. Now, looking at the tenths place: 8 is greater than 1. Therefore, 9.80 should be greater than 9.11. But let me double-check because sometimes when numbers have different decimal lengths, it's easy to make a mistake. If I convert both to fractions, maybe that will help. 9.11 is the same as 9 + 0.11, and 0.11 is 11/100.
On the other hand, 9.8 is 9 + 0.8, which is 9 + 8/10. To compare 8/10 and 11/100, I can convert them to have the same denominator. 8/10 is equal to 80/100. Now, 80/100 is much larger than 11/100. So yes, 9.8 is indeed greater than 9.11. Another way to see it is by subtracting the two numbers. If I subtract 9.11 from 9.8, what do I get? Let's see: 9.8 - 9.11. Since 9.8 is the same as 9.80, subtracting 9.11 gives 0.69. Since the result is positive, that means 9.8 is larger. Wait, maybe I can also think about money to make it clearer. If I have $9.11 and you have $9.80, which is more? $9.80 is $9 and 80 cents, whereas $9.11 is $9 and 11 cents. Definitely, 80 cents is more than 11 cents, so $9.80 is more. That helps visualize it. Alternatively, using a number line. If I imagine a number line with 9.0, 9.1, 9.2,... up to 10.0. 9.11 is just a little past 9.1, while 9.8 is almost at 10.0. So clearly, 9.8 is much further to the right, meaning it's a larger number. I think that covers different methods: decimal place comparison, converting to fractions, subtraction, real-life money examples, and number line visualization. All of these methods point to 9.8 being greater than 9.11. I don't see any contradictions here. Maybe I can also consider writing both numbers with the same number of decimal places to make it even clearer. 9.11 is already two decimal places, and 9.8 can be written as 9.80. Now, comparing 9.80 and 9.11 digit by digit: - The units place: 9 vs. 9 (equal) - Tenths place: 8 vs. 1 (8 is greater) - Since the tenths place of 9.80 is greater, the entire number is larger, regardless of the hundredths place. So yes, that confirms it again. The key idea here is that when comparing decimals, you start from the leftmost digit after the decimal point and move right until you find a difference. In this case, the tenths place already shows that 9.8 is larger. I guess another common mistake could be thinking that because 11 is more than 8, 0.11 is more than 0.8, but that's not correct because the places are different. Tenths versus hundredths. 0.8 is equivalent to 0.80, which is way bigger than 0.11. To prevent that mistake, always align the decimal points and add trailing zeros if necessary. Then compare each digit one by one from left to right. That method avoids confusion. So, all in all, through multiple approaches, 9.8 is definitely greater than 9.11. Answer: To determine which number is greater between **9.11** and **9.8**, follow these steps: **Align the decimal places** by rewriting 9.8 as **9.80**. **Compare digit by digit** from left to right: - **Units place**: Both have **9** (equal). - **Tenths place**: **8** (in 9.80) vs. **1** (in 9.11). Since \(8 > 1\), **9.80 is greater** at this point. - No need to compare hundredths place once a difference is found. **Conclusion**: \(9.8\) (or \(9.80\)) is greater than \(9.11\) because **8 tenths** (\(0.8\)) is larger than **1 tenth + 1 hundredth** (\(0.11\)). \(\boxed{9.8 \text{ is greater}}\) Completion Info: Finish reason: stop Prompt tokens: 21 Completion tokens: 1162 Total tokens: 1183 When running using Llama 3.1 8B (Not reasoning model). Answer: To determine which is greater, we need to compare the two numbers. 9.11 is greater than 9.8 because 9.11 is a larger decimal value. 
Completion Info:
Finish reason: stop
Prompt tokens: 51
Completion tokens: 37
Total tokens: 88

Throughput and latency results

Cost comparison
In this cost analysis we use the ND-H100-v5 and ND-H200-v5 pay-as-you-go pricing in the South Central US region and the measured total throughput (tokens/s) to compute the $/(1K tokens), that is, hourly VM price ÷ 3600 ÷ total tokens/s × 1000. Note: ND-H200-v5 pricing was estimated at 20% more than ND-H100-v5 pricing.

Analysis
DeepSeek R1 is a large, complex reasoning model that is costlier and slower than smaller models. It needs 16 H100 GPUs at FP8 precision and generates many more intermediate tokens in its chain-of-thought process (about 31 times more than Llama 3.1 8B), but at a much slower rate (~54 times slower). Its latency is also higher, with TTFT and ITL roughly 6 and 3 times slower, respectively. The DeepSeek R1 model has small intranode and internode network requirements (~14% of the available InfiniBand bandwidth and <1% of the available NVLink bandwidth were used). GPUs with higher memory bandwidth and higher FLOPS would help improve its performance. The cost analysis shows that generating DeepSeek R1 tokens is ~54 times more expensive than Llama 3.1 8B on the same 16 H100 GPUs, and ~34 times more expensive on 8 H200 GPUs. DeepSeek R1 is very capable, but due to its higher TCO it should only be used in specific AI applications that require its strong reasoning abilities.

Conclusion
The DeepSeek R1 model demonstrates exceptional reasoning capabilities, but its deployment demands substantial infrastructure and incurs high latency and cost. While it excels in generating detailed chains of thought, its throughput and efficiency lag significantly behind smaller models like Llama 3.1 8B. For applications requiring deep logical analysis, DeepSeek R1 is a powerful tool. However, for general-purpose inference tasks, more lightweight models offer better performance and cost-effectiveness. Strategic use of DeepSeek R1 should be reserved for scenarios where its advanced reasoning justifies the resource investment.

References
- DeepSeek R1 model on Hugging Face: https://huggingface.co/deepseek-ai/DeepSeek-R1
- vLLM GitHub repository: https://github.com/vllm-project/vllm
- Azure ND H100 v5 documentation: https://learn.microsoft.com/en-us/azure/virtual-machines/nd-h100-v5-series
- FlashInfer GitHub repository: https://github.com/flashinfer-ai/flashinfer
- DeepGEMM GitHub repository: https://github.com/deepseek-ai/DeepGEMM
- AI-MO validation dataset on Hugging Face: https://huggingface.co/datasets/AI-MO/aimo-validation-aime

Appendix

Install vLLM
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv myvllm --python 3.11 --seed
source myvllm/bin/activate
uv pip install vllm --torch-backend=auto
git clone https://github.com/flashinfer-ai/flashinfer.git --recursive
uv pip install ninja
cd flashinfer
uv pip install --no-build-isolation --verbose .

Install DeepSeek DeepEP
git clone https://github.com/vllm-project/vllm.git
cd ~/vllm/tools/ep_kernels
export CUDA_HOME=/usr/local/cuda-12.8
export TORCH_CUDA_ARCH_LIST="9.0"    # 9.0 targets Hopper (H100)
bash install_python_libraries.sh 2>&1 | tee install_python_libraries.log_$$
sudo bash configure_system_drivers.sh 2>&1 | tee configure_system_drivers.log_$$
sudo reboot

Install DeepSeek DeepGEMM
git clone --recursive https://github.com/deepseek-ai/DeepGEMM.git
cd DeepGEMM
./install.sh 2>&1 | tee install.log_$$

Configure DeepSeek R1 with vLLM on 2 ND_H100_v5

Second node configuration
Execute this script on the second node before the script on the primary node.
#!/bin/bash
MODEL="deepseek-ai/DeepSeek-R1"
PORT=8000
export VLLM_LOGGING_LEVEL=INFO
export HF_HUB_CACHE=/home/azureuser/cgshared/hf_cache
#export VLLM_ALL2ALL_BACKEND=deepep_high_throughput
export VLLM_ALL2ALL_BACKEND=deepep_low_latency
export VLLM_USE_DEEP_GEMM=1
export GLOO_SOCKET_IFNAME=eth0
vllm serve $MODEL --port $PORT --tensor-parallel-size 1 --enable-expert-parallel \
  --data-parallel-size 16 --data-parallel-size-local 8 --data-parallel-start-rank 8 \
  --data-parallel-address 10.0.0.6 --data-parallel-rpc-port 23345 --headless \
  --max-model-len 32768 --reasoning-parser deepseek_r1

Primary node configuration
#!/bin/bash
MODEL="deepseek-ai/DeepSeek-R1"
PORT=8000
export VLLM_LOGGING_LEVEL=INFO
export HF_HUB_CACHE=/home/azureuser/cgshared/hf_cache
#export VLLM_ALL2ALL_BACKEND=deepep_high_throughput
export VLLM_ALL2ALL_BACKEND=deepep_low_latency
export VLLM_USE_DEEP_GEMM=1
export GLOO_SOCKET_IFNAME=eth0
vllm serve $MODEL --port $PORT --tensor-parallel-size 1 --enable-expert-parallel \
  --data-parallel-size 16 --data-parallel-size-local 8 \
  --data-parallel-address 10.0.0.6 --data-parallel-rpc-port 23345 --api-server-count 1 \
  --max-model-len 32768 --reasoning-parser deepseek_r1

Install vLLM benchmark environment
cd vllm
uv pip install vllm[bench]

Run vLLM benchmark
#!/bin/bash
vllm bench serve \
  --backend vllm \
  --model deepseek-ai/DeepSeek-R1 \
  --endpoint /v1/completions \
  --dataset-name hf \
  --dataset-path AI-MO/aimo-validation-aime \
  --ramp-up-strategy linear \
  --ramp-up-start-rps 1 \
  --ramp-up-end-rps 10 \
  --num-prompts 400 \
  --seed 42
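Before starting the benchmark run, it can help to confirm that the two-node deployment is actually serving requests. A minimal smoke test against the OpenAI-compatible endpoints exposed by vLLM on the primary node (same port as in the scripts above):

#!/bin/bash
# Check that the API server is up and DeepSeek R1 is registered
curl -s http://localhost:8000/v1/models

# Send one short completion as a functional check
curl -s http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-ai/DeepSeek-R1", "prompt": "Hello", "max_tokens": 8}'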