virtual machines
Enhancing Resiliency in Azure Compute Gallery
In today's cloud-driven world, ensuring the resiliency and recoverability of critical resources is top of mind for organizations of all sizes. Azure Compute Gallery (ACG) continues to evolve, introducing robust features that safeguard your virtual machine (VM) images and application artifacts. In this blog post, we'll explore two key resiliency innovations: the new Soft Delete feature (currently in preview) and Zonal Redundant Storage (ZRS) as the default storage type for image versions. Together, these features significantly reduce the risk of data loss and improve business continuity for Azure users.

The Soft Delete Feature in Preview: A Safety Net for Your Images

Many Azure customers have struggled with accidental deletion of VM images, which disrupts workflows and causes data loss with no way to recover, often forcing users to rebuild images from scratch. Previously, removing an image from the Azure Compute Gallery was permanent, resulting in customer dissatisfaction due to service disruption and the lengthy process of recreating the image. Now, with Soft Delete (currently available in public preview), Azure introduces a safeguard that makes it easy to recover deleted images within a specified retention period.

How Soft Delete Works

When Soft Delete is enabled on a gallery, deleting an image doesn't immediately remove it from the system. Instead, the image enters a "soft-deleted" state, where it remains recoverable for up to 7 days. During this grace period, administrators can review and restore images that may have been deleted by mistake, preventing permanent loss. After the retention period expires, the platform automatically performs a hard (permanent) delete, at which point recovery is no longer possible.

Key Capabilities and User Experience

- Recovery period: Images are retained for a default period of 7 days, giving users time to identify and restore any resources deleted in error.
- Seamless Recovery: Recover soft-deleted images directly from the Azure Portal or via REST API, making it easy to integrate with automation and CI/CD pipelines.
- Role-Based Access: Only owners or users with the Compute Gallery Sharing Admin role at the subscription or gallery level can manage soft-deleted images, ensuring tight control over recovery and deletion operations.
- No Additional Cost: The Soft Delete feature is provided at no extra charge. After deletion, only one replica per region is retained, and standard storage charges apply until the image is permanently deleted.
- Comprehensive Support: Soft Delete is available for Private, Direct Shared, and Community Galleries. New and existing galleries can be configured to support the feature.

To enable Soft Delete, update your gallery settings via the Azure Portal or use the Azure CLI (a hedged sketch is shown below). Once enabled, the "delete" operation will soft-delete images, and you can view, list, restore, or permanently remove these images as needed. Learn more about the Soft Delete feature at https://aka.ms/sigsoftdelete
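To illustrate the CLI/REST path described above, here is a minimal, hedged sketch of enabling Soft Delete on an existing gallery. The softDeletePolicy property name is an assumption based on the public specification linked above; confirm the exact payload and a suitable api-version against the preview documentation before use.

```bash
# Hedged sketch: enable Soft Delete on an existing Azure Compute Gallery.
# The property name (softDeletePolicy.isSoftDeleteEnabled) and api-version are
# assumptions taken from the public spec at https://aka.ms/sigsoftdelete --
# verify them against the current preview documentation.
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/galleries/<gallery-name>?api-version=<api-version>" \
  --body '{"properties": {"softDeletePolicy": {"isSoftDeleteEnabled": true}}}'
```

Once the policy is in place, deleting an image version puts it into the soft-deleted state, and it can be listed and restored from the Azure Portal or the REST API during the 7-day retention window, as described above.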
Zonal Redundant Storage (ZRS) by Default

Another major resiliency enhancement in Azure Compute Gallery is the default use of Zonal Redundant Storage (ZRS) for image versions. ZRS replicates your images across multiple availability zones within a region, ensuring that your resources remain available even if a zone experiences an outage. By defaulting to ZRS, Azure raises the baseline for image durability and access, reducing the risk of disruptions due to zone-level failures.

- Automatic Redundancy: All new image versions are stored using ZRS by default, without requiring manual configuration.
- High Availability: Your VM images are protected against the failure of any single availability zone within the region.
- Simplified Management: Users benefit from resilient storage without the need to explicitly set up or manage storage account redundancy settings.

Default ZRS capability starts with API version 2025-03-03; Portal/SDK support will be added later. A brief CLI sketch follows below.
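For reference, this is roughly what creating an image version looks like when you want to be explicit about the storage type. The command and flag below are part of the existing Azure CLI; the note about default behavior reflects the API version mentioned above.

```bash
# Minimal sketch: create a gallery image version and explicitly request ZRS.
# With api-version 2025-03-03 and later, ZRS is the default, so
# --storage-account-type is only needed to override it (for example, to use
# Standard_LRS in a region without availability zones).
az sig image-version create \
  --resource-group <resource-group> \
  --gallery-name <gallery-name> \
  --gallery-image-definition <image-definition> \
  --gallery-image-version 1.0.0 \
  --managed-image <managed-image-or-source-id> \
  --storage-account-type Standard_ZRS
```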
Why These Features Matter

The combination of Soft Delete and ZRS by default provides Azure customers with enhanced operational reliability and data protection. Whether you are overseeing a suite of VM images for development and testing or coordinating production deployments across multiple teams, these features help you:

- Mitigate operational risks associated with accidental deletions or regional outages.
- Minimize downtime and reduce manual recovery processes.
- Promote compliance and security through advanced access controls and transparent recovery procedures.

To evaluate the Soft Delete feature, you can register for the preview and activate it on your galleries through the Azure Portal or REST API. Please note that, during its preview phase, this capability is recommended for assessment and testing rather than for production environments. ZRS is already available out of the box, delivering image availability starting with API version 2025-03-03. For comprehensive details and step-by-step guidance on enabling and utilizing Soft Delete, please review the public specification document at https://aka.ms/sigsoftdelete

Conclusion

Azure Compute Gallery continues to push the envelope on resource resiliency. With Soft Delete (preview) offering a reliable recovery mechanism for deleted images, and ZRS by default protecting your assets against zonal failures, Azure empowers you to build and manage VM deployments with peace of mind. Stay tuned for future updates as these features evolve toward general availability.

Azure Recognized as an NVIDIA Cloud Exemplar, Setting the Bar for AI Performance in the Cloud

As AI models continue to scale in size and complexity, cloud infrastructure must deliver more than theoretical peak performance. What matters in practice is reliable, end-to-end, workload-level AI performance—where compute, networking, system software, and optimization work together to deliver predictable, repeatable results at scale. This directly translates to business value: efficient full-stack infrastructure accelerates time-to-market, maximizes ROI on GPU and cloud investments, and enables organizations to scale AI from proof-of-concept to revenue-generating products with predictable economics.

Today, Microsoft is proud to share an important milestone in partnership with NVIDIA: Azure has been validated as an NVIDIA Exemplar Cloud, becoming the first cloud provider recognized for Exemplar-class AI performance aligned with GB300-class (Blackwell generation) systems. This recognition builds on Azure’s previously validated Exemplar status for H100 training workloads and reflects NVIDIA’s confidence in Azure’s ability to extend that rigor and performance discipline into the next generation of AI platforms.

What Is NVIDIA Exemplar Cloud?

The NVIDIA Exemplar Cloud initiative celebrates cloud platforms that demonstrate robust end-to-end AI workload performance using NVIDIA’s Performance Benchmarking suite. Rather than relying on synthetic microbenchmarks, Performance Benchmarking evaluates real AI training workloads using:

- Large-scale LLM training scenarios
- Production-grade software stacks
- Optimized system and network configurations
- Workload-centric metrics such as throughput and time-to-train

Achieving Exemplar validation signals that a provider can consistently deliver world-class AI performance in the cloud, showcasing that end users are getting optimal performance value by default.

Proven Exemplar Validation on H100

Azure’s Exemplar Cloud journey began with publicly shared benchmarking results for H100-based training workloads, where Azure ND GPU clusters demonstrated exemplar performance using NVIDIA Performance Benchmarking recipes. Those results—published previously and validated through NVIDIA’s benchmarking framework—established a proven foundation of end-to-end AI performance for large-scale, production workloads running on Azure today.

Extending Exemplar-Class AI Performance to GB300-Class Platforms

Building on the rigor and learnings from H100 validation, Microsoft has now been recognized by NVIDIA as the first cloud provider to achieve Exemplar-class performance and readiness aligned with GB300-class systems. This designation reflects NVIDIA’s assessment that the same principles applied to H100—including end-to-end system tuning, networking optimization, and software alignment—are being successfully carried forward into the Blackwell generation. Rather than treating GB300 as a point solution, Azure approaches it as a continuation of a proven performance model: delivering consistent world-class AI performance in the cloud while preserving the flexibility, elasticity, and global scale customers expect.
What Enables Exemplar-Class AI Performance on Azure

Delivering Exemplar-class AI performance requires optimization across the full AI stack:

Infrastructure and Networking
- High-performance Azure ND GPU clusters with NVIDIA InfiniBand
- NUMA-aware CPU, GPU, and NIC alignment to minimize latency
- Tuned NCCL communication paths for efficient multi-GPU scaling

Software and System Optimization
- Tight integration with NVIDIA software, including Performance Benchmarking recipes and NVIDIA AI Enterprise
- Parallelism strategies aligned with large-scale LLM training
- Continuous tuning as models, workloads, and system architectures evolve

End-to-End Workload Focus
- Measuring real training performance, not isolated component metrics
- Driving repeatable improvements in application-level throughput and efficiency
- Closing the performance gap between cloud and on-premises systems—without sacrificing manageability

Together, these capabilities enabled Azure to deliver consistent Exemplar-class AI performance across generations of NVIDIA platforms.

What This Means for Customers

For customers training and deploying advanced AI models, this milestone delivers clear benefits:

- World-class AI performance in a fully managed cloud environment
- Predictable scaling from small clusters to thousands of GPUs
- Faster time to train and improved performance per dollar
- Confidence that Azure is ready for Blackwell-class and GB300-class AI workloads

As AI workloads become more complex and reasoning-heavy, infrastructure performance increasingly determines outcomes. Azure’s NVIDIA Cloud Exemplar recognition provides a clear signal: customers can build and scale next-generation AI systems on Azure without compromising on performance.

Learn More

DGX Cloud Benchmarking on Azure | Microsoft Community Hub
Azure Automated Virtual Machine Recovery: Minimizing Downtime

Co-authors: Mukhtar Ahmed, Shekhar Agrawal, Harish Luckshetty, Vinay Nagarajan, Jie Su, Sri Harsha Kanukuntla, David Maldonado, Shardul Dabholkar.

Keeping virtual machines running smoothly is essential for businesses across every industry. When a VM stays down for even a short period, the impact can cascade quickly: delayed financial transactions, stalled manufacturing lines, unavailable retail systems, or interruptions to healthcare services. This understanding led to the creation of this solution, with its primary goal of ensuring fast and reliable recovery times so customers can focus on their business priorities without worrying about manual recovery strategies. This feature helps ensure your business Service-Level Agreements are consistently met.

When a VM experiences an issue, our system springs into action within seconds, working to restore your service as quickly as possible. It automatically executes the optimal recovery strategy, all without customer intervention. The feature operates continuously in the background, monitoring the health of VMs through multiple detection mechanisms, and automatically selects the fastest recovery path based on the specific failure type.

Getting Started

The best part? Azure Automated VM Recovery requires no setup or configuration. Running quietly in the background, this service helps guarantee the highest level of recoverability and a smooth experience for every Azure customer. Your VMs are already benefiting from faster detection, smarter diagnosis, and optimized recovery strategies.

The Importance of Automated VM Recovery

Automated VM recovery is essential to keeping cloud services resilient, reliable, and interruption-free. Automated recovery ensures that the moment a failure occurs, the platform responds instantly with fast detection, intelligent diagnostics, and the optimal repair action, all without requiring customer intervention.

- Better experience for customers: By minimizing VM downtime, it helps customers keep their services online, avoiding disruptions and potential business losses.
- Stronger trust in Azure: Fast, reliable recovery builds customer confidence in Azure’s platform, reinforcing our reputation for dependability.
- Reduced financial impact for customers: The lower the downtime, the less time customers are impacted, reducing potential loss of revenue and minimizing business disruption during critical operations.
- Empowering internal teams: Automated monitoring and clear visibility into recovery metrics help teams track health, onboard easily, and identify opportunities for improvement with minimal effort.

How Azure Automated VM Recovery Works: A Three-Stage Approach

Azure automatically handles VM issues through a three-stage recovery framework: Detection, Diagnosis, and Mitigation.

Detection

From the moment a failure occurs, multiple parallel mechanisms identify issues quickly. Azure hardware devices send regular health signals, which are monitored for interruptions or degradation. At the application level, operational health is tracked via response times, error rates, and successful operations to detect software-level problems rapidly.

Diagnosis

Once detected, lightweight diagnostics determine the best recovery action without unnecessary delays. Diagnostics operate at multiple levels: host-level checks assess the underlying infrastructure, VM-level diagnostics evaluate the virtual machine state, and system-on-chip (SoC) level analysis examines hardware components.
This includes network checks, resource utilization assessments, and service responsiveness tests. Detailed data is also collected for post-incident analysis, continuously improving diagnostic algorithms while active recovery proceeds.

Mitigation

Based on diagnostics, the system automatically executes the optimal recovery strategy, starting with the least disruptive methods and escalating as needed. Hardware failures may trigger VM migration, while software issues might be resolved with targeted service restarts. If needed, a host reset is performed while preserving virtual machine state, ensuring minimal disruption to running workloads. Post-mitigation health checks ensure full VM functionality before recovery is considered complete.

Recovery Event Annotations

Recovery Event Annotations are specialized annotations that provide detailed visibility into every stage of VM recovery, going beyond simple uptime metrics. These indicators act as custom monitoring metrics, breaking down each incident into precise time segments. For example, TTD (Time to Detect) measures the time between a VM becoming unhealthy and the system recognizing the issue, while TTDiag (Time to Diagnose) tracks the duration of diagnostic checks. By analyzing these segments, the annotations help identify bottlenecks, optimize recovery steps, and improve overall reliability.

Key benefits include:
- Understanding why some VMs recover faster than others.
- Identifying which diagnostics add value versus those that don’t.
- Highlighting opportunities that provide a faster path of recovery.
- Enabling early detection of regressions through event annotation-driven alerts.
- Establishing a common language across Azure teams for measuring and improving downtime.

Customer Impact and Results

Azure Automated VM Recovery demonstrates our commitment to not only high availability but also rapid recovery. By minimizing downtime, it helps customers build resilient applications and maintain business continuity during unexpected failures. Over the past 18 months, this solution has cut average VM downtime by more than half, significantly enhancing reliability and customer experience. Our ongoing goal is to provide a platform where customers can deploy workloads with confidence, knowing automated recovery will minimize disruptions.
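Automated recovery requires no configuration, but you can observe platform-initiated recovery events on your own VMs. The sketch below uses the activity log and Azure Resource Health, both generally available interfaces; the Resource Health api-version is left as a placeholder to fill in from current documentation.

```bash
# Recent control-plane events (including platform-initiated redeployments) for a VM:
az monitor activity-log list \
  --resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>" \
  --offset 7d \
  --output table

# Current availability status reported by Azure Resource Health
# (substitute a documented api-version):
az rest --method get \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>/providers/Microsoft.ResourceHealth/availabilityStatuses/current?api-version=<api-version>"
```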
Announcing General Availability of Azure Da/Ea/Fasv7-series VMs based on AMD ‘Turin’ processors

Today, Microsoft is announcing the general availability of Azure’s new AMD-based Virtual Machines (VMs) powered by 5th Gen AMD EPYC™ (Turin) processors. These VMs include general-purpose (Dasv7, Dalsv7), memory-optimized (Easv7), and compute-optimized (Fasv7, Falsv7, Famsv7) series, available with or without local disks. Azure’s latest AMD-based VMs offer faster CPU performance, greater scalability, and flexible configurations, making them the ideal choice for high performance, cost efficiency, and diverse workloads.

Key improvements include up to 35% better CPU performance and price-performance compared to equivalent v6 AMD-based VMs. Workload-specific gains are significant—up to 25% for Java applications, up to 65% for in-memory cache applications, up to 80% for crypto workloads, and up to 130% for web server applications, to name a few.

Dalsv7-series VMs are cost-efficient for low-memory workloads like web servers, video encoding, and batch processing. Dasv7-series VMs suit general computing tasks such as e-commerce, web front ends, virtualization, customer relationship management (CRM) applications, and entry to mid-range databases. Easv7-series VMs target memory-heavy workloads like enterprise applications, data warehousing, business intelligence, in-memory analytics, and more. Falsv7-, Fasv7-, and Famsv7-series VMs deliver full-core performance without Simultaneous Multithreading (SMT) for compute-intensive tasks like scientific simulations, financial modeling, gaming, and more. You can now choose constrained-core VM sizes — reducing the vCPU count by 50% or 75% while maintaining the other resources.

Dasv7, Dalsv7, and Easv7 VMs now scale up to 160 vCPUs, an increase from 96 vCPUs in the previous generation. The Fasv7, Falsv7, and Famsv7 VMs, which do not include Simultaneous Multithreading (SMT), support up to 80 vCPUs—up from 64 vCPUs in the prior generation—and introduce a new 1-core option. These VMs offer a maximum boost CPU frequency of 4.5 GHz for faster compute-intensive operations. The new VMs deliver increased memory capacity—up to 640 GiB for Dasv7 and 1280 GiB for Easv7—making them ideal for memory-intensive workloads. They also support three memory (GiB)-to-vCPU ratios: 2:1 (Dalsv7-series, Daldsv7-series, Falsv7-series and Faldsv7-series), 4:1 (Dasv7-series, Dadsv7-series, Fasv7-series and Fadsv7-series), and 8:1 (Easv7-series, Eadsv7-series, Famsv7-series and Famdsv7-series).

Remote storage performance is improved with up to 20% higher IOPS and up to 50% greater throughput, while local storage offers up to 55% higher throughput. Network performance is also enhanced by up to 75% compared to corresponding D-series and E-series v6 VMs. New VM series Fadsv7, Faldsv7, and Famdsv7 introduce local disk support. The new VMs leverage Azure Boost technology to enhance performance and security, utilize the Microsoft Azure Network Adapter (MANA), and support the NVMe protocol for both local and remote disks.

The 5th Generation AMD EPYC™ processor family, based on the newest ‘Zen 5’ core, provides enhanced capabilities for these new Azure AMD-based VM series, such as AVX-512 with a full 512-bit data path for vector and floating-point operations, higher memory bandwidth, and improved instructions per clock compared to the previous generation. These updates provide the ability to handle compute-intensive tasks for AI and machine learning, scientific simulations, and financial analytics, among others.
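If you want to try the new series, a quick way to confirm which v7 sizes are offered in your region and then deploy one is sketched below. The size string Standard_D8as_v7 and the image alias are illustrative assumptions based on the series naming above; confirm exact size names with `az vm list-skus` before deploying.

```bash
# List D-family sizes available in a region (filter further as needed):
az vm list-skus --location westus3 --size Standard_D --all --output table

# Deploy a VM on an assumed v7 size (replace with a size string returned above;
# the image alias may vary by CLI version):
az vm create \
  --resource-group <resource-group> \
  --name <vm-name> \
  --image Ubuntu2204 \
  --size Standard_D8as_v7
```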
AMD Infinity Guard hardware-based security features, such as Transparent Secure Memory Encryption (TSME), continue in this generation to ensure sensitive information remains secure.

These VMs are available in the following Azure regions: Australia East, Central US, Germany West Central, Japan East, North Europe, South Central US, Southeast Asia, UK South, West Europe, West US 2, and West US 3. The large 160 vCPU Easv7-series and Eadsv7-series sizes are available in North Europe, South Central US, West Europe, and West US 2. More regions are coming in 2026. Refer to Product Availability by Region for the latest information.

Our customers have shared the benefits they’ve observed with these new VMs:

“Elastic enables customers to drive innovation and cost-efficiency with our observability, security, and search solutions on Azure. In our testing, Azure’s latest Daldsv7 VMs provided up to 13% better indexing throughput compared to previous generation Daldsv6 VMs, and we are looking forward to the improved performance for Elasticsearch users deploying on Azure.” — Yuvraj Gupta, Director, Product Management, Elastic

“The Easv7 series of Azure VMs offers a balanced mix of CPU, memory, storage, and network performance that suits the majority of Oracle Database configurations very well. The 80 Gbps network with the jumbo frame capability is especially helpful for efficient operation of FlashGrid Cluster with Oracle RAC on Azure VMs.” — Art Danielov, CEO, FlashGrid

“Our analysis indicates that Azure’s new AMD based v7 series Virtual Machines demonstrate significantly higher performance compared to the v6 series, particularly in single-thread ratings. This advancement is highly beneficial, as several of our critical applications, such as ArcGIS Enterprise, are single-threaded and CPU-bound. Consequently, these faster v7 series VMs have resulted in improved performance with the same number of users, evidenced by lower server utilization and faster client-side response times.” — Thomas Buchmann, Senior Cloud Architect, VertiGIS

Here’s what our technology partners are saying:

“AMD and Microsoft have built one of the industry’s most successful cloud partnerships, bringing over 60 VM series to market through years of deep engineering collaboration. With the new v7 Azure VMs powered by 5th Gen AMD EPYC processors, we’re setting a new benchmark for performance, efficiency, and scalability—giving customers the proven, leadership compute they expect from AMD in the world’s most demanding cloud environments.” — Steve Berg, Corporate Vice President and General Manager of the Server CPU Cloud Business Group at AMD

“Our collaboration with Microsoft continues to empower developers and enterprises alike. The new AMD based v7-series VMs on Azure offer a powerful foundation for the full spectrum of modern workloads, from development to production AI/ML pipelines. We are excited to support this launch, ensuring every user gets a seamless experience on Ubuntu, with the enterprise security and long-term stability of Ubuntu Pro available for their most critical systems.” — Jehudi Castro-Sierra, Public Cloud Alliances Director
“The new Azure Da/Ea/Fa v7-series AMD Turin-based instances running SUSE Linux Enterprise Server provide a significant performance uplift during initial tests. They show an impressive 20%-40% increase with typical Linux kernel compilation tasks compared to the same instance sizes of the v6 series. This demonstrates the enhanced capabilities the v7 series brings to our joint customers seeking maximum efficiency and performance for their critical applications.” — Peter Schinagl, Sr. Technical Architect, SUSE

You can learn more about these latest Azure AMD-based VMs by visiting the specification pages at Dasv7-series, Dadsv7-series, Dalsv7-series, Daldsv7-series, Easv7-series, Eadsv7-series, Fasv7-series, Fadsv7-series, Falsv7-series, Faldsv7-series, Famsv7-series, Famdsv7-series, and constrained-core sizes. For pricing details, visit the Azure Virtual Machines pricing page. These VMs support all remote disk types. See Azure managed disk type for additional details. Disk storage is billed separately.

Azure Integrated HSM (Hardware Security Module) will continue to be in preview with these VMs. Azure Integrated HSM is an ephemeral HSM cache that enables secure key management within Azure VMs by ensuring that cryptographic keys remain protected inside a FIPS 140-3 Level 3-compliant boundary throughout their lifecycle. To explore this new feature, please sign up using the form.

Have questions? Please reach us at Azure Support and our experts will be there to help you with your Azure journey.
Announcing preview of new Azure Dasv7, Easv7, Fasv7-series VMs based on AMD EPYC™ ‘Turin’ processor

Today, Microsoft is announcing the preview of the new Azure AMD-based Virtual Machines (VMs), powered by 5th Generation AMD EPYC™ (Turin) processors. The preview includes general purpose (Dasv7 & Dalsv7 series), memory-optimized (Easv7 series) and compute-optimized (Fasv7, Falsv7, Famsv7 series) VMs, available with and without local disks. These VMs are in preview in the following Azure regions: East US 2, North Europe, and West US 3. To request access to the preview, please fill out the Preview-Signup.

The latest Azure AMD-based VMs deliver significant enhancements over the previous generation (v6) AMD-based VMs: improved CPU performance, greater scalability, and expanded configuration options to meet the needs of a wide range of workloads. Key improvements include:

- Up to 35% CPU performance improvement compared to equivalent sized (v6) AMD-based VMs.
- Significant performance gains on other workloads:
  - Up to 25% for Java-based workloads
  - Up to 65% for in-memory cache applications
  - Up to 80% for crypto workloads
  - Up to 130% for web server applications
- Maximum boost CPU frequency of 4.5 GHz, enabling faster operations for compute-intensive workloads.
- Expanded VM sizes: Dasv7-series, Dalsv7-series and Easv7-series now scale up to 160 vCPUs. Fasv7-series supports up to 80 vCPUs, with a new 1-core size.
- Increased memory capacity: Dasv7-series now offers up to 640 GiB of memory. Easv7-series scales up to 1280 GiB and is ideal for memory-intensive applications.
- Enhanced remote storage performance: VMs offer up to 20% higher IOPS and up to 50% greater throughput compared to similar sized previous generation (v6) VMs.
- New VM families introduced: Fadsv7, Faldsv7, and Famdsv7 are now available with local disk support.
- Expanded constrained-core offerings: New constrained-core sizes for Easv7 and Famsv7, available with and without local disks, helping to optimize licensing costs for core-based software licensing.

These enhancements make the latest VMs a compelling choice for customers seeking high performance, cost efficiency, and workload flexibility on Azure. Additionally, these VMs leverage the latest Azure Boost technology to enhance performance and security. The new VMs utilize the Microsoft Azure Network Adapter (MANA), a next-generation network interface that provides stable, forward-compatible drivers for Windows and Linux operating systems. These VMs also support the NVMe protocol for both local and remote disks.

The 5th Generation AMD EPYC™ processor family, based on the newest ‘Zen 5’ core, provides enhanced capabilities for these new Azure AMD-based VM series such as AVX-512 with a full 512-bit data path for vector and floating-point operations, higher memory bandwidth, and improved instructions per clock compared to the previous generation. These updates provide increased throughput and the ability to scale for compute-intensive tasks like AI and machine learning, scientific simulations, and financial analytics, among others. AMD Infinity Guard hardware-based security features, such as Transparent Secure Memory Encryption (TSME), continue in this generation to ensure sensitive information remains secure.

These VMs support three memory (GiB)-to-vCPU ratios: 2:1 (Dalsv7-series, Daldsv7-series, Falsv7-series and Faldsv7-series), 4:1 (Dasv7-series, Dadsv7-series, Fasv7-series and Fadsv7-series), and 8:1 (Easv7-series, Eadsv7-series, Famsv7-series and Famdsv7-series).
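Since these VMs surface networking through MANA and expose disks over NVMe (as noted above), a quick guest-side sanity check after deployment can confirm both. A minimal Linux sketch; module and device names can vary by distribution and kernel, so treat the exact strings as assumptions.

```bash
# Is the MANA network driver loaded?
lsmod | grep -i mana

# Are the MANA NIC and NVMe controllers visible on the PCI bus?
lspci | grep -i -e mana -e nvme

# Local and remote disks should appear as nvme* block devices:
lsblk -d -o NAME,MODEL,SIZE
```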
The Dalsv7-series VMs are ideal for workloads that require less RAM per vCPU and can reduce costs when running non-memory-intensive applications, including web servers, video encoding, batch processing, and more. The Dasv7-series VMs work well for many general computing workloads, such as e-commerce systems, web front ends, desktop virtualization solutions, customer relationship management applications, entry-level and mid-range databases, application servers, and more. The Easv7-series VMs are ideal for workloads such as memory-intensive enterprise applications, data warehousing, business intelligence, in-memory analytics, and financial transactions. The new Falsv7-series, Fasv7-series and Famsv7-series VMs do not have Simultaneous Multithreading (SMT), meaning a vCPU equals a full core, which makes these VMs well-suited for compute-intensive workloads needing the highest CPU performance, such as scientific simulations, financial modeling and risk analysis, gaming, and more.

In addition to the standard sizes, the latest VM series are available in constrained-core sizes, with the vCPU count constrained to one-half or one-quarter of the original VM size, giving you the flexibility to select the core and memory configuration that best fits your workloads.

In addition to the new VM capabilities, the previously announced Azure Integrated HSM (Hardware Security Module) will be in preview soon with the latest Azure AMD-based VMs. Azure Integrated HSM is an ephemeral HSM cache that enables secure key management within Azure virtual machines by ensuring that cryptographic keys remain protected inside a FIPS 140-3 Level 3-compliant boundary throughout their lifecycle. To explore this new feature, please sign up using the form provided below.

These latest Azure AMD-based VMs will be charged during preview; pricing information will be shared with access to the VMs. Eligible new Azure customers can sign up for a free account and receive a $200 Azure credit. The new VMs support all remote disk types. To learn more about the disk types and their regional availability, please refer to Azure managed disk type. Disk storage is billed separately from virtual machines.

You can learn more about these latest Azure AMD-based VMs by visiting the specification pages at Dasv7-series, Dadsv7-series, Dalsv7-series, Daldsv7-series, Easv7-series, Eadsv7-series, Fasv7-series, Fadsv7-series, Falsv7-series, Faldsv7-series, Famsv7-series and Famdsv7-series. The latest Azure AMD-based VMs provide options for your wide range of computing needs. Explore the new VMs today and discover how they can enhance your workload performance and lower your costs. To request access to the preview, please fill out the Preview-Signup form. Have questions? Please reach us at Azure Support and our experts will be there to help you with your Azure journey.
Announcing Preview of New Azure Dnl/Dn/En v6 VMs powered by Intel 5th Gen processor & Azure Boost

We are thrilled to announce the public preview of Azure's first Network Optimized VMs powered by the latest 5th Gen Intel® Xeon® processor, offering unparalleled performance and flexibility. The network optimized VMs are relevant for workloads such as network virtual appliances, large-scale e-commerce applications, ExpressRoute, Application Gateway, central DNS and monitoring servers, firewalls, media processing tasks that involve transferring large amounts of data quickly, and any workloads that require the ability to handle a high number of user connections and data transfers.

Network Optimized VMs enhance networking performance by providing hardware acceleration for initial connection setup for certain traffic types, a task previously performed in software. These VMs have lower end-to-end latency for initially establishing a connection or initial packet flow, and allow a VM to scale up the number of connections it manages more quickly. These Intel-based VMs come with three different memory-to-core ratios and offer options with and without local SSD across the VM families: Dnsv6, Dndsv6, Dnlsv6, Dnldsv6, Ensv6 and Endsv6 series. There are 55 VM sizes in total, ranging from 2 to 192 vCPUs and up to 1.8 TB of memory. The new Network Optimized VMs have higher network bandwidth per vCPU, more vNICs per vCPU, and more connections per second (CPS).

What’s New

Compared to the current Intel Dl/D/Ev6 VMs, the network optimized VMs offer:
- Up to 3x improvement in network bandwidth per vCPU over the current generation Intel Dl/D/Ev6 VMs
- 2x vNIC allocation on smaller vCPU sizes
- Up to 200 Gbps VM network bandwidth
- Up to 8x higher connections per second (CPS) across sizes
- Up to 192 vCPUs and more than 1.8 TiB of memory
- Azure Boost, which enables:
  - Up to 400k IOPS and 12 GB/s remote storage throughput
  - Up to 200 Gbps VM network bandwidth
  - NVMe interface for local and remote disks
  - Enhanced security through Total Memory Encryption (TME) technology

Customers are excited about the new Azure Dnl/Dn/Ensv6 VMs

“Palo Alto Networks, the global cybersecurity leader, is working with Microsoft to bring best-in-class Network Virtual Appliance performance capabilities to their customers. As the performance needs of customers on Azure continue to grow, innovations like Network Optimized VMs, Azure Boost, and Microsoft Azure Network Adapter (MANA) technology will help ensure that both our VM Series network virtual appliance and Cloud NGFW, our Azure native firewall service, can scale efficiently and cost-effectively,” said Rich Campagna, SVP Products, Palo Alto Networks. “We look forward to continuing our partnership with Microsoft to bring these innovations to life.”

General Purpose Workloads - Dnlsv6, Dnldsv6, Dnsv6, Dndsv6

The new Network Optimized Dnlsv6-series and Dnsv6-series VMs offer a balance of memory to CPU performance with increased scalability of up to 128 vCPUs and 512 GiB of RAM. Below is an overview of the specifications offered by the Dnlsv6-series and Dnsv6-series VMs.
| Series | vCPU | vNIC | Network Bandwidth (Gbps) | CPS | Memory (GiB) | Local Disk (GiB) | Max Data Disks |
|---|---|---|---|---|---|---|---|
| Dnlsv6-series | 2 – 128 | 4 – 15 | 25.0 – 200.0 | 30K – 400K | 4 – 256 | n/a | 8 – 64 |
| Dnldsv6-series | 2 – 128 | 4 – 15 | 25.0 – 200.0 | 30K – 400K | 4 – 256 | 110 – 7,040 | 8 – 64 |
| Dnsv6-series | 2 – 128 | 4 – 15 | 25.0 – 200.0 | 30K – 400K | 8 – 512 | n/a | 8 – 64 |
| Dndsv6-series | 2 – 128 | 4 – 15 | 25.0 – 200.0 | 30K – 400K | 8 – 512 | 110 – 7,040 | 8 – 64 |

Memory Intensive Workloads - Ensv6 and Endsv6

The new Network Optimized Ensv6-series and Endsv6-series virtual machines are ideal for memory-intensive workloads, offering up to 192 vCPUs and 1.8 TiB of RAM. Below is an overview of the specifications offered by the Ensv6-series and Endsv6-series VMs.

| Series | vCPU | vNIC | Network Bandwidth (Gbps) | CPS | Memory (GiB) | Local Disk (GiB) | Max Data Disks |
|---|---|---|---|---|---|---|---|
| Ensv6-series | 2 – 128 | 4 – 15 | 25.0 – 200.0 | 30K – 400K | 16 – >1800 | n/a | 8 – 64 |
| Endsv6-series | 2 – 192 | 4 – 15 | 25.0 – 200.0 | 30K – 400K | 16 – >1800 | 110 – 10,560 | 8 – 64 |

The Dnlv6, Dnv6, and Env6-series Azure Virtual Machines will offer options with and without local disk storage. These VMs are also compatible with remote persistent disk options including Premium SSD, Premium SSD v2, and Ultra Disk.

Join the Preview

Dnlv6, Dnv6, and Env6 series VMs are now available for preview in US East. VMs above 96 vCPUs and the VM series with local disk will be supported later in the preview. To request access to the preview, please fill out the survey form here. We look forward to hearing from you.
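To give a concrete flavor of how the higher vNIC counts and network bandwidth are consumed, the sketch below creates two NICs with accelerated networking and attaches both to a VM. Exact size strings for the Dn/En v6 preview sizes are not listed in this post, so <size> is a placeholder to resolve with `az vm list-skus` in your preview region.

```bash
# Two NICs with accelerated networking enabled:
az network nic create --resource-group <rg> --name nic1 --vnet-name <vnet> --subnet <subnet> --accelerated-networking true
az network nic create --resource-group <rg> --name nic2 --vnet-name <vnet> --subnet <subnet> --accelerated-networking true

# Attach both NICs to a network-optimized VM (size string is a placeholder):
az vm create \
  --resource-group <rg> \
  --name <vm-name> \
  --image Ubuntu2204 \
  --size <size> \
  --nics nic1 nic2
```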
Frequent platform-initiated VM redeployments (v6) in North Europe – host OS / firmware issues

Hi everyone,

We’ve been experiencing recurring platform-initiated redeployments on Azure VMs (v6 series) in the North Europe region and wanted to check if others are seeing something similar. Around two to three times per week, one of our virtual machines becomes unavailable and is automatically redeployed by the Azure platform. The Service Health notifications usually mention that the host OS became unresponsive and that there is a low-level issue between the host operating system and firmware. The VM is then started on a different host as part of the auto-recovery process. There is no corresponding public Azure Status incident for North Europe when this occurs.

From the guest OS perspective, there are no warning signs beforehand such as high CPU or memory usage, kernel errors, or planned maintenance events. This behavior looks like a host or hardware stamp issue, but the frequency is concerning.

- Has anyone else running v6 virtual machines in North Europe observed similar unplanned redeployments?
- Has Microsoft shared any statements or acknowledgements regarding ongoing host or firmware stability issues for this region or SKU?
- If you worked with Azure Support on this, were you told this was cluster-specific or related to a particular hardware stamp?

We are already engaging Azure Support, but I wanted to check whether this is an isolated case or something others are also encountering. Thanks in advance for any insights or shared experiences.
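One guest-side check that helps when comparing notes: the Azure Instance Metadata Service (IMDS) Scheduled Events endpoint shows whether the platform has announced any upcoming maintenance for the VM. A minimal sketch (the api-version shown is one documented version; substitute the latest if needed):

```bash
# Query IMDS Scheduled Events from inside the guest:
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
```

An empty Events array indicates no maintenance has been announced, which is consistent with these redeployments being unplanned auto-recovery actions rather than scheduled work.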
Azure V710 v5 Series - AMD Radeon GPU - Validation of Siemens CAD NX

Overview of Siemens NX

Siemens NX is a next-generation integrated CAD/CAM/CAE platform used by aerospace, automotive, industrial machinery, energy, medical, robotics, and defense manufacturers. It spans:
- Complex 3D modeling
- Assemblies containing thousands to millions of parts
- Surfacing and composites
- Tolerance engineering
- CAM and machining simulation
- Integrated multiphysics through Simcenter / NX Nastran

Because NX is used to design real-world engineered systems — aircraft structures, automotive platforms, satellites, robotic arms, injection molds — its usability and performance directly affect engineering velocity and product timelines.

Why NX Needs GPU Acceleration

NX is highly visual. It leans heavily on:
- OpenGL acceleration
- Shader-based rendering
- Hidden line removal
- Real-time shading / material rendering
- Ray-Traced Studio for photorealistic output

Interactive operations must also stay responsive: switching shading modes must keep CAD content readable, and zooming, sectioning, and annotating require stable frame pacing.

NVads V710 v5-Series on Azure

The NVads V710 v5-series virtual machines on Azure are designed for GPU-accelerated workloads and virtual desktop environments. Key highlights:

Hardware specs:
- GPU: AMD Radeon™ Pro V710 (up to 24 GiB frame buffer; fractional GPU options available).
- CPU: AMD EPYC™ 9V64 F (Genoa) with SMT, base frequency 3.95 GHz, peak 4.3 GHz.
- Memory: 16 GiB to 160 GiB.
- Storage: NVMe-based ephemeral local storage supported.

VM sizes:
- Ranges from Standard_NV4ads_V710_v5 (4 vCPUs, 16 GiB RAM, 1/6 GPU) to Standard_NV28adms_V710_v5 (28 vCPUs, 160 GiB RAM, full GPU).

Supported features:
- Premium storage, accelerated networking, ephemeral OS disk.
- Both Windows and Linux VMs supported.
- No additional GPU licensing is required.

AMD Radeon™ PRO GPUs offer:
- Optimized OpenGL professional driver stack
- Stable interactive performance with large assemblies

Business Scenarios Enabled by NX + Cloud GPU

- Engineering anywhere: Distributed teams can securely work on the same assemblies from any geographic region.
- Supplier ecosystem collaboration: Tier-1/2 manufacturers and engineering partners can access controlled models without local high-end workstations.
- Secure IP protection: Data stays in Azure — files never leave the controlled workspace.
- Faster engineering cycles: Visualization + simulation accelerate design reviews, decision making, and manufacturability evaluations.
- Scalable cost model: Pay for compute only when needed — ideal for burst design cycles and testing workloads.

Architecture Overview – Siemens NX on Azure NVads_v710

Key architecture elements (a deployment sketch follows below):
- Create an Azure Virtual Machine - NVads_v710_24
- Install the Azure AMD V710 GPU drivers
- Deploy Azure File-based storage hosting assemblies, metadata, drawing packages, PMI, and simulation data
- Configure a VNet with Accelerated Networking
- Install NX licenses and software
- Install the NXCP & ATS test suites on the Virtual Machine
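A minimal sketch of the first architecture step is shown below. The size string Standard_NV24ads_V710_v5 is an assumption matching the "NVads_v710_24" configuration referenced above, and the image is left as a placeholder; confirm both with `az vm list-skus` and your OS image of choice.

```bash
# Deploy an NVads V710 v5 VM for the NX validation environment:
az vm create \
  --resource-group <resource-group> \
  --name nx-workstation-01 \
  --image <windows-or-linux-image> \
  --size Standard_NV24ads_V710_v5 \
  --accelerated-networking true

# Follow-up steps per the architecture list: install the AMD Radeon Pro V710 GPU
# driver for Azure NV-series VMs, mount the Azure Files share that hosts NX data,
# then install NX and the NXCP/ATS test suites.
```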
Qualitative Benchmark on Azure NVads_v710_24

Siemens has approved the following qualitative test results. The certification matrix update is currently in progress. Complex assemblies with thousands of components maintained smooth rotation, zooming, and selection, even under concurrent session load.

NXCP and ATS test results on NVads_v710_24

Non-interactive test results (note: execution time in seconds): ATS non-interactive test results validate the correctness and stability of Siemens NX graphical rendering by comparing generated images against approved reference outputs. The minimal or zero pixel differences confirm deterministic and visually consistent rendering, indicating a stable GPU driver and visualization pipeline. The reported test execution times (in seconds) represent the duration required to complete each automated graphics validation scenario, demonstrating predictable and repeatable processing performance under non-interactive conditions.

Interactive test results on Azure NVads_v710_24 (note: execution time in seconds): ATS interactive test results evaluate Siemens NX graphics behavior during real-time user interactions such as rotation, zoom, pan, sectioning, and view manipulation. The results demonstrate stable and consistent rendering during interactive workflows, confirming that the GPU driver and visualization stack reliably support user-driven NX operations. The measured execution times (in seconds) reflect the responsiveness of each interactive graphics operation, indicating predictable behavior under live, user-controlled conditions rather than peak performance tuning.

| NX CAD functions | Test command | Automatic Tests | Interactive Tests |
|---|---|---|---|
| Grace1 Basic Tests | GrPlayer_xp64.exe <FILE> Basic_Features.tgl | Passed! | Passed! |
| | GrPlayer_xp64.exe <FILE> Fog_Measurement_Clipping.tgl | Passed! | Passed! |
| | GrPlayer_xp64.exe <FILE> lighting.tgl | Passed! | Passed! |
| | GrPlayer_xp64.exe <FILE> Shadow_Bump_Environment.tgl | Passed! | Passed! |
| | GrPlayer_xp64.exe <FILE> Texture_Map.tgl | Passed! | Passed! |
| Grace2 Graphics Tests | GrPlayer_64.exe <FILE> GrACETrace.tgl | Passed! | Passed! |

| NXCP Test Scenarios | Test command | Automatic Tests |
|---|---|---|
| NXCP Gdat Tests | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_1.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_2.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_4.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_5.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_6.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_7.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_8.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_9.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_10.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_11.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_12.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_13.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_14.cgi | Passed! |
| | gdat_leg_xp64.exe -infile <FILE> leg_gfx_cert_15.cgi | Passed! |

Benefits of Azure NVads_v710 (AMD GPU platform) for NX

- Workstation-class AMD Radeon PRO graphics drivers baked into Azure: ensures an ISV-validated driver pipeline.
- Excellent performance for CAD workloads: makes GPU-accelerated NX accessible to wider user bases.
- Remote engineering enablement: critical for companies that now operate global design teams.
- Elastic scale: spin up GPU capacity when development peaks; scale down when idle.

Conclusion

Siemens NX on Azure NVads_v710 powered by AMD GPUs enables enterprise-class CAD/CAM/CAE experiences in the cloud. NX benefits directly from workstation-grade OpenGL optimization, shading stability, and Ray Traced Studio acceleration, allowing engineers to interact smoothly with large assemblies, run visualization workloads, and perform design reviews without local hardware dependencies.

Right-sized GPU delivers workstation-class experience at lower cost

The family enables fractional GPU allocation (down to 1/6 of a Radeon™ Pro V710), allowing Siemens NX deployments to be right-sized per user role.
This avoids over-provisioning full GPUs while still delivering ISV-grade OpenGL and visualization stability, resulting in a lower per-engineer cost compared to fixed full-GPU cloud or on-prem workstations.

Elastic scale improves cost efficiency for burst engineering workloads

NVads_V710_v5 instances support on-demand scaling and ephemeral NVMe storage, allowing NX environments to scale up for design reviews, supplier collaboration, or peak integration cycles and scale down when idle. This consumption model provides a cost advantage over fixed on-prem workstations that remain underutilized outside peak engineering periods.

NX visualization pipelines benefit from balanced CPU-GPU architecture

The combination of high-frequency AMD EPYC™ Genoa CPUs (up to 4.3 GHz) and Radeon™ Pro V710 GPUs addresses Siemens NX's mixed CPU-GPU workload profile, where scene graph processing, tessellation, and OpenGL submission are CPU-sensitive. This balance reduces idle GPU cycles, improving effective utilization and overall cost efficiency when compared with GPU-heavy but CPU-constrained configurations.

The result is a scalable, secure, and cost-efficient engineering platform that supports distributed innovation, supplier collaboration, and digital product development workflows — all backed by the rendering and interaction consistency of AMD GPU virtualization on Azure.