Latest Discussions
Base Azure VM instance that supports nested virtualization
Hi folks, I need to know which baseline Azure VM sizes support virtualization technology (nested virtualization), as a customer wants to run Proxmox on one. Looking forward to some guidance please. Thanks, Pradeep
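A hedged starting point rather than an official sizing recommendation: most current Intel-based general-purpose sizes from v3 onward (for example the Dsv5 series) expose the virtualization extensions a nested hypervisor such as Proxmox/KVM needs. The resource group, VM name, size and image below are placeholders; confirm nested-virtualization support for your chosen size in the Azure VM sizes documentation before committing.

# Create a test VM on a size family that exposes nested virtualization (placeholder names)
az vm create \
  --resource-group rg-proxmox-test \
  --name vm-proxmox-test \
  --size Standard_D8s_v5 \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys

# Inside the guest, confirm the CPU exposes VT-x/AMD-V before installing Proxmox
egrep -c '(vmx|svm)' /proc/cpuinfo   # a non-zero count means nested virtualization is available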
Boosting Performance with the Latest Generations of Virtual Machines in Azure
Microsoft Azure recently announced the availability of the new generation of VMs (v6), including the Dl/Dv6 (general purpose) and El/Ev6 (memory-optimized) series. These VMs are powered by the latest Intel Xeon processors and are engineered to deliver:
- Up to 30% higher per-core performance compared to previous generations.
- Greater scalability, with options of up to 128 vCPUs (Dv6) and 192 vCPUs (Ev6).
- Significant enhancements in CPU cache (up to 5× larger), memory bandwidth, and NVMe-enabled storage.
- Improved security with features like Intel® Total Memory Encryption (TME) and enhanced networking via the new Microsoft Azure Network Adapter (MANA).

Evaluated Virtual Machines and Geekbench Results
The table below summarizes the configuration and Geekbench results for the two VMs we tested. VM1 represents a previous-generation machine with twice the memory, while VM2 is from the new Dld e6 series and shows superior performance despite the smaller configuration.
- VM1: D16s v5 (16 vCPUs, 64 GB RAM)
- VM2: D16ls v6 (16 vCPUs, 32 GB RAM)

Key Observations:
- Single-Core Performance: VM2 scores 2013 compared to VM1's 1570, a 28.2% improvement. Even with half the memory, the new Dld e6 series provides significantly better performance per core.
- Multi-Core Performance: VM2 achieves a multi-core score of 12,566 versus 9,454 for VM1, a 32.9% increase.
- Enhanced Throughput in Specific Workloads:
  - File Compression: 1909 MB/s (VM2) vs. 1654 MB/s (VM1), a 15.4% improvement.
  - Object Detection: 2851 images/s (VM2) vs. 1592 images/s (VM1), a remarkable 79.2% improvement.
  - Ray Tracing: 1798 Kpixels/s (VM2) vs. 1512 Kpixels/s (VM1), an 18.9% boost.
These results reflect the significant advancements enabled by the new generation of Intel processors. (Geekbench score screenshots for VM1 and VM2 omitted.)

Evolution of Hardware in Azure: From Ice Lake-SP to Emerald Rapids
Technical Specifications of the Processors Evaluated
Understanding the dramatic performance improvements begins with a look at the processor specifications:

VM1 - Intel Xeon Platinum 8370C (Ice Lake-SP)
- Architecture: Ice Lake-SP
- Base Frequency: 2.79 GHz
- Max Frequency: 3.5 GHz
- L3 Cache: 48 MB
- Supported Instructions: AVX-512, VNNI, DL Boost

VM2 - Intel Xeon Platinum 8573C (Emerald Rapids)
- Architecture: Emerald Rapids
- Base Frequency: 2.3 GHz
- Max Frequency: 4.2 GHz
- L3 Cache: 260 MB
- Supported Instructions: AVX-512, AMX, VNNI, DL Boost

Impact on Performance
- Cache Size Increase: The jump from 48 MB to 260 MB of L3 cache is a key factor. A larger cache reduces dependency on RAM accesses, lowering latency and significantly boosting performance in memory-intensive workloads such as AI, big data, and scientific simulations.
- Enhanced Frequency Dynamics: While the base frequency of the Emerald Rapids processor is slightly lower, its higher maximum frequency (4.2 GHz vs. 3.5 GHz) means performance-critical tasks can benefit from burst capability under load.
- Advanced Instruction Support: The introduction of AMX (Advanced Matrix Extensions) in Emerald Rapids, alongside robust AVX-512 support, speeds up complex mathematical and AI workloads.
- Efficiency Gains: These processors also offer improved energy efficiency, reducing the energy consumed per compute unit. This translates into lower operational costs and a more sustainable cloud environment.
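For readers who want to reproduce the comparison, a minimal sketch that provisions the two sizes tested above side by side. The resource group, VM names, Ubuntu image alias and Geekbench download URL are assumptions (the tarball version and path may have changed since this test); networking and disk options are left at defaults.

# Previous-generation VM (VM1 in this article)
az vm create -g rg-bench -n vm1-d16sv5 --size Standard_D16s_v5 \
  --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys

# New-generation VM (VM2 in this article)
az vm create -g rg-bench -n vm2-d16lsv6 --size Standard_D16ls_v6 \
  --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys

# On each VM, download and run Geekbench 6 (URL/version are illustrative)
wget https://cdn.geekbench.com/Geekbench-6.4.0-Linux.tar.gz
tar -xzf Geekbench-6.4.0-Linux.tar.gz
./Geekbench-6.4.0-Linux/geekbench6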
Beyond Our Tests: Overview of the New v6 Series
While our tests focused on the Dld e6 series, Azure's new v6 generation includes several families designed for different workloads (see the CLI sketch at the end of this article for checking which sizes are available in your region):

1. Dlsv6 and Dldsv6-series
- Segment: General purpose with NVMe local storage (where applicable)
- vCPU Range: 2 – 128
- Memory: 4 – 256 GiB
- Local Disk: Up to 7,040 GiB (Dldsv6)
- Highlights: 5× larger CPU cache (up to 300 MB) and higher network bandwidth (up to 54 Gbps)

2. Dsv6 and Ddsv6-series
- Segment: General purpose
- vCPU Range: 2 – 128
- Memory: Up to 512 GiB
- Local Disk: Up to 7,040 GiB in Ddsv6
- Highlights: Up to 30% improved performance over the previous Dv5 generation and Azure Boost for enhanced IOPS and network performance

3. Esv6 and Edsv6-series
- Segment: Memory-optimized
- vCPU Range: 2 – 192* (larger sizes arriving in Q2)
- Memory: Up to 1.8 TiB (1832 GiB)
- Local Disk: Up to 10,560 GiB in Edsv6
- Highlights: Ideal for in-memory analytics, relational databases, and enterprise applications requiring vast amounts of RAM
*Note: Sizes with higher vCPU and memory counts (e.g., E128/E192) will be generally available in Q2 of this year.

Key Innovations in the v6 Generation
- Increased CPU Cache: Up to 5× more cache (from 60 MB to 300 MB) dramatically improves data access speeds.
- NVMe Storage: Enhanced local and remote storage performance, with up to 3× more local IOPS and up to 400k remote IOPS via Azure Boost.
- Azure Boost: Delivers higher throughput (up to 12 GB/s remote disk throughput) and improved network bandwidth (up to 200 Gbps for the largest sizes).
- Microsoft Azure Network Adapter (MANA): Provides improved network stability and performance for both Windows and Linux environments.
- Intel® Total Memory Encryption (TME): Enhances data security by encrypting system memory.
- Scalability: Options ranging from 128 vCPUs/512 GiB RAM in the Dv6 family to 192 vCPUs/1.8 TiB RAM in the Ev6 family.
- Performance Gains: Benchmarks and internal tests (such as SPEC CPU Integer) indicate improvements of 15%–30% across workloads including web applications, databases, analytics, and generative AI tasks.

My Personal Perspective
The new Azure v6 VMs mark a significant advancement in cloud computing performance, scalability, and security. Our Geekbench tests show that the Dld e6 series, powered by the latest Intel Xeon Platinum 8573C (Emerald Rapids), delivers roughly 30% better performance than a previous-generation machine with twice the memory. Coupled with the hardware evolution from Ice Lake-SP to Emerald Rapids, which brings a dramatic increase in cache size, improved frequency dynamics, and advanced instruction support, the new v6 generation sets a new standard for high-performance workloads. Whether you're running critical enterprise applications, data-intensive analytics, or next-generation AI models, the enhanced capabilities of these VMs offer significant benefits in performance, efficiency, and cost-effectiveness.

References and Further Reading:
- Microsoft's official announcement: Azure Dld e6 VMs
- Internal tests performed with Geekbench 6.4.0 (AVX2) in the Germany West Central Azure region.
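As referenced above, a quick way to check which v6 sizes are already offered in your region. The region name and size prefixes are illustrative; output columns depend on your CLI version.

# General-purpose D*v6 SKUs available in Germany West Central, including restriction info
az vm list-skus --location germanywestcentral --size Standard_D --all --output table | grep -i v6

# Memory-optimized E*v6 SKUs in the same region
az vm list-skus --location germanywestcentral --size Standard_E --all --output table | grep -i v6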
Backup Vaults vs. Recovery Services Vault
Hello Team, Microsoft has introduced multiple vault types, each serving different backup and disaster recovery needs. Below is a high-level differentiation:

Recovery Services Vault (RSV)
- Supports Azure Backup (VMs, SQL, SAP HANA, Files) and Azure Site Recovery (disaster recovery).
- Offers backup policies, recovery points, replication, and failover management.

Backup Vault
- A newer, streamlined vault designed for Azure Backup only.
- Supports Backup Short-Term Retention (Instant Restore) and Cross-Region Restore.
- Primarily used with Azure Policy and Backup Center for better management at scale.

Microsoft Continuity Center (MCC)
- A centralized disaster recovery hub in Azure.
- Integrates Azure Site Recovery (ASR) and backup services into a single pane of glass.
- Allows for failover, backup monitoring, and business continuity planning.

Is there any documentation that goes a little deeper into the above topics?
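To make the distinction concrete, a hedged sketch of how each vault type is created from the CLI. Names, region and redundancy are placeholders; the Backup vault command comes from the dataprotection CLI extension, whose flag syntax can vary between versions.

# Recovery Services vault: classic Azure Backup (VMs, SQL, SAP HANA, Files) and Azure Site Recovery
az backup vault create \
  --resource-group rg-bcdr-demo \
  --name rsv-demo \
  --location westeurope

# Backup vault: the newer, streamlined vault used by Azure Backup's data protection model
az extension add --name dataprotection
az dataprotection backup-vault create \
  --resource-group rg-bcdr-demo \
  --vault-name bv-demo \
  --location westeurope \
  --storage-setting '[{"datastore-type":"VaultStore","type":"LocallyRedundant"}]'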
Basic LoadBalancer Upgrade - no outbound rule created
The AzureBasicLoadBalancerUpgrade module is used to upgrade a load balancer from the Basic to the Standard SKU. It doesn't seem to create an outbound rule when there is no existing backend pool in the Basic LB; it can create the outbound rule only if a backend pool already exists. I know outbound connectivity is implicit with the Basic LB, and I want to maintain that outbound connectivity after upgrading to the Standard SKU. So my question is: is it OK to create a backend pool for the Standard LB using all the NICs from the inbound NAT rules and then create an outbound rule based on that new backend pool? Are there any security concerns with doing it this way?
Solved | jameshao | Feb 11, 2025
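Conceptually, putting the NICs behind your inbound NAT rules into a backend pool and attaching an explicit outbound rule is the standard way to restore, on a Standard SKU load balancer, the outbound SNAT that Basic provided implicitly. Backend pool membership by itself does not open new inbound paths; inbound traffic is still governed by your load-balancing/NAT rules and NSGs. A hedged sketch with placeholder resource, pool and frontend names:

# Create a backend pool on the upgraded Standard load balancer
az network lb address-pool create \
  --resource-group rg-lb --lb-name lb-standard --name outbound-pool

# Add each NIC IP configuration referenced by your inbound NAT rules to that pool
az network nic ip-config address-pool add \
  --resource-group rg-lb --nic-name nic-vm1 --ip-config-name ipconfig1 \
  --lb-name lb-standard --address-pool outbound-pool

# Create an explicit outbound rule over the new pool
az network lb outbound-rule create \
  --resource-group rg-lb --lb-name lb-standard --name outbound-all \
  --frontend-ip-configs LoadBalancerFrontEnd \
  --address-pool outbound-pool \
  --protocol All --idle-timeout 4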
Determining sizing requirements for GPU enabled Azure VM
Greetings, we are trying to determine the correct VM sizing requirement for our AI workload, which is used for NLP processing. This workload does not require any training; it will only be used for inference.

We have the following software configuration: a C# application that is heavily multithreaded using a lot of socket I/O. The application has concentrated bursts where 10-20 threads are fired concurrently to perform tasks (mostly socket I/O). This app communicates via dedicated sockets to a Python application which performs various NLP tasks. The Python app is also multithreaded to handle multiple incoming requests from the .NET app, and it sends queries to a local LLM (model size will vary based on query type). We estimate we will need to support sub-second performance (at the very least) on a 7B parameter model. Ultimately, we may need to go to larger model sizes if accuracy is insufficient. The amount of text passed to the LLM will range from 300-3000 tokens.

In short, we need:
a) a CPU with sufficient cores to handle multiple concurrent threads on the .NET side. The app will have 5 or 6 background threads running continuously, plus sudden bursts of activity requiring a minimum of 10-20 threads for shorter-lived tasks.
b) a GPU with sufficient VRAM to handle, at the very least, a 7B parameter model. Ultimately, we may need to support larger models to perform the same task due to insufficient accuracy.

We need the ideal configuration of GPU/VRAM and CPU/RAM to handle these tasks, and also, potentially, larger LLM sizes of up to 14B or 70B parameters. We are looking at the NC-series VMs, with a budget of about $1,000/month (see https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/#pricing). Any feedback on the optimal configuration in terms of CPU/GPU would be greatly appreciated. Thank you in advance.
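A hedged back-of-the-envelope and a lookup command, not a formal recommendation: a 7B model needs roughly 14 GB of VRAM for FP16 weights, or about 4-6 GB when quantized to 4-bit, plus headroom for the KV cache at 300-3000 token contexts, so a 16-24 GB GPU is a comfortable floor; 14B and 70B scale those numbers roughly linearly. The region below is a placeholder.

# List NC-series SKUs in the target region; compare vCPUs, memory and GPU count per size against budget
az vm list-skus --location eastus --size Standard_NC --all --output table

# Rough FP16 VRAM need for model weights alone: parameters x 2 bytes
# 7B  -> ~14 GB  (a 4-bit quantized 7B fits comfortably in 16-24 GB with modest context)
# 14B -> ~28 GB
# 70B -> ~140 GB (multi-GPU or aggressive quantization required)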
Adding VM Instance View Details, e.g. osName, to the VM Resource Object JSON (for Custom Policy Use)
I'm requesting that more details be added to the JSON of the VM resource object, particularly from the VM instance view data. This should include operating system information, such as the name and version (osName and osVersion), for use in a custom Policy. Although these details are visible in the portal, they're not present in the VM's resource object, which is necessary for our custom policy.
dkappelle | Jan 02, 2025
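While the resource object itself lacks those fields today, the instance view can be queried directly, which may serve as an interim workaround for inventory and reporting (though not for Azure Policy evaluation, which this request is really about). Resource names and the API version below are placeholders.

# Instance view exposes osName/osVersion even though the VM resource JSON does not
az vm get-instance-view \
  --resource-group rg-demo --name vm-demo \
  --query "{osName: instanceView.osName, osVersion: instanceView.osVersion}" \
  --output json

# The same data via REST, using the $expand=instanceView option
az rest --method get \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.Compute/virtualMachines/vm-demo?api-version=2024-03-01&\$expand=instanceView"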
Azure IMDS (Instance Metadata Service) calls to 168.63.129.16 blocked after July 1st, 2025
[ACTION REQUIRED] After 1 July 2025, it will no longer be possible to query Azure IMDS endpoints at the IP address 168.63.129.16. Please begin using 169.254.169.254 to communicate with Azure IMDS as soon as possible.

Officially, IMDS APIs can only be queried at 169.254.169.254. However, due to the internal design of Azure, IMDS endpoints can also be queried at the IP address 168.63.129.16 from within a virtual machine. Some customers are using this unofficial pathway to communicate with IMDS. An upcoming change in Azure will permanently block IMDS requests on 168.63.129.16. After 1 July 2025, you won't be able to access Azure IMDS endpoints at that IP. You can continue to use 168.63.129.16 to call IMDS APIs up until that date, but we recommend you begin your transition now.

HOW TO CHECK IF YOU ARE IMPACTED
Perform code analysis in your application. IMDS has a reserved IP address of 169.254.169.254; the VM's private communication channel has the reserved IP address 168.63.129.16. Use code search to verify that your client is not using the IP address 168.63.129.16 for making metadata requests. All IMDS REST requests start with "/metadata", and all endpoints can be found at IMDS Public endpoints.

REQUIRED ACTION
Fix all URLs using 168.63.129.16 to prepare for its decoupling from IMDS. For example, this IMDS token endpoint URL would soon be blocked:
curl -s -H Metadata:true --noproxy "*" "http://168.63.129.16/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
To avoid service disruptions, fix URLs to use 169.254.169.254, as in this example:
curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
MinnieLahoti (Microsoft) | Dec 13, 2024
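A minimal way to act on the "how to check" guidance above: search your codebase and configuration for the hard-coded IP before the cutoff. The repository path is a placeholder; note that 168.63.129.16 is also used legitimately by the Azure platform (DNS, health probes), so review matches for "/metadata" request paths specifically.

# Find any source or config referencing the soon-to-be-blocked IP for IMDS calls
grep -rn "168.63.129.16" /path/to/your/repo

# Spot-check a running VM: the official endpoint should answer on 169.254.169.254
curl -s -H Metadata:true --noproxy "*" \
  "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | head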
How to Automate KB5040434 Installation on Multiple VMs?
Hey everyone, I need to install the KB5040434 update on a bunch of VMs. This update is super important because it fixes several vulnerabilities. Doing this one by one is a huge hassle, and each VM also needs a restart after the update. Is there a way to automate this process? Maybe using Azure Cloud Shell, an automation account, or some other Azure feature? Any tips or guides would be really helpful. Thanks in advance!
Solved | experi18 | Oct 21, 2024
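One hedged approach, assuming the VMs are Azure-managed Windows machines and the update is visible to Windows Update: push a script through az vm run-command from Cloud Shell (serially here; for large fleets, Azure Update Manager or an Automation account schedule scales better). The resource group is a placeholder, and the PSWindowsUpdate usage is an assumption; the module must be installable on the VM (it may also need the NuGet provider) and its flag names can differ by version.

# Install KB5040434 on every VM in a resource group and reboot if required
for vm in $(az vm list -g rg-prod --query "[].name" -o tsv); do
  az vm run-command invoke \
    --resource-group rg-prod --name "$vm" \
    --command-id RunPowerShellScript \
    --scripts "Install-Module PSWindowsUpdate -Force; Import-Module PSWindowsUpdate; Install-WindowsUpdate -KBArticleID 'KB5040434' -AcceptAll -AutoReboot"
done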