Recent Discussions
Understanding Cloud Cost Fluctuations with Power BI
Staying on top of your cloud costs requires regular reviews. There are many ways to slice and dice cloud costs; one approach I find helpful is comparing daily and monthly cost deltas. Below is a visual from my Power BI report showing how my previous month's costs compare to the month prior. The visual is filtered to show only delta increases/decreases over $1K. I can quickly see we spent $5K more on Azure SQL Database in the selected month than in the previous month. I call this my 'large cost swings' graph. I understand that everything is not linear, nor do things translate neatly from one day or month to the next. However, the data has a story to tell, and that story is what I ask my team to focus on. In this case, we made some modifications to ADF and SQL, leading to a $4K net reduction in costs. Some stories explain the outcome of one or more actions; others can help shape your future consumption and spending.

How we source Innovation Challenge hackathon use cases
At Microsoft, our leaders use the annual Microsoft Global Hackathon to highlight priorities and focus our employees' creative energy on solving business and societal problems. We call these "Executive Challenges," and when a team puts forward a great project, it can be the start of something big. For the community-facing Innovation Challenge hackathons, we take a similar approach by soliciting real-world AI use cases from Azure customers looking to explore new ideas. Our customers get exposure to new talent as well as proof of concept (POC) applications to accelerate their AI journeys. The participants get the chance to demonstrate highly in-demand skills while building new capabilities for themselves and our ecosystem. The purpose of the Innovation Challenge is to bring together developers from groups who are underrepresented in technology. We're proud to be supporting these organizations who help prepare the participants: BITE-CON, Código Facilito, DIO, GenSpark, Microsoft Software and Systems Academy (MSSA), TechBridge, and Women in Cloud. Recent challenges have looked to solve important scenarios including observability for AI systems, role-based content filtering for AI outputs, VoiceRAG, and accessibility. Check out the winners from our December hackathon here. To qualify for the hackathon, participants need to earn a Microsoft Applied Skills credential or one of these Azure certifications: Azure AI Engineer Associate, Azure Developer Associate, or Azure Data Scientist Associate. During the hackathon, from March 10–21, 2025, participants work in teams. For everyone who participates, our goal is to help open doors to new career opportunities by demonstrating highly in-demand skills and experience developing AI solutions in real-world situations. Projects are judged by Microsoft subject matter experts, and the best entries will be awarded prizes to be divided equally among the teams.
One $10,000 top prize
Two $5,000 second-place prizes
Three $2,500 third-place prizes

Judging is based on the following criteria and weighting:
Performance 25%
Innovation 25%
Breadth of Azure services used 25%
Responsible AI 25%

Hackathons are a key way to bring the Azure community together to build new capabilities for individual developers as well as our industry as a whole. We're excited to see what this new group of Azure developers will build!

Trying to create token with scope https://cnt-prod.loadtesting.azure.com
Hello. I have created a web app and updated its permissions within Azure App registration to be able to access Graph and Azure Management (user impersonation). Using Microsoft.Identity.Web I am able to add a downstream API and make a successful call to it:

response = await downstreamApi.GetForUserAsync<HttpResponseMessage>("Azure Management",
    options => { options.RelativePath = relativePath; });
if (response?.IsSuccessStatusCode ?? false)
{
    string responseData = await response.Content.ReadAsStringAsync();
}

However, when I try to create a token with a different scope, for example:

var token = await tokenAcquisition.GetAccessTokenForUserAsync(new[] { "https://cnt-prod.loadtesting.azure.com/.default" });

the general error I get is that my app is trying to access a resource it's not allowed to. However, I can't find this resource anywhere to grant it access (within App Registration and API permissions). With Azure Load Testing, I have learned there is a resource plane and a data plane. The resource plane requires management.azure.com, but access to the data plane requires https://cnt-prod.loadtesting.azure.com/.default (from the tests I have done in PowerShell). Has anyone else come across this issue? Any assistance is greatly appreciated. Thank you.

VMs not being added to host pool or AD
I've been provisioning session hosts monthly for the last year or so, but this month it started to fail. If I use the portal to add session hosts to a host pool, the VMs get created but most do not get added to the host pool and/or AD. From what I can tell, the RDAgent and RDAgent BootLoader are not being installed. If I manually install the two, the host gets added to the host pool and I can manually join it to AD. If I create 40 new hosts, about 5 get added to the host pool without joining AD; the remaining 35 don't get the RD agents installed at all. This is new to me, so I'm struggling with how to troubleshoot, and I don't know what changed from last month to this month. I've tried using the older golden image and I get the same result, so it doesn't appear to be anything that was installed recently (which would've been Windows patches). I've tried three different regions: West Europe, Southeast Asia, and US East. The inconsistency of the issue is throwing me off. Does anyone have ideas or suggestions on where to look?

Navigating Azure Retail Pricing Data in Power BI: My Journey
Recently, I embarked on integrating Azure Retail Pricing data into my Power BI Cost Management dashboard. Initially, this seemed daunting, but with the right orientation and assistance from Copilot, I successfully navigated it. Did you know? Microsoft offers a Retail Pricing API that allows you to import data into Power BI in your preferred currency. The first hurdle I encountered was the API's paginated results. To overcome this, I created a function in Power BI that iterates through the paginated results using a base URL. To my surprise, the Retail Price table and the Azure Cost Management table had only one common column: meterID. This led to a many-to-many relationship, which is less than ideal as it introduces data ambiguity. Since there were multiple matching meterIDs with different retail prices, I addressed this by creating Measures. Additionally, I created another measure to calculate the retail cost, as the Retail Price table did not contain any consumption data. Happy to share more details if anyone's interested. #Azure #PowerBi #AzureCostManagement #AzureRetailPricing

Azure Virtual Desktop remote desktop virtual drive on rdwebclient download permission
We are currently using Azure Virtual Desktop with an application that is shared via Remote Desktop Web Client. If I log in to our desktop, I can use the remote desktop virtual drive's rdwebclient download folder to download files from the machine. However, if I open an application and try to save a file via the application to the Download folder, I receive the following error. How do you grant applications sufficient permissions to save to the Download folder so that end users can download files from the remote machine?

The brand new Azure AI Agent Service at your fingertips
Intro
Azure AI Agent Service is a game-changer for developers. This fully managed service empowers you to build, deploy, and scale high-quality, extensible AI agents securely, without the hassle of managing the underlying infrastructure. What used to take hundreds of lines of code can now be achieved in just a few lines! So here it is: a web application that streamlines document uploads, summarizes content using AI, and provides seamless access to stored summaries. This article delves into the architecture and implementation of this solution, drawing inspiration from our previous explorations with Azure AI Foundry and secure AI integrations.
Architecture Overview
Our Azure AI Agent Service WebApp integrates several Azure services to create a cohesive and scalable system:
Azure AI Projects & Azure AI Agent Service: powers the AI-driven summarization and title generation of uploaded documents.
Azure Blob Storage: stores the original and processed documents securely.
Azure Cosmos DB: maintains metadata and summaries for quick retrieval and display.
Azure API Management (APIM): manages and secures API endpoints, ensuring controlled access to backend services.
This architecture ensures a seamless flow from document upload to AI processing and storage, providing users with immediate access to summarized content.
Azure AI Agent Service – Frontend Implementation
The frontend of the Azure AI Agent Service WebApp is built using Vite and React, offering a responsive and user-friendly interface. Key features include:
Real-time AI chat interface: users can interact with an AI agent for various queries.
Document upload functionality: supports uploading documents in various formats, which are then processed by the backend AI services.
Document repository: displays a list of uploaded documents with their summaries and download links.
This is the main UI, ChatApp.jsx. We can interact with the Chat Agent for regular chat, while the keyword "upload:" activates the hidden upload menu.
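The "upload:" keyword routing described above can be sketched as a small input parser. This is an illustrative sketch only; `parseChatInput` is a hypothetical helper, not the actual ChatApp.jsx code:

```javascript
// Hypothetical sketch of the "upload:" keyword routing described above.
// Messages starting with "upload:" open the upload menu; everything else
// is sent to the chat agent. Names here are illustrative.
function parseChatInput(input) {
  const trimmed = input.trim();
  if (trimmed.toLowerCase().startsWith('upload:')) {
    // Everything after the prefix is treated as an optional file hint
    return { action: 'upload', payload: trimmed.slice('upload:'.length).trim() };
  }
  return { action: 'chat', payload: trimmed };
}

// Example usage:
console.log(parseChatInput('upload: quarterly-report.pdf'));
// → { action: 'upload', payload: 'quarterly-report.pdf' }
console.log(parseChatInput('Summarize my last document'));
// → { action: 'chat', payload: 'Summarize my last document' }
```

A prefix check like this keeps the upload flow out of the chat history until the user explicitly asks for it.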
Azure AI Agent Service – Backend Services
The backend is developed using Express.js, orchestrating various services to handle:
File uploads: accepts documents from the frontend and stores them in Azure Blob Storage.
AI processing: utilizes Azure AI Projects to extract text, generate summaries, and create concise titles.
Metadata storage: saves document metadata and summaries in Azure Cosmos DB for efficient retrieval.
One of the challenges was to avoid recreating the agents each time our backend reloads, so a careful plan is in place, with several files/modules for the Azure AI Agent Service interaction and agent creation. The initialization, for example, is handled by a single module:

```javascript
const { DefaultAzureCredential } = require('@azure/identity');
const { SecretClient } = require('@azure/keyvault-secrets');
const { AIProjectsClient, ToolUtility } = require('@azure/ai-projects');
require('dotenv').config();

// Keep track of global instances
let aiProjectsClient = null;
let agents = {
  chatAgent: null,
  extractAgent: null,
  summarizeAgent: null,
  titleAgent: null
};

async function initializeAI(app) {
  try {
    // Setup Azure Key Vault
    const keyVaultName = process.env.KEYVAULT_NAME;
    const keyVaultUrl = `https://${keyVaultName}.vault.azure.net`;
    const credential = new DefaultAzureCredential();
    const secretClient = new SecretClient(keyVaultUrl, credential);

    // Get AI connection string
    const secret = await secretClient.getSecret('AIConnectionString');
    const AI_CONNECTION_STRING = secret.value;

    // Initialize AI Projects Client
    aiProjectsClient = AIProjectsClient.fromConnectionString(
      AI_CONNECTION_STRING,
      credential
    );

    // Create code interpreter tool (shared among agents)
    const codeInterpreterTool = ToolUtility.createCodeInterpreterTool();
    const tools = [codeInterpreterTool.definition];
    const toolResources = codeInterpreterTool.resources;

    console.log('🚀 Creating AI Agents...');

    // Create chat agent
    agents.chatAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", {
      name: "chat-agent",
      instructions: "You are a helpful AI assistant that provides clear and concise responses.",
      tools,
      toolResources
    });
    console.log('✅ Chat Agent created');

    // Create extraction agent
    agents.extractAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", {
      name: "extract-agent",
      instructions: "Process and clean text content while maintaining structure and important information.",
      tools,
      toolResources
    });
    console.log('✅ Extract Agent created');

    // Create summarization agent
    agents.summarizeAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", {
      name: "summarize-agent",
      instructions: "Create concise summaries that capture main points and key details.",
      tools,
      toolResources
    });
    console.log('✅ Summarize Agent created');

    // Create title agent
    agents.titleAgent = await aiProjectsClient.agents.createAgent("gpt-4o-mini", {
      name: "title-agent",
      instructions: `You are a specialized title generation assistant.
Your task is to create titles for documents following these rules:
1. Generate ONLY the title text, no additional explanations
2. Maximum length of 50 characters
3. Focus on the main topic or theme
4. Use proper capitalization (Title Case)
5. Avoid special characters and quotes
6. Make titles clear and descriptive
7. Respond with nothing but the title itself

Example good responses:
Digital Transformation Strategy 2025
Market Analysis: Premium Chai Tea
Cloud Computing Implementation Guide

Example bad responses:
"Here's a title for your document: Digital Strategy" (no explanations needed)
This document appears to be about digital transformation (just the title needed)
The title is: Market Analysis (no extra text)`,
      tools,
      toolResources
    });
    console.log('✅ Title Agent created');

    // Store in app.locals
    app.locals.aiProjectsClient = aiProjectsClient;
    app.locals.agents = agents;

    console.log('✅ All AI Agents initialized successfully');
    return { aiProjectsClient, agents };
  } catch (error) {
    console.error('❌ Error initializing AI:', error);
    throw error;
  }
}

// Export both the initialization function and the shared instances
module.exports = {
  initializeAI,
  getClient: () => aiProjectsClient,
  getAgents: () => agents
};
```

Our backend utilizes four agents, creating the Azure AI Agent Service agents, and we will find them in the portal when the backend deploys. At the same time, each interaction is stored and managed as a thread, and that is how we interact with the Azure AI Agent Service.
Deployment and Security of the Azure AI Agent Service WebApp
Ensuring secure and efficient deployment is crucial. We've employed:
Azure API Management (APIM): secures API endpoints, providing controlled access and monitoring capabilities.
Azure Key Vault: manages sensitive information such as API keys and connection strings, ensuring data protection.
Every call to the backend service is protected with the Azure API Management Basic tier. We expose only the required endpoints, pointing to the matching endpoints of our Azure AI Agent Service WebApp backend. We also store the AIConnectionString variable in Key Vault, and all variables can be moved to Key Vault as well, which I recommend!
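The concern raised earlier about not recreating agents every time the backend reloads can be addressed with a simple module-level cache. The sketch below is illustrative only, using a hypothetical `getOrCreateAgent` helper and a stand-in for the real SDK call, not the app's exact code:

```javascript
// Illustrative pattern: cache the creation promise at module scope so
// repeated initialization (e.g. on hot reload) reuses existing agents.
// createAgentFn is a hypothetical stand-in for the real SDK call.
const agentCache = {};

function getOrCreateAgent(name, createAgentFn) {
  if (!agentCache[name]) {
    // Caching the promise itself also prevents duplicate creation
    // when two callers race before the first creation resolves.
    agentCache[name] = createAgentFn(name);
  }
  return agentCache[name];
}

// Example with a counting stub to show reuse:
let created = 0;
const stubCreate = async (name) => ({ name, id: ++created });

Promise.all([
  getOrCreateAgent('chat-agent', stubCreate),
  getOrCreateAgent('chat-agent', stubCreate)
]).then(([a, b]) => console.log(a === b, created)); // → true 1
```

Keyed by agent name, the cache means a restart of the request handlers (as opposed to the whole process) never calls the creation API twice for the same agent.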
Get started with Azure AI Agent Service
To get started with Azure AI Agent Service, you need to create an Azure AI Foundry hub and an Agent project in your Azure subscription. Start with the quickstart guide if it's your first time using the service. You can create an AI hub and project with the required resources. After you create a project, you can deploy a compatible model such as GPT-4o. Once you have a deployed model, you can start making API calls to the service using the SDKs. There are already two quickstarts available to get your Azure AI Agent Service up and running: the Basic and the Standard. I chose the second one, the Standard plan, since we have a WebApp, and the whole architecture comes in very handy! We just added the Cosmos DB interaction and API Management to extend it to an enterprise setup. Our Azure AI Agent Service deployment allows us to interact with the agents and utilize tools and functions very easily.
Conclusion
By harnessing the power of Azure's cloud services, we've developed a scalable and efficient web application that simplifies document management through AI-driven processing. This solution not only enhances productivity but also ensures secure and organized access to essential information.
References
Azure AI Agent Service Documentation
What is Azure AI Agent Service
Azure AI Agent Service Quickstarts
Azure API Management
Azure AI Foundry
Azure AI Foundry Inference Demo

hey words what is power bi
Through the Microsoft Learn Student Ambassador program, I have gained valuable skills in event management, public speaking, and software engineering that I may not have otherwise obtained as a regular student. The program has given me a new sense of purpose in my career and the confidence to speak at virtual meetings and engage with a global community. If you're interested in becoming a Microsoft Learn Student Ambassador and unlocking these same

Error Code 11402 to Call REST API from Azure Data Factory
I am using Copy Activity with the REST connector to call a REST API and getting this error: "Error Code 11402. The remote name could not be resolved: '<Base URL>'." But the same Base URL works in Postman. To call it from an ADF pipeline, do I need to take care of anything else? What are the possible reasons for this error in ADF? Do any access certificates need to be granted on the source side?

Microsoft Entra SSO integration with FortiGate SSL VPN issue
Scenario: Microsoft Entra SSO integration with FortiGate SSL VPN. I am unable to connect via FortiClient VPN version 7.2.x.x. When I use FortiClient VPN version 7.0.x.x to connect to the SSL VPN via Entra ID with SAML authentication, it connects on the second or third attempt every time, never the first. On the first attempt it prompts for 2FA but does not connect; when I try again on the second or third attempt, it connects directly without a 2FA prompt. Is this a bug or a configuration issue on the FortiGate firewall side or the Azure FortiGate SSL VPN application side? Please advise.

AVD RDP printer redirection settings not honored
Has anyone else noticed their users' local printers being redirected for the last couple of weeks, despite client RDP settings configured not to redirect? This results in GPO-deployed default printers being overwritten. We have noticed this on two host pools (Windows 10 and 11 session host OS and client OS). We reapplied the settings, and even set them manually in the Azure AVD settings, but it still happened. You can manually change it in the web client and it does honor that, but all our users use either the client or the Windows App. In the end I had to add a registry value to the session hosts via GPO to force it not to redirect: under HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services, create a new DWORD (32-bit) value called fDisablePrinterRedirection and set it to 1. I assume this is just a temporary bug, but it may help others.

WVD grey area
Hi guys, we are currently using Citrix, are planning to migrate to WVD, and are currently in the testing phase. Can someone help answer the questions below about WVD?
1. When a user logs in to a VM in a different host pool, will he get his profile data when he logs in to a VM in host pool 2 (considering that in host pool 1, FSLogix is already set up and his profile successfully roams between VMs in host pool 1)?
2. I read that a maximum of 30 GB of storage is assigned to the user VHD. What if it gets full? How can we get more space, and are there any performance issues if the consumed space dynamically increases?
3. What if a user profile on the VHDX becomes corrupted? Will there be data loss? How can we recover the user profile on the VHDX? Is there any backup and recovery plan?
Appreciate responses!

How to take VMware RVTool Extract to Run AVS Assessment
What is RVTools? RVTools is a piece of software you install on any system that has connectivity to VMware vCenter. Using this tool, you can extract CPU, memory, and storage reports of your current VMware environment and use that report to assess your migration plan to Azure VMware Solution (AVS).
How to take an RVTools extract:
1. Browse to https://www.robware.net/home and download the latest .msi file.
2. Install the .msi.
3. Open the RVTools application and enter your vCenter credentials; you will then see a screen similar to the one below.
4. Go to File > Export to Excel. You will get the report at the selected path.
Next step: assess your AVS migration using the RVTools extract. You can refer to another blog, Azure VMware Solution (AVS) Cost Optimization Using Azure Migrate Tool, for complete details on how to assess the AVS environment.

Boosting Performance with the Latest Generations of Virtual Machines in Azure
Microsoft Azure recently announced the availability of the new generation of VMs (v6), including the Dl/Dv6 (general purpose) and El/Ev6 (memory-optimized) series. These VMs are powered by the latest Intel Xeon processors and are engineered to deliver:
Up to 30% higher per-core performance compared to previous generations.
Greater scalability, with options of up to 128 vCPUs (Dv6) and 192 vCPUs (Ev6).
Significant enhancements in CPU cache (up to 5× larger), memory bandwidth, and NVMe-enabled storage.
Improved security with features like Intel® Total Memory Encryption (TME) and enhanced networking via the new Microsoft Azure Network Adaptor (MANA).
Evaluated Virtual Machines and Geekbench Results
The table below summarizes the configuration and Geekbench results for the two VMs we tested. VM1 represents a previous-generation machine with more memory, while VM2 is from the new Dld e6 series, showing superior performance despite having less RAM.
VM1: D16s v5 (16 vCPUs, 64 GB RAM)
VM2: D16ls v6 (16 vCPUs, 32 GB RAM)
Key Observations:
Single-Core Performance: VM2 scores 2013 compared to VM1's 1570, a 28.2% improvement. This demonstrates that even with half the memory, the new Dld e6 series provides significantly better performance per core.
Multi-Core Performance: With the same number of cores, VM2 achieves a multi-core score of 12,566 versus 9,454 for VM1, a 32.9% increase in performance.
Enhanced Throughput in Specific Workloads:
File Compression: 1909 MB/s (VM2) vs. 1654 MB/s (VM1), a 15.4% improvement.
Object Detection: 2851 images/s (VM2) vs. 1592 images/s (VM1), a remarkable 79.1% improvement.
Ray Tracing: 1798 Kpixels/s (VM2) vs. 1512 Kpixels/s (VM1), an 18.9% boost.
These results reflect the significant advancements enabled by the new generation of Intel processors.
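The percentage improvements quoted above follow directly from the raw Geekbench scores; a quick sketch of the calculation:

```javascript
// Percent improvement of VM2 (D16ls v6) over VM1 (D16s v5) for each
// Geekbench metric reported above.
const results = [
  { metric: 'Single-core score', vm1: 1570, vm2: 2013 },
  { metric: 'Multi-core score', vm1: 9454, vm2: 12566 },
  { metric: 'File compression (MB/s)', vm1: 1654, vm2: 1909 },
  { metric: 'Ray tracing (Kpixels/s)', vm1: 1512, vm2: 1798 },
];

const pctImprovement = (vm1, vm2) => ((vm2 - vm1) / vm1) * 100;

for (const { metric, vm1, vm2 } of results) {
  console.log(`${metric}: +${pctImprovement(vm1, vm2).toFixed(1)}%`);
}
// Single-core: +28.2%, multi-core: +32.9%,
// file compression: +15.4%, ray tracing: +18.9%
```

The same formula applied to any pair of benchmark scores gives the relative gain of the newer VM over the baseline.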
Evolution of Hardware in Azure: From Ice Lake-SP to Emerald Rapids
Technical Specifications of the Processors Evaluated
Understanding the dramatic performance improvements begins with a look at the processor specifications:
Intel Xeon Platinum 8370C (Ice Lake-SP), used by VM1: base frequency 2.79 GHz, max frequency 3.5 GHz, 48 MB L3 cache, supported instructions AVX-512, VNNI, DL Boost.
Intel Xeon Platinum 8573C (Emerald Rapids), used by VM2: base frequency 2.3 GHz, max frequency 4.2 GHz, 260 MB L3 cache, supported instructions AVX-512, AMX, VNNI, DL Boost.
Impact on Performance
Cache Size Increase: The jump from 48 MB to 260 MB of L3 cache is a key factor. A larger cache reduces dependency on RAM accesses, thereby lowering latency and significantly boosting performance in memory-intensive workloads such as AI, big data, and scientific simulations.
Enhanced Frequency Dynamics: While the base frequency of the Emerald Rapids processor is slightly lower, its higher maximum frequency (4.2 GHz vs. 3.5 GHz) means that performance-critical tasks can benefit from this burst capability under load.
Advanced Instruction Support: The introduction of AMX (Advanced Matrix Extensions) in Emerald Rapids, along with robust AVX-512 support, optimizes the execution of complex mathematical and AI workloads.
Efficiency Gains: These processors also offer improved energy efficiency, reducing the energy consumed per compute unit. This efficiency translates into lower operational costs and a more sustainable cloud environment.
Beyond Our Tests: Overview of the New v6 Series
While our tests focused on the Dld e6 series, Azure's new v6 generation includes several families designed for different workloads:
1. Dlsv6 and Dldsv6-series
Segment: general purpose with NVMe local storage (where applicable)
vCPU range: 2–128
Memory: 4–256 GiB
Local disk: up to 7,040 GiB (Dldsv6)
Highlights: 5× larger CPU cache (up to 300 MB) and higher network bandwidth (up to 54 Gbps)
2. Dsv6 and Ddsv6-series
Segment: general purpose
vCPU range: 2–128
Memory: up to 512 GiB
Local disk: up to 7,040 GiB in Ddsv6
Highlights: up to 30% improved performance over the previous Dv5 generation and Azure Boost for enhanced IOPS and network performance
3. Esv6 and Edsv6-series
Segment: memory-optimized
vCPU range: 2–192* (with larger sizes available in Q2)
Memory: up to 1.8 TiB (1832 GiB)
Local disk: up to 10,560 GiB in Edsv6
Highlights: ideal for in-memory analytics, relational databases, and enterprise applications requiring vast amounts of RAM
*Note: sizes with higher vCPUs and memory (e.g., E128/E192) will be generally available in Q2 of this year.
Key Innovations in the v6 Generation
Increased CPU cache: up to 5× more cache (from 60 MB to 300 MB) dramatically improves data access speeds.
NVMe storage: enhanced local and remote storage performance, with up to 3× more IOPS locally and the capability to reach 400K IOPS remotely via Azure Boost.
Azure Boost: delivers higher throughput (up to 12 GB/s remote disk throughput) and improved network bandwidth (up to 200 Gbps for larger sizes).
Microsoft Azure Network Adaptor (MANA): provides improved network stability and performance for both Windows and Linux environments.
Intel® Total Memory Encryption (TME): enhances data security by encrypting system memory.
Scalability: options ranging from 128 vCPUs/512 GiB RAM in the Dv6 family to 192 vCPUs/1.8 TiB RAM in the Ev6 family.
Performance gains: benchmarks and internal tests (such as SPEC CPU Integer) indicate improvements of 15%–30% across various workloads, including web applications, databases, analytics, and generative AI tasks.
My personal perspective and point of view
The new Azure v6 VMs mark a significant advancement in cloud computing performance, scalability, and security. Our Geekbench tests clearly show that the Dld e6 series, powered by the latest Intel Xeon Platinum 8573C (Emerald Rapids), delivers up to 30% better performance than previous-generation machines with more resources. Coupled with the hardware evolution from Ice Lake-SP to Emerald Rapids, which brings a dramatic increase in cache size, improved frequency dynamics, and advanced instruction support, the new v6 generation sets a new standard for high-performance workloads. Whether you're running critical enterprise applications, data-intensive analytics, or next-generation AI models, the enhanced capabilities of these VMs offer significant benefits in performance, efficiency, and cost-effectiveness.
References and further reading: Microsoft's official announcement of Azure Dld e6 VMs; internal tests performed with Geekbench 6.4.0 (AVX2) in the Germany West Central Azure region.
Microsoft Azure recently announced the availability of the new generation of VMs (v6)—including the Dl/Dv6 (general purpose) and El/Ev6 (memory-optimized) series. These VMs are powered by the latest Intel Xeon processors and are engineered to deliver: Up to 30% higher per-core performance compared to previous generations. Greater scalability, with options of up to 128 vCPUs (Dv6) and 192 vCPUs (Ev6). Significant enhancements in CPU cache (up to 5× larger), memory bandwidth, and NVMe-enabled storage. Improved security with features like Intel® Total Memory Encryption (TME) and enhanced networking via the new Microsoft Azure Network Adaptor (MANA). By Microsoft Evaluated Virtual Machines and Geekbench Results The table below summarizes the configuration and Geekbench results for the two VMs we tested. VM1 represents a previous-generation machine with more vCPUs and memory, while VM2 is from the new Dld e6 series, showing superior performance despite having fewer vCPUs. VM1 features VM1 - D16S V5 (16 Vcpus - 64GB RAM) VM1 - D16S V5 (16 Vcpus - 64GB RAM) VM2 features VM2 - D16ls v6 (16 Vcpus - 32GB RAM) VM2 - D16ls v6 (16 Vcpus - 32GB RAM) Key Observations: Single-Core Performance: VM2 scores 2013 compared to VM1’s 1570, a 28.2% improvement. This demonstrates that even with half the vCPUs, the new Dld e6 series provides significantly better performance per core. Multi-Core Performance: Despite having fewer cores, VM2 achieves a multi-core score of 12,566 versus 9,454 for VM1, showing a 32.9% increase in performance. VM 1 VM 2 Enhanced Throughput in Specific Workloads: File Compression: 1909 MB/s (VM2) vs. 1654 MB/s (VM1) – a 15.4% improvement. Object Detection: 2851 images/s (VM2) vs. 1592 images/s (VM1) – a remarkable 79.2% improvement. Ray Tracing: 1798 Kpixels/s (VM2) vs. 1512 Kpixels/s (VM1) – an 18.9% boost. These results reflect the significant advancements enabled by the new generation of Intel processors. 
Score VM 1 VM 1 VM 1 Score VM 2 VM 2 VM 2 Evolution of Hardware in Azure: From Ice Lake-SP to Emerald Rapids Technical Specifications of the Processors Evaluated Understanding the dramatic performance improvements begins with a look at the processor specifications: Intel Xeon Platinum 8370C (Ice Lake-SP) Architecture: Ice Lake-SP Base Frequency: 2.79 GHz Max Frequency: 3.5 GHz L3 Cache: 48 MB Supported Instructions: AVX-512, VNNI, DL Boost VM 1 Intel Xeon Platinum 8573C (Emerald Rapids) Architecture: Emerald Rapids Base Frequency: 2.3 GHz Max Frequency: 4.2 GHz L3 Cache: 260 MB Supported Instructions: AVX-512, AMX, VNNI, DL Boost VM 2 Impact on Performance Cache Size Increase: The jump from 48 MB to 260 MB of L3 cache is a key factor. A larger cache reduces dependency on RAM accesses, thereby lowering latency and significantly boosting performance in memory-intensive workloads such as AI, big data, and scientific simulations. Enhanced Frequency Dynamics: While the base frequency of the Emerald Rapids processor is slightly lower, its higher maximum frequency (4.2 GHz vs. 3.5 GHz) means that under load, performance-critical tasks can benefit from this burst capability. Advanced Instruction Support: The introduction of AMX (Advanced Matrix Extensions) in Emerald Rapids, along with the robust AVX-512 support, optimizes the execution of complex mathematical and AI workloads. Efficiency Gains: These processors also offer improved energy efficiency, reducing the energy consumed per compute unit. This efficiency translates into lower operational costs and a more sustainable cloud environment. Beyond Our Tests: Overview of the New v6 Series While our tests focused on the Dld e6 series, Azure’s new v6 generation includes several families designed for different workloads: 1. 
Dlsv6 and Dldsv6-series Segment: General purpose with NVMe local storage (where applicable) vCPUs Range: 2 – 128 Memory: 4 – 256 GiB Local Disk: Up to 7,040 GiB (Dldsv6) Highlights: 5× increased CPU cache (up to 300 MB) and higher network bandwidth (up to 54 Gbps) 2. Dsv6 and Ddsv6-series Segment: General purpose vCPUs Range: 2 – 128 Memory: Up to 512 GiB Local Disk: Up to 7,040 GiB in Ddsv6 Highlights: Up to 30% improved performance over the previous Dv5 generation and Azure Boost for enhanced IOPS and network performance 3. Esv6 and Edsv6-series Segment: Memory-optimized vCPUs Range: 2 – 192* (with larger sizes available in Q2) Memory: Up to 1.8 TiB (1832 GiB) Local Disk: Up to 10,560 GiB in Edsv6 Highlights: Ideal for in-memory analytics, relational databases, and enterprise applications requiring vast amounts of RAM Note: Sizes with higher vCPUs and memory (e.g., E128/E192) will be generally available in Q2 of this year. Key Innovations in the v6 Generation Increased CPU Cache: Up to 5× more cache (from 60 MB to 300 MB) dramatically improves data access speeds. NVMe for Storage: Enhanced local and remote storage performance, with up to 3× more IOPS locally and the capability to reach 400k IOPS remotely via Azure Boost. Azure Boost: Delivers higher throughput (up to 12 GB/s remote disk throughput) and improved network bandwidth (up to 200 Gbps for larger sizes). Microsoft Azure Network Adaptor (MANA): Provides improved network stability and performance for both Windows and Linux environments. Intel® Total Memory Encryption (TME): Enhances data security by encrypting the system memory. Scalability: Options ranging from 128 vCPUs/512 GiB RAM in the Dv6 family to 192 vCPUs/1.8 TiB RAM in the Ev6 family. Performance Gains: Benchmarks and internal tests (such as SPEC CPU Integer) indicate improvements of 15%–30% across various workloads including web applications, databases, analytics, and generative AI tasks. 
My personal perspective and point of view

The new Azure v6 VMs mark a significant advancement in cloud computing performance, scalability, and security. Our Geekbench tests clearly show that the Dld e6 series, powered by the latest Intel Xeon Platinum 8573C (Emerald Rapids), delivers up to 30% better performance than previous-generation machines with more resources. Coupled with the hardware evolution from Ice Lake-SP to Emerald Rapids, which brings a dramatic increase in cache size, improved frequency dynamics, and advanced instruction support, the new v6 generation sets a new standard for high-performance workloads. Whether you're running critical enterprise applications, data-intensive analytics, or next-generation AI models, the enhanced capabilities of these VMs offer significant benefits in performance, efficiency, and cost-effectiveness.

References and Further Reading:
- Microsoft's official announcement: Azure Dld e6 VMs
- Internal tests performed with Geekbench 6.4.0 (AVX2) in the Germany West Central Azure region.

How to update to DesktopVirtualization API v. 2024-04-08-preview or API v. 2024-04-03?
Hello everyone,

The information on my side is also not clear. My understanding is that if you do not use ARM templates, Terraform, Bicep, or something similar, no action is necessary, and Microsoft performs the operation transparently. The message is universal, meaning that all customers who have deployed AVD receive it, even if they do not know whether they use or specify the API version. For example, when creating an AVD through the Azure portal, you never specify the API version. If we go to the Resource Provider and look for Microsoft.DesktopVirtualization, we see that the default API version cannot be changed and is "2privatepreview." Interestingly and crazily enough, even with this default API, if you deploy an AVD, the system chooses an older version. So, if anyone has a clear response from Microsoft or has resolved this, it would be great if they could share it. Regards.

At least until Microsoft indicates otherwise, I have conducted several tests in different environments, and the result is the same, as follows: I deploy the host pool, and in the JSON file of the host pool we can see the API version is 2019-12-10-preview. Now I look inside the parameters used in the deployment and, WOW, there we can see that the API used to deploy AVD is the latest one, 2024-04-08-preview, which is the one Microsoft indicates to use. The 2019-04-01 is the schema version (a different thing). To confirm this, we go to the Resource Provider; if we go inside the resource type and select hostpool, we see that the default version that CANNOT be changed is 2022-01-12-preview. But among the eligible versions is the one that was used for our host pool deployment, that is, 2024-04-08-preview.

Azure OpenAI Content Filter Result is always content_filter_error
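Since ARM api-versions are date-based strings, it is easy to sort the versions a provider offers (for example, the list returned by `az provider show --namespace Microsoft.DesktopVirtualization`) and pick the newest stable or preview one. A small sketch, using the versions mentioned in this thread as an illustrative input (it only handles date-based versions, not oddities like "2privatepreview"):

```python
from datetime import date

def parse_api_version(v):
    """Split an ARM api-version like '2024-04-08-preview' into (date, is_preview)."""
    parts = v.split("-")
    return date(int(parts[0]), int(parts[1]), int(parts[2])), "preview" in parts[3:]

def latest(versions, include_preview=True):
    """Return the newest api-version, optionally excluding previews."""
    candidates = [v for v in versions
                  if include_preview or not parse_api_version(v)[1]]
    return max(candidates, key=lambda v: parse_api_version(v)[0])

versions = ["2019-12-10-preview", "2022-01-12-preview",
            "2024-04-03", "2024-04-08-preview"]
print(latest(versions))                         # 2024-04-08-preview
print(latest(versions, include_preview=False))  # 2024-04-03
```

This matches the thread's observation: 2024-04-08-preview is the newest version overall, while 2024-04-03 is the newest non-preview one.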
I'm exploring blocklists as a solution for OpenAI not detecting sensitive words, specifically "wrist-cutting" in my local language, Cantonese (to be fair, not even Chinese AIs know the word). I have created a blocklist with one entry:

Term: [鎅𰾛𠝹]手
Type: Regex

It can block inputs with ease:

{
  "error": {
    "message": "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766",
    "type": null,
    "param": "prompt",
    "code": "content_filter",
    "status": 400,
    "innererror": {
      "code": "ResponsibleAIPolicyViolation",
      "content_filter_result": {
        "custom_blocklists": {
          "details": [ { "filtered": true, "id": "ChineseBlockList" } ],
          "filtered": true
        },
        "hate": { "filtered": false, "severity": "safe" },
        "profanity": { "filtered": false, "detected": false },
        "self_harm": { "filtered": false, "severity": "safe" },
        "sexual": { "filtered": false, "severity": "safe" },
        "violence": { "filtered": false, "severity": "safe" }
      }
    }
  }
}

However, it cannot block outputs.
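For reference, the blocklist entry is an ordinary character-class regex, so you can sanity-check locally which surface forms it matches. A quick Python sketch (Python's re module handles the supplementary-plane characters in the class without issue); the failing case is the completion response shown next:

```python
import re

# The blocklist entry: any one of the three variant characters, followed by 手
pattern = re.compile("[鎅𰾛𠝹]手")

for text in ["𠝹手", "鎅手", "拍手"]:
    # 拍手 ("clap hands") is harmless and should not match
    print(text, "blocked" if pattern.search(text) else "allowed")
```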
{
  "choices": [
    {
      "content_filter_result": {
        "error": {
          "code": "content_filter_error",
          "message": "The contents are not filtered"
        }
      },
      "content_filter_results": {},
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "𠝹手(也寫作“拍手”)是一種手部動作,通常是將雙手合攏並用力拍打在一起,發出聲音。這個動作常用於表達讚賞、鼓勵或慶祝,像是在演出結束後觀眾的掌聲,或是在某些活動中用來引起注意。𠝹手也可以用於節奏感的表達,像是在音樂中隨著節拍拍手。這個動作在許多文化中都有其獨特的意義和用途。",
        "refusal": null,
        "role": "assistant"
      }
    }
  ],
  "created": 1737702254,
  "id": "chatcmpl-At81eUTIzDkZPCKznSKr19YMJU1ud",
  "model": "gpt-4o-mini-2024-07-18",
  "object": "chat.completion",
  "prompt_filter_results": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "custom_blocklists": { "filtered": false },
        "hate": { "filtered": false, "severity": "safe" },
        "profanity": { "filtered": false, "detected": false },
        "self_harm": { "filtered": false, "severity": "safe" },
        "sexual": { "filtered": false, "severity": "safe" },
        "violence": { "filtered": false, "severity": "safe" }
      }
    }
  ],
  "system_fingerprint": "fp_5154047bf2",
  "usage": {
    "completion_tokens": 138,
    "completion_tokens_details": { "accepted_prediction_tokens": 0, "audio_tokens": 0, "reasoning_tokens": 0, "rejected_prediction_tokens": 0 },
    "prompt_tokens": 34,
    "prompt_tokens_details": { "audio_tokens": 0, "cached_tokens": 0 },
    "total_tokens": 172
  }
}

Entra Cloud Sync
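Until the service-side output blocklist behaves as expected, one pragmatic fallback is to re-apply the same regexes to the completion text client-side before showing it to users. This is only a sketch under that assumption; the redact_completion helper and the marker string are illustrative names, not part of any Azure SDK:

```python
import re

# Mirror of the service-side blocklist entries (illustrative)
BLOCKLIST_PATTERNS = [re.compile("[鎅𰾛𠝹]手")]

def redact_completion(text, marker="■■"):
    """Apply every blocklist regex; return (redacted_text, was_blocked)."""
    blocked = False
    for pattern in BLOCKLIST_PATTERNS:
        text, n = pattern.subn(marker, text)
        blocked = blocked or n > 0
    return text, blocked

clean, blocked = redact_completion("𠝹手是一種手部動作")
print(blocked)  # True
```

Depending on your policy you might drop the whole response when `blocked` is true rather than redacting in place.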
While installing Cloud Sync on one of my DCs, I encountered the error below:

"Service 'Microsoft Azure AD Connect Provisioning Agent' (AADConnectProvisioningAgent) failed to start. Verify that you have sufficient privileges to start system services."

Can you help identify the resolution for this?

AIP padlock icon missing in encrypted message
Hi, I have enabled AIP in my tenant along with sensitivity labels and encryption. I can send encrypted messages successfully; however, the secure message, which contains a padlock icon referring to a Microsoft website, is broken and fails to load. I viewed the source of the message and tried to load the image in my browser. The image failed to load, and I believe the image location is no longer valid. Could you please validate and provide a fix so that the padlock icon loads successfully? Currently the secure message looks like a phishing email and will probably be treated as such.

Configure FSLogix for AVD in Cloud Only Config
Most articles and videos mention AD at some point as a requirement, but I saw one YouTube video saying that it's possible with only Entra ID. I need instructions on how to get this done in a cloud-only environment. This article: https://learn.microsoft.com/en-us/fslogix/how-to-configure-profile-container-azure-ad eventually mentions, under the Kerberos steps (step 3), that a hybrid environment is required... but I don't think it is. Please advise.
Events
Recent Blogs
- This article describes the end-to-end instructions on how to configure Managed Prometheus for data ingestion from your private Azure Kubernetes Service (AKS) cluster to an Azure Monitor Workspace. ... Feb 18, 2025
- The document outlines methodologies for managing access to multiple VRFs from Azure VNETs, detailing various scenarios and their respective solutions. It covers single VRF access, multiple VRFs ... Feb 18, 2025