Azure Arc | On-prem + Multi-cloud Management
In this video, we explore how Azure Arc simplifies hybrid and multi-cloud operations by providing a single, consistent control plane for managing your entire infrastructure across Linux and Windows, whether on-prem, in Azure, or in any other cloud. Once connected, you can patch Windows and Linux together with Azure Update Manager, enforce CIS Benchmarks and Azure security baselines through Azure Policy, and pull consistent inventory, tags, and RBAC across your whole estate. You can also auto-recover unbootable Windows Server 2025 machines with Quick Machine Recovery, and audit and configure WinRE using built-in Azure Policy. Run your virtual machines as Azure Virtual Desktop session hosts on Nutanix, VMware, Hyper-V, or physical Windows hardware. Satya Vel, Azure Arc Principal Group PM Manager, shares how to make Azure your operational standard for every workload, anywhere it runs. Learn more about Azure Arc at https://aka.ms/AzureArcServer, or join the community at https://aka.ms/ArcServerForumSignup.

Organize, filter, and manage inventory at scale: centralize visibility into servers, VMs, and Kubernetes clusters across on-prem, AWS, GCP, and Azure from a single control plane.

Policy as code, everywhere your servers run: Azure Arc extends Azure Policy to on-prem, AWS, and GCP resources, with pre-built CIS and security baselines included.

AVD, off Azure: Azure Virtual Desktop for hybrid environments turns any Azure Arc-enabled Windows VM or physical server into a session host.
QUICK LINKS:
00:00 — Azure Arc in hybrid environments
00:46 — Transitioning to Azure Arc
02:35 — Unified management
03:43 — How to bring in servers and containers
04:48 — Inventory management
05:30 — Patching
06:48 — Auto-manage future updates
08:25 — One-time update
09:32 — Configuration in a hybrid environment
11:05 — Auditing Windows machines
11:34 — Microsoft Defender for Cloud
13:06 — Desktop virtualization
13:51 — Wrap up

Link References: For more information, go to https://aka.ms/AzureArc

Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

Video Transcript:
- If you're managing servers and containers today, you're probably operating across on-prem and multiple clouds, using different tools for each. Azure Arc changes that by providing a single way to manage servers, Kubernetes, and containers across Linux and Windows, on-prem, in any cloud, and at the edge. Since launching in 2019, Azure Arc has gained strong momentum, enabling consistent patching, configuration, compliance, and advanced resilience features like remote recovery even for machines that cannot boot, and more.
And to explore how Azure Arc works in real hybrid environments, I’m joined by our resident management expert, Satya Vel. Welcome. - Hi, Jeremy. It’s great to be on the show. It’s been a while. - Yeah, it has been a while. Thanks for joining us today. And why don’t we jump right into this? So if I’m coming from maybe a traditional server management background using things like Ansible, VMware vSphere, maybe System Center, what does it take then to transition to Azure Arc, and why would I do it and is it worth the effort? - That’s a fair question. Those are all proven powerful tools. That said, it’s challenging moving between multiple tools to manage what you have. What we are seeing today is more of a people and process change. Most enterprises are now hybrid by default, on-prem, multi-cloud, multiple operating systems managed by a central operations team. And what those teams want most is consistency. Azure extends its management capabilities to servers and Kubernetes clusters wherever they run using Azure Arc. That’s where the value of cloud native innovation shows up, beyond basic monitoring of servers and clusters, like the health and status of each resource. With Azure Arc, you can collect richer operational and security data and query it at a massive scale. All these are now actionable insights. You can use them to improve your security posture to close vulnerabilities faster. They’ll let you more easily fix compliance drift to realign resources with your policies and maintain day-to-day operations. This includes modern patching, all applied across your multi-cloud and hybrid estate. And finally, Azure Arc centralizes governance by bringing consistent tags for grouping along with unified identity and access management using RBAC for connected resources. That way everything is controlled the same way regardless of where it runs from a single control plane without duplication or drift. 
So to answer your earlier question, it is totally worth it, and Azure Arc is really the glue that brings it all together. - Okay, so why don't we make this real for everyone watching? Can you show us the unified management experience and what that looks like with Azure Arc? - Sure thing, and that's the best part. In fact, here I'm managing my on-prem and multi-cloud environment using Azure services enabled by Azure Arc. Notice I have everything from a Windows server to Kubernetes clusters running on AWS to different Linux distros. There's even a Windows client desktop VM and more. All right here. And I can drill into any of these items to see its specs as well as what's configured. I can take a look at whether it's compliant with my configuration policies. For example, this test resource has a few non-compliant policies that I might want to look into. And the great thing is everything is in one spot. I don't need to move between consoles to see everything. Once these resources are enrolled, everything is automated and rule-based. It can look for servers and workloads as they are provisioned or updated, and monitor them 24/7. Then based on the configuration status it finds, it can take actions and get items into a compliant state. - Okay, so we're going to get to what the management experiences look like in a minute, but let's go back a step. So what happens if I've got infrastructure and I want to bring that into Azure Arc? What does that experience look like? - This process is super straightforward. Let me show you. You can bring servers and containers running in any cloud, on-premises, and on any hypervisor under management with Azure Arc. To onboard resources to Azure Arc, we have a few different methods. The any-environment option is the most flexible, where you can use scripts for Linux and Windows, or an installer. This is a lightweight agent that you can install on your Linux and Windows servers.
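For reference, the Linux onboarding script described here boils down to installing the Connected Machine agent and running azcmagent connect. The following is a hedged sketch with placeholder values for your environment; verify the download URL and flags against the current azcmagent documentation before use.

```bash
# Download and run the Connected Machine agent installer (Linux),
# then connect the machine to Azure Arc. Angle-bracket values are
# placeholders; tags are optional but useful for inventory filtering later.
wget https://aka.ms/azcmagent -O ~/install_linux_azcmagent.sh
bash ~/install_linux_azcmagent.sh

azcmagent connect \
  --subscription-id "<subscription-id>" \
  --resource-group "<resource-group>" \
  --tenant-id "<tenant-id>" \
  --location "<region>" \
  --tags "platform=onprem,env=prod"
```

The same connect step works regardless of how the agent package is delivered, which is why configuration tools like Ansible can drive it at scale.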
You can use your preferred deployment method to run the scripts on your servers and clusters, like this one for Linux, which downloads the agent, installs it, and connects it to Azure Arc. And if you have existing tools like Ansible Automation Controller, formerly known as Ansible Tower, we have published a playbook that makes it super simple to onboard your machines. And this playbook is published in Ansible Galaxy, which is the official community hub. - Okay, so now we've got everything in. Now moving into the next thing that people manage a lot every day, inventory. So how does Azure Arc change that? - So I briefly showed the different locations and platforms that could run under Azure Arc. But there's more to it. All my servers and clusters are in one view. It spans on-prem as I search for Azure Local, then I'll filter for AWS as well as GCP services. And I can see Azure VMs plus my on-prem servers listed together with consistent tagging and status information. I define everything based on their location and platforms in Azure, so it's super easy to see where everything is running, and there's less chance that any infrastructure falls through the cracks. - Beyond inventory management, something else that we do every day is patch management. So can Azure Arc handle patch management for servers and infrastructure outside of Azure? - Absolutely. This is an area where Azure Arc can help a lot. Today, patching often means different tools for different environments: WSUS or SCCM for Windows, scripts for Linux, or separate cloud portals. With Azure Arc, this all happens consistently from one place. You can see Azure Update Manager, which I have opened here. Each server has an update status indicating if it's got pending updates or not. Azure Update Manager continuously assesses the update compliance of your managed servers on a schedule. And you can manually trigger assessments by selecting resources and hitting Check for updates.
Now, you can see I have both Linux and Windows machines missing updates, and even though these are different OS types, I can update them together with just a few clicks if I want. But before I do that, notice this on-prem Windows Server 2016 machine that needs to be updated. Here, a benefit of managing your Windows Server and SQL Server infrastructure through Azure Arc is that the service offers Extended Security Updates, so you can run them longer in support without disruption to business-critical applications. Let's get back to updating these machines. The nice thing is that you only have to set the right policy and logic one time to manage updates automatically in the future. To save a little time, I'll select every machine. From here, I can schedule updates for these resources, where first I'll fill in the basics for my subscription and resource group, then the instance details like the configuration name and the region. The maintenance scope using the guest option lets me target my resources. Then under schedule, I can select the start date as well as the time, how many hours and minutes I want the maintenance window to be, and the frequency of repeats in hours, days, weeks, or months. Then in the resources tab, if I want to add more servers, I can group everything I want in the same maintenance schedule. Likewise, you'd use this grouping for staggered rollouts. Importantly, using dynamic scopes, I can also make sure that any new resources are targeted as they come online based on defined filters like the resource groups they're in, the resource types, locations, operating systems, or tags. In updates, I can target the type of updates I want, for example, only critical and security updates. Finally, I can add pre- and post-events to run before and after the update, like redirecting an app to an informational page saying that the resource is being serviced and when it'll be back online. Of course, I can tag this as well. And then I just need to review and click create.
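Conceptually, a dynamic scope is a saved filter that is re-evaluated against resource metadata as machines come online, so new resources join the maintenance schedule without manual selection. The toy sketch below illustrates only the idea; it is not Azure Update Manager's implementation, and all names are made up.

```python
# Toy illustration of dynamic scoping: a saved filter is evaluated against
# resource metadata, so newly onboarded machines that match are picked up
# by the maintenance schedule automatically. Illustrative only.

def matches_scope(resource: dict, scope: dict) -> bool:
    """Return True if the resource satisfies every criterion in the scope."""
    for key, allowed in scope.items():
        if resource.get(key) not in allowed:
            return False
    return True

# The saved filter: resource group, OS, and location criteria.
scope = {
    "resource_group": {"prod-rg"},
    "os": {"Linux", "Windows"},
    "location": {"eastus"},
}

# A snapshot of the inventory; dev-01 is outside the target resource group.
inventory = [
    {"name": "web-01", "resource_group": "prod-rg", "os": "Linux", "location": "eastus"},
    {"name": "sql-01", "resource_group": "prod-rg", "os": "Windows", "location": "eastus"},
    {"name": "dev-01", "resource_group": "dev-rg", "os": "Linux", "location": "eastus"},
]

in_scope = [r["name"] for r in inventory if matches_scope(r, scope)]
print(in_scope)  # → ['web-01', 'sql-01']
```

A machine onboarded tomorrow with matching metadata would appear in `in_scope` on the next evaluation, which is the "set it and forget it" behavior discussed next.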
- And my favorite thing I just saw there was the dynamic scoping that you can apply as a set-it-and-forget-it setting, basically. So what happens, though, if I've got an update that's really critical that I need to push out immediately, can I do that? - Not a problem. You can do that as well. For that, you'll select one or more resources and choose one-time updates so that it gets applied immediately. I just need to confirm the machines, then choose the update type or any exclusions that I want to define. I'll keep everything in scope here. Then in properties, I can determine the reboot behavior I want and the maximum maintenance window time in minutes. From there, I can review and install. That will push the update to my selected servers, whether they are in the cloud or on-premises, so it's one place to get resources into update compliance. And in case you want to stagger updates over a longer period of time for large patch management jobs, you can orchestrate updates using groups. - So the main thing here is you control the timing, like only patching during off hours, and approvals, and you get to decide which updates to apply, so it's super flexible. Now, software updates are one type of configuration management, but what other types of configurations can you manage here? - Configuration management in hybrid environments is complex. You traditionally use Group Policy, Desired State Configuration, or scripts for Windows, and then separate tools like Ansible, remote scripting, or manual commands over SSH for Linux. All this can be done centrally from Azure Arc. It extends Azure Policy to any resource. And you can use Microsoft-provided built-in policy baselines covering common security requirements. For example, the security baseline contains best practices and controls that we've defined for cloud services running on Linux and Windows.
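Assigning such a baseline can also be scripted rather than clicked through. Below is a hedged Azure CLI sketch: the scope and initiative ID are placeholders rather than real values, and a managed identity is included because remediation-style policies require one.

```bash
# Assign a built-in guest configuration initiative at resource-group scope
# so Azure Policy evaluates Arc-connected machines against it.
# Placeholders throughout; list real initiative IDs first with:
#   az policy set-definition list --query "[].{name:name, display:displayName}"
az policy assignment create \
  --name "linux-security-baseline" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>" \
  --policy-set-definition "<initiative-definition-id>" \
  --mi-system-assigned \
  --location "<region>"
```

Because Arc-enabled machines are Azure resources, the same assignment covers on-prem, AWS, and GCP servers in that resource group.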
And above that, you can also see the CIS Benchmark policy, which is an internationally recognized standard spanning OS platforms used to protect against cyber attacks. I'll apply this baseline, then I'll choose the Red Hat Enterprise Linux 9 Benchmark. And searching across 300 CIS Benchmark policies, I'll look for passwords. And there are 24 policies defined. And then for firewall, you can see four more. And these are just a few examples that are pre-configured. So once you assign these to your resources, Azure continuously monitors each machine for compliance. So you can use policy as code across your entire estate with Azure Policy controls that automatically stay current as standards like CIS evolve. We also recently added the ability to audit and enable WinRE through Azure Arc, improving recoverability even for machines that can't boot. As you can see, there are a couple of new policies for auditing machines that do not have WinRE enabled and configuring WinRE on Windows machines. With Quick Machine Recovery on Windows Server 2025, that also means for broader issues with known fixes, we'll automatically recover machines that are not bootable. - And that's really a great resiliency option. But what about security, compliance, and configuration assessments? Can we do something there? - For that, you can use Microsoft Defender for Cloud. This lets you standardize security agents and settings across machines and containers wherever they run. In the Defender portal, you can see that the same way Azure resources spanned Azure, AWS, GCP, and other environments, those same resources are visible here too. Defender continuously assesses connected resources for security posture. This includes what I showed before in the security baseline and CIS Benchmark. It detects threats in real time with associated security alerts and how they are trending. You get a complete breakdown by compute with your virtual machines and their associated risks.
And the same is true for your connected containers running in Kubernetes. If I move over to cloud assets, here you can see all the virtual machines and Kubernetes clusters that we saw in Azure Arc. And clicking into any of these, like this Ubuntu VM, will show me all of its details. Scrolling down, I get a view of its risk factors. And below that, you'll see that this one has 82 risk-based recommendations to improve its security. - And one of the big upsides of Microsoft Defender is that shared visibility, so everything logs to the same place. So if you think about assumed breach, it means that you won't have any blind spots as attackers are moving laterally through your environment. So that means security teams, they see what you see. So why don't we move on, though, to desktop virtualization. What can Azure Arc do to help me there? - Sure, Azure Arc unlocks the ability to run Azure Virtual Desktop, or AVD for short, outside of Azure, so it can run on your own infrastructure, either via Azure Local or something new we recently announced: Azure Virtual Desktop for hybrid environments. This means any existing on-prem server can be configured as an AVD session host as long as it's attached to Azure Arc. The management is in the VM layer using a management extension. It's flexible, and Nutanix AHV, VMware vSphere, Hyper-V, or physical Windows Server can all work. So with Azure Arc, you have full control over the entire infrastructure's lifecycle, from inventory to configuration management and policy enforcement, all from one place. And the good news is that if you own Software Assurance, you can access services enabled by Azure Arc as part of your license for inventory, configuration, and update management. - That was a great tour and update of Azure Arc. So thanks for joining us today, Satya. And if you want to learn more about Azure Arc and try it out for yourself, just go to aka.ms/AzureArc for more information.
Or as an admin, search for Arc, A-R-C, in the Azure Portal to get started. And keep watching Microsoft Mechanics for the latest updates. We'll see you again soon.

Introducing cert-manager for Azure Arc-enabled Kubernetes: now in Public Preview
Today we're releasing a public preview of cert-manager for Azure Arc-enabled Kubernetes. It's an Arc extension that automates TLS certificate and trust bundle management for edge Kubernetes clusters. If you're running Kubernetes at the edge (in factories, retail stores, or remote sites), you've probably hit the certificate problem already. Certificates expire. Each cluster has its own tooling. Nobody owns the renewal process until something breaks. We routinely hear from customers that certificate issues are a common source of unplanned outages and last-minute firefighting, especially as workload counts grow. This extension packages the open-source cert-manager and trust-manager into a managed Arc extension with Microsoft support. You get automated lifecycle management and trust distribution without having to run and maintain these tools yourself. What it does The extension bundles two CNCF-graduated projects, cert-manager and trust-manager, into a single Arc-K8s extension that you install once per cluster. From there: 1. You can issue, renew, and rotate certificates automatically. You do not need to manage them manually. 2. You can distribute trusted CA certificates consistently across namespaces. No more per-workload trust configuration. 3. You choose the CA issuer: built-in self-signed for dev/test, or your enterprise PKI for production. 4. The extension ships with enterprise support, regular security patches, and proactive maintenance from the Microsoft team. Why we built it We built Microsoft cert-manager for Azure Arc-enabled Kubernetes to address three recurring problems we saw in real hybrid and edge environments. Problem 1: Manual certificate issuance. Many organisations still issue, install, and renew certificates through manual steps across clusters and namespaces. That creates operational overhead, slows teams down, and increases the risk of outages when certificates expire or are configured incorrectly. The answer is automation.
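In cert-manager's model, that automation is declarative: a certificate is a Kubernetes resource and the controller handles issuance and renewal. A minimal sketch follows, assuming a ClusterIssuer named selfsigned-issuer already exists; all names here are illustrative, not part of the extension's defaults.

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: edge-app-tls
  namespace: factory-line-1
spec:
  secretName: edge-app-tls     # cert-manager writes the signed key pair here
  duration: 2160h              # 90-day certificate
  renewBefore: 360h            # rotate 15 days before expiry, automatically
  dnsNames:
    - edge-app.factory.internal
  issuerRef:
    name: selfsigned-issuer    # swap for your enterprise PKI issuer in production
    kind: ClusterIssuer
```

Once applied, renewal and rotation happen without tickets or scripts; workloads just mount the secret.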
With cert-manager running as an Arc-enabled extension, teams can automate certificate issuance, renewal, and rotation through Kubernetes-native workflows instead of relying on tickets, scripts, and manual intervention. Problem 2: Fragmented approaches to automation. Even when teams try to automate, they often end up with a mix of scripts, custom controllers, product-specific setups, and one-off operational patterns. That fragmentation makes certificate management harder to scale, harder to standardise, and harder to operate consistently across environments. The answer is to standardise on cert-manager. It provides a common, Kubernetes-native approach to certificate lifecycle management, helping teams reduce tool sprawl, align on a consistent operating model, and simplify how certificates are managed across clusters. Problem 3: Maintenance and upgrade burden for open-source cert-manager. cert-manager is a powerful open-source project, but many organisations do not want the ongoing burden of packaging, validating, patching, upgrading, and supporting it themselves as a production dependency. That can create operational risk, delay updates, and make long-term ownership unclear. The answer is a Microsoft-supported Arc-enabled extension. Microsoft cert-manager for Azure Arc-enabled Kubernetes gives customers a supported way to use cert-manager, with Microsoft handling packaging, delivery, and ongoing maintenance so teams can adopt the capability without taking on the full operational burden of managing the OSS component themselves. What’s in the public preview Here’s what you get: Certificate lifecycle automation with cert-manager: issuance, renewal, rotation, all handled for you. Trust bundle distribution with trust-manager: push trusted CA certs to every namespace that needs them. Self-signed or external CA. Start with the built-in CA, swap in your enterprise PKI when you’re ready. Secure by default. 
We turned on the security settings you'd want enabled anyway: TLS enforcement, least-privilege RBAC, restricted pod security. Tested at the edge. Validated on AKS Edge Essentials, AKS on Azure Local, and several third-party Kubernetes distros. Works offline. Fits into your Arc stack If you're already running Azure IoT Operations or Azure Monitor on Arc-enabled clusters, the extension handles TLS between those services with minimal setup. No custom certificate plumbing required: install the extension and the other Arc components pick it up. Get started The extension is available now in public preview. 👉 Documentation and quickstart

Resource Guide: Making Physical AI Practical for Real‑World Industrial Operations
Microsoft's adaptive cloud approach enables organizations to turn operational technology (OT) data into intelligent actions, autonomously, without requiring everything to live in the cloud, by unifying the cloud-to-edge management plane, data plane, and intelligence platform. At the center of this approach are key foundational technologies:

Azure IoT Hub: direct-to-cloud device management + telemetry ingestion
Azure IoT Operations: industrial connectivity + edge data plane
Microsoft Fabric: unified analytics + real-time intelligence
Foundry Local: on-device AI inferencing runtime

Microsoft Azure IoT Gartner winner: Microsoft named a Leader in the 2025 Gartner® Magic Quadrant™ for Global Industrial IoT Platforms

See it all come together Before diving into each component, watch this end-to-end demo showing how Azure IoT Operations, Azure IoT Hub, Microsoft Fabric, and Foundry Local work as one stack across the edge-to-cloud lifecycle: Making industrial AI practical for real-world operations with adaptive cloud. How these components work together Azure IoT Operations and Azure IoT Hub collect real-time data from operational assets and send semantically ready, modeled data to Microsoft Fabric, where it's contextualized with enterprise data for downstream analytics. Microsoft Foundry extends to the edge through Foundry Local, so the same tooling used to deploy and manage AI models in the cloud applies to edge use cases. All of it integrates into Azure Resource Manager, bringing OT devices, assets, and edge AI models into the same management and security paradigm as every other Azure-managed resource. This blog walks through where to get started with each product capability: 1.
Manage Cloud-Connected Devices and Telemetry with Azure IoT Hub Azure IoT Hub is a fully managed cloud service that enables secure bidirectional communication, device-to-cloud telemetry ingestion, cloud-to-device command execution, per-device authentication, remote management, and more. Telemetry from IoT Hub can also be routed downstream into analytics platforms like Microsoft Fabric for visualization or AI modeling. Recommended Usage: Devices that utilize IoT Hub are distributed, stand-alone devices with fixed functions. These devices typically do not require cloud-managed containerized workloads or cloud-managed proximal industrial protocol connectivity. Examples of appropriate device-to-cloud IoT Hub endpoint devices include water monitoring stations, vehicle telematics, distributed fluid level sensors, etc. Resources Current in-market services overview: IoT Hub: What is Azure IoT Hub? - Azure IoT Hub DPS: Overview of Azure IoT Hub Device Provisioning Service - Azure IoT Hub Device Provisioning Service ADU: Introduction to Device Update for Azure IoT Hub Building scalable solutions with Azure IoT platform: Best practices for large-scale IoT deployments - Azure IoT Hub Device Provisioning Service Scale Out an Azure IoT Hub-based Solution to Support Millions of Devices - Azure Architecture Center Azure IoT Hub scaling Try out our preview of new IoT Hub capabilities (integration with Azure Device Registry and Certificate Management) Learn more about these capabilities on our blog post: Azure IoT Hub + Azure Device Registry (Preview Refresh): Device Trust and Management at Fleet Scale… Integration with Azure Device Registry (preview): Integration with Azure Device Registry (preview) - Azure IoT Hub Microsoft-backed X.509 certificate management (preview): What is Microsoft-backed X.509 Certificate Management (Preview)? - Azure IoT Hub How to start with the preview: Deploy IoT Hub with ADR integration and certificate management (Preview) - Azure IoT Hub 2.
Connect Industrial Assets with Azure IoT Operations Azure IoT Operations provides a unified data plane for the edge that runs on Azure Arc–enabled Kubernetes clusters and supports open industrial standards. It allows organizations to connect and capture equipment telemetry, normalize OT data locally, route hot-path signals to real-time analytics, securely manage layered industrial networks, and more. Edge‑processed data can then be sent upstream to Microsoft Fabric for AI‑driven analysis. Recommended Usage: Azure IoT Operations is intended to be the data plane for an adaptive cloud deployment, extending the management, data, and AI capabilities of the Microsoft cloud to an on-prem device. This device binds to these cloud planes, providing a platform for local data processing and intermittent connectivity. The target for these devices ranges from a small gateway-style PC to a full data center. Azure IoT Operations endpoints enable cloud-managed containerized workloads and cloud-managed proximal industrial protocol connectivity. Examples of appropriate adaptive cloud and Azure IoT Operations endpoints include on-robot computers, industrial machine controllers, retail store sensor/vision processing, and top-of-factory site infrastructure for line-of-business applications.
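To make the "normalize OT data locally" step concrete, here is a small stdlib-only sketch of shaping a raw sensor reading into a consistent, modeled payload before it leaves the edge. The field names are illustrative assumptions, not an Azure IoT Operations schema.

```python
import json
from datetime import datetime, timezone

# Illustrative edge normalization: take a raw reading from an industrial
# protocol (e.g. an OPC UA tag value) and shape it into a consistent
# payload before routing it upstream. Field names are examples only.

def normalize_reading(asset_id: str, tag: str, raw_value: float, unit: str) -> str:
    payload = {
        "assetId": asset_id,
        "tag": tag,
        "value": round(raw_value, 3),   # trim protocol noise to sensor precision
        "unit": unit,                   # make units explicit for downstream analytics
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

msg = normalize_reading("press-line-7", "hydraulic.pressure", 187.4562, "bar")
print(msg)
```

Because every asset emits the same shape, Fabric can contextualize the stream without per-machine parsing logic.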
Resources Azure IoT Operations Overview Azure IoT Operations Documentation Hub Quickstart: explore-iot-operations/quickstart at main · Azure-Samples/explore-iot-operations Open-source framework for scaling robotics from simulation to production on Azure + NVIDIA: microsoft/physical-ai-toolchain Demo video and how we built the demo: explore-iot-operations/quickstart at main · Azure-Samples/explore-iot-operations Edge-AI: microsoft/edge-ai: Production-ready Infrastructure as Code, applications, pluggable components, and… Latest Announcements & Blogs Making Physical AI Practical for Real-World Industrial Operations: Part 1 | Microsoft Community Hub Making Physical AI Practical for Real-World Industrial Operations: Part 2 | Microsoft Community Hub Unlock Industrial Intelligence | Microsoft Hannover Messe 2026 From pilots to production: How Microsoft and partners are accelerating intelligent operations 3. Advanced Analytics with Microsoft Fabric Microsoft Fabric delivers a unified, end‑to‑end analytics platform that transforms streaming OT telemetry into real‑time insights and live dashboards. Fabric Operations Agents monitor industrial signals to recommend targeted actions, while Fabric IQ provides a shared semantic foundation that enables AI agents to reason over enterprise data with business context. Together, Fabric turns live industrial data into AI‑powered operational intelligence. Resources Get Started with Microsoft Fabric Learning Path Fabric Real-Time Intelligence documentation - Microsoft Fabric | Microsoft Learn Create and Configure Operations Agents - Microsoft Fabric | Microsoft Learn Fabric IQ documentation - Microsoft Fabric | Microsoft Learn 4. Run AI Models On‑Device with Foundry Local Foundry Local extends on‑device AI to Arc‑enabled Kubernetes edge clusters, providing a Microsoft‑validated inferencing layer for running AI models in industrial, disconnected, or sovereign environments.
Resources Foundry Local on Azure Local Documentation Participate in Foundry Local on Azure Local preview form Foundry Local on Azure Local: HELM deployment Demo Customer Stories Chevron: Chevron plans facilities of the future with Azure IoT Operations Husqvarna: Husqvarna Group Boosts Operational Efficiency with Azure Adaptive Cloud Ecopetrol: Azure IoT Operations and Azure IoT for energy help Ecopetrol optimize energy distribution while lowering operational costs P&G: Procter & Gamble cuts model deployment time up to 90% with Azure IoT Operations Toyota: Toyota Industries innovates its paint shop processes with Azure industrial AI and Azure IoT Hub

Announcing Public Preview of Argo CD extension on AKS and Azure Arc enabled Kubernetes clusters
We are excited to announce the public preview of the Argo CD extension for Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters. As GitOps becomes the standard for deploying and operating applications at scale, enterprises need a way to implement GitOps while staying compliant with best practices for security and identity management. The Argo CD extension delivers on this need across three pillars. Trusted Identity and Secure Access The Argo CD extension integrates with Microsoft Entra ID to provide a secure, enterprise-ready experience for: Secure authentication using Workload Identity federation to Azure Container Registry (ACR) and Azure DevOps. This removes the need for long-lived credentials or hard-coded secrets in Git repos, moving your CD pipelines closer to a true zero-trust architecture. Single sign-on (SSO) using existing Azure identities. Enterprise-Grade Hardening and Security This preview introduces several enhancements to improve your security posture: To minimize the attack surface, the extension's images are built on Azure Linux, specifically engineered for reduced CVEs and improved baseline security. Opt in to automatic patch releases to stay current on security fixes while maintaining full control over your change management processes. Parity with upstream Argo CD The Argo CD extension is designed to remain fully aligned with the upstream Argo CD open‑source project, so teams can use Argo CD as they do today, with support for: Configuring the Argo CD extension with high availability (HA) for production‑grade deployments of critical workloads. Using hub‑and‑spoke architecture for multi‑cluster GitOps scenarios. Application and ApplicationSet, enabling automated and scalable application delivery across large fleets of clusters. Getting Started We invite you to explore the Argo CD extension and provide feedback as we continue to evolve GitOps capabilities for Kubernetes.
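Because the extension tracks upstream, standard Argo CD resources work unchanged. As a sketch, here is an Application pointing at the upstream example repo; the repo, path, and namespaces are illustrative and should be adjusted for your environment.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert out-of-band changes back to the Git state
```

The same manifest, wrapped in an ApplicationSet, is how delivery scales across a fleet of Arc-connected clusters in the hub-and-spoke model mentioned above.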
To get started today, you can enable the extension on your clusters using the Azure CLI. Argo CD extension management via the Azure Portal will be available in a few weeks.

Azure Arc Server Mar 2026 Forum Recap
Please find the recording for the monthly Azure Arc Server Forum on YouTube! During the March 2026 Azure Arc Server Forum, we discussed: Deploying Ansible Playbooks through Machine Configuration as Azure Policy (Learn more: Announcing Private Preview: Deploy Ansible Playbooks using Azure Policy via Machine Configuration) and sign up at https://aka.ms/ansible-arc-signup New MECM (SCCM) connector supporting Cloud Native Server Management, sign up for Private Preview at aka.ms/arc-mecm/preview Automatic Agent Upgrade at Scale Enablement (Learn more: Run the latest Azure Arc agent with Automatic Agent Upgrade (Public Preview)) TPM-backed Identity for Secure Onboarding, sign up for Private Preview at https://aka.ms/arc-tpm-backed-identity/preview/ To sign up for the Azure Arc Server Forum and newsletter, please register with contact details at https://aka.ms/arcserverforumsignup/. For the latest agent release notes, check out What's new with Azure Connected Machine agent - Azure Arc | Microsoft Learn. Our April 2026 forum will be held on Thursday, April 16 at 9:30 AM PST / 12:30 PM EST. We look forward to you joining us, thank you!

Azure Local expands to sovereign-scale infrastructure with disaggregated deployments
As organizations accelerate digital transformation across datacenters, sovereign environments, and edge locations, infrastructure architectures must evolve to meet new operational and regulatory demands. The first feature update of Azure Local in CY 2026 (version 2604) marks a significant step forward, expanding Azure Local as a platform for sovereign private cloud infrastructure and introducing larger-scale disaggregated deployment architectures, expanded storage ecosystem partnerships, and simplified identity capabilities that unlock entirely new infrastructure scenarios from edge locations to enterprise-scale environments.

This release is focused on enabling:
- Sovereign private cloud deployments at scale, from single node up to multi-rack infrastructure
- Infrastructure modernization through SAN reuse and disaggregated architectures
- Simplified edge deployment without Microsoft Active Directory dependencies
- Faster lifecycle operations across deployment and update workflows

Introducing disaggregated larger-scale deployments using SAN storage

Azure Local now supports a disaggregated infrastructure architecture, allowing customers to deploy compute and storage resources independently, while continuing to benefit from an Azure-consistent management and operational experience. This enables organizations to scale infrastructure more flexibly, separating compute and storage to align with workload demands and long-term growth.
This architecture enables:
- Independent scaling of compute nodes and storage infrastructure
- SAN-only and hybrid storage architectures for Azure Local infrastructure and workloads
- Fibre Channel (FC) connectivity support beginning with 2604 (iSCSI coming soon)

With disaggregated deployments and SAN storage, Azure Local clusters can now scale from a single node at the edge to multi-rack environments spanning beyond 16 nodes and up to thousands of nodes, addressing growing demand for large-scale deployments across sovereign, government, defense, and regulated environments. This unlocks a new class of Azure-consistent infrastructure deployments at sovereign scale. This capability is generally available with the release of Azure Local 2604.

General Availability of SAN Support for Azure Local

Support for attaching SAN storage to Azure Local was introduced as a public preview back in November 2025. Today this brownfield expansion capability is generally available and allows external SAN devices to be introduced into already-deployed Azure Local instances via Fibre Channel (FC), supporting virtual machines, Kubernetes environments, and Azure Virtual Desktop workloads without requiring disruptive infrastructure changes or a full system refresh. Azure Local instances now support the coexistence of Storage Spaces Direct volumes and external SAN volumes.

Support for SAN-attached deployments allows organizations to:
- Reuse existing enterprise SAN investments
- Modernize infrastructure without replacing existing storage estates
- Manage rising disk costs associated with hyperconverged architectures
- Enable workload scenarios that depend on massive storage requirements

These capabilities for disaggregated deployments and SAN storage are backed by a strong ecosystem of hardware partners.
DataON, Dell Technologies, Everpure, HPE, Hitachi Vantara, Lenovo, and NetApp are working with Microsoft to deliver configurations, giving customers more flexibility in how they design and scale their infrastructure.

General Availability of Local Identity with Azure Key Vault

While disaggregated architectures primarily target sovereign and centralized datacenter deployments, Azure Local 2604 also introduces a major advancement for distributed and edge scenarios. With the general availability of Local Identity with Azure Key Vault, Azure Local can now be provisioned without infrastructure dependencies on Microsoft Active Directory, enabling simplified deployment in disconnected, air-gapped, and regulated environments. This simplifies deployment and adoption by removing the need for extra hardware running domain controllers and removing the complexity of firewall configurations when installing in isolated network environments.

Azure Local 2604 adds support for deploying rack-aware clusters using Local Identity with Azure Key Vault. This combines reduced requirements with the high availability that customers demand across manufacturing, energy, and other industries. This capability removes one of the key barriers to deploying Azure-consistent infrastructure in sovereign and edge environments.

Pricing Changes

Pricing for multi-rack and sovereign-scale deployments is being introduced as part of this release. Customers should connect with their Microsoft account team to learn more about pricing, configuration options, and early access programs as these offerings continue to evolve.

Getting started

Release 2604 is available for both existing and new Azure Local instances.
- Review the release notes for Azure Local 2604 here
- Learn more about disaggregated deployments here
- Learn more about SAN attach here
- Learn more about Local Identity with Azure Key Vault here.
Learn more about hardware configurations that support disaggregated deployments using the solutions catalog, or learn directly from our partners:
- DataON: "DataON Premier Solutions for Azure Local provide a premium Azure Local experience that includes deployment, integration, training, and white glove service & support. Our goal is not only to get you up and running quickly but also to help your team be confident in managing Azure Local."
- Dell Technologies: "Coming soon, Dell Private Cloud–Microsoft enables a modern disaggregated architecture, simplifying operations across Dell PowerEdge compute, Dell PowerStore storage, and Azure Local." "Available now, Dell PowerStore delivers high-performance, scalable, and resilient storage for Azure Local, with support for Dell Private Cloud coming soon to make it easier to streamline operations for storage, compute, and your Azure Local license."
- Everpure: "Azure Local now supports external storage with Everpure FlashArray, offering Azure Local customers unprecedented levels of scale, performance and efficiency with the added benefit of seamless hybrid cloud integration with Everpure Cloud in Azure."
- Hitachi Vantara: "Hitachi Vantara VSP and VSP One Block, fully validated to meet Microsoft's Azure Local storage requirements, deliver enterprise SAN reliability for Azure Local."
- HPE: "HPE ProLiant Compute Premier Solutions for Azure Local enable customers to gain full control over data residency, and accelerate innovation with industry-leading performance, security, and management automation." "HPE Alletra Storage MP B10000 integrated with Azure Local delivers a unified, Azure-managed experience with the simplicity of Azure Local plus the advanced data services of a modern enterprise storage platform."
- Lenovo: "Lenovo is expanding its Azure Local portfolio to support disaggregated infrastructure designs that deliver greater choice across compute and storage. The ThinkAgile Disaggregated Solution for Microsoft Azure Local with new compute-only configurations on ThinkAgile MX Series enables customers to integrate ThinkSystem DM, DS, and DG Series storage arrays or bring their own Azure Local validated third-party SAN arrays into new or existing Azure Local environments, allowing fully disaggregated, independent scaling using enterprise-class Lenovo solutions for sovereign private cloud deployments and emerging AI workloads."
- NetApp: "With Azure Local, NetApp delivers support across NetApp® AFF, ASA, and FAS systems."

Thank you!

This first feature release of 2026 is packed with innovation for Azure Local, and we can't wait for you to try it and share feedback. We are committed to listening to your feedback and delivering the next wave of capabilities in a continuously evolving world. Thank you to all our customers who trust Azure Local to run their business, and to our engineering partners for the incredible collaboration in building solutions together.

Bringing AI to the Factory Floor with Foundry Local - Now in Public Preview on Azure Local
Key capabilities in this preview

Foundry Local exposes standard REST and OpenAI-compatible APIs, enabling IT and AI teams to deploy and operate local AI workloads using familiar, cloud-aligned patterns across edge and on-prem environments. In this public preview, we deliver the following capabilities:
- Azure Arc extension for Foundry Local: Deploy and manage Foundry Local via an Azure Arc extension, enabling consistent install, configure, update, and governance workflows across Arc-enabled Kubernetes clusters, in addition to Helm-based installation.
- Built-in generative models from the Foundry Local catalog: Deploy pre-built generative models directly from the Foundry Local model catalog using a simple control-plane API request.
- Bring-your-own predictive models (ONNX) from OCI registries: Deploy custom predictive models (such as ONNX models) securely pulled from customer-managed OCI registries and run them locally.
- REST and OpenAI-compatible inference endpoints: Consume both generative and predictive models through standard HTTP endpoints.
- Multi-model orchestration for agent-style applications: Enable applications that coordinate multiple local models, for example generative models guiding calls to predictive models, within a single Kubernetes cluster.
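Because the inference endpoint is OpenAI-compatible, any standard chat-completions client can target it. A minimal Python sketch of the request shape; the in-cluster Service hostname and the model name below are assumptions for illustration, not values documented in this preview:

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Build an OpenAI-style /v1/chat/completions request for a local endpoint."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature for repeatable operator guidance
    }
    return url, payload

# Hypothetical in-cluster Service address and model name; substitute your own.
url, payload = build_chat_request(
    "http://foundry-local.foundry.svc.cluster.local:8080",
    "phi-4-mini",
    "Summarize the last inspection result.",
)
print(url)
print(json.dumps(payload))
# To send it, POST the JSON payload (e.g. with urllib.request, or the openai SDK
# pointed at base_url); auth requirements depend on how the extension is configured.
```

The same builder works unchanged whether `base_url` is a laptop's localhost or the Kubernetes Service on the Azure Local node.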
Running Foundry Local on Azure Local single-node gives you:
- A validated, supported hardware foundation for running AI inference at the edge, from compact 1U nodes on the factory floor to rugged form factors in remote sites, using hardware from the Azure Local catalog
- AKS on Azure Local as the deployment target, so Foundry Local runs as a containerized workload managed by Kubernetes, the same operational model you use for any other workload on the cluster
- GPU access through the NVIDIA device plugin on AKS, giving Foundry Local's ONNX Runtime direct access to the node's discrete GPU without requiring Windows or host-OS-level configuration

Two installation options for single-node deployment

The preview includes the Foundry Local Azure Arc extension, providing a consistent installation, deployment, and lifecycle management experience through Azure Arc, while also supporting Helm-based installation. Choose one of two installation paths:

Option 1 - Arc-enabled Kubernetes extension

Recommended when: your organization manages multiple Azure Local instances and wants Microsoft to handle the deployment lifecycle (version updates, configuration drift detection, health monitoring) through the Azure portal, without the team needing to manage Helm releases manually.

Arc-enabled Kubernetes extensions deploy and manage workloads on AKS clusters registered with Azure. The extension operator runs in the cluster and reconciles the desired state declared in Azure, which means you don't need direct kubectl or helm access to the node to push updates. This is the lower-operational-overhead path for OT teams who are not Kubernetes specialists. Once installed, the extension appears in the Azure portal under your AKS cluster's Extensions blade. Model updates and configuration changes are pushed by modifying the extension configuration in Azure; no shell access to the node is required.
For disconnected or intermittently connected deployments, the extension operator caches its desired state and continues operating; it reconciles with Azure when connectivity resumes.

Option 2 - Helm chart

Recommended when: your team manages AKS workloads with Helm or GitOps (Flux), and you need precise control over GPU resource allocation, node affinity, model pre-loading, or persistent volume configuration.

The Helm chart gives you full control over the deployment manifest. You decide exactly how much GPU memory is requested per pod, which node the inference pod is pinned to, and what StorageClass backs the model cache. This matters on a single-node Azure Local deployment where you're sharing one physical GPU between the inference workload and, potentially, other AKS workloads. With Helm you can also integrate with Flux for GitOps-managed deployment, which is useful when you manage multiple Azure Local single-node instances across plant sites and want to push model or configuration updates from a central Git repository.

Note: Verify the chart repository URL, chart name, and exact values.yaml parameters from the official Foundry Local documentation before deploying to production.
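As the note above says, the real chart parameters must come from the official documentation; the fragment below is purely an illustrative sketch of the kinds of knobs discussed here (GPU request, node pinning, model-cache StorageClass), and every key name in it is hypothetical:

```yaml
# HYPOTHETICAL values.yaml sketch; key names are illustrative only and are
# NOT the actual Foundry Local chart schema. Verify against the official docs.
inference:
  resources:
    limits:
      nvidia.com/gpu: 1        # request the node's single discrete GPU
  nodeSelector:
    kubernetes.io/hostname: azlocal-node-01   # pin to the single node
modelCache:
  persistence:
    enabled: true
    storageClassName: local-nvme   # back the model cache with local NVMe
    size: 200Gi
```

In a GitOps setup, a file like this would live in the central repository and be applied per site through a Flux HelmRelease.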
Choosing between the two

|  | Helm chart | Arc extension |
|---|---|---|
| Authentication | API key | Entra ID |
| Version upgrades | Manual helm upgrade or Flux | Automatic, managed by Microsooft |
| GitOps compatible | Yes (Flux HelmRelease) | Yes (via Azure Policy / desired state) |
| Requires cluster access | Yes | No (after initial registration) |
| Best for | Platform engineers, custom configs | OT-managed sites, multi-site fleets |
| Disconnected operation | Works after initial deploy | Works; reconciles on reconnect |
| Control plane | K8s-native management (kubectl) | K8s-native management + REST API control plane |

Early customer validation and key scenarios

Early customer validation is shaping the preview, helping ensure Foundry Local meets real-world requirements for latency, data control, and operating in constrained or disconnected environments across industries such as energy, manufacturing, government, financial services, and retail. Based on this early feedback, customers are prioritizing scenarios such as:

Sovereign and regulated:
- On-site inference with data, models, and processing under customer control
- Decision support in disconnected or restricted-network environments
- In-jurisdiction processing for sensitive records and casework
- Real-time detection and situational awareness within secure facilities

Industrial and critical infrastructure:
- Edge operations assistants combining sensor telemetry with conversational AI
- Low-latency quality inspection and process verification on factory floors
- Predictive maintenance for remote or intermittently connected equipment
- Local safety monitoring and operational oversight close to systems

This input is guiding improvements across deployment flows, the model catalog experience, hardware coverage, telemetry visibility, and documentation, so teams can evaluate and adopt Foundry Local more quickly and confidently in the environments above.
Examples:
- CNC anomaly explanation: A machine vision system on a CNC line classifies a surface defect and passes the classification JSON to the Foundry Local endpoint. Phi-4-mini generates a plain-language root-cause hypothesis for the operator, referencing the specific machining parameters.
- Disconnected safety procedure lookup: An offshore platform or remote mine site loses WAN connectivity. The Foundry Local pods continue serving requests from the AKS cluster on the Azure Local node: Kubernetes keeps the pods running, the model is already on the local PersistentVolume, and no external dependency is required. Workers query safety procedures (LOTO sequences, chemical handling) from an intranet application backed by the same inference endpoint. Qwen2.5-7B fits within 8–12 GB VRAM and supports a 32K-token context window, making it viable for inline procedure retrieval without a separate vector database, which is useful when plant-floor infrastructure is minimal.

Foundry Local for Devices and Foundry Local on Azure Local: What's Different

Foundry Local for devices reached general availability for developer devices: Windows 10/11, macOS (Apple Silicon), and Android. That release targets a specific scenario: a developer or end user running AI inference on their own machine, with the model executing locally on their CPU, GPU, or NPU. The install is a single command (winget or brew), the service runs directly on the host OS, and no Azure subscription or infrastructure is required. It is a developer tool and an application-embedded runtime. A general overview of Foundry Local is available here: What is Foundry Local? - Foundry Local | Microsoft Learn

The public preview for Azure Local single node is a different deployment target built for a different operational context. The runtime is the same (ONNX Runtime, the same model catalog, the same OpenAI-compatible API), but where it runs, how it is deployed, and how it is managed are entirely different.
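The CNC example is a small agent-style orchestration: a predictive model's classification output becomes the prompt for a generative model. A sketch of that glue logic, assuming a hypothetical shape for the classification JSON (the field names are illustrative, not a documented schema):

```python
def build_root_cause_prompt(classification: dict) -> str:
    """Turn a defect-classification result into a prompt for the generative model.

    The keys (defect_type, confidence, machine_params) are assumptions for
    illustration; use whatever schema your vision system actually emits.
    """
    params = ", ".join(f"{k}={v}" for k, v in classification["machine_params"].items())
    return (
        f"A surface defect of type '{classification['defect_type']}' was detected "
        f"with confidence {classification['confidence']:.0%}. "
        f"Machining parameters: {params}. "
        "Give the operator a short, plain-language root-cause hypothesis."
    )

# Example classification JSON as the vision system might pass it along:
result = {
    "defect_type": "chatter marks",
    "confidence": 0.93,
    "machine_params": {"spindle_rpm": 9200, "feed_mm_min": 410},
}
print(build_root_cause_prompt(result))
# The returned prompt would then be sent as the user message of a
# chat-completions call to the local OpenAI-compatible endpoint (e.g. Phi-4-mini).
```

Both models run on the same node, so the whole loop (classify, prompt, explain) works during a WAN outage.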
|  | Foundry Local for Devices (GA) | Foundry Local on Azure Local single node (preview) |
|---|---|---|
| Target | Developer machines, end-user devices | Enterprise edge servers on the factory floor or remote site |
| OS | Windows 10/11, macOS, Android | Linux container on AKS on Azure Local |
| Hardware | Laptops, workstations, NPU-equipped devices | Validated server hardware from the Azure Local catalog |
| GPU access | Direct host GPU (CUDA, DirectML, Apple Neural Engine) | NVIDIA device plugin on Kubernetes |
| Installation | winget install or brew install | Arc-enabled Kubernetes extension or Helm chart |
| Lifecycle management | Manual update via winget upgrade | Managed via Helm/Flux or the Arc extension operator |
| Intended consumers | One developer or one application on one machine | Multiple applications sharing one inference endpoint on the plant network |
| Disconnected operation | Supported after model download; primarily online | Designed for persistent disconnected operation with NVMe-cached models |
| Model persistence | Local device cache | Kubernetes PersistentVolume on local storage |
| Operational model | Developer installs and manages it | Platform team deploys it; applications consume it as a service |

The short version: the GA device release is for building and running AI-enabled applications on a single machine. The Azure Local single-node preview is for deploying Foundry Local as a shared, production inference service that runs continuously on validated industrial hardware, survives WAN outages, and is consumed by multiple workloads running on the same edge cluster. If you are prototyping an application on your laptop using the GA release, the same application code, specifically the OpenAI-compatible API calls, runs unchanged against the Azure Local deployment.
You change only the base_url from localhost to the Kubernetes Service.

Built for Secure Industrial and Sovereign Operations

Foundry Local supports Microsoft's sovereign cloud principles, allowing AI workloads to operate fully locally, with customer-controlled data boundaries and governance. Integration with Azure Arc provides unified management, configuration, and monitoring across hybrid and disconnected landscapes, enabling organizations to meet stringent compliance and operational requirements while adopting advanced AI capabilities.

Learn more about Foundry Local on Azure Local
- RECOMMENDED: participate in the Foundry Local on Azure Local preview: form link
- Foundry Local on Azure Local documentation: link
- Reach out to the team for support requests, feedback, or suggestions here: FoundryLocal_Support@microsoft.com
- Foundry Local on Azure Local: Helm deployment demo: link
- Foundry Local is now generally available: link

SQL Server enabled by Azure Arc Overview
Table of Contents
1. What is Azure Arc-enabled SQL Server?
2. Connecting SQL Server to Azure Arc (4-step onboarding)
3. Your SQL Server is Now in Azure (unified management)
4. SQL Best Practices Assessment
5. Monitoring and Governance
6. Troubleshooting Guide
7. Azure Arc Demo

What You Can Learn from This Article

This article walks you through the end-to-end journey of bringing external SQL Servers (on-prem, AWS, GCP, edge) under Azure management using Azure Arc. Specifically, you'll learn how to onboard SQL Server instances via the Arc agent and a PowerShell script, navigate the unified Azure Portal experience for hybrid SQL estates, enable and interpret SQL Best Practices Assessments with Log Analytics, apply Azure Policy and performance monitoring across all environments, leverage Azure Hybrid Benefit for cost savings, and troubleshoot common issues like assessment upload failures, Wire Server 403 errors, and IMDS connectivity problems, with a real case study distinguishing Azure VM vs. Arc-enabled server scenarios.

1. What is Azure Arc-enabled SQL Server?

Azure Arc helps you connect your SQL Server to Azure wherever it runs. Whether your SQL Server is running on-premises in your datacenter, on AWS EC2, on Google Cloud, or at an edge location, Azure Arc brings it under Azure management. This means you get the same governance, security, and monitoring capabilities as native Azure resources: a streamlined migration journey to Azure, effective management of your SQL estate at scale, and a stronger security and governance posture. Cloud innovation. Anywhere.

SQL Server migration in Azure Arc includes an end-to-end migration journey with the following capabilities:
- Continuous database migration assessments with Azure SQL target recommendations and cost estimates.
- Seamless provisioning of Azure SQL Managed Instance as the destination target, with an option for free instance evaluation.
- Option to choose between two built-in migration methods: real-time database replication using Distributed Availability Groups (powered by the Managed Instance link feature), or log shipping via backup and restore (powered by the Log Replay Service feature).
- Unified interface that eliminates the need to use multiple tools or to jump between various places in the Azure portal.
- Microsoft Copilot is integrated to assist you at select points during the migration journey.

Learn more in SQL Server migration in Azure Arc – Generally Available | Microsoft Community Hub

1.1 The Problem Azure Arc Solves

Organizations typically have SQL Servers scattered across multiple environments:

| Location | Challenge without Azure Arc |
|---|---|
| On-premises datacenter | Separate management tools, no unified view |
| AWS EC2 instances | Multi-cloud complexity, different monitoring |
| Google Cloud VMs | Inconsistent governance and policies |
| Edge / branch offices | Limited visibility, manual compliance |
| VMware / Hyper-V | No cloud-native management features |

Azure Arc solves this by extending a single Azure control plane to ALL your SQL Servers, regardless of where they physically run.

- Azure Arc Overview, Microsoft Learn: https://learn.microsoft.com/en-us/azure/azure-arc/overview
- Architecture Reference — Administer SQL Server with Azure Arc, Microsoft Learn: https://learn.microsoft.com/en-us/azure/architecture/hybrid/azure-arc-sql-server
- Documentation Index — SQL Server enabled by Azure Arc, Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/?view=sql-server-ver17
- SQL Server migration in Azure Arc (Community Hub): https://techcommunity.microsoft.com/blog/azuresqlblog/sql-server-migration-in-azure-arc-generally-av...

2. Connecting SQL Server to Azure Arc

This section shows how to onboard your SQL Server to Azure Arc. Once connected, your SQL Server appears in the Azure Portal alongside your other Azure resources.
2.1 Step 1: Access Azure Arc Portal

Navigation: Azure Portal → Azure Arc → Machines

Figure 1: Azure Arc | Machines, the starting point for onboarding. Description: The Azure Arc Machines blade is your entry point for connecting servers outside Azure. Click the 'Onboard/Create' dropdown and select 'Onboard existing machines' to begin. The left menu shows Azure Arc capabilities: Machines, Kubernetes clusters, Data services, Licenses, etc. This is where ALL your Azure Arc-enabled servers will appear after onboarding.

2.2 Step 2: Configure Onboarding Options

Select your operating system, enable SQL Server auto-discovery, and choose a connectivity method:

Figure 2: Onboarding configuration, enabling SQL Server auto-discovery. Description: Key settings: (1) Operating System: select Windows or Linux; (2) SQL Server checkbox: 'Automatically connect any SQL Server instances to Azure Arc' enables auto-discovery of SQL instances on the server; (3) Connectivity method: 'Public endpoint' for direct internet access or 'Private endpoint' for VPN/ExpressRoute. The SQL Server checkbox is crucial: it installs the SQL Server extension automatically.

💡 Important: Check the 'Connect SQL Server' option! This ensures SQL Server instances are automatically discovered and connected to Azure Arc.

2.3 Step 3: Download the Onboarding Script

Azure generates a customized PowerShell script containing your subscription details and configuration:

Figure 3: Generated onboarding script, ready to download. Description: The portal generates a PowerShell script customized for your environment. Key components: (1) agent download from the Azure CDN, (2) installation commands, (3) pre-configured connection parameters (subscription, resource group, location). Click 'Download' to save the script. Requirements note: the server needs HTTPS (port 443) access to Azure endpoints.
2.4 Step 4: Run the Script on Your Server

Copy the script to your SQL Server and execute it in PowerShell as Administrator:

Figure 4: Executing OnboardingScript.ps1 on the SQL Server. Description: PowerShell console showing script execution from the D:\Azure Arch directory. The script (OnboardingScript.ps1, 3214 bytes) installs the Azure Connected Machine Agent and registers the server with Azure Arc. During execution, a browser window opens for Azure authentication. After completion, the server appears in Azure Arc within minutes.

What happens during onboarding:
1. The Azure Connected Machine Agent is downloaded and installed
2. The agent establishes a secure connection to Azure
3. The server is registered as an Azure Arc resource
4. The SQL Server extension is installed (if the checkbox was enabled)
5. The SQL Server instance appears in Azure Arc → SQL Server

- Connect Your SQL Server to Azure Arc, Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/connect?view=sql-server-ver17
- Prerequisites — SQL Server enabled by Azure Arc, Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/prerequisites?view=sql-server-ver17
- Manage Automatic Connection — SQL Server enabled by Azure Arc, Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/manage-autodeploy?view=sql-server-ver17

3. Your SQL Server is Now Visible in the Azure Control Plane

Once connected via Azure Arc, your SQL Server is projected as a resource in the Azure Portal, right alongside your native Azure SQL resources. This is the power of Azure Arc: your SQL Server remains where it runs (on-premises, in AWS, or anywhere else), but Azure's management plane now extends to it. You can govern, monitor, and secure it with the same tools you use for Azure-native resources, without migrating the workload.
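Once projected into Azure, Arc-enabled SQL instances can be inventoried like any other resource, for example with an Azure Resource Graph query. A sketch that builds the KQL text; the projected columns are assumptions for illustration, and the SDK call mentioned in the trailing comment (azure-mgmt-resourcegraph) is one possible way to execute it:

```python
def arc_sql_inventory_query(location: str = "") -> str:
    """Build a Resource Graph (KQL) query listing Arc-enabled SQL instances.

    'microsoft.azurearcdata/sqlserverinstances' is the Arc SQL projection
    resource type; the projected columns here are illustrative.
    """
    query = (
        "resources "
        "| where type == 'microsoft.azurearcdata/sqlserverinstances' "
        "| project name, resourceGroup, location"
    )
    if location:  # optionally narrow to one Azure region
        query += f" | where location == '{location}'"
    return query

print(arc_sql_inventory_query("eastus"))
# To execute against your subscriptions: pip install azure-mgmt-resourcegraph
# azure-identity, then pass the query text to ResourceGraphClient.resources().
```

The same pattern, with `microsoft.hybridcompute/machines` as the type, lists the Arc-enabled host servers themselves.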
3.1 Unified View in Azure Portal

After onboarding, you can see your Azure Arc-enabled SQL Server through two paths:

| Navigation path | What you see |
|---|---|
| Azure Arc → SQL Server | All Azure Arc-enabled SQL instances |
| Azure Arc → Machines | The host server with extensions |

3.2 Management Experience Similar to SQL Server on Azure VM

The management capabilities for Azure Arc-enabled SQL Server are very similar to SQL Server on Azure VM. The screenshots below show the SQL Server on Azure VM experience; Azure Arc-enabled SQL Server provides nearly identical functionality. Whether your SQL Server runs natively on an Azure VM or is connected from outside Azure via Azure Arc, you get access to a consistent management experience, including:

Figure 5: SQL Server Management Overview — Consistent Experience. Description: This shows the management experience for SQL Server in Azure. Whether connected via Azure Arc or running on an Azure VM, you see: SQL Server version and edition, VM details, license type configuration, storage configuration, and feature status. Azure Arc-enabled SQL Server provides a nearly identical dashboard experience, extending this unified view to your on-premises and multi-cloud servers.

3.3 Azure Hybrid Benefit: Use Your Existing Licenses

One key cost-saving advantage is that you can apply Azure Hybrid Benefit (AHB) to Azure SQL Database and Azure SQL Managed Instance, saving up to 30% or more on licensing costs by leveraging your existing Software Assurance-enabled SQL Server licenses.

Note: Azure Hybrid Benefit applies to Azure SQL Database and SQL Managed Instance. For SQL Server running on-premises or in other clouds managed via Azure Arc, AHB does not apply directly. However, Arc-enabled SQL Server provides other benefits such as centralized management, Azure-integrated security, and access to Extended Security Updates (ESUs).
Figure 6: Azure Hybrid Benefit configuration. Description: License configuration for SQL Server on Azure VM, showing three options: Pay As You Go, Azure Hybrid Benefit (selected), and HA/DR. With Azure Hybrid Benefit, organizations with existing SQL Server licenses and active Software Assurance can save up to 30% or more on SQL Server licensing costs when running on Azure VMs (as reflected in the Azure portal configuration blade). Free SQL Server licenses for High Availability and Disaster Recovery are also available for Standard and Enterprise editions.

- Configure SQL Server enabled by Azure Arc, Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/manage-configuration?view=sql-server-ver1...
- Manage Licensing and Billing — SQL Server enabled by Azure Arc, Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/manage-license-billing?view=sql-server-ve...

4. SQL Best Practices Assessment

One of the most valuable features available to Azure Arc-enabled SQL Server is the Best Practices Assessment, which automatically evaluates your SQL Server configuration against Microsoft's recommendations.

4.1 Prerequisites: Log Analytics Workspace

Before enabling assessment, you need a Log Analytics workspace to store the results:

Figure 7: Create a Log Analytics workspace. Description: Log Analytics workspace creation form. Fill in: Subscription, Resource Group, Name (a green checkmark indicates a valid name), and Region (choose the same region as your resources). This workspace stores assessment results, performance metrics, and logs from ALL your SQL Servers, both Azure Arc-enabled and Azure VMs.

Figure 8: Log Analytics workspace ready for use. Description: Workspace overview showing: Status (Active), Pricing tier (Pay-as-you-go), and Operational issues (OK). The 'Get Started' section guides you through: (1) Connect a data source, (2) Configure monitoring solutions, (3) Monitor workspace health.
This workspace becomes the central repository for all your SQL Server insights.

4.2 Enable SQL Best Practices Assessment

Navigate to your SQL Server (Azure Arc-enabled or Azure VM) and enable the assessment:

Figure 9: SQL Best Practices Assessment enable feature. Description: Assessment landing page explaining the feature: it evaluates indexes, deprecated features, trace flags, statistics, etc. Results are uploaded via the Azure Monitor Agent (AMA). Click 'Enable SQL best practices assessments' to begin configuration. This feature is available for BOTH Azure Arc-enabled SQL Server and Azure SQL VMs.

Figure 10: Assessment configuration, selecting the Log Analytics workspace. Description: Configuration panel requiring: (1) the Enable checkbox, (2) Log Analytics workspace selection, (3) a resource group for AMA. The warning 'No Log Analytics workspace is found' appears if you haven't created one yet; see Section 4.1. Once configured, assessments run on a schedule and upload results to your workspace.

4.3 Run and Review Assessment

Figure 11: Run assessment button. Description: After configuration, click 'Run assessment' to start evaluation. Assessment duration varies: 5-10 minutes for small environments, 30-60 minutes for large ones. The 'View latest successful assessment' button (disabled until the first run completes) opens the results workbook.

Figure 12: Assessment results history. Description: Assessment history showing multiple runs with different statuses: 'Scheduled' (pending), 'Completed' (results available), 'Failed - result expired' (data retention exceeded). Regular assessments help catch configuration drift over time. If you see 'Failed - upload failed', see the Troubleshooting section.
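Assessment results land in the Log Analytics workspace and can be exported (for example via a KQL query) and triaged in bulk. A small sketch, assuming a simplified list-of-dicts shape for the exported findings (the 'severity' and 'name' keys are illustrative), that groups recommendations by severity so high-severity items surface first:

```python
from collections import defaultdict

# Action timelines mirroring the severity table in this article.
ACTION_TIMELINE = {
    "High": "Address immediately",
    "Medium": "Within 30 days",
    "Low": "As time permits",
    "Info": "Review and acknowledge",
}

def triage(findings: list[dict]) -> dict[str, list[str]]:
    """Group assessment findings by severity (hypothetical 'severity'/'name' keys)."""
    grouped = defaultdict(list)
    for f in findings:
        grouped[f["severity"]].append(f["name"])
    return dict(grouped)

findings = [
    {"severity": "High", "name": "Enable instant file initialization"},
    {"severity": "Medium", "name": "Update statistics"},
    {"severity": "High", "name": "Configure database backups"},
]
grouped = triage(findings)
for severity in ("High", "Medium", "Low", "Info"):
    for name in grouped.get(severity, []):
        print(f"[{severity}] {name}: {ACTION_TIMELINE[severity]}")
```

Running this over a full export gives a worklist ordered by the same timelines the assessment workbook recommends.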
Figure 13: Assessment Recommendations - Actionable Insights
Description: Best practices workbook showing three panels: (1) Recommendation Summary with severity (High, Medium) and categories (DBConfiguration, Performance, Index, Backup), (2) Recommendation Details with target and name, (3) Details panel showing the selected item; example: 'Enable instant file initialization' for a performance improvement. High severity items should be addressed immediately.

Severity Levels:

| Severity | Description | Action Timeline |
|---|---|---|
| 🔴 High | Critical issues affecting performance or security | Address immediately |
| 🟡 Medium | Important optimizations recommended | Within 30 days |
| 🟢 Low | Nice-to-have improvements | As time permits |
| ℹ️ Info | Informational findings | Review and acknowledge |

Configure Best Practices Assessment — SQL Server enabled by Azure Arc - Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/assess?view=sql-server-ver17
Troubleshoot Best Practices Assessment — SQL Server enabled by Azure Arc - Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/troubleshoot-assessment?view=sql-server-v...
Assess Migration Readiness — SQL Server enabled by Azure Arc - Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/migration-assessment?view=sql-server-ver1...
Log Analytics Workspace creation: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/quick-create-workspace

5. Monitoring and Governance

With your SQL Servers connected to Azure (via Azure Arc or natively), you gain access to Azure's full monitoring and governance capabilities.

5.1 Azure Policy Compliance

Apply consistent governance policies across ALL your SQL Servers, regardless of where they run:

Figure 14: Azure Policy Compliance Dashboard
Description: Compliance dashboard showing: 28% overall compliance (5 of 18 resources), a pie chart with Compliant (green), Exempt, and Non-compliant (red).
The table lists non-compliant resources (microsoft.hybridcompute type = Azure Arc-enabled servers). Use this to ensure ALL SQL Servers (on-premises, cloud, and edge) meet your organization's standards.

5.2 Performance Monitoring

Figure 15: Performance Monitoring - Unified Dashboard
Description: Performance dashboard showing: Logical Disk Performance (C: drive 30% used), CPU Utilization (1.75% average, 5.73% 95th percentile), Available Memory (3.1 GB average). This same dashboard works for Azure Arc-enabled servers, giving you consistent visibility across your entire SQL Server estate.

5.3 Service Dependency Mapping

Figure 16: Service Map - Visualize Dependencies
Description: Map view showing server FNPSVR01 with 17 processes connecting to Port 443 (7 servers) and Port 53 (1 server). The Machine Summary shows FQDN, OS (Windows Server 2016), and IP address. Use this to understand application dependencies before maintenance or migration; available for both Azure Arc-enabled and Azure-native servers.

6. Troubleshooting Guide

This section covers common issues encountered when working with Azure Arc-enabled SQL Server and Azure SQL VMs.

6.1 Common Issues Overview

| Issue | Symptoms | Azure Arc-enabled | Azure VM |
|---|---|---|---|
| Assessment Upload Failed | Status: 'Failed - upload failed' | ✅ Applies | ✅ Applies |
| Wire Server 403 | Agent cannot connect | ❌ N/A | ✅ Applies |
| IMDS Disabled | Cannot obtain token | ❌ N/A | ✅ Applies |
| Azure Arc Agent Connectivity | Server not appearing | ✅ Applies | ❌ N/A |
| SQL Login Failed | Machine account denied | ✅ Applies | ✅ Applies |

6.2 Real Case Study: Assessment Upload Failed on an Azure VM

Note: This case study is from an Azure VM (not Azure Arc-enabled). The Wire Server and IMDS issues are specific to Azure VMs; Azure Arc-enabled servers use different connectivity mechanisms.
Symptoms observed:
- Assessment status: 'Failed - upload failed'
- Local data collected successfully (415 issues)
- Data not appearing in the Log Analytics workspace

Root causes identified from the logs:

Error 1 (ExtensionLog):
[ERROR] Customer disable the IMDS service, cannot obtain IMDS token.

Error 2 (WaAppAgent.log):
[WARN] GetMachineGoalState() failed: 403 (Forbidden) to 168.63.129.16

Resolution for Azure VMs

Fix Wire Server (168.63.129.16) connectivity:

```powershell
# Test connectivity
Test-NetConnection -ComputerName 168.63.129.16 -Port 80

# Add route if missing
route add 168.63.129.16 mask 255.255.255.255 <gateway> -p

# Add firewall rule if needed
New-NetFirewallRule -DisplayName "Allow Azure Wire Server" -Direction Outbound -RemoteAddress 168.63.129.16 -Action Allow
```

Fix IMDS (169.254.169.254) connectivity:

```powershell
# Test IMDS
Invoke-RestMethod -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01" -Headers @{Metadata="true"}

# Add firewall rule if blocked
New-NetFirewallRule -DisplayName "Allow Azure IMDS" -Direction Outbound -RemoteAddress 169.254.169.254 -Action Allow
```

Test Azure Arc agent connectivity:

```powershell
# Check Arc agent status
& "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" show

# Test connectivity to Azure endpoints
& "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" check
```

6.3 Azure Arc-enabled SQL Server Connectivity Issues

For Azure Arc-enabled servers (not Azure VMs), the connectivity issues are different.

Required Azure endpoints for the Azure Arc agent:

| Endpoint | Port | Purpose |
|---|---|---|
| management.azure.com | 443 | Azure Resource Manager |
| login.microsoftonline.com | 443 | Azure AD authentication |
| *.his.arc.azure.com | 443 | Azure Arc Hybrid Identity |
| *.guestconfiguration.azure.com | 443 | Guest configuration |

Troubleshoot Best Practices Assessment - Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/troubleshoot-assessment?view=sql-server-v...
What is IP Address 168.63.129.16 (Wire Server) - Microsoft Learn: https://learn.microsoft.com/en-us/azure/virtual-network/what-is-ip-address-168-63-129-16
Azure Instance Metadata Service (IMDS) - Microsoft Learn: https://learn.microsoft.com/en-us/azure/virtual-machines/instance-metadata-service
Troubleshoot IMDS Connection Issues on Windows VMs - Microsoft Learn: https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/windows/windows-vm-imds-connec...
Troubleshoot Azure Windows VM Agent Issues - Microsoft Learn: https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/windows/windows-azure-guest-ag...

7. Troubleshooting Guide Demo

Deck: Azure Arc for Windows Server and SQL Server

Additional Resources:
- Learn more about the new migration capability in Azure Arc on Microsoft Learn.
- Onboard your SQL Server to Azure Arc today.
- Learn more about continuous migration assessment from SQL Server enabled by Azure Arc.
- Download resources on github.com/microsoft/sql-server-samples

Advancing Firmware Security: Fleet Visibility and New Capabilities in Firmware Analysis
When we announced general availability of firmware analysis enabled by Azure Arc last October, our goal was clear: help organizations gain deep visibility into the security of the firmware that powers their IoT, OT, and network devices. Since then, adoption has continued to grow as customers use firmware analysis to uncover vulnerabilities, inventory software components, and secure their software supply chain.

Leading into the Hannover Messe (HMI) 2026 conference, we’re excited to share the next wave of firmware analysis capabilities, delivering enhancements that help customers connect firmware risk to real-world fleet impact, prioritize vulnerabilities more effectively, scale to larger and more complex firmware images, and expand security analysis for UEFI-based platforms. These updates are driven directly by customer feedback and by the rapidly evolving threat landscape facing embedded and edge devices.

Connecting Firmware Risk to Your Deployed Fleet with Azure Device Registry (Preview)

Securing connected devices doesn’t stop at identifying vulnerabilities in firmware—it requires understanding where those vulnerabilities exist in your deployed fleet and which devices are affected. We’re excited to announce a new preview integration between firmware analysis enabled by Azure Arc and Azure Device Registry, bringing fleet-level visibility of IoT and OT devices directly into the firmware analysis experience. This helps customers quickly understand how many devices and assets are running a given firmware image, and which ones may be exposed to known security issues.

From firmware insights to fleet impact

Firmware analysis helps customers uncover security risks hidden deep inside the firmware running on IoT, OT, and network devices—risks such as known CVEs, outdated open-source components, weak cryptography, and insecure configurations. Until now, these insights were primarily scoped to the firmware image itself.
With this new preview integration, firmware analysis now connects directly to Azure Device Registry, allowing customers to:

- See how many devices (from the IoT Hub integration with ADR, in preview) and assets (from Azure IoT Operations) are associated with a specific analyzed firmware image
- Understand the real-world blast radius of vulnerabilities discovered in firmware
- Quickly identify which devices may require patching, mitigation, or isolation

This preview bridges an important gap between security analysis and operational decision-making.

What’s included in this preview

With this release, we’re introducing new fleet-level context directly into the firmware analysis experience:

- A new Devices + Assets count column in the firmware analysis workspace, showing how many Azure Device Registry devices and assets are running each analyzed firmware image
- A click-through experience that lets users view the list of affected devices and assets in Azure Device Registry
- Visibility spanning both devices connected via IoT Hub and assets managed through Azure IoT Operations

This information is derived by correlating firmware metadata with device and asset inventory in Azure Device Registry, giving customers immediate insight into deployment exposure.

Key use cases

- Identify vulnerable devices at scale: When critical CVEs are discovered in a firmware image, customers can immediately see how many deployed devices are impacted—without manually correlating spreadsheets, tools, or inventories.
- Prioritize remediation actions: With fleet visibility, teams can decide whether to patch devices, temporarily isolate affected devices from the network, or disable devices that pose unacceptable risk.
- Bridge security and operations teams: Security teams gain clear insight into where vulnerabilities exist, while operations teams can quickly act on specific devices and assets—all within the Azure portal.
This integration is especially valuable in environments where downtime, safety, or regulatory compliance matter—such as manufacturing, energy, telecommunications, and critical infrastructure.

Prioritizing Vulnerabilities with Enhanced CVE Metadata (Preview)

The number of publicly disclosed vulnerabilities continues to rise year over year, making it increasingly difficult for security teams to determine which CVEs truly require urgent action. Simply knowing that a vulnerability exists is no longer enough—teams need context to prioritize remediation efforts. With this release, firmware analysis now provides richer metadata for each discovered CVE, helping customers focus on vulnerabilities that pose the greatest real-world risk.

New CVE metadata includes:

- CISA Known Exploited Vulnerabilities (KEV) status: indicates whether a CVE is listed in the CISA KEV catalog, signaling that the vulnerability is actively exploited in the wild.
- EPSS (Exploit Prediction Scoring System) score: a data-driven probability score that estimates the likelihood of a vulnerability being exploited in the next 30 days, complementing traditional severity metrics by focusing on exploitation likelihood rather than impact alone.
- Additional vulnerability context, including CVSS vectors and base scores, CWE classifications, and expanded metadata to support filtering and analysis.

Together, these enhancements make it easier to triage findings, align remediation with risk, and communicate priorities across security, engineering, and product teams.

Faster Performance for Large and Complex Firmware Images

As firmware analysis adoption has grown, we’ve seen customers analyze increasingly large and complex firmware images—particularly in domains like networking equipment, where a single image can generate thousands of findings. To support these scenarios, we’ve made architectural enhancements to the service that significantly improve performance when working with large result sets.
Key improvements include:

- Up to a 90% reduction in load times for analysis results, especially for firmware images producing 10,000+ findings
- More responsive filtering and exploration of results

These changes ensure that firmware analysis remains fast and usable at scale, even for complex network and infrastructure firmware images.

Expanding UEFI Firmware Analysis (Preview)

Modern devices increasingly rely on UEFI firmware as a foundational security boundary. In this release, we’re expanding our UEFI analysis capabilities to provide deeper visibility into UEFI executables and components.

New UEFI-focused capabilities include:

- Detection of OpenSSL libraries and related CVEs within UEFI firmware
- Binary hardening analysis for UEFI executables, including detection of proper configuration of Data Execution Prevention (DEP) memory protection
- Continued support for discovering cryptographic material in UEFI images, including embedded certificates and keys

This preview allows customers to evaluate the new capabilities, provide feedback, and help shape future enhancements in this area.

Note: UEFI SBOM and binary analysis features are currently in preview and intended for evaluation and feedback.

Bulk Export of Analysis Results for Supply Chain Collaboration

We also recently released a highly requested feature that makes it easier to share firmware analysis results with partners and suppliers. Customers can now:

- Bulk download analysis results across one or more firmware images
- Export results as CSV files packaged into a ZIP archive

This capability simplifies workflows such as sharing findings with device manufacturers or firmware suppliers, integrating results into downstream analysis or reporting pipelines, and supporting software supply chain security and compliance processes.

Looking Ahead

We’re excited about the progress we’ve made with this release and what it means for customers securing IoT, OT, and network devices.
From connecting firmware risk to fleet-level impact with Azure Device Registry, to richer vulnerability prioritization, improved scalability, and deeper UEFI analysis—these enhancements reinforce firmware analysis as a critical tool for addressing some of the most challenging blind spots in modern infrastructure security. Firmware security is foundational to trustworthy systems—especially as edge devices continue to play a central role in industrial operations, networking, and data collection.

If you’re already using firmware analysis and Azure Device Registry, the ADR integration preview will appear directly within the firmware analysis experience as it rolls out. We look forward to your feedback as we continue building secure, observable, and manageable digital operations with Azure. As always, we value your feedback, so please let us know what you think.

Automating Arc-enabled SQL Server license type configuration with Azure Policy
Azure Arc enables customers to onboard SQL Server instances - running on Linux or Windows - into Azure, regardless of where they are hosted: on‑premises, in multicloud environments, or at the edge. Once onboarded, these resources can be managed through the Azure portal using services like Azure Monitor, Azure Policy, and Microsoft Defender for Cloud.

An important part of this onboarding is configuring the license type on each Arc-enabled resource to match your licensing agreement with Microsoft. For SQL Server, the LicenseType property on the Arc extension determines how the instance is licensed:

- Paid: you have a SQL Server license with Software Assurance or a SQL Server subscription
- PAYG: you pay for SQL Server software on a pay-as-you-go basis
- LicenseOnly: you have a perpetual SQL Server license

Setting this correctly matters for two reasons:

- Unlocking additional benefits: customers with the Paid or PAYG license type gain access to some Azure services at no extra cost - such as Azure Update Manager and Machine Configuration - as well as exclusive capabilities like Best Practices Assessment and Remote Support.
- Enabling pay-as-you-go billing: customers who do not have Software Assurance can pay for SQL Server software only when they use it, via their Azure subscription, by setting the license type to PAYG.

Configure the license types at scale using Azure Policy

Configuring the license type on each Arc-enabled SQL Server instance can be done manually in the Azure portal, but for large-scale operations, automation is essential. One way to implement automation is via PowerShell, as explained here: Configure SQL Server - SQL Server enabled by Azure Arc | Microsoft Learn. Here, we focus on how this can be automated using Azure Policy. An existing article, written by Jeff Pigott, explains this process for Windows Server, which inspired extending the same approach to SQL Server.

How to deploy the policy?
Deployment has two steps:

1. Create/update the Azure Policy definition and assignment
2. Start a remediation task so existing Arc-enabled SQL Server extensions are brought into compliance

You can deploy Azure Policy in multiple ways. In this article, we use PowerShell. See also: Tutorial: Build policies to enforce compliance - Azure Policy | Microsoft Learn.

Source code: microsoft/sql-server-samples/.../arc-sql-license-type-compliance. Personal repository: claestom/sql-arc-policy-license-config.

Definition and assignment creation

Download the required files:

```powershell
# Optional: create and enter a local working directory
mkdir sql-arc-lt-compliance
cd sql-arc-lt-compliance

$baseUrl = "https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/manage/azure-arc-enabled-sql-server/compliance/arc-sql-license-type-compliance"

New-Item -ItemType Directory -Path policy, scripts -Force | Out-Null
curl -sLo policy/azurepolicy.json "$baseUrl/policy/azurepolicy.json"
curl -sLo scripts/deployment.ps1 "$baseUrl/scripts/deployment.ps1"
curl -sLo scripts/start-remediation.ps1 "$baseUrl/scripts/start-remediation.ps1"
```

Note: On Windows PowerShell 5.1, curl is an alias for Invoke-WebRequest. Use curl.exe instead, or run the commands in PowerShell 7+.

Authenticate to Azure:

```powershell
Connect-AzAccount
```

Set your variables.
Only TargetLicenseType is required - all others are optional:

```powershell
# Required
$TargetLicenseType = "PAYG"   # "Paid" or "PAYG"

# Optional (uncomment to override defaults)
# $ManagementGroupId = "<management-group-id>"   # Default: tenant root management group
# $SubscriptionId = "<subscription-id>"          # Default: policy assigned at management group scope
# $ExtensionType = "Both"                        # "Windows", "Linux", or "Both" (default)
# $LicenseTypesToOverwrite = @("Unspecified","Paid","PAYG","LicenseOnly")   # Default: all
```

Run the deployment script:

```powershell
# Minimal: uses defaults for management group, platform, and overwrite targets
.\scripts\deployment.ps1 -TargetLicenseType $TargetLicenseType

# With subscription scope
.\scripts\deployment.ps1 -TargetLicenseType $TargetLicenseType -SubscriptionId $SubscriptionId

# With all options
.\scripts\deployment.ps1 `
    -ManagementGroupId $ManagementGroupId `
    -SubscriptionId $SubscriptionId `
    -ExtensionType $ExtensionType `
    -TargetLicenseType $TargetLicenseType `
    -LicenseTypesToOverwrite $LicenseTypesToOverwrite
```

Parameter notes:

- ManagementGroupId (optional): management group where the policy definition is created. Defaults to the tenant root management group when not specified.
- ExtensionType (optional, default Both): Windows, Linux, or Both. When Both, a single policy definition and assignment covers both platforms.
- SubscriptionId (optional): if provided, the assignment scope is the subscription (otherwise management group scope).
- TargetLicenseType (required): Paid or PAYG.
- LicenseTypesToOverwrite (optional, default all): controls which current states are eligible for update. Unspecified = no current LicenseType; Paid, PAYG, and LicenseOnly = explicit current values.

The script also creates a system-assigned managed identity on the policy assignment and assigns the required roles automatically. Role assignments include retry logic (5 attempts, 10-second delay) to handle managed identity replication delays, which helps prevent common PolicyAuthorizationFailed errors.
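The retry behavior described above (5 attempts, 10-second delay) is a standard pattern for riding out managed identity replication delays. A minimal sketch in Python; this is a hypothetical illustration, not the actual PowerShell implementation, and `assign_role` stands in for whatever call creates the role assignment:

```python
import time

def assign_role_with_retry(assign_role, attempts=5, delay_seconds=10):
    """Retry a role assignment until it succeeds or attempts run out.

    `assign_role` is a hypothetical callable that raises while the
    assignment's managed identity is still replicating.
    """
    for attempt in range(1, attempts + 1):
        try:
            return assign_role()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the failure
            time.sleep(delay_seconds)

# Usage: a call that fails twice (identity still replicating), then succeeds.
calls = {"n": 0}
def flaky_assignment():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("identity not yet replicated")
    return "role assigned"

result = assign_role_with_retry(flaky_assignment, delay_seconds=0)  # delay zeroed for the demo
print(result)  # -> role assigned
```

A fixed delay keeps the sketch close to the script's documented behavior; exponential backoff is a common variation when replication time is less predictable.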
Remediation task creation

After deployment, allow a few minutes for Azure Policy to run a compliance scan over the selected scope. You can monitor this in Azure Policy → Compliance. More info: Get policy compliance data - Azure Policy | Microsoft Learn.

Set your variables. TargetLicenseType is required and must match the value used during deployment:

```powershell
# Required
$TargetLicenseType = "PAYG"   # Must match the deployment target

# Optional (uncomment to override defaults)
# $ManagementGroupId = "<management-group-id>"   # Default: tenant root management group
# $SubscriptionId = "<subscription-id>"          # Default: remediation runs at management group scope
# $ExtensionType = "Both"                        # Must match the platform used for deployment
```

Then start remediation:

```powershell
# Minimal: uses defaults for management group and platform
.\scripts\start-remediation.ps1 -TargetLicenseType $TargetLicenseType -GrantMissingPermissions

# With subscription scope
.\scripts\start-remediation.ps1 -TargetLicenseType $TargetLicenseType -SubscriptionId $SubscriptionId -GrantMissingPermissions

# With all options
.\scripts\start-remediation.ps1 `
    -ManagementGroupId $ManagementGroupId `
    -ExtensionType $ExtensionType `
    -SubscriptionId $SubscriptionId `
    -TargetLicenseType $TargetLicenseType `
    -GrantMissingPermissions
```

Parameter notes:

- ManagementGroupId (optional): defaults to the tenant root management group.
- ExtensionType (optional, default Both): must match the platform used for the assignment.
- SubscriptionId (optional): run remediation at subscription scope.
- TargetLicenseType (required): must match the assignment target.
- GrantMissingPermissions (optional switch): checks for and assigns missing required roles before remediation starts.

You can track remediation progress in Azure Policy → Remediation → Remediation tasks. It can take a few minutes to complete, depending on scope and resource count.
Recurring Billing Consent (PAYG)

When TargetLicenseType is set to PAYG, the policy automatically includes ConsentToRecurringPAYG in the extension settings with Consented: true and a UTC timestamp. For details of this requirement, see: Move SQL Server license agreement to pay-as-you-go subscription - SQL Server enabled by Azure Arc | Microsoft Learn.

The policy also checks for ConsentToRecurringPAYG in its compliance evaluation - resources with LicenseType: PAYG but missing the consent property are flagged as non-compliant and remediated. This applies both when transitioning to PAYG and to existing PAYG extensions that predate the consent requirement (backward compatibility).

Note: Once ConsentToRecurringPAYG is set on an extension, it cannot be removed - this is enforced by the Azure resource provider. When transitioning away from PAYG, the policy changes LicenseType but leaves the consent property in place.

RBAC

When .\scripts\deployment.ps1 creates the policy assignment, it uses -IdentityType SystemAssigned. Azure then creates a managed identity for that assignment. The assignment identity needs these roles at the assignment scope (or an inherited scope):

- Azure Extension for SQL Server Deployment: allows updating Arc SQL extension settings, including LicenseType
- Reader: allows reading resource and extension state for policy evaluation
- Resource Policy Contributor: allows policy-driven template deployments required by DeployIfNotExists

This identity is used whenever DeployIfNotExists applies changes, both during regular compliance evaluation and during remediation runs. By default, the deployment script assigns these roles automatically, with built-in retry logic to handle managed identity replication delays; this helps prevent common PolicyAuthorizationFailed errors.

Brownfield and Greenfield Scenarios

This policy is useful in both brownfield and greenfield Azure Arc environments.
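In both kinds of environment, the policy applies one core update rule: overwrite the current LicenseType only when it appears in licenseTypesToOverwrite, and attach the recurring-billing consent when the target is PAYG. A hypothetical Python sketch of that rule (the function and the consent field names are illustrative; the policy itself expresses this as a DeployIfNotExists definition):

```python
from datetime import datetime, timezone

def desired_settings(current_license_type, target_license_type, license_types_to_overwrite):
    """Illustrative sketch of the policy's update rule, not the actual definition.

    Returns the settings fragment the policy would deploy, or None when
    the resource is left unchanged. A missing LicenseType is modeled as
    None and matched by the 'Unspecified' overwrite target.
    """
    current = current_license_type or "Unspecified"
    if current not in license_types_to_overwrite:
        return None  # current state is not eligible for overwrite
    settings = {"LicenseType": target_license_type}
    if target_license_type == "PAYG":
        # The article states the policy adds ConsentToRecurringPAYG with
        # Consented: true and a UTC timestamp; the timestamp field name
        # used here is illustrative.
        settings["ConsentToRecurringPAYG"] = {
            "Consented": True,
            "Timestamp": datetime.now(timezone.utc).isoformat(),
        }
    return settings

# Example: migrate only Paid to PAYG; a LicenseOnly machine stays untouched.
print(desired_settings("LicenseOnly", "PAYG", ["Paid"]))  # -> None
```

For instance, with license_types_to_overwrite set to ['Paid'], a machine currently set to Paid moves to PAYG with consent attached, while machines with no LicenseType or with LicenseOnly are left unchanged.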
Brownfield: existing Arc SQL inventory

In a brownfield environment, you already have Arc-enabled SQL Server resources in inventory, and the current LicenseType values might be mixed, incorrect, or missing. This is where Azure Policy is especially useful, because it gives you a controlled way to remediate the current estate at scale. Depending on how you configure targetLicenseType and licenseTypesToOverwrite, you can use the policy to:

- standardize all in-scope resources on a single value
- set LicenseType only when it is missing
- migrate a specific subset, such as Paid to PAYG
- preserve selected states while correcting only the resources that need attention

Examples:

1. Standardize everything to Paid
   - targetLicenseType: Paid
   - licenseTypesToOverwrite: ['Unspecified','Paid','PAYG','LicenseOnly']
   - Result: every in-scope Arc SQL extension is converged to LicenseType == Paid.

2. Backfill only missing values
   - targetLicenseType: Paid
   - licenseTypesToOverwrite: ['Unspecified']
   - Result: only resources without a configured LicenseType are updated; existing Paid, PAYG, and LicenseOnly values remain unchanged.

3. Migrate only Paid to PAYG
   - targetLicenseType: PAYG
   - licenseTypesToOverwrite: ['Paid']
   - Result: only resources currently set to Paid are updated to PAYG; missing, PAYG, and LicenseOnly values remain unchanged. When transitioning to PAYG, the policy also automatically sets ConsentToRecurringPAYG with Consented: true and a UTC timestamp, as required for recurring pay-as-you-go billing.

4. Protect existing PAYG; fix only missing or LicenseOnly
   - targetLicenseType: Paid
   - licenseTypesToOverwrite: ['Unspecified','LicenseOnly']
   - Result: resources with no LicenseType or with LicenseOnly are updated to Paid, while existing PAYG stays untouched.

Greenfield: newly onboarded SQL Servers

In a greenfield scenario, the main value of Azure Policy is ongoing enforcement.
Once new SQL Servers are onboarded to Azure Arc and fall within the assignment scope, the policy can act as a governance control to keep LicenseType aligned with your business model. This means Azure Policy is not only a remediation mechanism for existing inventory, but also a way to continuously enforce the intended license configuration for future Arc-enabled SQL Server resources.

Azure Policy vs tagging

By default, Microsoft manages automatic deployment of the Azure extension for SQL Server. It includes an option to enforce the LicenseType setting via tags. See Manage Automatic Connection - SQL Server enabled by Azure Arc | Microsoft Learn for details. This way, all newly onboarded SQL Server instances are set to the desired LicenseType from day one. Deploying the Azure Policy is still important to ensure that changes to the extension properties, or ad-hoc additions of SQL Server instances, stay compliant with your business model.

A practical way to think about it:

- Tagging ensures the initial compliance of newly connected Arc-enabled SQL Servers
- Azure Policy enforces ongoing compliance of existing Arc-enabled SQL Servers

Tools

Interested in gaining better visibility into LicenseType configurations across your estate? Below you'll find a useful KQL query and an accompanying workbook to help track compliance.
KQL Query

```kusto
resources
| where type == "microsoft.hybridcompute/machines"
| where properties.detectedProperties.mssqldiscovered == "true"
| extend machineIdHasSQLServerDiscovered = id
| project name, machineIdHasSQLServerDiscovered, resourceGroup, subscriptionId
| join kind=leftouter (
    resources
    | where type == "microsoft.hybridcompute/machines/extensions"
    | where properties.type in ("WindowsAgent.SqlServer", "LinuxAgent.SqlServer")
    | extend machineIdHasSQLServerExtensionInstalled = iff(id contains "/extensions/WindowsAgent.SqlServer" or id contains "/extensions/LinuxAgent.SqlServer", substring(id, 0, indexof(id, "/extensions/")), "")
    | project License_Type = properties.settings.LicenseType, machineIdHasSQLServerExtensionInstalled
) on $left.machineIdHasSQLServerDiscovered == $right.machineIdHasSQLServerExtensionInstalled
| where isnotempty(machineIdHasSQLServerExtensionInstalled)
| project-away machineIdHasSQLServerDiscovered, machineIdHasSQLServerExtensionInstalled
```

Source: Configure SQL Server - SQL Server enabled by Azure Arc | Microsoft Learn.

Azure Workbook

claestom/azure-arc-sa-workbook: Azure Workbook for monitoring Software Assurance compliance across Arc-enabled servers and SQL Server instances.

Resources

- Configure SQL Server - SQL Server enabled by Azure Arc | Microsoft Learn
- Azure Policy documentation | Microsoft Learn
- Automating Windows Server Licensing Benefits with Azure Arc Policy | Microsoft Community Hub
- Recurring billing consent - SQL Server enabled by Azure Arc | Microsoft Learn
- claestom/azure-arc-sa-workbook: Azure Workbook for monitoring Software Assurance compliance across Arc-enabled servers and SQL Server instances
- microsoft/sql-server-samples/.../arc-sql-license-type-compliance
- claestom/sql-arc-policy-license-config

Thank you!