Azure Arc
Simplify Azure Arc Server Onboarding with Ansible and the New Onboarding Role
If you’re already using Ansible to manage your infrastructure, there’s now a simpler—and more secure—way to bring machines under Azure Arc management. We’ve introduced a new Azure Arc onboarding role designed specifically for automated scenarios like Ansible playbooks. This role follows the principle of least privilege, giving your automation exactly what it needs to onboard servers—nothing more.

A better way to onboard at scale

Many customers want to standardize Azure Arc onboarding across hybrid and multicloud environments, but run into common challenges:

- Over‑privileged service principals
- Manual steps that don’t scale
- Inconsistent onboarding across environments

By combining Ansible with the Azure Arc onboarding role, you can:

- Automate server onboarding end‑to‑end
- Reduce permissions risk with a purpose‑built role
- Scale confidently across thousands of machines
- Integrate Arc onboarding into existing Ansible workflows

Built for automation, designed for security

The new onboarding role removes the need to assign broader Azure roles just to connect servers to Azure Arc. Instead, your Ansible automation can authenticate using a tightly scoped identity that’s purpose‑built for Arc onboarding—making security teams happier without slowing down operations. Whether you’re modernizing existing datacenters or managing servers across multiple clouds, this new approach makes Azure Arc onboarding simpler, safer, and repeatable.

Get started in minutes

Our Microsoft Learn documentation provides guidance to help you get started quickly: Connect machines to Azure Arc at scale with Ansible. Check out the Arc onboarding role, part of the Azure collection in Ansible Galaxy: Ansible Galaxy - azure.azcollection - Arc onboarding role.

Anything else you’d like to see with Azure Arc + Linux? Drop us a comment!

Run the latest Azure Arc agent with Automatic Agent Upgrade (Public Preview)
Customers managing large fleets of Azure Arc servers need a scalable way to ensure the Azure Arc agent stays up to date without manual intervention. Per‑server configuration does not scale, and gaps in upgrade coverage can lead to operational drift, missed features, and delayed security updates. To address this, we’re introducing two new options to help customers enable Automatic Agent Upgrade at scale: a built-in Azure Policy and a new onboarding CLI flag. The built-in policy makes it easy to check whether Automatic Agent Upgrade is enabled across a given scope and automatically remediates servers that are not compliant. For servers being newly onboarded, customers can enable the feature at onboarding by adding the --enable-automatic-upgrade flag to the azcmagent connect command, ensuring the agent is configured correctly from the start.

What is Automatic Agent Upgrade?

Automatic Agent Upgrade is a feature, in public preview, that automatically keeps the Azure Connected Machine agent (Arc agent) up to date. Updates are managed by Microsoft, so once enabled, customers no longer need to manually manage agent upgrades. By always running the latest agent version, customers receive all the newest capabilities, security updates, and bug fixes as soon as they’re released. Learn more: What's new with Azure Connected Machine agent - Azure Arc | Microsoft Learn.

Getting Started

Apply the automatic agent upgrade policy

1. Navigate to the ‘Policy’ blade in the Azure portal.
2. Navigate to the ‘Compliance’ section and click ‘Assign Policy’.
3. Fill out the required sections:
   - Scope: the subscription and, optionally, resource group the policy will apply to
   - Policy definition: Configure Azure Arc-enabled Servers to enable automatic upgrades
4. Navigate to the ‘Remediation’ tab and check the box next to ‘Create a remediation task’.
5. Navigate to the ‘Review + create’ tab and press ‘Create’.

The policy has now been successfully applied to the scope.
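The same portal steps can also be scripted. The sketch below is a hedged, hypothetical example of assigning the built-in policy and creating a remediation task with the Azure CLI; it looks up the definition by its display name from this post, since the definition's internal name is not stated here:

```shell
# Hypothetical sketch: assign the built-in policy at subscription scope and
# create a remediation task. The display-name lookup is an assumption; verify
# the definition in your tenant before relying on it.
definition=$(az policy definition list \
  --query "[?displayName=='Configure Azure Arc-enabled Servers to enable automatic upgrades'].name | [0]" \
  -o tsv)

# A managed identity and location are required for policies that remediate resources.
az policy assignment create \
  --name enable-arc-auto-upgrade \
  --scope "/subscriptions/<subscription-id>" \
  --policy "$definition" \
  --mi-system-assigned \
  --location eastus

# Remediate existing non-compliant Arc-enabled servers under the assignment.
az policy remediation create \
  --name remediate-arc-auto-upgrade \
  --policy-assignment enable-arc-auto-upgrade
```

Placeholders such as `<subscription-id>` must be replaced with your own values; the commands require an authenticated `az` session with permission to create policy assignments.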
For more information on this process, please see Quickstart: Create policy assignment using Azure portal - Azure Policy | Microsoft Learn.

Apply the automatic agent upgrade CLI flag

Adding the --enable-automatic-upgrade flag enables automatic agent upgrade during onboarding. While this flag can be used on a single server, it can also be applied at scale using one of the existing Azure Arc at-scale onboarding methods; see Connect hybrid machines to Azure at scale - Azure Arc | Microsoft Learn. Here is an at-scale onboarding sample using a basic script:

```shell
azcmagent connect \
  --resource-group {rg} \
  --location {location} \
  --subscription-id {subid} \
  --service-principal-id {service principal id} \
  --service-principal-secret {service principal secret} \
  --tenant-id {tenant id} \
  --enable-automatic-upgrade
```

To get started with this feature or learn more, please refer to Manage and maintain the Azure Connected Machine agent - Azure Arc | Microsoft Learn.

Announcing Private Preview: Deploy Ansible Playbooks using Azure Policy via Machine Configuration
Azure Arc is on a mission to unify security, compliance, and management for Windows and Linux machines—anywhere. By extending Azure’s control plane beyond the cloud, Azure Arc enables organizations to unify governance, compliance, security, and management of servers across on‑premises, edge, and multicloud environments using a consistent set of Azure tools and policies. Building on this mission, we’re excited to announce the private preview of deploying Ansible playbooks through Azure Policy using Machine Configuration, bringing Ansible‑driven automation into Azure Arc’s policy‑based governance model for Azure and Arc‑enabled Linux machines. This new capability enables you to orchestrate Ansible playbook execution directly from Azure Policy (via Machine Configuration) without requiring an Ansible control node, while benefiting from built‑in compliance reporting and remediation.

Why this matters

As organizations manage increasingly diverse server estates, they often rely on different tools for Windows and Linux, cloud, on-premises, or the edge—creating fragmented security, compliance, and operational workflows. Many organizations rely on Ansible for OS configuration and application setup, but struggle with:

- Enforcing consistent configuration across distributed environments
- Detecting and correcting drift over time
- Integrating Ansible automation with centralized governance and compliance workflows

With this private preview, Azure Policy becomes the single control plane for applying and monitoring Ansible‑based configuration, bringing Linux automation into the same governance model already used for Windows. Configuration is treated as policy—declarative, auditable, and continuously enforced—with compliance results surfaced in familiar Azure dashboards.
What’s included in the private preview

In this preview, you can:

- Use Azure Policy to trigger Ansible playbook execution on Azure and Azure Arc–enabled Linux machines
- Eliminate the need for a dedicated Ansible control node
- Enable drift detection and automatic remediation by default
- View playbook execution status and compliance results directly in the Azure Policy compliance dashboard, alongside your other policies

This provides a unified security, compliance, and management experience across Windows and Linux machines—whether they’re running in Azure or connected through Azure Arc—while using your existing Ansible investments.

Join the private preview

If you’re interested in helping shape the future of Ansible‑based configuration management in Azure Arc, we’d love to partner with you. We’re especially interested in hearing your stories around usability, compliance reporting, and real‑world operational workflows.

👉 Sign up for the private preview and we'll reach out to you.

We’ll continue investing in deeper Linux parity, broader scenarios, and tighter integration across Azure Arc’s security, governance, and compliance experiences. We look forward to enhancing your unified Azure Arc experience for deploying, governing, and remediating configuration with Ansible—bringing consistent security, compliance, and management to Windows and Linux machines not only in Azure, but also across on‑premises and other public clouds.

Announcing Public Preview of Argo CD extension on AKS and Azure Arc enabled Kubernetes clusters
We are excited to announce the public preview of the Argo CD extension for Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters. As GitOps becomes the standard for deploying and operating applications at scale, enterprises need a way to implement GitOps while staying compliant with best practices for security and identity management. The Argo CD extension delivers on this need across three pillars.

Trusted Identity and Secure Access

The Argo CD extension integrates with Microsoft Entra ID to provide a secure, enterprise-ready experience for:

- Secure authentication using Workload Identity federation to Azure Container Registry (ACR) and Azure DevOps. This removes the need for long-lived credentials or hard-coded secrets in Git repos, moving your CD pipelines closer to a true zero-trust architecture.
- Single sign-on (SSO) using existing Azure identities.

Enterprise-Grade Hardening and Security

This preview introduces several enhancements to improve your security posture:

- To minimize the attack surface, the extension’s images are built on Azure Linux, specifically engineered for reduced CVEs and improved baseline security.
- Opt in to automatic patch releases to stay current on security fixes while maintaining full control over your change management processes.

Parity with upstream Argo CD

The Argo CD extension is designed to remain fully aligned with the upstream Argo CD open‑source project, so teams can use Argo CD as they do today, with support for:

- Configuring the Argo CD extension with high availability (HA) for production‑grade deployments of critical workloads.
- Using a hub‑and‑spoke architecture for multi‑cluster GitOps scenarios.
- Application and ApplicationSet, enabling automated and scalable application delivery across large fleets of clusters.

Getting Started

We invite you to explore the Argo CD extension and provide feedback as we continue to evolve GitOps capabilities for Kubernetes.
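Cluster extensions like this one are typically enabled through the standard `az k8s-extension` flow. The sketch below is a hedged example of what enablement might look like; the extension type name "Microsoft.ArgoCD" is an assumption not confirmed by this post, so check the Microsoft Learn documentation for the exact value before running:

```shell
# Hypothetical sketch: enable the Argo CD extension on an Arc-enabled cluster.
# "Microsoft.ArgoCD" as the extension type is an assumption; verify it in the docs.
az k8s-extension create \
  --name argocd \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --cluster-type connectedClusters \
  --extension-type Microsoft.ArgoCD
```

For AKS clusters, the same command would use `--cluster-type managedClusters` instead of `connectedClusters`.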
To get started today, you can enable the extension on your clusters using the Azure CLI. Argo CD extension management via the Azure portal will be available in a few weeks.

Building Microsoft’s Sovereign AI on Azure Local with NVIDIA RTX PRO and Next Gen NVIDIA Rubin
This blog explores how Azure Local, in partnership with NVIDIA, enables governments and regulated industries to build and operate Sovereign AI within their own trusted boundaries. From enterprise AI acceleration available today with NVIDIA RTX PRO™ Blackwell GPUs to a forward‑looking preview of next‑generation NVIDIA Rubin support, Azure Local provides a consistent platform to run advanced AI workloads—connected or fully disconnected—without sacrificing control, compliance, or governance. Together with Foundry Local, AKS on Azure Local, and Azure Arc, customers can bring AI closer to sensitive data and evolve their Sovereign Private Cloud strategies over time with confidence.

Azure Arc Server Feb 2026 Forum Recap
Please find the recording for the monthly Azure Arc Server Forum on YouTube! During the February 2026 Azure Arc Server Forum, we discussed:

- Arc Server Reporting & Dashboard (Jeff Pigot, Sr. Solution Engineer): Check out this awesome visual reporting bringing together different management services and experiences across Azure Arc-enabled servers, on GitHub at the Arc Software Assurance Benefits Dashboard.
- VM Applications (Yunis Hussein, Product Manager): Shared the private preview experience and capabilities for third-party application deployment and patching on Azure Arc-enabled servers. Please fill out this form to participate in the private preview.
- Windows Server 2016 ESUs enabled by Azure Arc: Portal Experience Feedback (George Enninful): Please sign up on the feedback form.

To sign up for the Azure Arc Server Forum and newsletter, please register with contact details at https://aka.ms/arcserverforumsignup/. For the latest agent release notes, check out What's new with Azure Connected Machine agent - Azure Arc | Microsoft Learn.

Our March 2026 forum will be held on Thursday, March 26 at 9:30 AM PST / 12:30 PM EST. We look forward to you joining us. Thank you!

Announcing the General Availability of the Azure Arc Gateway for Arc-enabled Kubernetes!
We’re excited to announce the General Availability of Arc gateway for Arc‑enabled Kubernetes. Arc gateway dramatically simplifies the network configuration required to use Azure Arc by consolidating outbound connectivity through a small, predictable set of endpoints. For customers operating behind enterprise proxies or firewalls, this means faster onboarding, fewer change requests, and a smoother path to value with Azure Arc.

What’s new

To Arc‑enable a Kubernetes cluster, customers previously had to allow 18 distinct endpoints. With Arc gateway GA, you can do the same with just 9, a 50% reduction that removes friction for security and networking teams.

Why This Matters

Organizations with strict outbound controls often spend days, or weeks, coordinating approvals for multiple URLs before they can onboard resources to Azure Arc. By consolidating traffic to a smaller set of destinations, Arc gateway:

- Accelerates onboarding for Arc‑enabled Kubernetes by cutting down the proxy/firewall approvals needed to get started.
- Simplifies operations with a consistent, repeatable pattern for routing Arc agent and extension traffic to Azure.

How Arc gateway works

Arc gateway introduces two components that work together to streamline connectivity:

- Arc gateway (Azure resource): A single, unique endpoint in your Azure tenant that receives incoming traffic from on‑premises Arc workloads and forwards it to the right Azure services. You configure your enterprise environment to allow this endpoint.
- Azure Arc Proxy (on every Arc‑enabled Kubernetes cluster): A component of the Arc Kubernetes agent that routes agent and extension traffic to Azure via the Arc gateway endpoint. It’s part of the core Arc agent; no separate install is required.

At a high level, traffic flows: Arc-enabled Kubernetes agent → Arc Proxy → Enterprise Proxy → Arc gateway → Target Azure service.
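In CLI terms, this flow is set up by creating the gateway resource and then pointing clusters at it during onboarding. The sketch below is a hedged, hypothetical example; the exact command and parameter names (notably `--gateway-resource-id`) are assumptions based on the `arcgateway` and `connectedk8s` CLI extensions, so confirm them against the Arc gateway documentation:

```shell
# Hypothetical sketch: create an Arc gateway resource, then onboard a cluster
# through it. Command shapes and parameter names are assumptions; verify them
# in the Arc gateway documentation for your CLI extension versions.
az arcgateway create \
  --name my-arc-gateway \
  --resource-group my-rg \
  --location eastus

gatewayId=$(az arcgateway show \
  --name my-arc-gateway \
  --resource-group my-rg \
  --query id -o tsv)

az connectedk8s connect \
  --name my-cluster \
  --resource-group my-rg \
  --gateway-resource-id "$gatewayId"
```

Once the gateway resource exists, the same resource ID can be reused when onboarding additional clusters behind the same enterprise proxy.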
Scenario Coverage

As part of this GA release, Arc-enabled Kubernetes onboarding and other common Arc‑enabled Kubernetes scenarios are supported through Arc gateway, including:

- Arc-enabled Kubernetes cluster connect
- Arc-enabled Kubernetes resource view
- Custom Locations
- Azure Policy's extension for Azure Arc

For other scenarios, including Microsoft Defender for Containers, Azure Key Vault, and Container Insights in Azure Monitor, some customer‑specific data plane destinations (e.g., your Log Analytics workspaces, storage accounts, or Key Vault URLs) still need to be allow‑listed per your environment. Please consult the Arc gateway documentation for the current scenario‑by‑scenario coverage and any remaining per‑service URLs.

Get started

1. Create an Arc gateway resource using the Azure portal, Azure CLI, or PowerShell.
2. Allow the Arc gateway endpoint (and the small set of core endpoints) in your enterprise proxy/firewall.
3. Onboard or update clusters to use your Arc gateway resource.

For step‑by‑step guidance, see the Arc gateway documentation on Microsoft Learn.

FAQs

Does Arc gateway require new software on my clusters? No additional installation is needed; Arc Proxy is part of the standard Arc-enabled Kubernetes agent.

Will every Arc scenario route through the gateway today? Arc enablement and other core scenarios are covered at GA; some customer‑specific data plane endpoints (for example, Log Analytics workspace FQDNs) may still need to be allowed. Check the docs for the latest coverage details.

What is the status of Arc gateway for other infrastructure types? Arc gateway is already GA for Arc-enabled servers and Azure Local.

Tell us what you think

We’d love your feedback on Arc gateway GA for Kubernetes: what worked well, what could be improved, and which scenarios you want next. Use the Arc gateway feedback form to share your input with the product team.

Announcing Public Preview: Simplified Machine Provisioning for Azure Local
Deploying infrastructure at the edge has always been challenging. Whether it’s retail stores, factories, branch offices, or remote sites, getting servers racked, configured, and ready for workloads often requires skilled IT staff on-site. That process is slow, expensive, and error-prone, especially when deployments need to happen at scale. To address this, we’re introducing the public preview of Simplified Machine Provisioning for Azure Local - a new way to provision Azure Local hardware with minimal onsite interaction, while maintaining centralized control through Azure. This new approach enables customers to provision hardware by racking, powering on, and letting Azure do the rest.

New Machine Provisioning

Simplified machine provisioning shifts configuration to Azure, reducing the need for technical expertise on-site. Instead of manually configuring each server locally, IT teams can now:

- Define provisioning configuration centrally in Azure
- Securely complete provisioning remotely with minimal steps
- Automate provisioning workflows using ARM templates and ensure consistency across sites

Built on Open Standards

Simplified machine provisioning on Azure Local is based on the FIDO Device Onboarding (FDO) specification, an industry-standard approach for securely onboarding devices at scale. FDO enables:

- Secure device identity and ownership transfer, protecting machines with zero-trust supply chain security
- A consistent onboarding model across device classes; this foundation can extend beyond servers to broader edge scenarios

Centralized Site-Based Configuration in Azure Arc

The new machine provisioning flow uses Azure Arc sites, allowing customers to define configuration once and apply it consistently across multiple machines. In Azure Arc, a site represents a physical business location (store/factory/campus) and the set of resources associated with it.
It enables targeted operations and configuration at a per‑site level (or across many sites) for consistent management at scale. With site-based configuration, customers can:

- Create and manage machine provisioning settings centrally in the Azure portal
- Define networking and environment configuration at the site level
- Reuse the same configuration as new machines are added

Minimal Onsite Interaction

Simplified provisioning is designed to minimize onsite effort. On-site staff only rack and power on the hardware and insert the prepared USB drive; no deep infrastructure or Azure expertise is required. After exporting the ownership voucher and sharing it with IT, the remaining provisioning is completed remotely by IT teams through Azure. The prepared USB drive is created using a first‑party Microsoft USB Preparation Tool that comes with the maintenance environment* package available through the Azure portal, enabling consistent, repeatable creation of bootable installation media.

*Maintenance environment: a lightweight bootstrap OS that connects the machine to Azure, installs the required Azure Arc extensions, and then downloads and installs the Azure Local operating system.

End-to-End Visibility into Deployment

Customers get visibility into deployment progress, which helps them quickly identify where a deployment is in the process and respond faster when issues arise. They can check the status using the provisioning experience in the Azure portal or the Configurator app.

Seamless Transition to Cluster Creation and Workloads

Once provisioning is complete, machines created through this flow are ready for Azure Local cluster creation. Customers can proceed with cluster setup and workload deployment.

How it works

At a high level, this simpler way of machine provisioning looks like this:

Minimal onsite setup:

1. Prepare a USB drive using the machine provisioning software.
2. Insert the prepared USB drive and boot the machine.
3. Share the machine ownership voucher with the IT team.
Provision remotely:

1. Create an Azure Arc site.
2. Configure networking, subscription, and deployment settings.
3. Download provisioning artifacts from the Azure portal.
4. Deploy the Azure Local cluster using existing flows in Azure Arc.

Once provisioning is complete, the environment is ready for cluster creation and workload deployment on Azure Local. Status and progress are visible in both the Azure portal and the Configurator app. IT teams can monitor, troubleshoot, and complete provisioning remotely.

Available Now in Public Preview

This new experience empowers organizations to deploy Azure Local infrastructure faster, more consistently, and at scale, while minimizing on-site complexity. We invite customers and partners to explore the preview and help us shape the future of edge infrastructure deployment. Try it at https://aka.ms/provision/tryit. Refer to the documentation for more details.

Upgrade Azure Local operating system to new version
11/14/2025 Revision: The recommended upgrade paths have changed with the Azure Local 2510 release, and the information in this blog is now outdated. Please refer to the following release notes for the latest information: Azure Local release information.

Today, we’re sharing more details about the end of support for Azure Local with OS version 25398.xxxx (23H2) on October 31, 2025. After this date, monthly security and quality updates stop, and Microsoft Support remains available only for upgrade assistance. Your billing continues, and your systems keep working, including registration and repair. There are several options to upgrade to Azure Local with OS version 26100.xxxx (24H2), depending on which scenario applies to you.

Scenario #1: You are on the Azure Local solution, with OS version 25398.xxxx

If you're already running the Azure Local solution with OS version 25398.xxxx, no action is required. You will automatically receive the upgrade to OS version 26100.xxxx via a solution update to 2509. See Azure Local, version 23H2 and 24H2 release information - Azure Local | Microsoft Learn for the latest version of the diagram. If you are interested in upgrading to OS version 26100.xxxx before the 2509 release, there will be an opt-in process available in the future with production support.

Scenario #2: You are on Azure Stack HCI and haven’t performed the solution upgrade yet

Scenario #2a: You are still on Azure Stack HCI, version 22H2

With the 2505 release, a direct upgrade path from the version 22H2 OS (20349.xxxx) to the 24H2 OS (26100.xxxx) has been made available. To ensure a validated, consistent experience, we have reduced the process to using the downloadable media and PowerShell to install the upgrade. If you’re running the Azure Stack HCI, version 22H2 OS, we recommend taking this direct upgrade path to the version 24H2 OS.
Skipping the upgrade to the version 23H2 OS means one less upgrade hop and will help reduce reboots and maintenance planning prior to the solution upgrade. Afterwards, perform the post-OS-upgrade tasks and validate solution upgrade readiness. Consult with your hardware vendor to determine whether the version 24H2 OS is supported before taking the direct upgrade path. The solution upgrade for systems on the 24H2 OS is not yet supported but will be available soon.

Scenario #2b: You are on Azure Stack HCI, version 23H2 OS

If you performed the upgrade from the Azure Stack HCI, version 22H2 OS to the version 23H2 OS (25398.xxxx), but haven’t applied the solution upgrade, then we recommend that you perform the post-OS-upgrade tasks, validate solution upgrade readiness, and apply the solution upgrade.

Diagram of Upgrade Paths

Conclusion

We invite you to identify which scenarios apply to you and take action to upgrade your systems. On behalf of the Azure Local team, we thank you for your continuous trust and feedback!

Learn more

To learn more, refer to the upgrade documentation. For known issues and remediation guidance, see the Azure Local Supportability GitHub repository.

Announcing the preview of Azure Local rack aware cluster
As of 1/22/2026, Azure Local rack aware cluster is now generally available! To learn more, see Overview of Azure Local rack aware clustering - Azure Local | Microsoft Learn.

We are excited to announce the public preview of Azure Local rack aware cluster! We previously published a blog post with a sneak peek of Azure Local rack aware cluster, and now we're excited to share more details about its architecture, features, and benefits.

Overview of Azure Local rack aware cluster

Azure Local rack aware cluster is an advanced architecture designed to enhance fault tolerance and data distribution within an Azure Local instance. This solution enables you to cluster machines that are strategically placed across two physical racks in different rooms or buildings, connected by high bandwidth and low latency within the same location. Each rack functions as a local availability zone, spanning layers from the operating system to Azure Local management, including Azure Local VMs. The architecture leverages top-of-rack (ToR) switches to connect machines between rooms. This direct connection supports a single storage pool, with rack aware clusters distributing data copies evenly between the two racks. Even if an entire rack encounters an issue, the other rack maintains the integrity and accessibility of the data. This design is valuable for environments needing high availability, particularly where it is essential to avoid rack-level data loss or downtime from failures like fires or power outages.

Key features

Starting in Azure Local version 2510, this release includes the following key features for rack aware clusters:

- Rack-Level Fault Tolerance & High Availability: Clusters span two physical racks in separate rooms, connected by high bandwidth and low latency. Each rack acts as a local availability zone. If one rack fails, the other maintains data integrity and accessibility.
- Support for Multiple Configurations: The architecture supports 2 up to 8 machines, enabling scalable deployments for a wide range of workloads.
- Scale-Out by Adding Machines: Easily expand cluster capacity by adding machines, supporting growth and dynamic workload requirements without redeployment.
- Unified Storage Pool with Even Data Distribution: Rack aware clusters offer a unified storage pool with Storage Spaces Direct (S2D) volume replication, automatically distributing data copies evenly across both racks. This ensures smooth failover and reduces the risk of data loss.
- Azure Arc Integration and Management Experience: Enjoy native integration with Azure Arc, enabling consistent management and monitoring across hybrid environments—including Azure Local VMs and AKS—while maintaining the familiar Azure deployment and operational experience.
- Deployment Options: Deploy via the Azure portal or ARM templates, with new inputs and properties in the Azure portal for rack aware clusters.
- Provision VMs in Local Availability Zones via the Azure Portal: Provision Azure Local virtual machines directly into specific local availability zones using the Azure portal, allowing for granular workload placement and enhanced resilience.
- Upgrade Path from Preview to GA: Deploy rack aware clusters with the 2510 public preview build and update to General Availability (GA) without redeployment—protecting your investment and ensuring operational continuity.

Get started

The preview of rack aware cluster is now available to all interested customers. We encourage you to try it out and share your valuable feedback. To get started, visit our documentation: Overview of Azure Local rack aware clustering (Preview) - Azure Local | Microsoft Learn.

Stay tuned for more updates as we work towards general availability in 2026. We look forward to seeing how you leverage Azure Local rack aware cluster to power your edge workloads!