Resource Guide: Making Physical AI Practical for Real-World Industrial Operations
Microsoft's adaptive cloud approach unifies a cloud-to-edge management plane, data plane, and intelligence platform, enabling organizations to turn operational technology (OT) data into intelligent, autonomous action without requiring everything to live in the cloud. At the center of this approach are key foundational technologies:

| Key Purpose | Offering |
| --- | --- |
| Direct-to-cloud device management + telemetry ingestion | Azure IoT Hub |
| Industrial connectivity + edge data plane | Azure IoT Operations |
| Unified analytics + real-time intelligence | Microsoft Fabric |
| On-device AI inferencing runtime | Foundry Local |

Gartner recognition: Microsoft was named a Leader in the 2025 Gartner® Magic Quadrant™ for Global Industrial IoT Platforms.

This blog walks through where to get started with each.

1. Manage Cloud-Connected Devices and Telemetry with Azure IoT Hub

Azure IoT Hub is a fully managed cloud service that enables secure bidirectional communication: device-to-cloud telemetry ingestion, cloud-to-device command execution, per-device authentication, remote management, and more. Telemetry from IoT Hub can also be routed downstream into analytics platforms such as Microsoft Fabric for visualization or AI modeling.

Recommended Usage: Devices that use IoT Hub are distributed, stand-alone, fixed-function devices. They typically do not require cloud-managed containerized workloads or cloud-managed proximal industrial protocol connectivity. Examples of appropriate device-to-cloud IoT Hub endpoint devices include water monitoring stations, vehicle telematics, and distributed fluid-level sensors.

Resources

Current in-market services overview:
- IoT Hub: What is Azure IoT Hub?
- Azure IoT Hub DPS: Overview of Azure IoT Hub Device Provisioning Service - Azure IoT Hub Device Provisioning Service
- ADU: Introduction to Device Update for Azure IoT Hub

Building scalable solutions with the Azure IoT platform:
- Best practices for large-scale IoT deployments - Azure IoT Hub Device Provisioning Service
- Scale Out an Azure IoT Hub-based Solution to Support Millions of Devices - Azure Architecture Center
- Azure IoT Hub scaling

Try out our preview of new IoT Hub capabilities (integration with Azure Device Registry and certificate management):
- Learn more about these capabilities on our blog post: Azure IoT Hub + Azure Device Registry (Preview Refresh): Device Trust and Management at Fleet Scale…
- Integration with Azure Device Registry (preview): Integration with Azure Device Registry (preview) - Azure IoT Hub
- Microsoft-backed X.509 certificate management (preview): What is Microsoft-backed X.509 Certificate Management (Preview)? - Azure IoT Hub
- How to start with the preview: Deploy IoT Hub with ADR integration and certificate management (Preview) - Azure IoT Hub

2. Connect Industrial Assets with Azure IoT Operations

Azure IoT Operations provides a unified data plane for the edge that runs on Azure Arc-enabled Kubernetes clusters and supports open industrial standards. It allows organizations to connect and capture equipment telemetry, normalize OT data locally, route hot-path signals to real-time analytics, securely manage layered industrial networks, and more. Edge-processed data can then be sent upstream to Microsoft Fabric for AI-driven analysis.

Recommended Usage: Azure IoT Operations is intended to be the data plane for an adaptive cloud deployment, extending the management, data, and AI capabilities of the Microsoft cloud to an on-premises device. This device binds to these cloud planes, providing a platform for local data processing and tolerating intermittent connectivity. Target devices range from a small gateway-style PC to a full data center.
Azure IoT Operations endpoints enable cloud-managed containerized workloads and cloud-managed proximal industrial protocol connectivity. Examples of appropriate adaptive cloud and Azure IoT Operations endpoints include on-robot computers, industrial machine controllers, retail store sensor/vision processing, and top-of-factory site infrastructure for line-of-business applications.

Resources
- Azure IoT Operations Overview
- Azure IoT Operations Documentation Hub
- Quickstart: explore-iot-operations/quickstart at main · Azure-Samples/explore-iot-operations
- Open-source framework for scaling robotics from simulation to production on Azure + NVIDIA: microsoft/physical-ai-toolchain
- How we built the demo: explore-iot-operations/quickstart at main · Azure-Samples/explore-iot-operations
- Edge-AI: microsoft/edge-ai: Production-ready Infrastructure as Code, applications, pluggable components, and…

Latest Announcements & Blogs
- Making Physical AI Practical for Real-World Industrial Operations: Part 1 | Microsoft Community Hub
- Making Physical AI Practical for Real-World Industrial Operations: Part 2 | Microsoft Community Hub
- Unlock Industrial Intelligence | Microsoft Hannover Messe 2026
- From pilots to production: How Microsoft and partners are accelerating intelligent operations

3. Advanced Analytics with Microsoft Fabric

Microsoft Fabric delivers a unified, end-to-end analytics platform that transforms streaming OT telemetry into real-time insights and live dashboards. Fabric Operations Agents monitor industrial signals to recommend targeted actions, while Fabric IQ provides a shared semantic foundation that enables AI agents to reason over enterprise data with business context. Together, these capabilities turn live industrial data into AI-powered operational intelligence.
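The normalize-then-route pattern described above (shape OT telemetry at the edge, send hot-path signals to real-time analytics, batch the rest) can be sketched in a few lines of plain Python. This is a minimal illustration with hypothetical field names and thresholds, not an Azure IoT Operations or Fabric API; in production, Azure IoT Operations data flows perform this transformation:

```python
# Minimal sketch of edge-side normalize-and-route logic. The field names
# ("NodeId", "DisplayName", ...) mimic OPC UA-style reads but are
# illustrative only, not an Azure IoT Operations schema.
def normalize(raw: dict) -> dict:
    """Map a vendor-specific reading onto one canonical telemetry record."""
    return {
        "asset_id": raw["NodeId"].split(";")[-1],
        "metric": raw["DisplayName"].lower(),
        "value": float(raw["Value"]),
        "ts": raw["SourceTimestamp"],
    }

def route(record: dict, hot_threshold: float = 85.0) -> tuple[str, dict]:
    """Send out-of-range values to the hot path; everything else to batch."""
    path = "hot" if record["value"] >= hot_threshold else "cold"
    return path, record

# Example reading from a hypothetical press-temperature tag:
reading = {
    "NodeId": "ns=2;s=Line1.Press3.Temp",
    "DisplayName": "Temperature",
    "Value": "91.4",
    "SourceTimestamp": "2025-01-01T00:00:00Z",
}
path, record = route(normalize(reading))  # 91.4 >= 85.0, so path is "hot"
```

The same shape scales out: the hot path feeds a real-time analytics sink while cold records are batched upstream for Fabric-side modeling.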
Get Started
- Get Started with Microsoft Fabric Learning Path
- Fabric Real-Time Intelligence documentation - Microsoft Fabric | Microsoft Learn
- Create and Configure Operations Agents - Microsoft Fabric | Microsoft Learn
- Fabric IQ documentation - Microsoft Fabric | Microsoft Learn

4. Run AI Models On-Device with Foundry Local

Foundry Local extends on-device AI to Arc-enabled Kubernetes edge clusters, providing a Microsoft-validated inferencing layer for running AI models in industrial, disconnected, or sovereign environments.

Get Started
- Foundry Local on Azure Local Documentation - link
- Participate in the Foundry Local on Azure Local preview form - link
- Foundry Local on Azure Local: Helm deployment demo - link

Customer Stories
- Chevron: Chevron plans facilities of the future with Azure IoT Operations
- Husqvarna: Husqvarna Group Boosts Operational Efficiency with Azure Adaptive Cloud
- Ecopetrol: Azure IoT Operations and Azure IoT for energy help Ecopetrol optimize energy distribution while lowering operational costs
- P&G: Procter & Gamble cuts model deployment time up to 90% with Azure IoT Operations
- Toyota: Toyota Industries innovates its paint shop processes with Azure industrial AI and Azure IoT Hub

Bringing AI to the Factory Floor with Foundry Local - Now in Public Preview on Azure Local
Key capabilities in this preview

Foundry Local exposes standard REST and OpenAI-compatible APIs, enabling IT and AI teams to deploy and operate local AI workloads using familiar, cloud-aligned patterns across edge and on-premises environments. This public preview delivers the following capabilities:

- Azure Arc extension for Foundry Local: Deploy and manage Foundry Local via an Azure Arc extension, enabling consistent install, configure, update, and governance workflows across Arc-enabled Kubernetes clusters, in addition to Helm-based installation.
- Built-in generative models from the Foundry Local catalog: Deploy pre-built generative models directly from the Foundry Local model catalog using a simple control-plane API request.
- Bring-your-own predictive models (ONNX) from OCI registries: Deploy custom predictive models (such as ONNX models), securely pulled from customer-managed OCI registries and run locally.
- REST and OpenAI-compatible inference endpoints: Consume both generative and predictive models through standard HTTP endpoints.
- Multi-model orchestration for agent-style applications: Enable applications that coordinate multiple local models (for example, generative models guiding calls to predictive models) within a single Kubernetes cluster.
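Because the inference endpoints are OpenAI-compatible, clients need nothing beyond plain HTTP. A minimal request sketch follows; the base URL, port, and model name are assumptions, so substitute your deployment's actual values:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def post_chat(base_url: str, body: dict) -> dict:
    """POST the request to an OpenAI-compatible endpoint (network call)."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_chat_request("phi-4-mini", "Explain this vibration anomaly.")
# Hypothetical in-cluster address -- uncomment against a live deployment:
# answer = post_chat("http://foundry-local.foundry.svc.cluster.local:8080", body)
```

Since the surface is OpenAI-compatible, existing OpenAI SDK clients can also be pointed at the same base URL instead of hand-rolling HTTP.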
Running Foundry Local on an Azure Local single node gives you:

- A validated, supported hardware foundation for running AI inference at the edge, from compact 1U nodes on the factory floor to rugged form factors in remote sites, using hardware from the Azure Local catalog
- AKS on Azure Local as the deployment target, so Foundry Local runs as a containerized workload managed by Kubernetes, the same operational model you use for any other workload on the cluster
- GPU access through the NVIDIA device plugin on AKS, giving Foundry Local's ONNX Runtime direct access to the node's discrete GPU without requiring Windows or host-OS-level configuration

Two installation options for single-node deployment

The preview includes the Foundry Local Azure Arc extension, providing a consistent installation, deployment, and lifecycle management experience through Azure Arc, while also supporting Helm-based installation. Choose one of two installation paths:

Option 1 - Arc-enabled Kubernetes extension

Recommended when: your organization manages multiple Azure Local instances and wants Microsoft to handle the deployment lifecycle (version updates, configuration drift detection, health monitoring) through the Azure portal, without the team needing to manage Helm releases manually.

Arc-enabled Kubernetes extensions deploy and manage workloads on AKS clusters registered with Azure. The extension operator runs in the cluster and reconciles the desired state declared in Azure, which means you don't need direct kubectl or helm access to the node to push updates. This is the lower-operational-overhead path for OT teams who are not Kubernetes specialists.

Once installed, the extension appears in the Azure portal under your AKS cluster's Extensions blade. Model updates and configuration changes are pushed by modifying the extension configuration in Azure; no shell access to the node is required.
For disconnected or intermittently connected deployments, the extension operator caches its desired state and continues operating; it reconciles with Azure when connectivity resumes.

Option 2 - Helm chart

Recommended when: your team manages AKS workloads with Helm or GitOps (Flux), and you need precise control over GPU resource allocation, node affinity, model pre-loading, or persistent volume configuration.

The Helm chart gives you full control over the deployment manifest. You decide exactly how much GPU memory is requested per pod, which node the inference pod is pinned to, and what StorageClass backs the model cache. This matters on a single-node Azure Local deployment, where you're sharing one physical GPU between the inference workload and potentially other AKS workloads.

With Helm you can also integrate with Flux for GitOps-managed deployment, which is useful when you manage multiple Azure Local single-node instances across plant sites and want to push model or configuration updates from a central Git repository.

Note: Verify the chart repository URL, chart name, and exact values.yaml parameters from the official Foundry Local documentation before deploying to production.
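As the note above says, the exact chart parameters must come from the official documentation. Purely to illustrate the kinds of knobs Helm exposes for GPU allocation, node pinning, and the model cache, a hypothetical values.yaml fragment might look like this (every key name below is an assumption, not the real chart schema):

```yaml
# Hypothetical values.yaml sketch -- key names are illustrative only;
# verify against the official Foundry Local chart before use.
inference:
  resources:
    limits:
      nvidia.com/gpu: 1          # request the node's single discrete GPU
  nodeSelector:
    kubernetes.io/hostname: azlocal-node-01   # pin to the single node
modelCache:
  persistentVolume:
    storageClassName: local-nvme # StorageClass backing the model cache
    size: 100Gi
```

With values like these under Git, a Flux HelmRelease can roll the same configuration out to every plant-site instance.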
Choosing Between the Two

| | Helm Chart | Arc Extension |
| --- | --- | --- |
| Authentication | API key | Entra ID |
| Version upgrades | Manual helm upgrade or Flux | Automatic, managed by Microsoft |
| GitOps compatible | Yes (Flux HelmRelease) | Yes (via Azure Policy / desired state) |
| Requires cluster access | Yes | No (after initial registration) |
| Best for | Platform engineers, custom configs | OT-managed sites, multi-site fleet |
| Disconnected operation | Works after initial deploy | Works; reconciles on reconnect |
| Control plane | Kubernetes-native management (kubectl) | Kubernetes-native management + REST API control plane |

Early Customer Validation and Key Scenarios

Early customer validation is shaping the preview, helping ensure Foundry Local meets real-world requirements for latency, data control, and operating in constrained or disconnected environments across industries such as energy, manufacturing, government, financial services, and retail. Based on this early feedback, customers are prioritizing scenarios such as:

Sovereign and regulated
- On-site inference with data, models, and processing under customer control
- Decision support in disconnected or restricted-network environments
- In-jurisdiction processing for sensitive records and casework
- Real-time detection and situational awareness within secure facilities

Industrial and critical infrastructure
- Edge operations assistants combining sensor telemetry with conversational AI
- Low-latency quality inspection and process verification on factory floors
- Predictive maintenance for remote or intermittently connected equipment
- Local safety monitoring and operational oversight close to systems

This input is guiding improvements across deployment flows, the model catalog experience, hardware coverage, telemetry visibility, and documentation, so teams can evaluate and adopt Foundry Local more quickly and confidently in the environments above.
Examples:

- CNC anomaly explanation: A machine vision system on a CNC line classifies a surface defect and passes the classification JSON to the Foundry Local endpoint. Phi-4-mini generates a plain-language root-cause hypothesis for the operator, referencing the specific machining parameters.
- Disconnected safety procedure lookup: An offshore platform or remote mine site loses WAN connectivity. The Foundry Local pods continue serving requests from the AKS cluster on the Azure Local node: Kubernetes keeps the pods running, the model is already on the local PersistentVolume, and no external dependency is required. Workers query safety procedures (LOTO sequences, chemical handling) from an intranet application backed by the same inference endpoint. Qwen2.5-7B fits within 8-12 GB of VRAM and supports a 32K-token context window, making it viable for inline procedure retrieval without a separate vector database, which is useful when plant-floor infrastructure is minimal.

Foundry Local for Devices and Foundry Local on Azure Local: What's Different

Foundry Local for devices reached general availability for developer devices: Windows 10/11, macOS (Apple Silicon), and Android. That release targets a specific scenario: a developer or end user running AI inference on their own machine, with the model executing locally on their CPU, GPU, or NPU. The install is a single command (winget or brew), the service runs directly on the host OS, and no Azure subscription or infrastructure is required. It is a developer tool and an application-embedded runtime. A general overview of Foundry Local is available here: What is Foundry Local? - Foundry Local | Microsoft Learn

The public preview for Azure Local single node is a different deployment target built for a different operational context. The runtime is the same (ONNX Runtime, the same model catalog, the same OpenAI-compatible API), but where it runs, how it is deployed, and how it is managed are entirely different.
| | Foundry Local for Devices (GA) | Foundry Local on Azure Local Single Node (Preview) |
| --- | --- | --- |
| Target | Developer machines, end-user devices | Enterprise edge servers on the factory floor or remote site |
| OS | Windows 10/11, macOS, Android | Linux container on AKS on Azure Local |
| Hardware | Laptops, workstations, NPU-equipped devices | Validated server hardware from the Azure Local catalog |
| GPU access | Direct host GPU (CUDA, DirectML, Apple Neural Engine) | NVIDIA device plugin on Kubernetes |
| Installation | winget install or brew install | Arc-enabled Kubernetes extension or Helm chart |
| Lifecycle management | Manual update via winget upgrade | Managed via Helm/Flux or the Arc extension operator |
| Intended consumers | One developer or one application on one machine | Multiple applications sharing one inference endpoint on the plant network |
| Disconnected operation | Supported after model download; primarily online | Designed for persistent disconnected operation with NVMe-cached models |
| Model persistence | Local device cache | Kubernetes PersistentVolume on local storage |
| Operational model | Developer installs and manages it | Platform team deploys it; applications consume it as a service |

The short version: the GA device release is for building and running AI-enabled applications on a single machine. The Azure Local single-node preview is for deploying Foundry Local as a shared, production inference service that runs continuously on validated industrial hardware, survives WAN outages, and is consumed by multiple workloads running on the same edge cluster. If you are prototyping an application on your laptop using the GA release, the same application code, specifically the OpenAI-compatible API calls, runs unchanged against the Azure Local deployment.
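That portability claim reduces to a single configuration value. A minimal sketch (both URLs are placeholders; the device-mode port and the in-cluster Service DNS name depend on your setup):

```python
def chat_endpoint(base_url: str) -> str:
    """Join a Foundry Local base URL with the OpenAI-compatible chat path."""
    return base_url.rstrip("/") + "/v1/chat/completions"

# Prototyping against Foundry Local for devices on a laptop
# (the port here is a placeholder, not a documented default):
dev_url = chat_endpoint("http://localhost:5273")

# Same application code against the shared edge deployment: only the base
# URL changes. Hypothetical Kubernetes Service DNS name for the pods:
edge_url = chat_endpoint("http://foundry-local.foundry.svc.cluster.local:8080")
```

Everything else in the client (payloads, response parsing, retries) stays identical across the two targets.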
You change only the base_url, from localhost to the Kubernetes Service.

Built for Secure Industrial and Sovereign Operations

Foundry Local supports Microsoft's sovereign cloud principles, allowing AI workloads to operate fully locally, with customer-controlled data boundaries and governance. Integration with Azure Arc provides unified management, configuration, and monitoring across hybrid and disconnected landscapes, enabling organizations to meet stringent compliance and operational requirements while adopting advanced AI capabilities.

Learn more about Foundry Local on Azure Local

- RECOMMENDED: Participate in the Foundry Local on Azure Local preview form - link
- Foundry Local on Azure Local Documentation - link
- Reach out to the team for support requests, feedback, or suggestions: FoundryLocal_Support@microsoft.com
- Foundry Local on Azure Local: Helm deployment demo - link
- Foundry Local is now Generally Available - link

SQL Server enabled by Azure Arc Overview
Table of Contents

- What is Azure Arc-enabled SQL Server?
- Connecting SQL Server to Azure Arc (4-step onboarding)
- Your SQL Server is Now in Azure (unified management)
- SQL Best Practices Assessment
- Monitoring and Governance
- Troubleshooting Guide
- Azure Arc Demo

What You Can Learn from This Article

This article walks you through the end-to-end journey of bringing external SQL Servers (on-premises, AWS, GCP, edge) under Azure management using Azure Arc. Specifically, you'll learn how to onboard SQL Server instances via the Arc agent and a PowerShell script, navigate the unified Azure portal experience for hybrid SQL estates, enable and interpret SQL Best Practices Assessments with Log Analytics, apply Azure Policy and performance monitoring across all environments, leverage Azure Hybrid Benefit for cost savings, and troubleshoot common issues such as assessment upload failures, Wire Server 403 errors, and IMDS connectivity problems, with a real case study distinguishing Azure VM and Arc-enabled server scenarios.

1. What is Azure Arc-enabled SQL Server?

Azure Arc helps you connect your SQL Server to Azure wherever it runs. Whether your SQL Server is running on-premises in your datacenter, on AWS EC2, on Google Cloud, or at an edge location, Azure Arc brings it under Azure management. This means you get the same governance, security, and monitoring capabilities as native Azure resources, and you can streamline the migration journey to Azure, effectively manage your SQL estate at scale, and strengthen your security and governance posture. Cloud innovation. Anywhere.

SQL Server migration in Azure Arc includes an end-to-end migration journey with the following capabilities:

- Continuous database migration assessments with Azure SQL target recommendations and cost estimates.
- Seamless provisioning of Azure SQL Managed Instance as the destination target, with an option for free instance evaluation.
- Option to choose between two built-in migration methods: real-time database replication using Distributed Availability Groups (powered by the Managed Instance link feature), or log shipping via backup and restore (powered by the Log Replay Service feature).
- A unified interface that eliminates the need to use multiple tools or jump between various places in the Azure portal.
- Microsoft Copilot integration to assist you at select points during the migration journey.

Learn more in SQL Server migration in Azure Arc – Generally Available | Microsoft Community Hub

1.1 The Problem Azure Arc Solves

Organizations typically have SQL Servers scattered across multiple environments:

| Location | Challenge Without Azure Arc |
| --- | --- |
| On-premises datacenter | Separate management tools, no unified view |
| AWS EC2 instances | Multi-cloud complexity, different monitoring |
| Google Cloud VMs | Inconsistent governance and policies |
| Edge / branch offices | Limited visibility, manual compliance |
| VMware / Hyper-V | No cloud-native management features |

Azure Arc solves this by extending a single Azure control plane to ALL your SQL Servers, regardless of where they physically run.

- Azure Arc Overview (Microsoft Learn): https://learn.microsoft.com/en-us/azure/azure-arc/overview
- Architecture Reference — Administer SQL Server with Azure Arc (Microsoft Learn): https://learn.microsoft.com/en-us/azure/architecture/hybrid/azure-arc-sql-server
- Documentation Index — SQL Server enabled by Azure Arc (Microsoft Learn): https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/?view=sql-server-ver17
- SQL Server migration in Azure Arc (Community Hub): https://techcommunity.microsoft.com/blog/azuresqlblog/sql-server-migration-in-azure-arc-generally-av...

2. Connecting SQL Server to Azure Arc

This section shows how to onboard your SQL Server to Azure Arc. Once connected, your SQL Server appears in the Azure portal alongside your other Azure resources.
2.1 Step 1: Access the Azure Arc Portal

Navigation: Azure Portal → Azure Arc → Machines

Figure 1: Azure Arc | Machines, the Starting Point for Onboarding
Description: The Azure Arc Machines blade is your entry point for connecting servers outside Azure. Click the 'Onboard/Create' dropdown and select 'Onboard existing machines' to begin. The left menu shows Azure Arc capabilities: Machines, Kubernetes clusters, Data services, Licenses, and more. This is where ALL your Azure Arc-enabled servers will appear after onboarding.

2.2 Step 2: Configure Onboarding Options

Select your operating system, enable SQL Server auto-discovery, and choose a connectivity method:

Figure 2: Onboarding Configuration, Enable SQL Server Auto-Discovery
Description: Key settings: (1) Operating System: select Windows or Linux; (2) SQL Server checkbox: 'Automatically connect any SQL Server instances to Azure Arc' enables auto-discovery of SQL instances on the server; (3) Connectivity method: 'Public endpoint' for direct internet access or 'Private endpoint' for VPN/ExpressRoute. The SQL Server checkbox is crucial: it installs the SQL Server extension automatically.

💡 Important: Check the 'Connect SQL Server' option! This ensures SQL Server instances are automatically discovered and connected to Azure Arc.

2.3 Step 3: Download the Onboarding Script

Azure generates a customized PowerShell script containing your subscription details and configuration:

Figure 3: Generated Onboarding Script, Ready to Download
Description: The portal generates a PowerShell script customized for your environment. Key components: (1) agent download from the Azure CDN, (2) installation commands, (3) pre-configured connection parameters (subscription, resource group, location). Click 'Download' to save the script. Requirements note: the server needs HTTPS (port 443) access to Azure endpoints.
2.4 Step 4: Run the Script on Your Server

Copy the script to your SQL Server and execute it in PowerShell as Administrator:

Figure 4: Executing OnboardingScript.ps1 on the SQL Server
Description: PowerShell console showing script execution from the D:\Azure Arch directory. The script (OnboardingScript.ps1, 3214 bytes) installs the Azure Connected Machine Agent and registers the server with Azure Arc. During execution, a browser window opens for Azure authentication. After completion, the server appears in Azure Arc within minutes.

What happens during onboarding:

1. The Azure Connected Machine Agent is downloaded and installed
2. The agent establishes a secure connection to Azure
3. The server is registered as an Azure Arc resource
4. The SQL Server extension is installed (if the checkbox was enabled)
5. The SQL Server instance appears in Azure Arc → SQL Server

- Connect Your SQL Server to Azure Arc (Microsoft Learn): https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/connect?view=sql-server-ver17
- Prerequisites — SQL Server enabled by Azure Arc (Microsoft Learn): https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/prerequisites?view=sql-server-ver17
- Manage Automatic Connection — SQL Server enabled by Azure Arc (Microsoft Learn): https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/manage-autodeploy?view=sql-server-ver17

3. Your SQL Server is Now Visible in the Azure Control Plane

Once connected via Azure Arc, your SQL Server is projected as a resource in the Azure portal, right alongside your native Azure SQL resources. This is the power of Azure Arc: your SQL Server remains where it runs (on-premises, in AWS, or anywhere else), but Azure's management plane now extends to it. You can govern, monitor, and secure it with the same tools you use for Azure-native resources, without migrating the workload.
3.1 Unified View in Azure Portal

After onboarding, you can see your Azure Arc-enabled SQL Server through two paths:

| Navigation Path | What You See |
| --- | --- |
| Azure Arc → SQL Server | All Azure Arc-enabled SQL instances |
| Azure Arc → Machines | The host server with extensions |

3.2 Management Experience Similar to SQL Server on Azure VM

The management capabilities for Azure Arc-enabled SQL Server are very similar to those of SQL Server on Azure VM. The screenshots below show the SQL Server on Azure VM experience; Azure Arc-enabled SQL Server provides nearly identical functionality. Whether your SQL Server runs natively on an Azure VM or is connected from outside Azure via Azure Arc, you get a consistent management experience, including:

Figure 5: SQL Server Management Overview — Consistent Experience
Description: This shows the management experience for SQL Server in Azure. Whether connected via Azure Arc or running on an Azure VM, you see: SQL Server version and edition, VM details, license type configuration, storage configuration, and feature status. Azure Arc-enabled SQL Server provides a nearly identical dashboard experience, extending this unified view to your on-premises and multi-cloud servers.

3.3 Azure Hybrid Benefit - Use Your Existing Licenses

A key cost-saving advantage is that you can apply Azure Hybrid Benefit (AHB) to Azure SQL Database and Azure SQL Managed Instance, saving up to 30% or more on licensing costs by leveraging your existing Software Assurance-enabled SQL Server licenses.

Note: Azure Hybrid Benefit applies to Azure SQL Database and SQL Managed Instance. For SQL Server running on-premises or in other clouds managed via Azure Arc, AHB does not apply directly. However, Arc-enabled SQL Server provides other benefits such as centralized management, Azure-integrated security, and access to Extended Security Updates (ESUs).
Figure 6: Azure Hybrid Benefit Configuration
Description: License configuration for SQL Server on Azure VM, showing three options: Pay As You Go, Azure Hybrid Benefit (selected), and HA/DR. With Azure Hybrid Benefit, organizations with existing SQL Server licenses and active Software Assurance can save up to 30% or more on SQL Server licensing costs on Azure VMs (as reflected in the Azure portal configuration blade). Free SQL Server licenses for high availability and disaster recovery are also available for Standard and Enterprise editions.

- Configure SQL Server enabled by Azure Arc (Microsoft Learn): https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/manage-configuration?view=sql-server-ver1...
- Manage Licensing and Billing — SQL Server enabled by Azure Arc (Microsoft Learn): https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/manage-license-billing?view=sql-server-ve...

4. SQL Best Practices Assessment

One of the most valuable features available to Azure Arc-enabled SQL Server is the Best Practices Assessment, which automatically evaluates your SQL Server configuration against Microsoft's recommendations.

4.1 Prerequisites: Log Analytics Workspace

Before enabling assessment, you need a Log Analytics workspace to store the results:

Figure 7: Create Log Analytics Workspace
Description: Log Analytics workspace creation form. Fill in: Subscription, Resource Group, Name (a green checkmark indicates a valid name), and Region (choose the same region as your resources). This workspace stores assessment results, performance metrics, and logs from ALL your SQL Servers, both Azure Arc-enabled and Azure VMs.

Figure 8: Log Analytics Workspace Ready for Use
Description: Workspace overview showing: Status (Active), Pricing tier (Pay-as-you-go), and Operational issues (OK). The 'Get Started' section guides you through: (1) connect a data source, (2) configure monitoring solutions, (3) monitor workspace health.
This workspace becomes the central repository for all your SQL Server insights.

4.2 Enable SQL Best Practices Assessment

Navigate to your SQL Server (Azure Arc-enabled or Azure VM) and enable the assessment:

Figure 9: SQL Best Practices Assessment Enable Feature
Description: Assessment landing page explaining the feature: it evaluates indexes, deprecated features, trace flags, statistics, and more. Results are uploaded via the Azure Monitor Agent (AMA). Click 'Enable SQL best practices assessments' to begin configuration. This feature is available for BOTH Azure Arc-enabled SQL Server and Azure SQL VMs.

Figure 10: Assessment Configuration, Select Log Analytics Workspace
Description: Configuration panel requiring: (1) the Enable checkbox, (2) Log Analytics workspace selection, (3) a resource group for AMA. The warning 'No Log Analytics workspace is found' appears if you haven't created one yet; see Section 4.1. Once configured, assessments run on a schedule and upload results to your workspace.

4.3 Run and Review Assessment

Figure 11: Run Assessment Button
Description: After configuration, click 'Run assessment' to start the evaluation. Assessment duration varies: 5-10 minutes for small environments, 30-60 minutes for large ones. The 'View latest successful assessment' button (disabled until the first run completes) opens the results workbook.

Figure 12: Assessment Results History
Description: Assessment history showing multiple runs with different statuses: 'Scheduled' (pending), 'Completed' (results available), 'Failed - result expired' (data retention exceeded). Regular assessments help catch configuration drift over time. If you see 'Failed - upload failed', see the Troubleshooting section.
Figure 13: Assessment Recommendations, Actionable Insights
Description: Best practices workbook showing three panels: (1) Recommendation Summary with severity (High, Medium) and categories (DBConfiguration, Performance, Index, Backup), (2) Recommendation Details with target and name, (3) a Details panel showing the selected item, for example 'Enable instant file initialization' for performance improvement. High-severity items should be addressed immediately.

Severity Levels:

| Severity | Description | Action Timeline |
| --- | --- | --- |
| 🔴 High | Critical issues affecting performance or security | Address immediately |
| 🟡 Medium | Important optimizations recommended | Within 30 days |
| 🟢 Low | Nice-to-have improvements | As time permits |
| ℹ️ Info | Informational findings | Review and acknowledge |

- Configure Best Practices Assessment — SQL Server enabled by Azure Arc (Microsoft Learn): https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/assess?view=sql-server-ver17
- Troubleshoot Best Practices Assessment — SQL Server enabled by Azure Arc (Microsoft Learn): https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/troubleshoot-assessment?view=sql-server-v...
- Assess Migration Readiness — SQL Server enabled by Azure Arc (Microsoft Learn): https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/migration-assessment?view=sql-server-ver1...
- Log Analytics workspace creation: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/quick-create-workspace

5. Monitoring and Governance

With your SQL Servers connected to Azure (via Azure Arc or natively), you gain access to Azure's full monitoring and governance capabilities.

5.1 Azure Policy Compliance

Apply consistent governance policies across ALL your SQL Servers, regardless of where they run:

Figure 14: Azure Policy Compliance Dashboard
Description: Compliance dashboard showing 28% overall compliance (5 of 18 resources) and a pie chart with Compliant (green), Exempt, and Non-compliant (red) segments.
The table lists non-compliant resources (microsoft.hybridcompute type = Azure Arc-enabled servers). Use this to ensure ALL SQL Servers, whether on-premises, in the cloud, or at the edge, meet your organization's standards.

5.2 Performance Monitoring

Figure 15: Performance Monitoring Unified Dashboard
Description: Performance dashboard showing: Logical Disk Performance (C: drive 30% used), CPU Utilization (1.75% average, 5.73% 95th percentile), Available Memory (3.1 GB average). This same dashboard works for Azure Arc-enabled servers, giving you consistent visibility across your entire SQL Server estate.

5.3 Service Dependency Mapping

Figure 16: Service Map Visualize Dependencies
Description: Map view showing server FNPSVR01 with 17 processes connecting to Port 443 (7 servers) and Port 53 (1 server). Machine Summary shows FQDN, OS (Windows Server 2016), and IP address. Use this to understand application dependencies before maintenance or migration; it is available for both Azure Arc-enabled and Azure-native servers.

6. Troubleshooting Guide

This section covers common issues encountered when working with Azure Arc-enabled SQL Server and Azure SQL VMs.

6.1 Common Issues Overview

Issue | Symptoms | Azure Arc-enabled | Azure VM
Assessment Upload Failed | Status: 'Failed - upload failed' | ✅ Applies | ✅ Applies
Wire Server 403 | Agent cannot connect | ❌ N/A | ✅ Applies
IMDS Disabled | Cannot obtain token | ❌ N/A | ✅ Applies
Azure Arc Agent Connectivity | Server not appearing | ✅ Applies | ❌ N/A
SQL Login Failed | Machine account denied | ✅ Applies | ✅ Applies

6.2 Real Case Study: Assessment Upload Failed on Azure VM

Note: This case study is from an Azure VM (not Azure Arc-enabled). The Wire Server and IMDS issues are specific to Azure VMs; Azure Arc-enabled servers use different connectivity mechanisms.
Symptoms observed:
- Assessment status: 'Failed - upload failed'
- Local data collected successfully (415 issues)
- Data not appearing in Log Analytics workspace

Root causes identified from logs:
Error 1 (ExtensionLog): [ERROR] Customer disable the IMDS service, cannot obtain IMDS token.
Error 2 (WaAppAgent.log): [WARN] GetMachineGoalState() failed: 403 (Forbidden) to 168.63.129.16

Resolution for Azure VMs

Fix Wire Server (168.63.129.16) connectivity:

# Test connectivity
Test-NetConnection -ComputerName 168.63.129.16 -Port 80

# Add route if missing
route add 168.63.129.16 mask 255.255.255.255 <gateway> -p

# Add firewall rule if needed
New-NetFirewallRule -DisplayName "Allow Azure Wire Server" -Direction Outbound -RemoteAddress 168.63.129.16 -Action Allow

Fix IMDS (169.254.169.254) connectivity:

# Test IMDS
Invoke-RestMethod -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01" -Headers @{Metadata="true"}

# Add firewall rule if blocked
New-NetFirewallRule -DisplayName "Allow Azure IMDS" -Direction Outbound -RemoteAddress 169.254.169.254 -Action Allow

Test Azure Arc agent connectivity:

# Check Arc agent status
& "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" show

# Test connectivity to Azure endpoints
& "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" check

6.3 Azure Arc-enabled SQL Server Connectivity Issues

For Azure Arc-enabled servers (not Azure VMs), connectivity issues are different. Required Azure endpoints for the Azure Arc agent:

Endpoint | Port | Purpose
management.azure.com | 443 | Azure Resource Manager
login.microsoftonline.com | 443 | Azure AD authentication
*.his.arc.azure.com | 443 | Azure Arc Hybrid Identity
*.guestconfiguration.azure.com | 443 | Guest configuration

Troubleshoot Best Practices Assessment - Microsoft Learn: https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/troubleshoot-assessment?view=sql-server-v...
What is IP Address 168.63.129.16 (Wire Server) - Microsoft Learn: https://learn.microsoft.com/en-us/azure/virtual-network/what-is-ip-address-168-63-129-16
Azure Instance Metadata Service (IMDS) - Microsoft Learn: https://learn.microsoft.com/en-us/azure/virtual-machines/instance-metadata-service
Troubleshoot IMDS Connection Issues on Windows VMs - Microsoft Learn: https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/windows/windows-vm-imds-connec...
Troubleshoot Azure Windows VM Agent Issues - Microsoft Learn: https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/windows/windows-azure-guest-ag...

7. Additional Resources

Demo Deck: Azure Arc for Windows Server and SQL Server

More resources:
- Learn more about the new migration capability in Azure Arc on Microsoft Learn.
- Onboard your SQL Server to Azure Arc today.
- Learn more about continuous migration assessment from SQL Server enabled by Azure Arc.
- Download resources on github.com/microsoft/sql-server-samples

Advancing Firmware Security: Fleet Visibility and New Capabilities in Firmware Analysis
When we announced general availability of firmware analysis enabled by Azure Arc last October, our goal was clear: help organizations gain deep visibility into the security of the firmware that powers their IoT, OT, and network devices. Since then, adoption has continued to grow as customers use firmware analysis to uncover vulnerabilities, inventory software components, and secure their software supply chain.

Leading into the Hannover Messe (HMI) 2026 conference, we're excited to share the next wave of firmware analysis capabilities, delivering enhancements that help customers connect firmware risk to real-world fleet impact, prioritize vulnerabilities more effectively, scale to larger and more complex firmware images, and expand security analysis for UEFI-based platforms. These updates are driven directly by customer feedback and by the rapidly evolving threat landscape facing embedded and edge devices.

Connecting Firmware Risk to Your Deployed Fleet with Azure Device Registry (Preview)

Securing connected devices doesn't stop at identifying vulnerabilities in firmware; it requires understanding where those vulnerabilities exist in your deployed fleet and which devices are affected. We're excited to announce a new preview integration between firmware analysis enabled by Azure Arc and Azure Device Registry, bringing fleet-level visibility of IoT and OT devices directly into the firmware analysis experience. This helps customers quickly understand how many devices and assets are running a given firmware image, and which ones may be exposed to known security issues.

From firmware insights to fleet impact

Firmware analysis helps customers uncover security risks hidden deep inside the firmware running IoT, OT, and network devices: risks such as known CVEs, outdated open-source components, weak cryptography, and insecure configurations. Until now, these insights were primarily scoped to the firmware image itself.
With this new preview integration, firmware analysis now connects directly to Azure Device Registry, allowing customers to:
- See how many devices from the IoT Hub integration with ADR (preview) and assets from Azure IoT Operations are associated with a specific analyzed firmware image
- Understand the real-world blast radius of vulnerabilities discovered in firmware
- Quickly identify which devices may require patching, mitigation, or isolation

This preview bridges an important gap between security analysis and operational decision-making.

What's included in this preview

With this release, we're introducing new fleet-level context directly into the firmware analysis experience:
- A new Devices + Assets count column in the firmware analysis workspace showing how many Azure Device Registry devices and assets are running each analyzed firmware image
- A click-through experience that lets users view the list of affected devices and assets in Azure Device Registry
- Visibility spanning both devices connected via IoT Hub and assets managed through Azure IoT Operations

This information is derived by correlating firmware metadata with device and asset inventory in Azure Device Registry, giving customers immediate insight into deployment exposure.

Key use cases

- Identify vulnerable devices at scale: When critical CVEs are discovered in a firmware image, customers can immediately see how many deployed devices are impacted, without manually correlating spreadsheets, tools, or inventories.
- Prioritize remediation actions: With fleet visibility, teams can decide whether to patch devices, temporarily isolate affected devices from the network, or disable devices that pose unacceptable risk.
- Bridge security and operations teams: Security teams gain clear insight into where vulnerabilities exist, while operations teams can quickly act on specific devices and assets, all within the Azure portal.
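Fleet inventory in Azure Device Registry can also be enumerated programmatically, which helps when correlating analyzed firmware images with deployed assets. A sketch using Azure Resource Graph, assuming the Az.ResourceGraph module and the microsoft.deviceregistry/assets resource type used by Azure IoT Operations assets:

```powershell
# Enumerate Azure Device Registry assets across the signed-in subscriptions,
# as a starting point for correlating fleet inventory with firmware findings.
# Assumes: Az.ResourceGraph module; the microsoft.deviceregistry/assets type.
Connect-AzAccount

$query = @"
resources
| where type =~ 'microsoft.deviceregistry/assets'
| project name, resourceGroup, location
| order by resourceGroup asc
"@

Search-AzGraph -Query $query | Format-Table
```

The results can then be matched against the Devices + Assets counts surfaced in the firmware analysis workspace.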
This integration is especially valuable in environments where downtime, safety, or regulatory compliance matter, such as manufacturing, energy, telecommunications, and critical infrastructure.

Prioritizing Vulnerabilities with Enhanced CVE Metadata (Preview)

The number of publicly disclosed vulnerabilities continues to rise year over year, making it increasingly difficult for security teams to determine which CVEs truly require urgent action. Simply knowing that a vulnerability exists is no longer enough; teams need context to prioritize remediation efforts. With this release, firmware analysis now provides richer metadata for each discovered CVE, helping customers focus on vulnerabilities that pose the greatest real-world risk.

New CVE metadata includes:
- CISA Known Exploited Vulnerabilities (KEV) status: indicates whether a CVE is listed in the CISA KEV catalog, signaling that the vulnerability is actively exploited in the wild.
- EPSS score (Exploit Prediction Scoring System): a data-driven probability score that estimates the likelihood of a vulnerability being exploited in the next 30 days, complementing traditional severity metrics by focusing on exploitation likelihood rather than impact alone.
- Additional vulnerability context, including CVSS vectors and base scores, CWE classifications, and expanded metadata to support filtering and analysis.

Together, these enhancements make it easier to triage findings, align remediation with risk, and communicate priorities across security, engineering, and product teams.

Faster Performance for Large and Complex Firmware Images

As firmware analysis adoption has grown, we've seen customers analyze increasingly large and complex firmware images, particularly in domains like networking equipment, where a single image can generate thousands of findings. To support these scenarios, we've made architectural enhancements to the service that significantly improve performance when working with large result sets.
Key improvements include:
- Up to 90% reduction in load times of analysis results, especially for firmware images producing 10,000+ findings
- More responsive filtering and exploration of results

These changes ensure that firmware analysis remains fast and usable at scale, even for complex network and infrastructure firmware images.

Expanding UEFI Firmware Analysis (Preview)

Modern devices increasingly rely on UEFI firmware as a foundational security boundary. In this release, we're expanding our UEFI analysis capabilities to provide deeper visibility into UEFI executables and components.

New UEFI-focused capabilities include:
- Detection of OpenSSL libraries and related CVEs within UEFI firmware
- Binary hardening analysis for UEFI executables, including detection of proper configuration of Data Execution Prevention (DEP) memory protection
- Continued support for discovering cryptographic material in UEFI images, including embedded certificates and keys

This preview allows customers to evaluate the new capabilities, provide feedback, and help shape future enhancements in this area.

Note: UEFI SBOM and binary analysis features are currently in preview and intended for evaluation and feedback.

Bulk Export of Analysis Results for Supply Chain Collaboration

We also recently released a highly requested feature that makes it easier to share firmware analysis results with partners and suppliers. Customers can now:
- Bulk download analysis results across one or more firmware images
- Export results as CSV files packaged into a ZIP archive

This capability simplifies workflows such as sharing findings with device manufacturers or firmware suppliers, integrating results into downstream analysis or reporting pipelines, and supporting software supply chain security and compliance processes.

Looking Ahead

We're excited about the progress we've made with this release and what it means for customers securing IoT, OT, and network devices.
From connecting firmware risk to fleet-level impact with Azure Device Registry, to richer vulnerability prioritization, improved scalability, and deeper UEFI analysis, these enhancements reinforce firmware analysis as a critical tool for addressing some of the most challenging blind spots in modern infrastructure security. Firmware security is foundational to trustworthy systems, especially as edge devices continue to play a central role in industrial operations, networking, and data collection.

If you're already using firmware analysis and Azure Device Registry, the ADR integration preview will appear directly within the firmware analysis experience as it rolls out. We look forward to your feedback as we continue building secure, observable, and manageable digital operations with Azure. As always, please let us know what you think.

Azure Arc Server Mar 2026 Forum Recap
Please find the recording for the monthly Azure Arc Server Forum on YouTube! During the March 2026 Azure Arc Server Forum, we discussed:
- Deploying Ansible Playbooks through Machine Configuration as Azure Policy (Learn more: Announcing Private Preview: Deploy Ansible Playbooks using Azure Policy via Machine Configuration); sign up at https://aka.ms/ansible-arc-signup
- New MECM (SCCM) connector supporting Cloud Native Server Management; sign up for the Private Preview at aka.ms/arc-mecm/preview
- Automatic Agent Upgrade at Scale Enablement (Learn more: Run the latest Azure Arc agent with Automatic Agent Upgrade (Public Preview))
- TPM-backed Identity for Secure Onboarding; sign up for the Private Preview at https://aka.ms/arc-tpm-backed-identity/preview/

To sign up for the Azure Arc Server Forum and newsletter, please register with contact details at https://aka.ms/arcserverforumsignup/. For the latest agent release notes, check out What's new with Azure Connected Machine agent - Azure Arc | Microsoft Learn.

Our April 2026 forum will be held on Thursday, April 16 at 9:30 AM PST / 12:30 PM EST. We look forward to you joining us, thank you!

Automating Arc-enabled SQL Server license type configuration with Azure Policy
Azure Arc enables customers to onboard SQL Server instances, hosted on Linux or Windows, into Azure, regardless of where they are hosted: on-premises, in multicloud environments, or at the edge. Once onboarded, these resources can be managed through the Azure Portal using services like Azure Monitor, Azure Policy, and Microsoft Defender for Cloud.

An important part of this onboarding is configuring the license type on each Arc-enabled resource to match your licensing agreement with Microsoft. For SQL Server, the LicenseType property on the Arc extension determines how the instance is licensed:
- Paid (you have a SQL Server license with Software Assurance or a SQL Server subscription)
- PAYG (you are paying for SQL Server software on a pay-as-you-go basis)
- LicenseOnly (you have a perpetual SQL Server license)

Setting this correctly matters for two reasons:
- Unlocking additional benefits: customers with the Paid or PAYG license type gain access to some Azure services at no extra cost, such as Azure Update Manager and Machine Configuration, as well as exclusive capabilities like Best Practices Assessment and Remote Support
- Enabling pay-as-you-go billing: customers who do not have Software Assurance can pay for SQL Server software only when they use it via their Azure subscription by setting the license type to PAYG

Configure the license types at scale using Azure Policy

Configuring the license type on each Arc-enabled SQL Server instance can be done manually in the Azure Portal, but for large-scale operations, automation is essential. One way to implement automation is via PowerShell, as explained here: Configure SQL Server - SQL Server enabled by Azure Arc | Microsoft Learn. But here we will focus on how this can be automated using Azure Policy. An existing article, written by Jeff Pigott, explains this process for Windows Server, which inspired extending the same approach to SQL Server.

How to deploy the policy?
Deployment has two steps:
1. Create/update the Azure Policy definition and assignment
2. Start a remediation task so existing Arc-enabled SQL Server extensions are brought into compliance

You can deploy Azure Policy in multiple ways. In this article, we use PowerShell. See also: Tutorial: Build policies to enforce compliance - Azure Policy | Microsoft Learn.

Source code: microsoft/sql-server-samples/.../arc-sql-license-type-compliance. Personal repository: claestom/sql-arc-policy-license-config.

Definition and assignment creation

Download the required files:

# Optional: create and enter a local working directory
mkdir sql-arc-lt-compliance
cd sql-arc-lt-compliance

$baseUrl = "https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/manage/azure-arc-enabled-sql-server/compliance/arc-sql-license-type-compliance"
New-Item -ItemType Directory -Path policy, scripts -Force | Out-Null
curl -sLo policy/azurepolicy.json "$baseUrl/policy/azurepolicy.json"
curl -sLo scripts/deployment.ps1 "$baseUrl/scripts/deployment.ps1"
curl -sLo scripts/start-remediation.ps1 "$baseUrl/scripts/start-remediation.ps1"

Note: On Windows PowerShell 5.1, curl is an alias for Invoke-WebRequest. Use curl.exe instead, or run the commands in PowerShell 7+.

Authenticate to Azure:

Connect-AzAccount

Set your variables.
Only TargetLicenseType is required - all others are optional:

# Required
$TargetLicenseType = "PAYG"   # "Paid" or "PAYG"

# Optional (uncomment to override defaults)
# $ManagementGroupId = "<management-group-id>"   # Default: tenant root management group
# $SubscriptionId = "<subscription-id>"          # Default: policy assigned at management group scope
# $ExtensionType = "Both"                        # "Windows", "Linux", or "Both" (default)
# $LicenseTypesToOverwrite = @("Unspecified","Paid","PAYG","LicenseOnly")   # Default: all

Run the deployment script:

# Minimal: uses defaults for management group, platform, and overwrite targets
.\scripts\deployment.ps1 -TargetLicenseType $TargetLicenseType

# With subscription scope
.\scripts\deployment.ps1 -TargetLicenseType $TargetLicenseType -SubscriptionId $SubscriptionId

# With all options
.\scripts\deployment.ps1 `
    -ManagementGroupId $ManagementGroupId `
    -SubscriptionId $SubscriptionId `
    -ExtensionType $ExtensionType `
    -TargetLicenseType $TargetLicenseType `
    -LicenseTypesToOverwrite $LicenseTypesToOverwrite

Parameter notes:
- ManagementGroupId (optional): management group where the policy definition is created. Defaults to the tenant root management group when not specified
- ExtensionType (optional, default Both): Windows, Linux, or Both. When Both, a single policy definition and assignment covers both platforms
- SubscriptionId (optional): if provided, the assignment scope is the subscription (otherwise management group scope)
- TargetLicenseType (required): Paid or PAYG
- LicenseTypesToOverwrite (optional, default all): controls which current states are eligible for update. Unspecified = no current LicenseType; Paid, PAYG, LicenseOnly = explicit current values

The script also creates a system-assigned managed identity on the policy assignment and assigns required roles automatically. Role assignments include retry logic (5 attempts, 10-second delay) to handle managed identity replication delays, which helps prevent common PolicyAuthorizationFailed errors.
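After the script completes, you can verify the assignment from PowerShell. A minimal sketch, assuming the Az.Resources module; the display-name filter is illustrative, and the Properties.* paths may differ between Az module versions (newer versions expose DisplayName and Scope directly on the object):

```powershell
# List policy assignments visible at the current scope and pick out the one
# created by deployment.ps1. The display-name filter below is illustrative;
# adjust it (or drop it) to match your environment. Property paths may differ
# between Az module versions.
Get-AzPolicyAssignment |
    Where-Object { $_.Properties.DisplayName -like "*LicenseType*" } |
    Select-Object Name,
        @{ n = "DisplayName"; e = { $_.Properties.DisplayName } },
        @{ n = "Scope";       e = { $_.Properties.Scope } },
        @{ n = "PrincipalId"; e = { $_.Identity.PrincipalId } }
```

The PrincipalId column confirms that the system-assigned managed identity was created on the assignment.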
Remediation task creation

After deployment, allow a few minutes for Azure Policy to run a compliance scan for the selected scope. You can monitor this in Azure Policy → Compliance. More info: Get policy compliance data - Azure Policy | Microsoft Learn.

Set your variables. TargetLicenseType is required and must match the value used during deployment:

# Required
$TargetLicenseType = "PAYG"   # Must match the deployment target

# Optional (uncomment to override defaults)
# $ManagementGroupId = "<management-group-id>"   # Default: tenant root management group
# $SubscriptionId = "<subscription-id>"          # Default: remediation runs at management group scope
# $ExtensionType = "Both"                        # Must match the platform used for deployment

Then start remediation:

# Minimal: uses defaults for management group and platform
.\scripts\start-remediation.ps1 -TargetLicenseType $TargetLicenseType -GrantMissingPermissions

# With subscription scope
.\scripts\start-remediation.ps1 -TargetLicenseType $TargetLicenseType -SubscriptionId $SubscriptionId -GrantMissingPermissions

# With all options
.\scripts\start-remediation.ps1 `
    -ManagementGroupId $ManagementGroupId `
    -ExtensionType $ExtensionType `
    -SubscriptionId $SubscriptionId `
    -TargetLicenseType $TargetLicenseType `
    -GrantMissingPermissions

Parameter notes:
- ManagementGroupId (optional): defaults to the tenant root management group
- ExtensionType (optional, default Both): must match the platform used for the assignment
- SubscriptionId (optional): run remediation at subscription scope
- TargetLicenseType (required): must match the assignment target
- GrantMissingPermissions (optional switch): checks and assigns missing required roles before remediation starts

You can track remediation progress in Azure Policy → Remediation → Remediation tasks. It can take a few minutes to complete, depending on scope and resource count.
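Remediation progress and its effect can also be checked from PowerShell rather than the portal. A sketch, assuming the Az.PolicyInsights and Az.ConnectedMachine modules; the machine and resource group names are hypothetical, and the extension's settings property name may vary between module versions:

```powershell
# 1) List remediation tasks in the current subscription and their state.
#    Assumes the Az.PolicyInsights module.
Get-AzPolicyRemediation |
    Select-Object Name, ProvisioningState |
    Format-Table

# 2) Spot-check one machine: confirm the Arc SQL extension now carries the
#    intended LicenseType (and, for PAYG, the ConsentToRecurringPAYG property).
#    Assumes the Az.ConnectedMachine module; names below are hypothetical.
$ext = Get-AzConnectedMachineExtension `
    -ResourceGroupName "rg-arc-servers" `
    -MachineName "sqlhost01" `
    -Name "WindowsAgent.SqlServer"

# Inspect LicenseType / ConsentToRecurringPAYG in the returned settings.
# The property is named Setting in recent Az.ConnectedMachine versions.
$ext.Setting
```

This gives a quick end-to-end check: the remediation task reached a terminal state, and a sample extension actually carries the expected settings.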
Recurring Billing Consent (PAYG)

When TargetLicenseType is set to PAYG, the policy automatically includes ConsentToRecurringPAYG in the extension settings with Consented: true and a UTC timestamp. For details of this requirement, see: Move SQL Server license agreement to pay-as-you-go subscription - SQL Server enabled by Azure Arc | Microsoft Learn.

The policy also checks for ConsentToRecurringPAYG in its compliance evaluation: resources with LicenseType: PAYG but missing the consent property are flagged as non-compliant and remediated. This applies both when transitioning to PAYG and for existing PAYG extensions that predate the consent requirement (backward compatibility).

Note: Once ConsentToRecurringPAYG is set on an extension, it cannot be removed; this is enforced by the Azure resource provider. When transitioning away from PAYG, the policy changes LicenseType but leaves the consent property in place.

RBAC

When .\scripts\deployment.ps1 creates the policy assignment, it uses -IdentityType SystemAssigned. Azure then creates a managed identity for that assignment. The assignment identity needs these roles at the assignment scope (or an inherited scope):
- Azure Extension for SQL Server Deployment: allows updating Arc SQL extension settings, including LicenseType
- Reader: allows reading resource and extension state for policy evaluation
- Resource Policy Contributor: allows policy-driven template deployments required by DeployIfNotExists

This identity is used whenever DeployIfNotExists applies changes, both during regular compliance evaluation and during remediation runs. By default, the deployment script assigns these roles automatically with built-in retry logic to handle managed identity replication delays, which helps prevent common PolicyAuthorizationFailed errors.

Brownfield and Greenfield Scenarios

This policy is useful in both brownfield and greenfield Azure Arc environments.
Brownfield: existing Arc SQL inventory

In a brownfield environment, you already have Arc-enabled SQL Server resources in inventory, and the current LicenseType values might be mixed, incorrect, or missing. This is where Azure Policy is especially useful, because it gives you a controlled way to remediate the current estate at scale. Depending on how you configure targetLicenseType and licenseTypesToOverwrite, you can use the policy to:
- standardize all in-scope resources on a single value
- set LicenseType only when it is missing
- migrate a specific subset, such as Paid to PAYG
- preserve selected states while correcting only the resources that need attention

Examples:

1. Standardize everything to Paid
   targetLicenseType: Paid
   licenseTypesToOverwrite: ['Unspecified','Paid','PAYG','LicenseOnly']
   Result: every in-scope Arc SQL extension is converged to LicenseType == Paid.

2. Backfill only missing values
   targetLicenseType: Paid
   licenseTypesToOverwrite: ['Unspecified']
   Result: only resources without a configured LicenseType are updated; existing Paid, PAYG, and LicenseOnly values remain unchanged.

3. Migrate only Paid to PAYG
   targetLicenseType: PAYG
   licenseTypesToOverwrite: ['Paid']
   Result: only resources currently set to Paid are updated to PAYG; missing, PAYG, and LicenseOnly remain unchanged. When transitioning to PAYG, the policy also automatically sets ConsentToRecurringPAYG with Consented: true and a UTC timestamp, as required for recurring pay-as-you-go billing.

4. Protect existing PAYG, fix only missing or LicenseOnly
   targetLicenseType: Paid
   licenseTypesToOverwrite: ['Unspecified','LicenseOnly']
   Result: resources with no LicenseType or with LicenseOnly are updated to Paid, while existing PAYG stays untouched.

Greenfield: newly onboarded SQL Servers

In a greenfield scenario, the main value of Azure Policy is ongoing enforcement.
Once new SQL Servers are onboarded to Azure Arc and fall within the assignment scope, the policy can act as a governance control to keep LicenseType aligned with your business model. This means Azure Policy is not only a remediation mechanism for existing inventory, but also a way to continuously enforce the intended license configuration for future Arc-enabled SQL Server resources.

Azure Policy vs tagging

By default, Microsoft manages automatic deployment of the SQL Server extension for Azure. It includes an option to enforce the LicenseType setting via tags. See Manage Automatic Connection - SQL Server enabled by Azure Arc | Microsoft Learn for details. This way, all newly onboarded SQL Server instances are set to the desired LicenseType from day one. Deploying the Azure Policy is still important, to ensure that changes to the extension properties or ad-hoc additions of SQL Server instances stay compliant with your business model.

A practical way to think about it:
- Tagging ensures the initial compliance of newly connected Arc-enabled SQL Servers
- Azure Policy enforces ongoing compliance of existing Arc-enabled SQL Servers

Tools

Interested in gaining better visibility into LicenseType configurations across your estate? Below you'll find an insightful KQL query and an accompanying workbook to help track compliance.
KQL Query

resources
| where type == "microsoft.hybridcompute/machines"
| where properties.detectedProperties.mssqldiscovered == "true"
| extend machineIdHasSQLServerDiscovered = id
| project name, machineIdHasSQLServerDiscovered, resourceGroup, subscriptionId
| join kind=leftouter (
    resources
    | where type == "microsoft.hybridcompute/machines/extensions"
    | where properties.type in ("WindowsAgent.SqlServer","LinuxAgent.SqlServer")
    | extend machineIdHasSQLServerExtensionInstalled = iff(id contains "/extensions/WindowsAgent.SqlServer" or id contains "/extensions/LinuxAgent.SqlServer", substring(id, 0, indexof(id, "/extensions/")), "")
    | project License_Type = properties.settings.LicenseType, machineIdHasSQLServerExtensionInstalled
) on $left.machineIdHasSQLServerDiscovered == $right.machineIdHasSQLServerExtensionInstalled
| where isnotempty(machineIdHasSQLServerExtensionInstalled)
| project-away machineIdHasSQLServerDiscovered, machineIdHasSQLServerExtensionInstalled

Source: Configure SQL Server - SQL Server enabled by Azure Arc | Microsoft Learn.

Azure Workbook

claestom/azure-arc-sa-workbook: Azure Workbook for monitoring Software Assurance compliance across Arc-enabled servers and SQL Server instances.

Resources

- Configure SQL Server - SQL Server enabled by Azure Arc | Microsoft Learn
- Azure Policy documentation | Microsoft Learn
- Automating Windows Server Licensing Benefits with Azure Arc Policy | Microsoft Community Hub
- Recurring billing consent - SQL Server enabled by Azure Arc | Microsoft Learn
- claestom/azure-arc-sa-workbook: Azure Workbook for monitoring Software Assurance compliance across Arc-enabled servers and SQL Server instances
- microsoft/sql-server-samples/.../arc-sql-license-type-compliance
- claestom/sql-arc-policy-license-config

Thank you!

Azure IoT Operations 2603 is now available: Powering the next era of Physical AI
Industrial AI is entering a new phase. For years, AI innovation has largely lived in dashboards, analytics, and digital decision support. Today, that intelligence is moving into the real world, onto factory floors, oil fields, and production lines, where AI systems don't just analyze data, but sense, reason, and act in physical environments. This shift is increasingly described as Physical AI: intelligence that operates reliably where safety, latency, and real-world constraints matter most.

With the Azure IoT Operations 2603 (v1.3.38) release, Microsoft is delivering one of its most significant updates to date, strengthening the platform foundation required to build, deploy, and operate Physical AI systems at industrial scale.

Why Physical AI needs a new kind of platform

Physical AI systems are fundamentally different from digital-only AI. They require:
- Real-time, low-latency decision-making at the edge
- Tight integration across devices, assets, and OT systems
- End-to-end observability, health, and lifecycle management
- Secure cloud-to-edge control planes with governance built in

Industry leaders and researchers increasingly agree that success in Physical AI depends less on isolated models, and more on software platforms that orchestrate data, assets, actions, and AI workloads across the physical world. Azure IoT Operations was built for exactly this challenge.

What's new in Azure IoT Operations 2603

The 2603 release delivers major advancements across data pipelines, connectivity, reliability, and operational control, enabling customers to move faster from experimentation to production-grade Physical AI.

Cloud-to-edge management actions

Cloud-to-edge management actions enable teams to securely execute control and configuration operations on on-premises assets, such as invoking methods, writing values, or adjusting settings, using Azure Resource Manager and Event Grid-based MQTT messaging.
This capability extends the Azure control plane beyond the cloud, allowing intent, policy, and actions to be delivered reliably to physical systems while remaining decoupled from protocol and device specifics. For Physical AI, this closes the loop between perception and action: insights and decisions derived from models can be translated into governed, auditable changes in the physical world, even when assets operate in distributed or intermittently connected environments. Built-in RBAC, managed identity, and activity logs ensure every action is authorized, traceable, and compliant, preserving safety, accountability, and human oversight as intelligence increasingly moves from observation to autonomous execution at the edge.

No-code dataflow graphs

Azure IoT Operations makes it easier to build real-time data pipelines at the edge without writing custom code. No-code dataflow graphs let teams design visual processing pipelines using built-in transforms, with improved reliability, validation, and observability.

- Visual Editor: Build multi-stage data processing systems in the Operations Experience canvas. Drag and connect sources, transforms, and destinations visually. Configure map rules, filter conditions, and window durations inline. Deploy directly from the browser or define in Bicep/YAML for GitOps.
- Composable Transforms, Any Order: Chain map, filter, branch, concatenate, and window transforms in any sequence. Branch splits messages down parallel paths based on conditions. Concatenate merges them back. Route messages to different MQTT topics based on content. No fixed pipeline shape.
- Expressions, Enrichment, and Aggregation: Unit conversions, math, string operations, regex, conditionals, and last-known-value lookups, all built into the expression language. Enrich messages with external data from a state store. Aggregate high-frequency sensor data over tumbling time windows to compute averages, min/max, and counts.
- Open and Extensible – Connect to MQTT, Kafka, and OpenTelemetry (OTel) endpoints with built-in security through Azure Key Vault and managed identities. Need logic beyond what no-code covers? Drop a custom Wasm module into the middle of any graph alongside built-in transforms; a module can even embed and run ONNX AI/ML models. You're never locked into declarative configuration.

Together, these capabilities allow teams to move from raw telemetry to actionable signals directly at the edge without custom code or fragile glue logic.

Expanded, production‑ready connectivity

The MQTT connector enables customers to onboard MQTT devices as assets and route data to downstream workloads using familiar MQTT topics, with the flexibility to support unified namespace (UNS) patterns when desired. By leveraging MQTT’s lightweight publish/subscribe model, teams can simplify connectivity and share data across consumers without tight coupling between producers and applications. This is especially important for Physical AI, where intelligent systems must continuously sense state changes in the physical world and react quickly based on a consistent, authoritative operational context rather than fragmented data pipelines.

Alongside MQTT, Azure IoT Operations continues to deliver broad, industrial‑grade connectivity across OPC UA, ONVIF, Media, REST/HTTP, and other connectors, with improved asset discovery, payload transformation, and lifecycle stability, providing the dependable connectivity layer Physical AI systems rely on to understand and respond to real‑world conditions.

Unified health and observability

Physical AI systems must be trustworthy. Azure IoT Operations 2603 introduces unified health status reporting across brokers, dataflows, assets, connectors, and endpoints, using consistent states surfaced through both Kubernetes and Azure Resource Manager. This enables operators to see—not guess—when systems are ready to act in the physical world.
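To make the UNS idea mentioned under connectivity concrete, the sketch below builds an ISA-95-style topic path and shows how a reading could be published with a generic MQTT client. The hierarchy levels, names, and payload are illustrative assumptions; Azure IoT Operations does not mandate a specific topic convention.

```shell
#!/bin/sh
# Illustrative UNS-style hierarchy: enterprise/site/area/line/cell.
# All names here are made up for the example.
ENTERPRISE=contoso
SITE=seattle
AREA=packaging
LINE=line01
CELL=filler

# Compose the topic a producer would publish to and consumers subscribe on.
TOPIC="$ENTERPRISE/$SITE/$AREA/$LINE/$CELL/telemetry"
echo "$TOPIC"

# With a reachable broker, any MQTT client could publish a reading to this
# topic, for example (requires mosquitto-clients and a broker endpoint):
#   mosquitto_pub -h <broker-host> -t "$TOPIC" -m '{"temperature": 71.3}'
```

Because every consumer resolves data by topic position rather than by producer identity, new applications can subscribe (e.g. to `contoso/seattle/#`) without any change to the publishing equipment.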
Optional OPC UA connector deployment

Azure IoT Operations 2603 introduces optional OPC UA connector deployment, reinforcing a design goal to keep deployments as streamlined as possible for scenarios that don’t require OPC UA from day one. The OPC UA connector is a discrete, native component of Azure IoT Operations that can be included during initial instance creation or added later as needs evolve, allowing teams to avoid unnecessary footprint and complexity in MQTT‑only or non‑OPC deployments. This reflects the broader architectural principle behind Azure IoT Operations: a platform built for composability and decomposability, where capabilities are assembled based on scenario requirements rather than assumed defaults, supporting faster onboarding, lower resource consumption, and cleaner production rollouts without limiting future expansion.

Broker reliability and platform hardening

The 2603 release significantly improves broker reliability through graceful upgrades, idempotent replication, persistence correctness, and backpressure isolation—capabilities essential for always‑on Physical AI systems operating in production environments.

Physical AI in action: What customers are achieving today

Azure IoT Operations is already powering real‑world Physical AI across industries, helping customers move beyond pilots to repeatable, scalable execution.

Procter & Gamble

Consumer goods leader P&G continually looks for ways to drive manufacturing efficiency and improve overall equipment effectiveness—a KPI encompassing availability, performance, and quality that’s tracked in P&G facilities around the world. P&G deployed Azure IoT Operations, enabled by Azure Arc, to capture real-time data from equipment at the edge, analyze it in the cloud, and deploy predictive models that enhance manufacturing efficiency and reduce unplanned downtime.
Using Azure IoT Operations and Azure Arc, P&G is extrapolating insights and correlating them across plants to improve efficiency, reduce loss, and continue to drive global manufacturing technology forward. More info.

Husqvarna

Husqvarna Group faced increasing pressure to modernize its fragmented global infrastructure, gain real-time operational insights, and improve efficiency across its supply chain to stay competitive in a rapidly evolving digital and manufacturing landscape. Husqvarna Group implemented a suite of Microsoft Azure solutions—including Azure Arc, Azure IoT Operations, and Azure OpenAI—to unify cloud and on-premises systems, enable real-time data insights, and drive innovation across global manufacturing operations. With Azure, Husqvarna Group achieved 98% faster data deployment and 50% lower infrastructure imaging costs, while improving productivity, reducing downtime, and enabling real-time insights across a growing network of smart, connected factories. More info.

Chevron

With its Facilities and Operations of the Future initiative, Chevron is reimagining the monitoring of its physical operations to support remote and autonomous operations through enhanced capabilities and real-time access to data. Chevron adopted Microsoft Azure IoT Operations, enabled by Azure Arc, to manage and analyze data locally at remote facilities at the edge, while still maintaining a centralized, cloud-based management plane. Real-time insights enhance worker safety while lowering operational costs, empowering staff to focus on complex, higher-value tasks rather than routine inspections. More info.

A platform purpose‑built for Physical AI

Across manufacturing, energy, and infrastructure, the message is clear: the next wave of AI value will be created where digital intelligence meets the physical world.
Azure IoT Operations 2603 strengthens Microsoft’s commitment to that future—providing the secure, observable, cloud‑connected edge platform required to build Physical AI systems that are not only intelligent, but dependable.

Get started

To explore the full Azure IoT Operations 2603 release, review the public documentation and release notes, and start building Physical AI solutions that operate and scale confidently in the real world.

Announcing Private Preview: Deploy Ansible Playbooks using Azure Policy via Machine Configuration
Azure Arc is on a mission to unify security, compliance, and management for Windows and Linux machines—anywhere. By extending Azure’s control plane beyond the cloud, Azure Arc enables organizations to unify governance, compliance, security, and management of servers across on‑premises, edge, and multicloud environments using a consistent set of Azure tools and policies.

Building on this mission, we’re excited to announce the private preview of deploying Ansible playbooks through Azure Policy using Machine Configuration, bringing Ansible‑driven automation into Azure Arc’s policy‑based governance model for Azure and Arc‑enabled Linux machines. This new capability enables you to orchestrate Ansible playbook execution directly from Azure Policy (via Machine Configuration) without requiring an Ansible control node, while benefiting from built‑in compliance reporting and remediation.

Why this matters

As organizations manage increasingly diverse server estates, they often rely on different tools for Windows and Linux, cloud, on-premises, or at the edge—creating fragmented security, compliance, and operational workflows. Many organizations rely on Ansible for OS configuration and application setup, but struggle with:

- Enforcing consistent configuration across distributed environments
- Detecting and correcting drift over time
- Integrating Ansible automation with centralized governance and compliance workflows

With this private preview, Azure Policy becomes the single control plane for applying and monitoring Ansible‑based configuration, bringing Linux automation into the same governance model already used for Windows. Configuration is treated as policy—declarative, auditable, and continuously enforced—with compliance results surfaced in familiar Azure dashboards.
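For a sense of what such a policy might execute, here is a minimal sketch of an idempotent playbook that runs locally on a Linux machine (written to disk via a shell heredoc). The file name, tasks, and local-execution layout are illustrative assumptions, not the preview's required packaging.

```shell
#!/bin/sh
# Write a small, idempotent playbook; Machine Configuration's preview runs
# playbooks locally on each target, so it uses localhost with a local
# connection rather than an inventory of remote hosts.
cat > ensure-ntp.yml <<'EOF'
---
- name: Ensure chrony is installed and running
  hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Install chrony
      ansible.builtin.package:
        name: chrony
        state: present
    - name: Start and enable the chronyd service
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true
EOF

# On a machine with Ansible installed, this would run as:
#   ansible-playbook ensure-ntp.yml
echo "wrote ensure-ntp.yml"
```

Because modules like `ansible.builtin.package` and `ansible.builtin.service` are idempotent, re-running the same playbook doubles as drift detection and remediation, which is what the policy's compliance loop automates.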
What’s included in the private preview

In this preview, you can:

- Use Azure Policy to trigger Ansible playbook execution on Azure and Azure Arc–enabled Linux machines
- Execute playbooks locally on each target machine, triggered by policy
- Enable drift detection and automatic remediation by default
- View playbook execution status and compliance results directly in the Azure Policy compliance dashboard, alongside your other policies

This provides a unified security, compliance, and management experience across Windows and Linux machines—whether they’re running in Azure or connected through Azure Arc—while using your existing Ansible investments.

Join the private preview

If you’re interested in helping shape the future of Ansible‑based configuration management in Azure Arc, we’d love to partner with you. We’re especially interested in hearing your stories around usability, compliance reporting, and real‑world operational workflows.

👉 Sign up for the private preview and we'll reach out to you.

We’ll continue investing in deeper Linux parity, broader scenarios, and tighter integration across Azure Arc’s security, governance, and compliance experiences. We look forward to enhancing your unified Azure Arc experience for deploying, governing, and remediating configuration with Ansible—bringing consistent security, compliance, and management to Windows and Linux machines not only in Azure, but also across on‑premises and other public clouds.

Simplify Azure Arc Server Onboarding with Ansible and the New Onboarding Role
If you’re already using Ansible to manage your infrastructure, there’s now a simpler—and more secure—way to bring machines under Azure Arc management. We’ve introduced a new Azure Arc onboarding role designed specifically for automated scenarios like Ansible playbooks. This role follows the principle of least privilege, giving your automation exactly what it needs to onboard servers—nothing more.

A better way to onboard at scale

Many customers want to standardize Azure Arc onboarding across hybrid and multicloud environments, but run into common challenges:

- Over‑privileged service principals
- Manual steps that don’t scale
- Inconsistent onboarding across environments

By combining Ansible with the Azure Arc onboarding role, you can:

- Automate server onboarding end‑to‑end
- Reduce permissions risk with a purpose‑built role
- Scale confidently across thousands of machines
- Integrate Arc onboarding into existing Ansible workflows

Built for automation, designed for security

The new onboarding role removes the need to assign broader Azure roles just to connect servers to Azure Arc. Instead, your Ansible automation can authenticate using a tightly scoped identity that’s purpose‑built for Arc onboarding—making security teams happier without slowing down operations. Whether you’re modernizing existing datacenters or managing servers across multiple clouds, this new approach makes Azure Arc onboarding simpler, safer, and repeatable.

Get started in minutes

Our Microsoft Learn documentation provides guidance to help you get started quickly: Connect machines to Azure Arc at scale with Ansible

Check out the Arc onboarding role, part of the Azure collection in Ansible Galaxy: Ansible Galaxy - azure.azcollection - Arc onboarding role

Anything else you’d like to see with Azure Arc + Linux? Drop us a comment!

Run the latest Azure Arc agent with Automatic Agent Upgrade (Public Preview)
Customers managing large fleets of Azure Arc servers need a scalable way to ensure the Azure Arc agent stays up to date without manual intervention. Per-server configuration does not scale, and gaps in upgrade coverage can lead to operational drift, missed features, and delayed security updates. To address this, we’re introducing two new options to help customers enable Automatic Agent Upgrade at scale: a built-in Azure Policy and a new onboarding CLI flag.

The built-in policy makes it easy to check whether Automatic Agent Upgrade is enabled across a given scope and automatically remediates servers that are not compliant. For servers being newly onboarded, customers can enable the feature at onboarding by adding the --enable-automatic-upgrade flag to the azcmagent connect command, ensuring the agent is configured correctly from the start.

What is Automatic Agent Upgrade?

Automatic Agent Upgrade is a feature, in public preview, that automatically keeps the Azure Connected Machine agent (Arc agent) up to date. Updates are managed by Microsoft, so once enabled, customers no longer need to manually manage agent upgrades. By always running the latest agent version, customers receive all the newest capabilities, security updates, and bug fixes as soon as they’re released. Learn more: What's new with Azure Connected Machine agent - Azure Arc | Microsoft Learn.

Getting Started

Apply automatic agent upgrade policy

1. Navigate to the ‘Policy’ blade in the Azure portal.
2. Navigate to the ‘Compliance’ section and click ‘Assign Policy’.
3. Fill out the required sections:
   - Scope: the subscription and resource group (optional) the policy will apply to
   - Policy definition: Configure Azure Arc-enabled Servers to enable automatic upgrades
4. Navigate to the ‘Remediation’ tab and check the box next to ‘Create a remediation task’.
5. Navigate to the ‘Review + create’ tab and press ‘Create’.

The policy has now been applied to the scope.
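The portal steps above can also be scripted with the Azure CLI. The sketch below stubs `az` to echo each command so it runs without a subscription; the scope, names, and the policy definition reference are illustrative, and the built-in definition ID for "Configure Azure Arc-enabled Servers to enable automatic upgrades" should be looked up in your tenant before running for real.

```shell
#!/bin/sh
# Dry-run stub: echoes each az command instead of calling Azure.
# Remove this function to execute the commands against a real subscription.
az() { echo "az $*"; }

SCOPE="/subscriptions/<sub-id>/resourceGroups/rg-arc-servers"

# Assign the built-in policy with a system-assigned managed identity,
# which the remediation task needs in order to modify resources.
az policy assignment create \
  --name enable-arc-auto-upgrade \
  --scope "$SCOPE" \
  --policy "<built-in definition id>" \
  --mi-system-assigned \
  --location westus2

# Create a remediation task for existing non-compliant Arc servers.
az policy remediation create \
  --name remediate-auto-upgrade \
  --policy-assignment enable-arc-auto-upgrade \
  --resource-group rg-arc-servers
```

This mirrors the portal flow one-to-one: the assignment corresponds to steps 1-3 and the remediation task to step 4.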
For more information on this process, please visit this article: Quickstart: Create policy assignment using Azure portal - Azure Policy | Microsoft Learn.

Apply automatic agent upgrade CLI flag

Adding the following flag enables automatic agent upgrade during onboarding:

    --enable-automatic-upgrade

While this flag can be used on a single server, it can also be applied at scale using one of the existing Azure Arc at-scale onboarding methods and adding the flag: Connect hybrid machines to Azure at scale - Azure Arc | Microsoft Learn.

Here is an at-scale onboarding sample using a basic script:

    azcmagent connect --resource-group {rg} --location {location} --subscription-id {subid} --service-principal-id {service principal id} --service-principal-secret {service principal secret} --tenant-id {tenant id} --enable-automatic-upgrade

To get started with this feature or learn more, please refer to this article: Manage and maintain the Azure Connected Machine agent - Azure Arc | Microsoft Learn.
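Building on the sample above, the --enable-automatic-upgrade flag slots into a simple fan-out script. The sketch below stubs `ssh` to echo so it runs anywhere; the hostnames, host list file, and placeholder credentials are illustrative, and in practice you would use one of the at-scale onboarding methods linked above.

```shell
#!/bin/sh
# Dry-run stub: echoes the remote command instead of connecting over SSH.
# Remove this function (and fill in real values) to execute for real.
ssh() { echo "ssh $*"; }

# Illustrative host list; any inventory source would do.
cat > hosts.txt <<'EOF'
server01
server02
EOF

# Run azcmagent connect on each host with automatic upgrade enabled.
while read -r host; do
  ssh "$host" "azcmagent connect \
--resource-group rg-arc-servers \
--location westus2 \
--subscription-id <subid> \
--service-principal-id <sp-id> \
--service-principal-secret <sp-secret> \
--tenant-id <tenant-id> \
--enable-automatic-upgrade"
done < hosts.txt
```

Every machine onboarded this way starts out compliant with the built-in policy described earlier, so no remediation task is needed for new servers.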