Bringing AI to the Factory Floor with Foundry Local - Now in Public Preview on Azure Local
Key capabilities in this preview

Foundry Local exposes standard REST and OpenAI‑compatible APIs, enabling IT and AI teams to deploy and operate local AI workloads using familiar, cloud‑aligned patterns across edge and on‑prem environments. In this public preview, we deliver the following capabilities:

- Azure Arc extension for Foundry Local: Deploy and manage Foundry Local via an Azure Arc extension, enabling consistent install, configure, update, and governance workflows across Arc‑enabled Kubernetes clusters, in addition to Helm‑based installation.
- Built‑in generative models from the Foundry Local catalog: Deploy pre‑built generative models directly from the Foundry Local model catalog using a simple control‑plane API request.
- Bring‑your‑own predictive models (ONNX) from OCI registries: Deploy custom predictive models (such as ONNX models) securely pulled from customer‑managed OCI registries and run locally.
- REST and OpenAI‑compatible inference endpoints: Consume both generative and predictive models through standard HTTP endpoints.
- Multi‑model orchestration for agent‑style applications: Enable applications that coordinate multiple local models—for example, generative models guiding calls to predictive models—within a single Kubernetes cluster.
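The last capability, multi-model orchestration through one local endpoint, can be sketched as follows. This is an illustrative outline, not the product's API surface: the model names and the predictive-model path are assumptions, and the HTTP transport is injected as a function so the flow can be shown without a running cluster.

```python
import json

# Model names are illustrative assumptions, not confirmed catalog identifiers.
GENERATIVE_MODEL = "phi-4-mini"
PREDICTIVE_MODEL = "defect-classifier-onnx"

def build_chat_request(model, messages):
    """Build an OpenAI-compatible /v1/chat/completions request body."""
    return {"model": model, "messages": messages}

def orchestrate(defect_features, post):
    """Agent-style flow against one local endpoint: a predictive (ONNX)
    model scores the telemetry, then a generative model explains the
    result to an operator. `post(path, body)` is an injected HTTP
    transport so the sketch stays self-contained."""
    # 1. Score the telemetry with the local predictive model
    #    (the /predict path is a placeholder, not a documented route).
    prediction = post("/v1/models/" + PREDICTIVE_MODEL + "/predict",
                      {"inputs": defect_features})
    # 2. Ask the generative model to explain the classification.
    chat = build_chat_request(GENERATIVE_MODEL, [
        {"role": "system", "content": "You are a factory-floor assistant."},
        {"role": "user",
         "content": "Explain this defect classification to an operator: "
                    + json.dumps(prediction)},
    ])
    return post("/v1/chat/completions", chat)
```

In a real deployment, `post` would be a thin wrapper over an HTTP client pointed at the in-cluster Service; injecting it also makes the orchestration logic easy to unit-test.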
Running Foundry Local on Azure Local single-node gives you:

- A validated, supported hardware foundation for running AI inference at the edge, from compact 1U nodes on the factory floor to rugged form factors in remote sites, using hardware from the Azure Local catalog
- AKS on Azure Local as the deployment target, so Foundry Local runs as a containerized workload managed by Kubernetes - the same operational model you use for any other workload on the cluster
- GPU access through the NVIDIA device plugin on AKS, giving Foundry Local's ONNX Runtime direct access to the node's discrete GPU without requiring Windows or host-OS-level configuration

Two installation options for single-node deployment

The preview includes the Foundry Local Azure Arc extension, providing a consistent installation, deployment, and lifecycle management experience through Azure Arc, while also supporting Helm‑based installation. Choose one of two installation paths:

Option 1 - Arc-enabled Kubernetes Extension

Recommended when: your organization manages multiple Azure Local instances and wants Microsoft to handle the deployment lifecycle — version updates, configuration drift detection, health monitoring — through the Azure portal, without the team needing to manage Helm releases manually.

Arc-enabled Kubernetes extensions deploy and manage workloads on AKS clusters registered with Azure. The extension operator runs in the cluster and reconciles the desired state declared in Azure, which means you don't need direct kubectl or helm access to the node to push updates. This is the lower-operational-overhead path for OT teams who are not Kubernetes specialists.

Once installed, the extension appears in the Azure portal under your AKS cluster's Extensions blade. Model updates and configuration changes are pushed by modifying the extension configuration in Azure - no shell access to the node required.
For disconnected or intermittently connected deployments, the extension operator caches its desired state and continues operating; it reconciles with Azure when connectivity resumes.

Option 2 - Helm Chart

Recommended when: your team manages AKS workloads with Helm or GitOps (Flux), and you need precise control over GPU resource allocation, node affinity, model pre-loading, or persistent volume configuration.

The Helm chart gives you full control over the deployment manifest. You decide exactly how much GPU memory is requested per pod, which node the inference pod is pinned to, and what StorageClass backs the model cache. This matters on a single-node Azure Local deployment where you're sharing one physical GPU between the inference workload and potentially other AKS workloads.

With Helm you can also integrate with Flux for GitOps-managed deployment - useful when you manage multiple Azure Local single-node instances across plant sites and want to push model or configuration updates from a central Git repository.

Note: Verify the chart repository URL, chart name, and exact values.yaml parameters from the official Foundry Local documentation before deploying to production.
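Purely to illustrate the knobs just discussed (GPU request, node pinning, model-cache storage), a values override might look like the sketch below. Every key name here is a placeholder: as the note above says, confirm the real chart schema in the official Foundry Local documentation before use.

```yaml
# Hypothetical values.yaml override -- key names are illustrative only.
inference:
  resources:
    limits:
      nvidia.com/gpu: 1            # request the node's one discrete GPU
  nodeSelector:
    kubernetes.io/hostname: azloc-node-01   # pin to the GPU node
modelCache:
  persistence:
    storageClassName: local-nvme   # StorageClass backing the model cache
    size: 200Gi                    # sized for the pre-loaded models
```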
Choosing Between the Two

| | Helm Chart | Arc Extension |
|---|---|---|
| Authentication | API key | Entra ID |
| Version upgrades | Manual helm upgrade or Flux | Automatic, managed by Microsoft |
| GitOps compatible | Yes (Flux HelmRelease) | Yes (via Azure Policy / desired state) |
| Requires cluster access | Yes | No (after initial registration) |
| Best for | Platform engineers, custom configs | OT-managed sites, multi-site fleet |
| Disconnected operation | Works after initial deploy | Works; reconciles on reconnect |
| Control plane | K8s-native management (kubectl) | K8s-native management + REST API control plane |

Early Customer Validation and Key Scenarios

Early customer validation is shaping the preview - helping ensure Foundry Local meets real-world requirements for latency, data control, and operating in constrained or disconnected environments across industries such as energy, manufacturing, government, financial services, and retail. Based on this early feedback, customers are prioritizing scenarios such as:

Sovereign and regulated
- On-site inference with data, models, and processing under customer control
- Decision support in disconnected or restricted-network environments
- In-jurisdiction processing for sensitive records and casework
- Real-time detection and situational awareness within secure facilities

Industrial and critical infrastructure
- Edge operations assistants combining sensor telemetry with conversational AI
- Low-latency quality inspection and process verification on factory floors
- Predictive maintenance for remote or intermittently connected equipment
- Local safety monitoring and operational oversight close to systems

This input is guiding improvements across deployment flows, model catalog experience, hardware coverage, telemetry visibility, and documentation - so teams can evaluate and adopt Foundry Local more quickly and confidently in the environments above.
Examples:

CNC Anomaly Explanation: A machine vision system on a CNC line classifies a surface defect and passes the classification JSON to the Foundry Local endpoint. Phi-4-mini generates a plain-language root-cause hypothesis for the operator, referencing the specific machining parameters.

Disconnected Safety Procedure Lookup: An offshore platform or remote mine site loses WAN connectivity. The Foundry Local pods continue serving requests from the AKS cluster on the Azure Local node - Kubernetes keeps the pods running, the model is already on the local PersistentVolume, and no external dependency is required. Workers query safety procedures (LOTO sequences, chemical handling) from an intranet application backed by the same inference endpoint. Qwen2.5-7B fits within 8–12 GB VRAM and supports a 32K token context window, making it viable for inline procedure retrieval without a separate vector database - useful when plant-floor infrastructure is minimal.

Foundry Local for Devices and Foundry Local on Azure Local: What's Different

Foundry Local for devices reached general availability for developer devices - Windows 10/11, macOS (Apple Silicon), and Android. That release targets a specific scenario: a developer or end user running AI inference on their own machine, with the model executing locally on their CPU, GPU, or NPU. The install is a single command (winget or brew), the service runs directly on the host OS, and there is no Azure subscription or infrastructure required. It is a developer tool and an application-embedded runtime. A general overview of Foundry Local is available here: What is Foundry Local? - Foundry Local | Microsoft Learn

The public preview for Azure Local single node is a different deployment target built for a different operational context. The runtime is the same - ONNX Runtime, the same model catalog, the same OpenAI-compatible API - but where it runs, how it is deployed, and how it is managed are entirely different.
| | Foundry Local for Devices (GA) | Foundry Local on Azure Local Single Node (Preview) |
|---|---|---|
| Target | Developer machines, end-user devices | Enterprise edge servers on the factory floor or remote site |
| OS | Windows 10/11, macOS, Android | Linux container on AKS on Azure Local |
| Hardware | Laptops, workstations, NPU-equipped devices | Validated server hardware from the Azure Local catalog |
| GPU access | Direct host GPU (CUDA, DirectML, Apple Neural Engine) | NVIDIA device plugin on Kubernetes |
| Installation | winget install or brew install | Arc-enabled Kubernetes extension or Helm chart |
| Lifecycle management | Manual update via winget upgrade | Managed via Helm/Flux or Arc extension operator |
| Intended consumers | One developer or one application on one machine | Multiple applications sharing one inference endpoint on the plant network |
| Disconnected operation | Supported after model download; primarily online | Designed for persistent disconnected operation with NVMe-cached models |
| Model persistence | Local device cache | Kubernetes PersistentVolume on local storage |
| Operational model | Developer installs and manages it | Platform team deploys it; applications consume it as a service |

The short version: the GA device release is for building and running AI-enabled applications on a single machine. The Azure Local single-node preview is for deploying Foundry Local as a shared, production inference service that runs continuously on validated industrial hardware, survives WAN outages, and is consumed by multiple workloads running on the same edge cluster.

If you are prototyping an application on your laptop using the GA release, the same application code - specifically the OpenAI-compatible API calls - runs unchanged against the Azure Local deployment.
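A minimal sketch of that portability, using placeholder endpoints (the local port, Service DNS name, and cluster port below are assumptions for illustration, not documented values):

```python
def chat_endpoint(base_url):
    """Resolve the OpenAI-compatible chat completions URL for a deployment."""
    return base_url.rstrip("/") + "/v1/chat/completions"

# Prototyping against Foundry Local on a laptop (port is illustrative):
dev_url = chat_endpoint("http://localhost:5273")

# Same application code against AKS on Azure Local, reached through a
# Kubernetes Service (name, namespace, and port are assumptions):
prod_url = chat_endpoint("http://foundry-local.ai.svc.cluster.local:8080")

# The request payload itself does not change between the two targets.
payload = {"model": "phi-4-mini",
           "messages": [{"role": "user", "content": "Hello"}]}
```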
You change only the base_url from localhost to the Kubernetes Service.

Built for Secure Industrial and Sovereign Operations

Foundry Local supports Microsoft’s sovereign cloud principles—allowing AI workloads to operate fully locally, with customer‑controlled data boundaries and governance. Integration with Azure Arc provides unified management, configuration, and monitoring across hybrid and disconnected landscapes, enabling organizations to meet stringent compliance and operational requirements while adopting advanced AI capabilities.

Learn more about Foundry Local on Azure Local

RECOMMENDED
- Participate in the Foundry Local on Azure Local preview: form link
- Foundry Local on Azure Local documentation: link
- Reach out to the team for support requests, feedback, or suggestions: FoundryLocal_Support@microsoft.com
- Foundry Local on Azure Local: Helm deployment demo - link
- Foundry Local is now Generally Available: link

From fragmented sites to consistent governance: Azure Arc patterns for adaptive cloud strategy
In manufacturing companies, hybrid architectures aren’t transitional—they’re persistent. Most large manufacturers operate across remote plants, branch sites, private datacenters, and Azure. The main challenge manufacturers face isn’t adopting cloud services; it is preventing long‑term operational fragmentation: multiple teams, multiple tools, inconsistent security controls, and uneven governance as the estate grows. When manufacturing IT grows organically, systems end up scattered across factories, edge, and cloud—creating fragmentation instead of flow.

Azure Arc addresses this as an architectural control‑plane pattern: it extends Azure management to infrastructure and Kubernetes outside Azure by projecting them into Azure Resource Manager (ARM), so they can be governed using Azure-native primitives such as policy, RBAC, and monitoring.

This article describes three architecture patterns that consistently emerge in manufacturing and edge scenarios. Each pattern addresses a distinct set of constraints—ranging from centralized governance across hybrid estates, to plant‑adjacent platforms, to fully disconnected environments—and illustrates how Azure services can be composed to support these realities in a scalable, well‑governed way.

Typical manufacturing environments must contend with some or many of the following:

- Latency & determinism: plant-floor systems often require local execution
- Distributed footprint: dozens or hundreds of sites with varying maturity
- Connectivity variability: some sites are intermittently connected
- Regulatory & data constraints: some workloads must remain on premises
- Cloud: native cloud applications, including AI-based research applications, SAP systems, etc.

As a result, the estate becomes a mix of Azure and non‑Azure infrastructure. The failure mode isn’t performance—it’s inconsistent operations: different patching methods, different monitoring stacks, and uneven security baselines.
Azure Arc is positioned specifically to create unity across that operational model by bringing hybrid resources into the Azure control plane. A helpful way to think about Arc in manufacturing scenarios is to separate the control plane and the data plane.

Arc enables a centralized control plane by projecting resources into ARM:

- Azure Resource Manager (resource inventory, tags, RBAC, Policy)
- Security posture & compliance (Defender for Cloud, policy initiatives)
- Observability and operations workflows (Azure Monitor, Update Manager, etc.)

The data plane remains at distributed locations:

- Workload execution remains at plants, private DCs, or edge sites
- Kubernetes API endpoints, runtime traffic, and OT systems remain local

This separation is an architectural lever, allowing organizations to standardize governance without forcing workload relocation.

A high-level design decision matrix

| Constraint | Recommended starting pattern | Why |
|---|---|---|
| Many sites + inconsistent tooling | Arc as distributed control plane | Standardizes governance and inventory via ARM projection |
| Plant workloads require local platform | Azure Local + Arc | Uses Azure Local baseline + Arc integration for operations |
| Connectivity cannot be assumed | Disconnected/intermittent design | Forces control-plane boundary design + local autonomy |

Pattern 1 — Azure Arc as the distributed control plane (for VMs, SQL Servers, and Kubernetes)

When this pattern fits

Use this pattern when:
- You need consistent governance across plants, datacenters, and multicloud
- You can maintain at least periodic connectivity for control-plane sync
- You want Azure policy/security/monitoring to apply uniformly

Architecture intent

Azure Arc projects existing bare metal, VM, and Kubernetes infrastructure resources into Azure to handle operations with Azure management and security tools. Azure Arc simplifies governance and management by delivering a consistent multicloud and on-premises management platform experience for Azure services.
Once projected, you can operate hybrid resources using Azure-native constructs (inventory, compliance reporting, policy scope) and apply standardized guardrails. From an architectural standpoint, Azure Arc establishes a centralized control plane in Azure (ARM, RBAC, Policy, Resource Graph), with the decentralized data plane remaining at plants, datacenters, or edge sites. This separation enables organizations to apply management‑group–scoped policies, standardized tagging, and Defender for Cloud controls consistently across environments, while preserving the local execution and latency characteristics required by manufacturing workloads.

Why this pattern matters:
- It moves organizations from managing individual sites to governing the entire estate as one.
- It minimizes operational drift as environments expand across plants and edge locations.
- Centralized control simplifies enforcement of standards without slowing local operations.
- The pattern creates predictability at scale in highly distributed environments.
- It establishes a stable foundation for future modernization initiatives.

Pattern 2 — Azure Local + Azure Arc (plant-adjacent platform pattern)

When this pattern fits

Use this pattern when:
- Workloads must run on premises for latency, sovereignty, or operational control
- You want cloud-consistent operations without creating a separate tooling island
- You need a standardized platform for virtualized + containerized workloads at sites
- You need local AI inferencing where data must be processed at the source/plant site

Architecture intent

Azure Local is Microsoft’s distributed infrastructure solution that extends Azure capabilities to customer-owned environments. It facilitates the local deployment of both modern and legacy applications across distributed or sovereign locations. Azure Local accelerates cloud and AI innovation by seamlessly delivering new applications, workloads, and services from cloud to edge, using Azure Arc as the unifying control plane.
From an architectural perspective, Azure Local serves as the local data plane for applications—supporting general‑purpose virtual machines, managed Kubernetes (AKS), and selected Azure services—while Azure Arc extends the Azure control plane to that environment for inventory, policy, monitoring, and security integration. This separation allows workloads to run close to manufacturing systems without creating a parallel or disconnected operational model.

Azure Local supports a broad spectrum of workload types on the same platform foundation, including:
- Traditional line‑of‑business applications on virtual machines
- Modern containerized workloads using AKS on Azure Local
- Azure‑consistent platform services that can be deployed locally, such as Azure Virtual Desktop and SQL Managed Instance
- GPU‑accelerated workloads for AI inferencing and computer vision scenarios

Why this pattern matters:

Without a platform like Azure Local integrated through Azure Arc, on‑premises manufacturing workloads tend to evolve into bespoke environments with inconsistent security, monitoring, and lifecycle management—making long‑term scale and governance increasingly difficult.

Pattern 3 — Disconnected edge workloads (connectivity-constrained design)

When this pattern fits

Use this pattern when:
- Sites cannot assume continuous connectivity
- Local autonomy is required for safety or production continuity
- You still want centralized governance when connected

Architecture intent

In manufacturing and edge scenarios, some environments must operate without continuous internet connectivity due to regulatory constraints, physical isolation, or operational risk tolerance. In these cases, architectures must assume that cloud control‑plane access is intermittent or unavailable, while local execution must continue without disruption. Disconnected architectures shift the primary design concern from availability of services to autonomy of execution.
This pattern applies to environments that are fully offline, intermittently connected, or explicitly restricted from sending data to public cloud endpoints. Azure supports this model through disconnected containers, where containerized services are deployed and operated fully offline. Once provisioned, these containers run entirely on local infrastructure with no runtime dependency on Azure endpoints, enabling uninterrupted execution even during extended disconnection periods.

Disconnected containers are offered through commitment tier pricing, each tier offering a discounted rate compared to the Standard pricing model. Learn more about pricing here: Plan and Manage Costs - Microsoft Foundry | Microsoft Learn

Before attempting to run a Docker container in an offline environment, make sure you know the steps to successfully download and use the container, for example:
- Host computer requirements and recommendations
- The Docker pull command you use to download the container
- How to validate that a container is running
- How to send queries to the container's endpoint once it's running

Why this pattern matters:
- Not all environments can rely on continuous connectivity.
- It enables critical workloads to operate independently at the edge while remaining aligned to central governance when connectivity is available.
- The pattern prioritizes local autonomy without sacrificing architectural discipline.
- It reduces operational risk in constrained or disconnected sites.
- This approach ensures resilience and continuity in environments where connectivity cannot be assumed.

Manufacturing IT will remain distributed by design. The risk is not hybrid complexity, but fragmented operations. By centralizing the control plane while keeping execution local, Arc enables consistent security, compliance, and operations across cloud, datacenter, and edge.

Automating Arc-enabled SQL Server license type configuration with Azure Policy
Azure Arc enables customers to onboard SQL Server instances - hosted on Linux or Windows - into Azure, regardless of where they are hosted: on‑premises, in multicloud environments, or at the edge. Once onboarded, these resources can be managed through the Azure Portal using services like Azure Monitor, Azure Policy, and Microsoft Defender for Cloud.

An important part of this onboarding is configuring the license type on each Arc-enabled resource to match your licensing agreement with Microsoft. For SQL Server, the LicenseType property on the Arc extension determines how the instance is licensed:
- Paid: you have a SQL Server license with Software Assurance or a SQL Server subscription
- PAYG: you are paying for SQL Server software on a pay-as-you-go basis
- LicenseOnly: you have a perpetual SQL Server license

Setting this correctly matters for two reasons:
- Unlocking additional benefits: customers with the Paid or PAYG license type gain access to some Azure services at no extra cost - such as Azure Update Manager and Machine Configuration - as well as exclusive capabilities like Best Practices Assessment and Remote Support
- Enabling pay-as-you-go billing: customers who do not have Software Assurance can pay for SQL Server software only when they use it via their Azure subscription by setting the license type to PAYG

Configure the license types at scale using Azure Policy

Configuring the license type on each Arc-enabled SQL Server instance can be done manually in the Azure Portal, but for large-scale operations, automation is essential. One way to implement automation is via PowerShell, as explained here: Configure SQL Server - SQL Server enabled by Azure Arc | Microsoft Learn. Here, we focus on how this can be automated using Azure Policy. An existing article, written by Jeff Pigott, explains this process for Windows Server, which inspired extending the same approach to SQL Server.

How to deploy the policy?
Deployment has two steps:
1. Create/update the Azure Policy definition and assignment
2. Start a remediation task so existing Arc-enabled SQL Server extensions are brought into compliance

You can deploy Azure Policy in multiple ways. In this article, we use PowerShell. See also: Tutorial: Build policies to enforce compliance - Azure Policy | Microsoft Learn. Source code: microsoft/sql-server-samples/.../arc-sql-license-type-compliance. Personal repository: claestom/sql-arc-policy-license-config.

Definition and assignment creation

Download the required files:

# Optional: create and enter a local working directory
mkdir sql-arc-lt-compliance
cd sql-arc-lt-compliance

$baseUrl = "https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/manage/azure-arc-enabled-sql-server/compliance/arc-sql-license-type-compliance"
New-Item -ItemType Directory -Path policy, scripts -Force | Out-Null
curl -sLo policy/azurepolicy.json "$baseUrl/policy/azurepolicy.json"
curl -sLo scripts/deployment.ps1 "$baseUrl/scripts/deployment.ps1"
curl -sLo scripts/start-remediation.ps1 "$baseUrl/scripts/start-remediation.ps1"

Note: On Windows PowerShell 5.1, curl is an alias for Invoke-WebRequest. Use curl.exe instead, or run the commands in PowerShell 7+.

Authenticate to Azure:

Connect-AzAccount

Set your variables.
Only TargetLicenseType is required - all others are optional:

# Required
$TargetLicenseType = "PAYG"   # "Paid" or "PAYG"

# Optional (uncomment to override defaults)
# $ManagementGroupId = "<management-group-id>"   # Default: tenant root management group
# $SubscriptionId = "<subscription-id>"          # Default: policy assigned at management group scope
# $ExtensionType = "Both"                        # "Windows", "Linux", or "Both" (default)
# $LicenseTypesToOverwrite = @("Unspecified","Paid","PAYG","LicenseOnly")   # Default: all

Run the deployment script:

# Minimal: uses defaults for management group, platform, and overwrite targets
.\scripts\deployment.ps1 -TargetLicenseType $TargetLicenseType

# With subscription scope
.\scripts\deployment.ps1 -TargetLicenseType $TargetLicenseType -SubscriptionId $SubscriptionId

# With all options
.\scripts\deployment.ps1 `
    -ManagementGroupId $ManagementGroupId `
    -SubscriptionId $SubscriptionId `
    -ExtensionType $ExtensionType `
    -TargetLicenseType $TargetLicenseType `
    -LicenseTypesToOverwrite $LicenseTypesToOverwrite

Parameter notes:
- ManagementGroupId (optional): management group where the policy definition is created. Defaults to the tenant root management group when not specified
- ExtensionType (optional, default Both): Windows, Linux, or Both. When Both, a single policy definition and assignment covers both platforms
- SubscriptionId (optional): if provided, the assignment scope is the subscription (otherwise management group scope)
- TargetLicenseType (required): Paid or PAYG
- LicenseTypesToOverwrite (optional, default all): controls which current states are eligible for update. Unspecified = no current LicenseType; Paid, PAYG, and LicenseOnly = explicit current values

The script also creates a system-assigned managed identity on the policy assignment and assigns the required roles automatically. Role assignments include retry logic (5 attempts, 10-second delay) to handle managed identity replication delays, which helps prevent common PolicyAuthorizationFailed errors.
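The interaction between TargetLicenseType and LicenseTypesToOverwrite reduces to a single eligibility check per extension. A minimal sketch of that logic (illustrative Python, not the policy's actual DeployIfNotExists template; the separate PAYG consent check is ignored here):

```python
# Default overwrite set: every current state is eligible for update.
ALL_STATES = ["Unspecified", "Paid", "PAYG", "LicenseOnly"]

def should_update(current_license_type, target_license_type,
                  license_types_to_overwrite):
    """Return True when an Arc SQL extension is eligible for remediation.
    `None` models an extension with no LicenseType configured, which the
    policy treats as "Unspecified"."""
    current = current_license_type or "Unspecified"
    # Already at the desired value: compliant, nothing to do.
    if current == target_license_type:
        return False
    # Only states listed in the overwrite set may be changed.
    return current in license_types_to_overwrite
```

For example, with target PAYG and overwrite set ['Paid'], only extensions currently set to Paid are remediated; missing, PAYG, and LicenseOnly values are left untouched, matching the brownfield scenarios described later in this article.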
Remediation task creation

After deployment, allow a few minutes for Azure Policy to run a compliance scan for the selected scope. You can monitor this in Azure Policy → Compliance. More info: Get policy compliance data - Azure Policy | Microsoft Learn.

Set your variables. TargetLicenseType is required and must match the value used during deployment:

# Required
$TargetLicenseType = "PAYG"   # Must match the deployment target

# Optional (uncomment to override defaults)
# $ManagementGroupId = "<management-group-id>"   # Default: tenant root management group
# $SubscriptionId = "<subscription-id>"          # Default: remediation runs at management group scope
# $ExtensionType = "Both"                        # Must match the platform used for deployment

Then start remediation:

# Minimal: uses defaults for management group and platform
.\scripts\start-remediation.ps1 -TargetLicenseType $TargetLicenseType -GrantMissingPermissions

# With subscription scope
.\scripts\start-remediation.ps1 -TargetLicenseType $TargetLicenseType -SubscriptionId $SubscriptionId -GrantMissingPermissions

# With all options
.\scripts\start-remediation.ps1 `
    -ManagementGroupId $ManagementGroupId `
    -ExtensionType $ExtensionType `
    -SubscriptionId $SubscriptionId `
    -TargetLicenseType $TargetLicenseType `
    -GrantMissingPermissions

Parameter notes:
- ManagementGroupId (optional): defaults to the tenant root management group
- ExtensionType (optional, default Both): must match the platform used for the assignment
- SubscriptionId (optional): run remediation at subscription scope
- TargetLicenseType (required): must match the assignment target
- GrantMissingPermissions (optional switch): checks and assigns missing required roles before remediation starts

You can track remediation progress in Azure Policy → Remediation → Remediation tasks. It can take a few minutes to complete, depending on scope and resource count.
Recurring Billing Consent (PAYG)

When TargetLicenseType is set to PAYG, the policy automatically includes ConsentToRecurringPAYG in the extension settings with Consented: true and a UTC timestamp. For details of this requirement, see: Move SQL Server license agreement to pay-as-you-go subscription - SQL Server enabled by Azure Arc | Microsoft Learn.

The policy also checks for ConsentToRecurringPAYG in its compliance evaluation - resources with LicenseType: PAYG but missing the consent property are flagged as non-compliant and remediated. This applies both when transitioning to PAYG and for existing PAYG extensions that predate the consent requirement (backward compatibility).

Note: Once ConsentToRecurringPAYG is set on an extension, it cannot be removed - this is enforced by the Azure resource provider. When transitioning away from PAYG, the policy changes LicenseType but leaves the consent property in place.

RBAC

When .\scripts\deployment.ps1 creates the policy assignment, it uses -IdentityType SystemAssigned, and Azure then creates a managed identity for that assignment. The assignment identity needs these roles at the assignment scope (or an inherited scope):
- Azure Extension for SQL Server Deployment: allows updating Arc SQL extension settings, including LicenseType
- Reader: allows reading resource and extension state for policy evaluation
- Resource Policy Contributor: allows policy-driven template deployments required by DeployIfNotExists

This identity is used whenever DeployIfNotExists applies changes, both during regular compliance evaluation and during remediation runs. By default, the deployment script assigns these roles automatically, with built-in retry logic to handle managed identity replication delays, which helps prevent common PolicyAuthorizationFailed errors.

Brownfield and Greenfield Scenarios

This policy is useful in both brownfield and greenfield Azure Arc environments.
Brownfield: existing Arc SQL inventory

In a brownfield environment, you already have Arc-enabled SQL Server resources in inventory, and the current LicenseType values might be mixed, incorrect, or missing. This is where Azure Policy is especially useful, because it gives you a controlled way to remediate the current estate at scale. Depending on how you configure targetLicenseType and licenseTypesToOverwrite, you can use the policy to:
- standardize all in-scope resources on a single value
- set LicenseType only when it is missing
- migrate a specific subset, such as Paid to PAYG
- preserve selected states while correcting only the resources that need attention

Examples:

Standardize everything to Paid
- targetLicenseType: Paid
- licenseTypesToOverwrite: ['Unspecified','Paid','PAYG','LicenseOnly']
- Result: every in-scope Arc SQL extension is converged to LicenseType == Paid.

Backfill only missing values
- targetLicenseType: Paid
- licenseTypesToOverwrite: ['Unspecified']
- Result: only resources without a configured LicenseType are updated; existing Paid, PAYG, and LicenseOnly values remain unchanged.

Migrate only Paid to PAYG
- targetLicenseType: PAYG
- licenseTypesToOverwrite: ['Paid']
- Result: only resources currently set to Paid are updated to PAYG; missing, PAYG, and LicenseOnly remain unchanged. When transitioning to PAYG, the policy also automatically sets ConsentToRecurringPAYG with Consented: true and a UTC timestamp, as required for recurring pay-as-you-go billing.

Protect existing PAYG, fix only missing or LicenseOnly
- targetLicenseType: Paid
- licenseTypesToOverwrite: ['Unspecified','LicenseOnly']
- Result: resources with no LicenseType or with LicenseOnly are updated to Paid, while existing PAYG stays untouched.

Greenfield: newly onboarded SQL Servers

In a greenfield scenario, the main value of Azure Policy is ongoing enforcement.
Once new SQL Servers are onboarded to Azure Arc and fall within the assignment scope, the policy can act as a governance control to keep LicenseType aligned with your business model. This means Azure Policy is not only a remediation mechanism for existing inventory, but also a way to continuously enforce the intended license configuration for future Arc-enabled SQL Server resources.

Azure Policy vs tagging

By default, Microsoft manages automatic deployment of the SQL Server extension for Azure. It includes an option to enforce the LicenseType setting via tags. See Manage Automatic Connection - SQL Server enabled by Azure Arc | Microsoft Learn for details. This way, all newly onboarded SQL Server instances are set to the desired LicenseType from day one. Deploying the Azure Policy is still important to ensure that changes to the extension properties, or ad-hoc additions of SQL Server instances, stay compliant with your business model.

A practical way to think about it:
- Tagging ensures the initial compliance of newly connected Arc-enabled SQL Servers
- Azure Policy enforces ongoing compliance of existing Arc-enabled SQL Servers

Tools

Interested in gaining better visibility into LicenseType configurations across your estate? Below you'll find an insightful KQL query and an accompanying workbook to help track compliance.
KQL Query

resources
| where type == "microsoft.hybridcompute/machines"
| where properties.detectedProperties.mssqldiscovered == "true"
| extend machineIdHasSQLServerDiscovered = id
| project name, machineIdHasSQLServerDiscovered, resourceGroup, subscriptionId
| join kind=leftouter (
    resources
    | where type == "microsoft.hybridcompute/machines/extensions"
    | where properties.type in ("WindowsAgent.SqlServer", "LinuxAgent.SqlServer")
    | extend machineIdHasSQLServerExtensionInstalled = iff(
        id contains "/extensions/WindowsAgent.SqlServer" or id contains "/extensions/LinuxAgent.SqlServer",
        substring(id, 0, indexof(id, "/extensions/")),
        "")
    | project License_Type = properties.settings.LicenseType, machineIdHasSQLServerExtensionInstalled
) on $left.machineIdHasSQLServerDiscovered == $right.machineIdHasSQLServerExtensionInstalled
| where isnotempty(machineIdHasSQLServerExtensionInstalled)
| project-away machineIdHasSQLServerDiscovered, machineIdHasSQLServerExtensionInstalled

Source: Configure SQL Server - SQL Server enabled by Azure Arc | Microsoft Learn.

Azure Workbook

claestom/azure-arc-sa-workbook: Azure Workbook for monitoring Software Assurance compliance across Arc-enabled servers and SQL Server instances.

Resources

- Configure SQL Server - SQL Server enabled by Azure Arc | Microsoft Learn
- Azure Policy documentation | Microsoft Learn
- Automating Windows Server Licensing Benefits with Azure Arc Policy | Microsoft Community Hub
- Recurring billing consent - SQL Server enabled by Azure Arc | Microsoft Learn
- claestom/azure-arc-sa-workbook: Azure Workbook for monitoring Software Assurance compliance across Arc-enabled servers and SQL Server instances
- microsoft/sql-server-samples/.../arc-sql-license-type-compliance
- claestom/sql-arc-policy-license-config

Thank you!

Building Microsoft's Sovereign AI on Azure Local with NVIDIA RTX PRO and Next Gen NVIDIA Rubin
This blog explores how Azure Local, in partnership with NVIDIA, enables governments and regulated industries to build and operate Sovereign AI within their own trusted boundaries. From enterprise AI acceleration available today with NVIDIA RTX PRO™ Blackwell GPUs to a forward‑looking preview of next‑generation NVIDIA Rubin support, Azure Local provides a consistent platform to run advanced AI workloads—connected or fully disconnected—without sacrificing control, compliance, or governance. Together with Foundry Local, AKS on Azure Local, and Azure Arc, customers can bring AI closer to sensitive data and evolve their Sovereign Private Cloud strategies over time with confidence.

Announcing the General Availability of the Azure Arc Gateway for Arc-enabled Kubernetes!
We're excited to announce the General Availability of Arc gateway for Arc‑enabled Kubernetes. Arc gateway dramatically simplifies the network configuration required to use Azure Arc by consolidating outbound connectivity through a small, predictable set of endpoints. For customers operating behind enterprise proxies or firewalls, this means faster onboarding, fewer change requests, and a smoother path to value with Azure Arc.

What's new

To Arc‑enable a Kubernetes cluster, customers previously had to allow 18 distinct endpoints. With Arc gateway GA, you can do the same with just 9, a 50% reduction that removes friction for security and networking teams.

Why This Matters

Organizations with strict outbound controls often spend days, or weeks, coordinating approvals for multiple URLs before they can onboard resources to Azure Arc. By consolidating traffic to a smaller set of destinations, Arc gateway:

- Accelerates onboarding for Arc‑enabled Kubernetes by cutting down the proxy/firewall approvals needed to get started.
- Simplifies operations with a consistent, repeatable pattern for routing Arc agent and extension traffic to Azure.

How Arc gateway works

Arc gateway introduces two components that work together to streamline connectivity:

- Arc gateway (Azure resource): A single, unique endpoint in your Azure tenant that receives incoming traffic from on‑premises Arc workloads and forwards it to the right Azure services. You configure your enterprise environment to allow this endpoint.
- Azure Arc Proxy (on every Arc‑enabled Kubernetes cluster): A component of the Arc Kubernetes agent that routes agent and extension traffic to Azure via the Arc gateway endpoint. It's part of the core Arc agent; no separate install is required.

At a high level, traffic flows: Arc-enabled Kubernetes agent → Arc Proxy → Enterprise Proxy → Arc gateway → Target Azure service.
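As a mental model of where traffic ends up (a simplified sketch; the endpoint suffixes below are illustrative examples only, and actual coverage is defined in the Arc gateway documentation):

```python
# Illustrative model only: these suffixes are examples of customer-specific
# data-plane destinations mentioned in the post (Log Analytics, Key Vault),
# not an official routing table.
CUSTOMER_DATA_PLANE_SUFFIXES = (".ods.opinsights.azure.com", ".vault.azure.net")

def route(destination: str) -> str:
    """Decide whether traffic consolidates through the Arc gateway or must
    be allow-listed individually as a per-service data-plane endpoint."""
    if destination.endswith(CUSTOMER_DATA_PLANE_SUFFIXES):
        return "direct (allow-list per service)"
    return "via Arc gateway"

print(route("management.azure.com"))                  # via Arc gateway
print(route("myworkspace.ods.opinsights.azure.com"))  # direct (allow-list per service)
```

The key point the sketch captures: control-plane traffic consolidates behind one gateway endpoint, while your own workspace, storage, and vault URLs remain environment-specific and still need individual allow-listing.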
Scenario Coverage

As part of this GA release, Arc-enabled Kubernetes onboarding and other common Arc‑enabled Kubernetes scenarios are supported through Arc gateway, including:

- Arc-enabled Kubernetes Cluster Connect
- Arc-enabled Kubernetes Resource View
- Custom Location
- Azure Policy's Extension for Azure Arc

For other scenarios, including Microsoft Defender for Containers, Azure Key Vault, and Container Insights in Azure Monitor, some customer‑specific data plane destinations (e.g., your Log Analytics workspaces, Storage Accounts, or Key Vault URLs) still need to be allow‑listed per your environment. Please consult the Arc gateway documentation for the current scenario‑by‑scenario coverage and any remaining per‑service URLs.

Get started

1. Create an Arc gateway resource using the Azure portal, Azure CLI, or PowerShell.
2. Allow the Arc gateway endpoint (and the small set of core endpoints) in your enterprise proxy/firewall.
3. Onboard or update clusters to use your Arc gateway resource.

For step‑by‑step guidance, see the Arc gateway documentation on Microsoft Learn.

FAQs

Does Arc gateway require new software on my clusters? No additional installation is needed - Arc Proxy is part of the standard Arc-enabled Kubernetes agent.

Will every Arc scenario route through the gateway today? Arc-enablement and other core scenarios are covered at GA; some customer‑specific data plane endpoints (for example, Log Analytics workspace FQDNs) may still need to be allowed. Check the docs for the latest coverage details.

What is the status of Arc gateway for other infrastructure types? Arc gateway is already GA for Arc-enabled servers and Azure Local.

Tell us what you think

We'd love your feedback on Arc gateway GA for Kubernetes - what worked well, what could be improved, and which scenarios you want next. Use the Arc gateway feedback form to share your input with the product team.

Announcing Public Preview: Simplified Machine Provisioning for Azure Local
Deploying infrastructure at the edge has always been challenging. Whether it's retail stores, factories, branch offices, or remote sites, getting servers racked, configured, and ready for workloads often requires skilled IT staff on-site. That process is slow, expensive, and error-prone, especially when deployments need to happen at scale. To address this, we're introducing the public preview of Simplified Machine Provisioning for Azure Local - a new way to provision Azure Local hardware with minimal onsite interaction, while maintaining centralized control through Azure. This new approach enables customers to provision hardware by racking it, powering it on, and letting Azure do the rest.

New Machine Provisioning

Simplified machine provisioning shifts configuration to Azure, reducing the need for technical expertise on-site. Instead of manually configuring each server locally, IT teams can now:

- Define provisioning configuration centrally in Azure
- Securely complete provisioning remotely with minimal steps
- Automate provisioning workflows using ARM templates and ensure consistency across sites

Built on Open Standards

Simplified machine provisioning on Azure Local is based on the FIDO Device Onboarding (FDO) specification, an industry-standard approach for securely onboarding devices at scale. FDO enables:

- Secure device identity and ownership transfer, protecting machines with zero-trust supply chain security
- A consistent onboarding model across device classes; this foundation can extend beyond servers to broader edge scenarios

Centralized Site-Based Configuration in Azure Arc

The new machine provisioning flow uses Azure Arc sites, allowing customers to define configuration once and apply it consistently across multiple machines. In Azure Arc, a site represents a physical business location (store/factory/campus) and the set of resources associated with it.
It enables targeted operations and configuration at a per‑site level (or across many sites) for consistent management at scale. With site-based configuration, customers can:

- Create and manage machine provisioning settings centrally in the Azure portal
- Define networking and environment configuration at the site level
- Reuse the same configuration as new machines are added

Minimal Onsite Interaction

Simplified provisioning is designed to minimize onsite effort. On-site staff only rack and power on the hardware and insert the prepared USB drive - no deep infrastructure or Azure expertise required. After the ownership voucher is exported and shared with the IT team, the remaining provisioning is completed remotely through Azure. The prepared USB drive is created using a first‑party Microsoft USB Preparation Tool that comes with the maintenance environment* package available through the Azure portal, enabling consistent, repeatable creation of bootable installation media.

*Maintenance environment - a lightweight bootstrap OS that connects the machine to Azure, installs required Azure Arc extensions, and then downloads and installs the Azure Local operating system.

End-to-End Visibility into Deployment

Customers get visibility into deployment progress, which helps them quickly identify where a deployment is in the process and respond faster when issues arise. Status can be checked through the provisioning experience in the Azure portal or through the Configurator app.

Seamless Transition to Cluster Creation and Workloads

Once provisioning is complete, machines created through this flow are ready for Azure Local cluster creation. Customers can proceed with cluster setup and workload deployment.

How it works

At a high level, this simpler way of machine provisioning looks like this:

Minimal onsite setup
- Prepare a USB drive using the machine provisioning software
- Insert the prepared USB drive and boot the machine
- Share the machine ownership voucher with the IT team
Provision remotely
- Create an Azure Arc site
- Configure networking, subscription, and deployment settings
- Download provisioning artifacts from the Azure portal
- Deploy the Azure Local cluster using existing flows in Azure Arc

Once provisioning is complete, the environment is ready for cluster creation and workload deployment on Azure Local. Status and progress are visible in both the Azure portal and the Configurator app, so IT teams can monitor, troubleshoot, and complete provisioning remotely.

Available Now in Public Preview

This new experience empowers organizations to deploy Azure Local infrastructure faster, more consistently, and at scale, while minimizing on-site complexity. We invite customers and partners to explore the preview and help us shape the future of edge infrastructure deployment. Try it at https://aka.ms/provision/tryit, and refer to the documentation for more details.

Upgrade Azure Local operating system to new version
11/14/2025 Revision: The recommended upgrade paths have changed with the Azure Local 2510 release, and the information in this blog is now outdated. Please refer to the following release notes for the latest information: Azure Local release information.

Today, we're sharing more details about the end of support for Azure Local with OS version 25398.xxxx (23H2) on October 31, 2025. After this date, monthly security and quality updates stop, and Microsoft Support remains available only for upgrade assistance. Your billing continues, and your systems keep working, including registration and repair. There are several options to upgrade to Azure Local with OS version 26100.xxxx (24H2), depending on which scenario applies to you.

Scenario #1: You are on the Azure Local solution, with OS version 25398.xxxx

If you're already running the Azure Local solution with OS version 25398.xxxx, there is no action required. You will automatically receive the upgrade to OS version 26100.xxxx via a solution update to 2509. See Azure Local, version 23H2 and 24H2 release information - Azure Local | Microsoft Learn for the latest version of the diagram. If you are interested in upgrading to OS version 26100.xxxx before the 2509 release, there will be an opt-in process available in the future with production support.

Scenario #2: You are on Azure Stack HCI and haven't performed the solution upgrade yet

Scenario #2a: You are still on Azure Stack HCI, version 22H2

With the 2505 release, a direct upgrade path from the version 22H2 OS (20349.xxxx) to the 24H2 OS (26100.xxxx) has been made available. To ensure a validated, consistent experience, we have reduced the process to using the downloadable media and PowerShell to install the upgrade. If you're running the Azure Stack HCI, version 22H2 OS, we recommend taking this direct upgrade path to the version 24H2 OS.
Skipping the upgrade to the version 23H2 OS means one less upgrade hop and helps reduce reboots and maintenance planning prior to the solution upgrade. After the upgrade, perform the post-OS upgrade tasks and validate solution upgrade readiness. Consult with your hardware vendor to determine whether the version 24H2 OS is supported before taking the direct upgrade path. The solution upgrade for systems on the 24H2 OS is not yet supported but will be available soon.

Scenario #2b: You are on Azure Stack HCI, version 23H2 OS

If you performed the upgrade from the Azure Stack HCI, version 22H2 OS to the version 23H2 OS (25398.xxxx) but haven't applied the solution upgrade, then we recommend that you perform the post-OS upgrade tasks, validate solution upgrade readiness, and apply the solution upgrade.

Diagram of Upgrade Paths

Conclusion

We invite you to identify which scenarios apply to you and take action to upgrade your systems. On behalf of the Azure Local team, we thank you for your continuous trust and feedback!

Learn more

To learn more, refer to the upgrade documentation. For known issues and remediation guidance, see the Azure Local Supportability GitHub repository.

Announcing the preview of Azure Local rack aware cluster
As of 1/22/2026, Azure Local rack aware cluster is now generally available! To learn more: Overview of Azure Local rack aware clustering - Azure Local | Microsoft Learn

We are excited to announce the public preview of Azure Local rack aware cluster! We previously published a blog post with a sneak peek of Azure Local rack aware cluster, and now we're excited to share more details about its architecture, features, and benefits.

Overview of Azure Local rack aware cluster

Azure Local rack aware cluster is an advanced architecture designed to enhance fault tolerance and data distribution within an Azure Local instance. This solution enables you to cluster machines that are strategically placed across two physical racks in different rooms or buildings, connected by high bandwidth and low latency within the same location. Each rack functions as a local availability zone, spanning layers from the operating system to Azure Local management, including Azure Local VMs.

The architecture leverages top-of-rack (ToR) switches to connect machines between rooms. This direct connection supports a single storage pool, with rack aware clusters distributing data copies evenly between the two racks. Even if an entire rack encounters an issue, the other rack maintains the integrity and accessibility of the data. This design is valuable for environments needing high availability, particularly where it is essential to avoid rack-level data loss or downtime from failures like fires or power outages.

Key features

Starting in Azure Local version 2510, this release includes the following key features for rack aware clusters:

Rack-Level Fault Tolerance & High Availability
Clusters span two physical racks in separate rooms, connected by high bandwidth and low latency. Each rack acts as a local availability zone. If one rack fails, the other maintains data integrity and accessibility.
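Conceptually, the even distribution of data copies can be pictured with a tiny placement sketch (a simplified illustration of the rack-as-fault-domain idea, not Storage Spaces Direct's actual placement algorithm):

```python
def place_copies(slab_id: int, racks: list[str], copies: int = 2) -> list[str]:
    """Place each copy of a data slab on a distinct rack (fault domain).

    Simplified illustration: with two racks and two copies, every slab
    ends up with exactly one copy in each rack.
    """
    assert copies <= len(racks), "need at least one rack per copy"
    # Rotate the starting rack per slab so load spreads evenly.
    start = slab_id % len(racks)
    return [racks[(start + i) % len(racks)] for i in range(copies)]

racks = ["rack-1", "rack-2"]
placements = [place_copies(s, racks) for s in range(4)]

# Every slab survives the loss of either entire rack:
for p in placements:
    assert set(p) == {"rack-1", "rack-2"}
print(placements)
```

Because no rack ever holds both copies of the same slab, an entire rack can fail without any slab losing all of its copies - which is exactly the rack-level fault tolerance described above.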
Support for Multiple Configurations
The architecture supports 2 to 8 machines, enabling scalable deployments for a wide range of workloads.

Scale-Out by Adding Machines
Easily expand cluster capacity by adding machines, supporting growth and dynamic workload requirements without redeployment.

Unified Storage Pool with Even Data Distribution
Rack aware clusters offer a unified storage pool with Storage Spaces Direct (S2D) volume replication, automatically distributing data copies evenly across both racks. This ensures smooth failover and reduces the risk of data loss.

Azure Arc Integration and Management Experience
Enjoy native integration with Azure Arc, enabling consistent management and monitoring across hybrid environments - including Azure Local VMs and AKS - while maintaining the familiar Azure deployment and operational experience.

Deployment Options
Deploy via the Azure portal or ARM templates, with new inputs and properties in the Azure portal for rack aware clusters.

Provision VMs in Local Availability Zones via the Azure Portal
Provision Azure Local virtual machines directly into specific local availability zones using the Azure portal, allowing for granular workload placement and enhanced resilience.

Upgrade Path from Preview to GA
Deploy rack aware clusters with the 2510 public preview build and update to General Availability (GA) without redeployment - protecting your investment and ensuring operational continuity.

Get started

The preview of rack aware cluster is now available to all interested customers. We encourage you to try it out and share your valuable feedback. To get started, visit our documentation: Overview of Azure Local rack aware clustering (Preview) - Azure Local | Microsoft Learn

Stay tuned for more updates as we work towards general availability in 2026.
We look forward to seeing how you leverage Azure Local rack aware cluster to power your edge workloads!

Introducing Azure Local: cloud infrastructure for distributed locations enabled by Azure Arc
Today at Microsoft Ignite 2024 we're introducing Azure Local, cloud-connected infrastructure that can be deployed at your physical locations and under your operational control. With Azure Local, you can run the foundational Azure compute, networking, storage, and application services locally on hardware from your preferred vendor, providing flexibility to meet your requirements and budget.

Microsoft 365 Local is Generally Available
In today's digital landscape, organizations and governments are prioritizing data sovereignty to comply with local regulations, protect sensitive information, and safeguard national security. This growing demand for robust jurisdictional controls makes the Microsoft Sovereign Cloud offering especially compelling, providing flexibility and assurance for complex requirements. For those with the most stringent needs, Azure Local enables data and workloads to remain within jurisdictional borders, supporting mission-critical workloads and now expanding to include Microsoft's productivity solutions, so customers can securely collaborate and communicate within a sovereign private cloud environment.

Today, we're excited to announce the general availability of Microsoft 365 Local. Microsoft 365 Local is a deployment framework for enabling core collaboration and communication tools, including Exchange Server, SharePoint Server, and Skype for Business Server, on Azure Local. Built on a validated reference architecture using Azure Local Premier Solutions, it provides compatibility and support for sovereign deployments. Partner-led services provide guidance on sizing and configuration, ensuring a full-stack deployment including best practices for networking and security.

Managing infrastructure across a wide range of workloads is simplified with Azure as your control plane, offering cloud-consistent, at-scale management capabilities. In the Azure portal, you get full visibility into your Microsoft 365 Local deployment across the servers and clusters. All hosts and virtual machines (VMs) are Arc-enabled out of the box, providing built-in visibility into connectivity, health, updates, and security alerts and recommendations. Microsoft 365 Local leverages Azure Local's best-in-class sovereign and security controls, including Network Security Groups managed with Software Defined Networking enabled by Azure Arc, to isolate networks and secure access to infrastructure and workloads.
Azure Local also uses a secure-by-default strategy by applying a security baseline of over 300 settings on both the host infrastructure and the VMs running the productivity workloads. These security baselines incorporate best practices for network security, identity management, privileged access, data protection, and more, helping organizations maintain compliance and reduce risk.

Customers who want to take advantage of Azure as the control plane for Microsoft 365 Local can now benefit from a seamless cloud-based infrastructure management experience, including Azure services like Azure Monitor and Microsoft Defender for Cloud, available today with Microsoft 365 Local connected to Azure. For organizations with the most stringent jurisdictional requirements that need to operate Microsoft 365 Local in a fully disconnected environment, support for Azure Local disconnected operations will be available in early 2026.

To learn more about Microsoft 365 Local, visit https://aka.ms/M365LocalDocs. If you'd like to connect with an authorized partner for consultation and deployment support, reach out to your Microsoft account team or visit https://aka.ms/M365LocalSignup.