Announcing the General Availability of the Azure Arc Gateway for Arc-enabled Servers!
We’re excited to announce the General Availability of Arc gateway for Arc‑enabled servers. Arc gateway dramatically simplifies the network configuration required to use Azure Arc by consolidating outbound connectivity through a small, predictable set of endpoints. For customers operating behind enterprise proxies or firewalls, this means faster onboarding, fewer change requests, and a smoother path to value with Azure Arc.

What’s new

To Arc‑enable a server, customers previously had to allow 19 distinct endpoints. With Arc gateway GA, you can do the same with just 7, a ~63% reduction that removes friction for security and networking teams.

Why This Matters

Organizations with strict outbound controls often spend days, or weeks, coordinating approvals for multiple URLs before they can onboard resources to Azure Arc. By consolidating traffic to a smaller set of destinations, Arc gateway:
- Accelerates onboarding for Arc‑enabled servers by cutting down the proxy/firewall approvals needed to get started.
- Simplifies operations with a consistent, repeatable pattern for routing Arc agent and extension traffic to Azure.

How Arc gateway works

Arc gateway introduces two components that work together to streamline connectivity:
- Arc gateway (Azure resource): A single, unique endpoint in your Azure tenant that receives incoming traffic from on‑premises Arc workloads and forwards it to the right Azure services. You configure your enterprise environment to allow this endpoint.
- Azure Arc Proxy (on every Arc‑enabled server): A component of the connected machine agent that routes agent and extension traffic to Azure via the Arc gateway endpoint. It’s part of the core Arc agent; no separate install is required.

At a high level, traffic flows: Arc agent → Arc Proxy → Enterprise Proxy → Arc gateway → Target Azure service.
Scenario Coverage

As part of this GA release, common Arc‑enabled Server scenarios are supported through the gateway, including:
- Windows Admin Center
- SSH
- Extended Security Updates (ESU)
- Azure Extension for SQL Server

For other scenarios, some customer‑specific data plane destinations (e.g., your Log Analytics workspace or Key Vault URLs) may still need to be allow‑listed per your environment. Please consult the Arc gateway documentation for the current scenario‑by‑scenario coverage and any remaining per‑service URLs. Over time, the number of scenarios fully covered by Arc gateway will continue to grow.

Get started

1. Create an Arc gateway resource using the Azure portal, Azure CLI, or PowerShell.
2. Allow the Arc gateway endpoint (and the small set of core endpoints) in your enterprise proxy/firewall.
3. Onboard or update servers to use your Arc gateway resource and start managing them with Azure Arc.

For step‑by‑step guidance, see the Arc gateway documentation on Microsoft Learn. You can also watch a quick Arc gateway Jumpstart demo to see the experience end‑to‑end.

FAQs

Does Arc gateway require new software on my servers? No additional installation - Arc Proxy is part of the standard connected machine agent for Arc‑enabled servers.

Will every Arc scenario route through the gateway today? Many high‑value server scenarios are covered at GA; some customer‑specific data plane endpoints (for example, Log Analytics workspace FQDNs) may still need to be allowed. Check the docs for the latest coverage details.

When will Arc gateway for Azure Local be GA? Today! Please refer to the Arc gateway GA on Azure Local announcement to learn more.

When will Arc gateway for Arc-enabled Kubernetes be GA? We don't have an exact ETA to share quite yet for Arc gateway GA for Arc-enabled Kubernetes. The feature is currently still in Public Preview. Please refer to the Public Preview documentation for more information.
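The get-started steps above can be sketched with the Azure CLI and the connected machine agent. This is an illustrative sketch, not official guidance: the resource names, region, and IDs are placeholders, and the `az arcgateway` command group and `azcmagent connect --gateway-id` flag are assumed to match the tooling described in the Arc gateway documentation.

```shell
# Illustrative names: myArcGateway, rg-arc, and the region are placeholders.
# 1. Create the Arc gateway resource (assumes the 'arcgateway' CLI extension).
az arcgateway create \
  --name myArcGateway \
  --resource-group rg-arc \
  --location eastus

# 2. Capture the gateway resource ID for onboarding.
GATEWAY_ID=$(az arcgateway show \
  --name myArcGateway --resource-group rg-arc --query id -o tsv)

# 3. On the server, onboard through the gateway (run elevated on the machine).
azcmagent connect \
  --resource-group rg-arc \
  --tenant-id "<tenantId>" \
  --subscription-id "<subscriptionId>" \
  --location eastus \
  --gateway-id "$GATEWAY_ID"
```

Existing Arc-enabled servers can be pointed at the gateway later instead of being re-onboarded; see the documentation for the update path.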
Tell us what you think

We’d love your feedback on Arc gateway GA for servers—what worked well, what could be improved, and which scenarios you want next. Use the Arc gateway feedback form to share your input with the product team.

Addressing Air Gap Requirements through Secure Azure Arc Onboarding
This blog post explores the challenges and solutions for implementing air gap environments in highly regulated sectors like finance, healthcare, and government. It discusses the complexities of air gap implementation, the importance of control and data plane separation, and provides architectural patterns for secure Azure Arc onboarding. By adopting a zero-trust approach and leveraging Azure Arc, organizations can achieve secure, compliant connectivity while modernizing their IT operations.

SQL Server enabled by Azure Arc is now generally available in the US Government Virginia region
We’re thrilled to announce that SQL Server enabled by Azure Arc on Windows is now generally available in the US Government Virginia region. With this, U.S. government agencies and organizations can manage SQL Server instances outside of Azure from the Azure Government portal, in a secure and compliant manner. SQL Server enabled by Azure Arc resources in US Government Virginia can be onboarded and viewed in the Azure Government portal just like any Azure resource, giving you a single pane of glass to monitor and organize your SQL Server estate in the Gov cloud.

Available Features

Currently, in the US Government Virginia region, SQL Server enabled by Azure Arc provides the following features:
- Connect (onboard) a SQL Server instance to Azure Arc.
- SQL Server inventory, which includes the following capabilities in the Azure portal:
  - View SQL Server instances as Azure resources.
  - View databases as Azure resources.
  - View the properties for each server. For example, you can view the version, edition, and database for each instance.
- Subscribe to Extended Security Updates in a production environment.
- Manage licensing and billing of SQL Server enabled by Azure Arc:
  - License virtual cores.
  - Review licensing limitations.

All other features aren't currently available.

How to Onboard Your SQL Server

Onboarding SQL Server enabled by Azure Arc in the Government cloud is a two-step process that you can initiate from the Azure (US Gov) portal.
Step 1: Connect hybrid machines with Azure Arc-enabled servers
Step 2: Connect your SQL Server to Azure Arc on a server already enabled by Azure Arc

Limitations

The following SQL Server features aren't currently available in any US Government region:
- Failover cluster instance (FCI)
- Availability group (AG)
- License physical cores (p-cores) with unlimited virtualization.
- License physical cores (p-cores) without virtual machines.
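The two onboarding steps can be sketched with the Azure CLI targeted at the Azure Government cloud. This is a hedged sketch: the machine, group, and ID values are placeholders, and the SQL extension identifiers (`WindowsAgent.SqlServer`, publisher `Microsoft.AzureData`) follow the public-cloud tooling and are assumed to carry over to US Gov Virginia.

```shell
# Target the US Government cloud before creating resources.
az cloud set --name AzureUSGovernment
az login

# Step 1 runs on the server itself with the Azure Connected Machine agent;
# --cloud points the agent at the Government cloud (values illustrative):
# azcmagent connect --resource-group rg-sql-gov --location usgovvirginia \
#   --subscription-id "<subscriptionId>" --tenant-id "<tenantId>" \
#   --cloud AzureUSGovernment

# Step 2: install the SQL Server extension on the Arc-enabled machine.
az connectedmachine extension create \
  --machine-name myArcServer \
  --resource-group rg-sql-gov \
  --name WindowsAgent.SqlServer \
  --type WindowsAgent.SqlServer \
  --publisher Microsoft.AzureData \
  --location usgovvirginia
```

Once the extension reports Succeeded, the SQL Server instance and its databases appear as resources in the Azure Government portal.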
SQL Server associated services:
- SQL Server Analysis Services
- SQL Server Integration Services
- SQL Server Reporting Services
- Power BI Report Server

Future Plans and Roadmap

This is a major first step in bringing Azure Arc’s hybrid data management to Azure Government, and we will continue to deliver additional enhancements to achieve service parity.

Conclusion

The availability of SQL Server enabled by Azure Arc in the US Gov Virginia region marks an important milestone for hybrid data management in Government. If you’re an Azure Government user managing SQL Server instances, we invite you to try out SQL Server enabled by Azure Arc in the US Government Virginia region. And please, share your feedback with us through the community forum or your Microsoft representatives.

Learn More:
SQL Server enabled by Azure Arc in US Government
SQL Server enabled by Azure Arc

Update: September 12, 2025

As part of our ongoing improvements, we’ve lifted certain limitations in US Government Virginia. You can now onboard SQL Server enabled by Azure Arc environments with:
- Always On availability groups
- Associated SQL Server services:
  - SQL Server Analysis Services
  - SQL Server Integration Services
  - SQL Server Reporting Services
  - Power BI Report Server

Update: September 22, 2025

As part of our ongoing improvements, we’ve lifted more limitations in US Government Virginia. You can now have SQL Server enabled by Azure Arc environments with:
- License physical cores (p-cores) with unlimited virtualization.
- License physical cores (p-cores) without virtual machines.

Preview of Arc enabled SQL Server in US Government Virginia
Introduction

We are excited to announce that Azure Arc-enabled SQL Server on Windows is now in public preview for the US Government Virginia region. With Azure Arc-enabled SQL Server, U.S. government agencies and organizations can manage SQL Server instances outside of Azure from the Azure Government portal, in a secure and compliant manner. Arc-enabled SQL Server resources in US Gov Virginia can be onboarded and viewed in the Azure Government portal just like any Azure resource, giving you a single pane of glass to monitor and organize your SQL Server estate in the Gov cloud.

Preview features of Azure Arc-Enabled SQL Server

Currently, in the US Government Virginia region, SQL Server registration provides the following features:
- Connect (onboard) a SQL Server instance to Azure Arc.
- SQL Server inventory, which includes the following capabilities in the Azure portal:
  - View the SQL Server instance as an Azure resource.
  - View databases as Azure resources.
  - View the properties for each server. For example, you can view the version, edition, and database for each instance.

All other features, including Extended Security Updates (ESU), are not currently available.

How to Onboard Your SQL Server

Onboarding a SQL Server to Azure Arc in the Government cloud is a two-step process that you can initiate from the Azure (US Gov) portal.
Step 1: Connect hybrid machines with Azure Arc-enabled servers
Step 2: Connect your SQL Server to Azure Arc on a server already enabled by Azure Arc

Limitations

The following SQL Server features are not currently available in any US Government region:
- Failover cluster instance (FCI)
- Availability group (AG)
- SQL Server services like SSIS, SSRS, or Power BI Report Server

Future Plans and Roadmap

This public preview is a major first step in bringing Azure Arc’s hybrid data management to Azure Government, and more enhancements are on the way.
We will be enabling features like Arc-based billing (PAYG) and ESU purchasing, along with feature parity with the public cloud, in the future.

Conclusion

The availability of Azure Arc-enabled SQL Server in the US Gov Virginia region marks an important milestone for hybrid data management in Government. If you’re an Azure Government user managing SQL Server instances, we invite you to try out this public preview. And please, share your feedback with us through the community forum or your Microsoft representatives.

Learn More:
SQL Server enabled by Azure Arc in US Government Preview
SQL Server enabled by Azure Arc

Update August 14, 2025

Arc enabled SQL Server in US Government Virginia is now generally available with support for licensing and ESU. Please see SQL Server enabled by Azure Arc in US Government.

Announcing the preview of Software Defined Networking (SDN) on Azure Local
Big news for Azure Local customers! Starting in Azure Local version 2506, we’re excited to announce the Public Preview of Software Defined Networking (SDN) on Azure Local using the Azure Arc resource bridge. This release introduces cloud-native networking capabilities for access control at the network layer, utilizing Network Security Groups (NSGs) on Azure Local.

Key highlights in this release are:
1. Centralized network management: Manage logical networks, network interfaces, and NSGs through the Azure control plane – whether your preference is the Azure Portal, Azure Command-Line Interface (CLI), or Azure Resource Manager templates.
2. Fine-grained traffic control: Safeguard your edge workloads with policy-driven access controls by applying inbound and outbound allow/deny rules on NSGs, just as you would in Azure.
3. Seamless hybrid consistency: Reduce operational friction and accelerate your IT staff’s ramp-up on advanced networking skills by using the same familiar tools and constructs across both the Azure public cloud and Azure Local.

Software Defined Networking (SDN) forms the backbone of delivering Azure-style networking on-premises. Whether you’re securing enterprise applications or extending cloud-scale agility to your on-premises infrastructure, Azure Local, combined with SDN enabled by Azure Arc, offers a unified and scalable solution. Try this feature today and let us know how it transforms your networking operations!

What’s New in this Preview?

Here’s what you can do today with SDN enabled by Azure Arc:
✅ Run SDN Network Controller as a Failover Cluster service — no VMs required!
✅ Deploy logical networks — use VLAN-backed networks in your datacenter that integrate with SDN enabled by Azure Arc.
✅ Attach VM network interfaces — assign static or DHCP IPs to VMs from logical networks.
✅ Apply NSGs — create, attach, and manage NSGs directly from Azure on your logical networks (VLANs in your datacenter) and/or on the VM network interface.
This enables a generic rule set for VLANs, with a crisper rule set for individual Azure Local VM network interfaces using complete 5-tuple control: source and destination IP, port, and protocol.
✅ Use Default Network Policies — apply baseline security policies during VM creation for your primary NIC. Select well-known inbound ports such as HTTP (while we block everything else for you), while still allowing outbound traffic. Or select an existing NSG you already have!

SDN enabled by Azure Arc (Preview) vs. SDN managed by on-premises tools

Choosing your path: Some SDN features like virtual networks (vNETs), Software Load Balancers (SLBs), and Gateways are not yet supported in SDN enabled by Azure Arc (Preview). But good news: you’ve still got options. If your workloads need those features today, you can leverage SDN managed by on-premises tools:
- SDN Express (PowerShell)
- Windows Admin Center (WAC)

SDN managed by on-premises tools continues to provide full-stack SDN capabilities, including SLBs, Gateways, and vNET peering, while we actively work on bringing this additional value to the complete SDN enabled by Azure Arc feature set. You must choose one mode of SDN management and cannot run in a hybrid management mode, mixing the two. Please read the important considerations section before getting started!

Thank You to Our Community

This milestone was only possible because of your input, your use cases, and your edge innovation. We're beyond excited to see what you build next with SDN enabled by Azure Arc. To try it out, head to the Azure Local documentation.

Let’s keep pushing the edge forward. Together!

EOL of Azure Linux 2.0 on Azure Kubernetes Service enabled by Azure Arc
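To illustrate the 5-tuple control described above, here is a hedged sketch of creating an NSG and one inbound allow rule with the Azure CLI. The command group (`az stack-hci-vm network nsg`), flag names, resource names, and custom location path are assumptions for illustration only; consult the Azure Local SDN documentation for the exact syntax in your release.

```shell
# All names (rg-local, myNSG, the custom location, addresses) are illustrative.
az stack-hci-vm network nsg create \
  --resource-group rg-local \
  --name myNSG \
  --custom-location "/subscriptions/<sub>/resourceGroups/rg-local/providers/Microsoft.ExtendedLocation/customLocations/myCustomLocation"

# Allow inbound HTTPS from one subnet only; every part of the 5-tuple
# (source/destination IP, ports, protocol) is stated explicitly.
az stack-hci-vm network nsg rule create \
  --resource-group rg-local \
  --nsg-name myNSG \
  --name allow-https-from-app-subnet \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.10.1.0/24 \
  --source-port-ranges "*" \
  --destination-address-prefixes 10.10.2.10 \
  --destination-port-ranges 443
```

The same NSG can then be attached to a logical network for a coarse VLAN-wide rule set, or to an individual VM network interface for the crisper per-VM control described above.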
Azure Linux 2.0 will reach its End of Life (EOL) in July 2025

Azure Linux 2.0 (formerly CBL-Mariner) will reach its official End of Life (EOL) on July 31, 2025. After this date, it will no longer receive updates, security patches, or support from the Azure Linux team. Starting with the Azure Local 2507 release, Azure Kubernetes Service enabled by Azure Arc will ship Azure Linux 3.0 images for all supported Kubernetes versions. This change applies to all AKS enabled by Azure Arc deployments, as we have used Azure Linux 2.0 as the base image in the past. To maintain security compliance and ensure continued support, all AKS Arc customers must plan to migrate to Azure Linux 3.0 by upgrading their Azure Local instances to the 2507 release when it is available.

What's new in Azure Linux 3.0

Approximately every three years, Azure Linux releases a new version of its operating system with upgrades to major components. Azure Linux 3.0 offers increased package availability and versions, an updated kernel, and improvements to performance, security, tooling, and developer experience. Some of the major components upgraded from Azure Linux 2.0 to 3.0 include:

Component    | Azure Linux 3.0                                  | Azure Linux 2.0      | Release Notes
Linux Kernel | v6.6 (latest LTS)                                | v5.15 (previous LTS) | Linux 6.6
Containerd   | v1.7.13 (will also offer v2.0 once it is stable) | v1.6.26              | Containerd Releases
systemd      | v255                                             | v250                 | systemd Releases
OpenSSL      | v3.3.0                                           | v1.1.1k              | OpenSSL 3.3

For more details on the key features and updates in Azure Linux 3.0, see the 3.0 GitHub release notes.

Upgrading to Azure Linux 3.0

Once the Azure Local 2507 release is available, update to 2507. Once your Azure Local instance has upgraded, you can then upgrade your Kubernetes clusters. You can choose to remain on the same Kubernetes version by providing the same version number in the aksarc upgrade command. Once the upgrade is completed, you can check the kernel version on your Linux nodes.
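Staying on the same Kubernetes version while picking up the new node image looks roughly like this. This is a sketch: the cluster and resource group names are placeholders, and the flag names are assumed to follow the `aksarc` Azure CLI extension.

```shell
# Re-submit the cluster's current Kubernetes version so only the node
# OS image (Azure Linux 2.0 -> 3.0) changes during the upgrade.
az aksarc upgrade \
  --name myAksArcCluster \
  --resource-group rg-local \
  --kubernetes-version 1.30.4
```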
Kernel version v6.6 is the latest Azure Linux 3.0 version.

Sample command:

kubectl --kubeconfig /path/to/aks-cluster-kubeconfig get nodes -o wide

Sample output:

NAME              STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION   CONTAINER-RUNTIME
moc-lsbe393il9d   Ready    control-plane   3h14m   v1.30.4   100.72.248.133   <none>        CBL-Mariner/Linux   6.6.92.2         containerd://1.6.26
moc-lzwagtkjah5   Ready    control-plane   3h12m   v1.30.4   100.72.248.134   <none>        CBL-Mariner/Linux   6.6.92.2         containerd://1.6.26

FAQs

Is Azure Linux the same as Mariner? Yes, Mariner was rebranded to Azure Linux. We will slowly update our documentation and VM/container image tags to reflect this name change.

When did Azure Linux 3.0 GA? Azure Linux 3.0 became generally available in August 2024.

When will Azure Linux 3.0 reach End of Life (EOL)? We currently support each major version for 3 years after it becomes generally available. Azure Linux 3.0 will reach EOL in Summer 2027.

How to keep in touch with the AKS Arc team

For updates, feedback, and feature requests related to AKS Arc:
- Ask questions and submit feedback via AKS Arc GitHub Issues
- Partners with support questions can reach out to aks-hci-talk@microsoft.com

Cloud infrastructure for disconnected environments enabled by Azure Arc
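If you manage several clusters, a small helper makes the check above repeatable. This is an illustrative sketch, not official tooling; the kubeconfig path and jsonpath query are assumptions for the example.

```shell
# check_kernels: reads "<node> <kernel>" pairs on stdin and exits non-zero
# if any node is not yet on an Azure Linux 3.0 (6.6.x) kernel.
check_kernels() {
  awk '$2 !~ /^6\.6\./ { print $1 " still on kernel " $2; bad = 1 } END { exit bad }'
}

# Typical use against a cluster (kubeconfig path is illustrative):
# kubectl --kubeconfig /path/to/aks-cluster-kubeconfig get nodes \
#   -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.nodeInfo.kernelVersion}{"\n"}{end}' \
#   | check_kernels
```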
Organizations in highly regulated industries such as government, defense, financial services, healthcare, and energy often operate under strict security and compliance requirements and across distributed locations, some with limited or no connectivity to public cloud. Leveraging advanced capabilities, including AI, in the face of this complexity can be time-consuming and resource intensive. Azure Local, enabled by Azure Arc, offers simplicity. Azure Local’s distributed infrastructure extends cloud services and security across distributed locations, including customer-owned on-premises environments. Through Azure Arc, customers benefit from a single management experience and full operational control that is consistent from cloud to edge. Available in preview to pre-qualified customers, Azure Local with disconnected operations extends these capabilities even further – enabling organizations to deploy, manage, and operate cloud-native infrastructure and services in completely disconnected or air-gapped networks.

What is disconnected operations?

Disconnected operations is an add-on capability of Azure Local, delivered as a virtual appliance, that enables the deployment and lifecycle management of your Azure Local infrastructure and Arc-enabled services without any dependency on a continuous cloud connection.

Key Benefits
- Consistent Azure Experience: You can operate your disconnected environment using the same tools you already know - Azure Portal, Azure CLI, and ARM templates - extended through a local control plane.
- Built-in Azure Services: Through Azure Arc, you can deploy, update, and manage Azure services such as Azure Local VMs, Azure Kubernetes Service (AKS), etc.
- Data Residency and Control: You can govern and keep data within your organization's physical and legal jurisdiction to meet data residency, operational autonomy, and technological isolation requirements.
Key Use Cases

Azure Local with disconnected operations unlocks a range of impactful use cases for regulated industries:
- Government and Defense: Running sensitive government workloads and classified data more securely in air-gapped and tactical environments with familiar Azure management and operations.
- Manufacturing: Deploying and managing mission-critical applications like industrial process automation and control systems for real-time optimizations in more highly secure environments with zero connectivity.
- Financial Services: Enhanced protection of sensitive financial data with real-time data analytics and decision making, while ensuring compliance with strict regulations in isolated networks.
- Healthcare: Running critical workloads that need real-time processing, storing and managing sensitive patient data with increased levels of privacy and security in disconnected environments.
- Energy: Operating critical infrastructure in isolated environments, such as electrical production and distribution facilities, oil rigs, or remote pipelines.

Here is an example of how disconnected operations for Azure Local can support mission-critical emergency response and recovery efforts by providing essential services when critical infrastructure and networks are unavailable.

Core Features and Capabilities

Simplified Deployment and Management

Download and deploy the disconnected operations virtual appliance on Azure Local Premier Solutions through a streamlined user interface. Create and manage Azure Local instances using the local control plane, with the same tooling experience as Azure.

Offline Updates

The monthly update package includes all the essential components: the appliance, Azure Local software, AKS, and Arc-enabled service agents. You can update and manage the entire Azure Local instance using the local control plane without an internet connection.
Monitoring Integration

You can monitor your Azure Local instances and VMs using external monitoring solutions like SCOM by installing custom management packs, and monitor AKS clusters through third-party open-source solutions like Prometheus and Grafana.

Run Mission-Critical Workloads – Anytime, Anywhere

Azure Local VMs

You can run VMs with flexible sizing, support for custom VM images, and high availability through storage replication and automatic failover – all managed through the local Azure interface.

AI & Containers with AKS

You can use disconnected AI containers with Azure Kubernetes Service (AKS) on Azure Local to deploy and manage AI applications in disconnected scenarios where data residency and operational autonomy are required. AKS enables the deployment and management of containerized applications such as AI agents and models, deep learning frameworks, and related tools, which can be leveraged for inferencing, fine-tuning, and training in isolated networks. AKS also automates resource scaling, allowing for the dynamic addition and removal of container instances to more efficiently utilize hardware resources, including GPUs, which are critical for AI workloads. This provides a consistent Azure experience in managing Kubernetes clusters and AI workloads, with the same tooling and processes as in connected environments.

Get Started: Resources and Next Steps

Microsoft is excited to announce the upcoming preview of disconnected operations for Azure Local in Q3 CY25 for both Commercial and Government Cloud customers. To learn more, please visit Disconnected operations for Azure Local overview (preview) - Azure Local. Ready to participate? Get Qualified! Or contact your Microsoft account team. Please also check out this session at Microsoft Build, https://build.microsoft.com/en-US/sessions/BRK195, by Mark Russinovich, one of the most influential minds in cloud computing.
His insights into the latest Azure innovations and the future of cloud architecture and computing make it a must-watch session!

Deploy a Kubernetes Application Programmatically Using Terraform and CLI
In our previous blog post, we explored the benefits of Kubernetes apps along with an introduction to programmatically deploying Kubernetes apps. Today we will cover deploying a Kubernetes application programmatically using Terraform and the CLI. These deployment methods can streamline your workflow and automate repetitive tasks.

Deploying your Kubernetes Application using Terraform

This walkthrough assumes you have previous knowledge of Terraform. For additional information and guidance on using Terraform to provision a cluster, please refer here.

Prerequisites

Before we begin, ensure you have the following:
- Terraform
- Azure CLI

Sample Location

You can find the Terraform sample we will be using at this location: Terraform Sample

Prepare the Environment

First, initialize Terraform in the current directory where you have copied the k8s-extension-install sample by running the following command:

terraform init

In the directory, you will find two example tfvars files. These files can be used to deploy the application with different configurations:
- azure-vote-without-config.tfvars - Deploy the application with the default configuration for azure-vote.
- azure-vote-with-config.tfvars - Deploy/update the application with a custom configuration for azure-vote.

Before you test run the sample tfvars files, update the following in the tfvars files:
- cluster_name - The name of the AKS cluster.
- resource_group_name - The name of the resource group where the AKS cluster is located.
- subscription_id - The subscription ID where the AKS cluster is located.

Deploy the Application

To deploy the application with the default configuration for azure-vote, run:

terraform apply -var-file="azure-vote-without-config.tfvars"

To deploy or update the application with a custom configuration for azure-vote, use:

terraform apply -var-file="azure-vote-with-config.tfvars"

Conclusion

And that's it! You've successfully deployed your Kubernetes application programmatically using Terraform.
This process can drastically reduce the time and effort involved in managing and scaling your applications. By using Terraform, you can ensure that your deployment is consistent and repeatable, making it easier to maintain your infrastructure as code.

Deploying a Kubernetes Application from Azure CLI

Deploying a Kubernetes application using Azure CLI can seem daunting, but we’re here to make it simple and accessible. Follow these steps, and you’ll have your azure-vote application up and running in no time!

Prerequisites

Before we get started, ensure you have the following:
- Azure CLI installed on your machine

Deploying the Sample Azure-Vote Application from the Marketplace

Step 1: Log in to Azure

Open your terminal and log in to your Azure account by running:

az login

Step 2: Set Your Subscription

Specify the subscription you want to use with:

az account set --subscription <subscriptionId>

Step 3: Deploy the Azure-Vote Application

Now, deploy the azure-vote application to your Kubernetes cluster with the following command:

az k8s-extension create --name azure-vote --scope cluster `
  --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters `
  --extension-type commercialMarketplaceServices.AzureVote `
  --plan-name azure-vote-paid `
  --plan-product azure-vote-final-1 `
  --plan-publisher microsoft_commercial_marketplace_services `
  --configuration-settings title=VoteAnimal value1=Cats value2=Dogs

Updating Configuration Settings

If you want to update the configuration settings of the azure-vote application, you can do so easily. Use the following command to change the configuration settings:

az k8s-extension update --name azure-vote `
  --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters `
  --configuration-settings value1=Elephant value2=Horse

And there you have it! By following these steps, you can deploy and update the azure-vote application on your Kubernetes cluster using Azure CLI.
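After the create or update completes, you can sanity-check the extension's state before moving on. This sketch uses `az k8s-extension show`; the `--query` projection is illustrative, and the placeholders match those in the steps above.

```shell
# Inspect the installed extension; provisioningState should read "Succeeded"
# and the configuration settings should reflect your latest update.
az k8s-extension show \
  --name azure-vote \
  --cluster-name <clusterName> \
  --resource-group <resourceGroupName> \
  --cluster-type managedClusters \
  --query "{state:provisioningState, settings:configurationSettings}"
```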
Conclusion

Deploying Kubernetes applications using Azure CLI is a powerful way to manage and scale your applications. The process described above helps ensure your deployments are consistent and repeatable, simplifying the maintenance of your infrastructure as code. 😄

Announcing the Public Preview of the Azure Arc gateway!
The wait is over: we are thrilled to introduce the Public Preview of the Azure Arc gateway for Arc-enabled Servers and Arc-enabled Kubernetes! It reduces the number of endpoints customers must configure in their enterprise proxy when setting up Azure Arc services.

How Does it Work?

Arc gateway introduces two new components:
- Arc gateway – an Azure resource with a single, unique endpoint that handles the incoming traffic to Azure from on-prem Arc workloads. This endpoint is to be configured in the customer’s enterprise proxies.
- Azure Arc Proxy – a component of the Arc connected machine agent that routes all agent and extension traffic to its destination in Azure via an Arc gateway resource. The Arc Proxy is installed on every Arc-enabled resource as part of the core Arc agent.

Arc gateway on Arc-enabled Servers Architecture

Arc gateway on Arc-enabled Kubernetes Architecture

How do I Deploy Arc gateway?

At a high level, there are three steps:
1. Create an Arc gateway resource.
2. Get the Arc gateway URL and configure your enterprise proxy.
3. Either onboard your servers/Kubernetes clusters using the gateway resource info, or update the existing Arc server/Kubernetes resource with the created gateway resource info.

For Arc-enabled Servers, you can find Arc gateway details and instructions in the Public Preview documentation and the Arc gateway for Arc-enabled Servers Jumpstart episode. For Arc-enabled Kubernetes, more details are available in the Public Preview documentation.

Arc gateway Endpoint Coverage, Illustrated by the Azure Monitoring Scenario

For the Arc gateway public preview, we have focused primarily on covering service endpoints for Azure control plane traffic. Most of the data plane endpoints are not yet covered by Arc gateway. I’d like to use the Azure monitoring on Arc-enabled Servers scenario to illustrate the endpoints covered by the Public Preview release.
Below is a comparison of the list of endpoints customers must open access to in their enterprise proxy with and without Arc gateway for this common scenario. As displayed, Arc gateway cuts the list of required endpoints nearly in half and removes the need for customers to allow wildcard endpoints in their on-prem environment.

Endpoints required without Arc gateway (17):

Arc-enabled Servers endpoints:
- aka.ms
- download.microsoft.com
- packages.microsoft.com
- login.microsoftonline.com
- *.login.microsoftonline.com
- pas.windows.net
- management.azure.com
- *.his.arc.azure.com
- *.guestconfiguration.azure.com
- azgn*.servicebus.windows.net
- *.blob.core.windows.net
- dc.services.visualstudio.com

Azure Monitor endpoints:
- global.handler.control.monitor.azure.com
- <virtual-machine-region-name>.handler.control.monitor.azure.com
- <log-analytics-workspace-id>.ods.opinsights.azure.com
- <virtual-machine-region-name>.monitoring.azure.com
- <data-collection-endpoint>.<virtual-machine-region-name>.ingest.monitor.azure.com

Endpoints required with Arc gateway (8):

Arc-enabled Servers endpoints:
- <URL Prefix>.gw.arc.azure.com
- management.azure.com
- login.microsoftonline.com
- gbl.his.arc.azure.com
- <region>.his.arc.azure.com
- packages.microsoft.com

Azure Monitor endpoints:
- <log-analytics-workspace-id>.ods.opinsights.azure.com
- <data-collection-endpoint>.<virtual-machine-region-name>.ingest.monitor.azure.com

We're continuing to expand the endpoint coverage and further reduce the number of endpoints required to be configured through customers' enterprise proxies. I’d like to invite you to try out the Arc gateway Public Preview release and share any questions, comments, feedback, and requests via the Public Preview contact form.
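As a quick sanity check while updating proxy rules, the allowlist above can be expressed as shell glob patterns. This is an illustrative helper, not official tooling; the wildcard patterns mirror the "with Arc gateway" column, with the documented placeholders widened to `*`.

```shell
# is_allowed HOST: succeeds if HOST matches one of the 8 endpoints
# required with Arc gateway (placeholders widened to glob wildcards).
is_allowed() {
  host=$1
  for pattern in \
    '*.gw.arc.azure.com' management.azure.com login.microsoftonline.com \
    gbl.his.arc.azure.com '*.his.arc.azure.com' packages.microsoft.com \
    '*.ods.opinsights.azure.com' '*.ingest.monitor.azure.com'
  do
    # $pattern is deliberately unquoted so it is treated as a glob.
    case $host in
      $pattern) return 0 ;;
    esac
  done
  return 1
}

# Example: is_allowed myprefix.gw.arc.azure.com && echo allowed
```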