azure app service
Rethinking Ingress on Azure: Application Gateway for Containers Explained
Introduction Azure Application Gateway for Containers is a managed Azure service designed to handle incoming traffic for container-based applications. It brings Layer-7 load balancing, routing, TLS termination, and web application protection outside of the Kubernetes cluster and into an Azure-managed data plane. By separating traffic management from the cluster itself, the service reduces operational complexity while providing a more consistent, secure, and scalable way to expose container workloads on Azure. Service Overview What Application Gateway for Containers does Azure Application Gateway for Containers is a managed Layer-7 load balancing and ingress service built specifically for containerized workloads. Its main job is to receive incoming application traffic (HTTP/HTTPS), apply routing and security rules, and forward that traffic to the right backend containers running in your Kubernetes cluster. Instead of deploying and operating an ingress controller inside the cluster, Application Gateway for Containers runs outside the cluster, as an Azure-managed data plane. It integrates natively with Kubernetes through the Gateway API (and Ingress API), translating Kubernetes configuration into fully managed Azure networking behavior. In practical terms, it handles: HTTP/HTTPS routing based on hostnames, paths, headers, and methods TLS termination and certificate management Web Application Firewall (WAF) protection Scaling and high availability of the ingress layer All of this is provided as a managed Azure service, without running ingress pods in your cluster. What problems it solves Application Gateway for Containers addresses several common challenges teams face with traditional Kubernetes ingress setups: Operational overhead Running ingress controllers inside the cluster means managing upgrades, scaling, certificates, and availability yourself. Moving ingress to a managed Azure service significantly reduces this burden. Security boundaries By keeping traffic management and WAF outside the cluster, you reduce the attack surface of the Kubernetes environment and keep security controls aligned with Azure-native services. Consistency across environments Platform teams can offer a standard, Azure-managed ingress layer that behaves the same way across clusters and environments, instead of relying on different in-cluster ingress configurations. Separation of responsibilities Infrastructure teams manage the gateway and security policies, while application teams focus on Kubernetes resources like routes and services. How it differs from classic Application Gateway While both services share the “Application Gateway” name, they target different use cases and operating models. In the traditional model of using Azure Application Gateway is a general-purpose Layer-7 load balancer primarily designed for VM-based or service-based backends. It relies on centralized configuration through Azure resources and is not Kubernetes-native by design. Application Gateway for Containers, on the other hand: Is designed specifically for container platforms Uses Kubernetes APIs (Gateway API / Ingress) instead of manual listener and rule configuration Separates control plane and data plane more cleanly Enables faster, near real-time updates driven by Kubernetes changes Avoids running ingress components inside the cluster In short, classic Application Gateway is infrastructure-first, while Application Gateway for Containers is platform- and Kubernetes-first. 
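To make that split concrete, the Azure-side resource can be provisioned ahead of time with the Azure CLI, after which all routing intent lives in Kubernetes. The sketch below is illustrative rather than authoritative: it assumes the "alb" CLI extension and uses placeholder resource names, and the exact commands may differ from the current documentation.

# Assumption: Application Gateway for Containers commands live in the "alb" CLI extension
az extension add --name alb

# Create the Application Gateway for Containers resource and a frontend for incoming traffic
az network alb create --resource-group my-rg --name my-agc
az network alb frontend create --resource-group my-rg --alb-name my-agc --name my-frontend

# Associate the gateway with a subnet delegated to the service, so the managed
# data plane can reach backends in the cluster's virtual network
az network alb association create --resource-group my-rg --alb-name my-agc \
  --name my-association \
  --subnet "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/agc-subnet"

Once the Azure resource exists, day-to-day routing changes are made entirely through Kubernetes resources, as the following sections describe.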
Architecture at a Glance At a high level, Azure Application Gateway for Containers is built around a clear separation between control plane and data plane. This separation is one of the key architectural ideas behind the service and explains many of its benefits. Control plane and data plane The control plane is responsible for configuration and orchestration. It listens to Kubernetes resources—such as Gateway API or Ingress objects—and translates them into a running gateway configuration. When you create or update routing rules, TLS settings, or security policies in Kubernetes, the control plane picks up those changes and applies them automatically. The data plane is where traffic actually flows. It handles incoming HTTP and HTTPS requests, applies routing rules, performs TLS termination, and forwards traffic to the correct backend services inside your cluster. This data plane is fully managed by Azure and runs outside of the Kubernetes cluster, providing isolation and high availability by design. Because the data plane is not deployed as pods inside the cluster, it does not consume cluster resources and does not need to be scaled or upgraded by the customer. Managed components vs customer responsibilities One of the goals of Application Gateway for Containers is to reduce what customers need to operate, while still giving them control where it matters. Managed by Azure Application Gateway for Containers data plane Scaling, availability, and patching of the gateway Integration with Azure networking Web Application Firewall engine and updates Translation of Kubernetes configuration into gateway rules Customer-managed Kubernetes resources (Gateway API or Ingress) Backend services and workloads TLS certificates and references Routing and security intent (hosts, paths, policies) Network design and connectivity to the cluster This split allows platform teams to keep ownership of the underlying Azure infrastructure, while application teams interact with the gateway using familiar Kubernetes APIs. The result is a cleaner operating model with fewer moving parts inside the cluster. In short, Application Gateway for Containers acts as an Azure-managed ingress layer, driven by Kubernetes configuration but operated outside the cluster. This architecture keeps traffic management simple, scalable, and aligned with Azure-native networking and security services. Traffic Handling and Routing This section explains what happens to a request from the moment it reaches Azure until it is forwarded to a container running in your cluster. Traffic Flow: From Internet to Pod Azure Application Gateway for Containers (AGC) acts as the specialized "front door" for your Kubernetes workloads. By sitting outside the cluster, it manages high-volume traffic ingestion so your environment remains focused on application logic rather than networking overhead. The Request Journey Once a request is initiated by a client—such as a browser or an API—it follows a streamlined path to your container: 1. Entry via Public Frontend: The request reaches AGC’s public frontend endpoint. Note: While private frontends are currently the most requested feature and are under high-priority development, the service currently supports public-facing endpoints. 2. Rule Evaluation: AGC evaluates the incoming request against the routing rules you’ve defined using standard Kubernetes resources (Gateway API or Ingress). 3. Direct Pod Proxying: Once a rule is matched, AGC forwards the traffic directly to the backend pods within your cluster. 4. 
Azure Native Delivery: Because AGC operates as a managed data plane outside the cluster, traffic reaches your workloads via Azure networking. This removes the need for managing scaling or resource contention for in-cluster ingress pods. Flexibility in Security and Routing The architecture is designed to be as "hands-off" or as "hands-on" as your security policy requires: Optional TLS Offloading: You have full control over the encryption lifecycle. Depending on your specific use case, you can choose to perform TLS termination at the gateway to offload the compute-intensive decryption, or maintain encryption all the way to the container for end-to-end security. Simplified Infrastructure: By using AGC, you eliminate the "hop" typically required by in-cluster controllers, allowing the gateway to communicate with pods with minimal latency and high predictability. Kubernetes Integration Application Gateway for Containers is designed to integrate natively with Kubernetes, allowing teams to manage ingress behavior using familiar Kubernetes resources instead of Azure-specific configuration. This makes the service feel like a natural extension of the Kubernetes platform rather than an external load balancer. Gateway API as the primary integration model The Gateway API is the preferred and recommended way to integrate Application Gateway for Containers with Kubernetes. With the Gateway API: Platform teams define the Gateway and control how traffic enters the cluster. Application teams define routes (such as HTTPRoute) to expose their services. Responsibilities are clearly separated, supporting multi-team and multi-namespace environments. Application Gateway for Containers supports core Gateway API resources such as: GatewayClass Gateway HTTPRoute When these resources are created or updated, Application Gateway for Containers automatically translates them into gateway configuration and applies the changes in near real time. Ingress API support For teams that already use the traditional Kubernetes Ingress API, Application Gateway for Containers also provides Ingress support. This allows: Reuse of existing Ingress manifests A smoother migration path from older ingress controllers Gradual adoption of Gateway API over time Ingress resources are associated with Application Gateway for Containers using a specific ingress class. While fully functional, the Ingress API offers fewer capabilities and less flexibility compared to the Gateway API. How teams interact with the service A key benefit of this integration model is the clean separation of responsibilities: Platform teams Provision and manage Application Gateway for Containers Define gateways, listeners, and security boundaries Own network and security policies Application teams Define routes using Kubernetes APIs Control how their applications are exposed Do not need direct access to Azure networking resources This approach enables self-service for application teams while keeping governance and security centralized. Why this matters By integrating deeply with Kubernetes APIs, Application Gateway for Containers avoids custom controllers, sidecars, or ingress pods inside the cluster. Configuration stays declarative, changes are automated, and the operational model stays consistent with Kubernetes best practices. Security Capabilities Security is a core part of Azure Application Gateway for Containers and one of the main reasons teams choose it over in-cluster ingress controllers. 
The service brings Azure-native security controls directly in front of your container workloads, without adding complexity inside the cluster. Web Application Firewall (WAF) Application Gateway for Containers integrates with Azure Web Application Firewall (WAF) to protect applications against common web attacks such as SQL injection, cross-site scripting, and other OWASP Top 10 threats. A key differentiator of this service is that it leverages Microsoft's global threat intelligence. This provides an enterprise-grade layer of security that constantly evolves to block emerging threats, a significant advantage over many open-source or standard competitor WAF solutions. Because the WAF operates within the managed data plane, it offers several operational benefits: Zero Cluster Footprint: No WAF-specific pods or components are required to run inside your Kubernetes cluster, saving resources for your actual applications. Edge Protection: Security rules and policies are applied at the Azure network edge, ensuring malicious traffic is blocked before it ever reaches your workloads. Automated Maintenance: All rule updates, patching, and engine maintenance are handled entirely by Azure. Centralized Governance: WAF policies can be managed centrally, ensuring consistent security enforcement across multiple teams and namespaces—a critical requirement for regulated environments. TLS and certificate handling TLS termination happens directly at the gateway. HTTPS traffic is decrypted at the edge, inspected, and then forwarded to backend services. Key points: Certificates are referenced from Kubernetes configuration TLS policies are enforced by the Azure-managed gateway Applications receive plain HTTP traffic, keeping workloads simpler This approach allows teams to standardize TLS behavior across clusters and environments, while avoiding certificate logic inside application pods. Network isolation and exposure control Because Application Gateway for Containers runs outside the cluster, it provides a clear security boundary between external traffic and Kubernetes workloads. Common patterns include: Internet-facing gateways with WAF protection Private gateways for internal or zero-trust access Controlled exposure of only selected services By keeping traffic management and security at the gateway layer, clusters remain more isolated and easier to protect. Security by design Overall, the security model follows a simple principle: inspect, protect, and control traffic before it enters the cluster. This reduces the attack surface of Kubernetes, centralizes security controls, and aligns container ingress with Azure’s broader security ecosystem. Scale, Performance, and Limits Azure Application Gateway for Containers is built to handle production-scale traffic without requiring customers to manage capacity, scaling rules, or availability of the ingress layer. Scalability and performance are handled as part of the managed service. Interoperability: The Best of Both Worlds A common hesitation when adopting cloud-native networking is the fear of vendor lock-in. Many organizations worry that using a provider-specific ingress service will tie their application logic too closely to a single cloud’s proprietary configuration. Azure Application Gateway for Containers (AGC) addresses this directly by utilizing the Kubernetes Gateway API as its primary integration model. This creates a powerful decoupling between how you define your traffic and how that traffic is actually delivered. 
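In day-to-day operation, that decoupling shows up as ordinary Gateway API workflows. The following sketch assumes a Gateway named agc-gateway and an HTTPRoute named app-route, matching the appendix examples later in this article; only standard kubectl commands are used.

# Apply a routing change as a plain Gateway API manifest; nothing in the file is Azure-specific
kubectl apply -f httproute.yaml

# Read back the frontend address that the managed data plane programmed onto the Gateway
kubectl get gateway agc-gateway -o jsonpath='{.status.addresses[0].value}'

# Confirm the route was accepted and bound to the Gateway
kubectl describe httproute app-route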
Standardized API, Managed Execution By adopting this model, you gain two critical advantages simultaneously: Zero Vendor Lock-In (Standardized API): Your routing logic is defined using the open-source Kubernetes Gateway API standard. Because HTTPRoute and Gateway resources are community-driven standards, your configuration remains portable and familiar to any Kubernetes professional, regardless of the underlying infrastructure. Zero Operational Overhead (Managed Implementation): While the interface is a standard Kubernetes API, the implementation is a high-performance Azure-managed service. You gain the benefits of an enterprise-grade load balancer—automatic scaling, high availability, and integrated security—without the burden of managing, patching, or troubleshooting proxy pods inside your cluster. The "Pragmatic" Advantage As highlighted in recent architectural discussions, moving from traditional Ingress to the Gateway API is about more than just new features; it’s about interoperability. It allows platform teams to offer a consistent, self-service experience to developers while retaining the ability to leverage the best-in-class performance and security that only a native cloud provider can offer. The result is a future-proof architecture: your teams use the industry-standard language of Kubernetes to describe what they need, and Azure provides the managed muscle to make it happen. Scaling model Application Gateway for Containers uses an automatic scaling model. The gateway data plane scales up or down based on incoming traffic patterns, without manual intervention. From an operator’s perspective: There are no ingress pods to scale No node capacity planning for ingress No separate autoscaler to configure Scaling is handled entirely by Azure, allowing teams to focus on application behavior rather than ingress infrastructure. Performance characteristics Because the data plane runs outside the Kubernetes cluster, ingress traffic does not compete with application workloads for CPU or memory. This often results in: More predictable latency Better isolation between traffic management and application execution Consistent performance under load The service supports common production requirements such as: High concurrent connections Low-latency HTTP and HTTPS traffic Near real-time configuration updates driven by Kubernetes changes Service limits and considerations Like any managed service, Application Gateway for Containers has defined limits that architects should be aware of when designing solutions. These include limits around: Number of listeners and routes Backend service associations Certificates and TLS configurations Throughput and connection scaling thresholds These limits are documented and enforced by the platform to ensure stability and predictable behavior. For most application platforms, these limits are well above typical usage. However, they should be reviewed early when designing large multi-tenant or high-traffic environments. Designing with scale in mind The key takeaway is that Application Gateway for Containers removes ingress scaling from the cluster and turns it into an Azure-managed concern. This simplifies operations and provides a stable, high-performance entry point for container workloads. When to Use (and When Not to Use) Scenario Use it? Why Kubernetes workloads on Azure ✅ Yes The service is designed specifically for container platforms and integrates natively with Kubernetes APIs. 
Need for managed Layer-7 ingress | ✅ Yes | Routing, TLS, and scaling are handled by Azure without in-cluster components.
Enterprise security requirements (WAF, TLS policies) | ✅ Yes | Built-in Azure WAF and centralized TLS enforcement simplify security.
Platform team managing ingress for multiple apps | ✅ Yes | Clear separation between platform and application responsibilities.
Multi-tenant Kubernetes clusters | ✅ Yes | Gateway API model supports clean ownership boundaries and isolation.
Desire to avoid running ingress controllers in the cluster | ✅ Yes | No ingress pods, no cluster resource consumption.
VM-based or non-container backends | ❌ No | Classic Application Gateway is a better fit for non-container workloads.
Simple, low-traffic test or dev environments | ❌ Maybe not | A lightweight in-cluster ingress may be simpler and more cost-effective.
Need for custom or unsupported L7 features | ❌ Maybe not | Some advanced or niche ingress features may not yet be available.
Non-Kubernetes platforms | ❌ No | The service is tightly integrated with Kubernetes APIs.

When to Choose a Different Path: Azure Container Apps
While Application Gateway for Containers provides the ultimate control for Kubernetes environments, not every project requires that level of infrastructure management. For teams that don't need the full flexibility of Kubernetes and are looking for the fastest path to running containers on Azure without managing clusters or ingress infrastructure at all, Azure Container Apps offers a specialized alternative. It provides a fully managed, serverless container platform that handles scaling, ingress, and networking automatically "out of the box".

Key Differences at a Glance
Feature | AGC + Kubernetes | Azure Container Apps
Control | Granular control over cluster and ingress. | Fully managed, serverless experience.
Management | You manage the cluster; Azure manages the gateway. | Azure manages both the platform and ingress.
Best For | Complex, multi-team, or highly regulated environments. | Rapid development and simplified operations.

Appendix - Routing configuration examples
The following examples show how Application Gateway for Containers can be configured using both Gateway API and Ingress API for common routing and TLS scenarios. More examples can be found here, in the detailed documentation.

HTTP listener

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: agc-gateway
  rules:
    - backendRefs:
        - name: app-service
          port: 80

Path routing logic

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: path-routing
spec:
  parentRefs:
    - name: agc-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api-service
          port: 80
    - backendRefs:
        - name: web-service
          port: 80

Weighted canary / rollout

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-route
spec:
  parentRefs:
    - name: agc-gateway
  rules:
    - backendRefs:
        - name: app-v1
          port: 80
          weight: 80
        - name: app-v2
          port: 80
          weight: 20

TLS Termination

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: azure-alb-external
  tls:
    - hosts:
        - app.contoso.com
      secretName: tls-cert
  rules:
    - host: app.contoso.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80

Industry-Wide Certificate Changes Impacting Azure App Service Certificates
Executive Summary In early 2026, industry-wide changes mandated by browser applications and the CA/B Forum will affect both how TLS certificates are issued as well as their validity period. The CA/B Forum is a vendor body that establishes standards for securing websites and online communications through SSL/TLS certificates. Azure App Service is aligning with these standards for both App Service Managed Certificates (ASMC, free, DigiCert-issued) and App Service Certificates (ASC, paid, GoDaddy-issued). Most customers will experience no disruption. Action is required only if you pin certificates or use them for client authentication (mTLS). Update: February 17, 2026 We’ve published new Microsoft Learn documentation, Industry-wide certificate changes impacting Azure App Service , which provides more detailed guidance on these compliance-driven changes. The documentation also includes additional information not previously covered in this blog, such as updates to domain validation reuse, along with an expanding FAQ section. The Microsoft Learn documentation now represents the most complete and up-to-date overview of these changes. Going forward, any new details or clarifications will be published there, and we recommend bookmarking the documentation for the latest guidance. Who Should Read This? App Service administrators Security and compliance teams Anyone responsible for certificate management or application security Quick Reference: What’s Changing & What To Do Topic ASMC (Managed, free) ASC (GoDaddy, paid) Required Action New Cert Chain New chain (no action unless pinned) New chain (no action unless pinned) Remove certificate pinning Client Auth EKU Not supported (no action unless cert is used for mTLS) Not supported (no action unless cert is used for mTLS) Transition from mTLS Validity No change (already compliant) Two overlapping certs issued for the full year None (automated) If you do not pin certificates or use them for mTLS, no action is required. Timeline of Key Dates Date Change Action Required Mid-Jan 2026 and after ASMC migrates to new chain ASMC stops supporting client auth EKU Remove certificate pinning if used Transition to alternative authentication if the certificate is used for mTLS Mar 2026 and after ASC validity shortened ASC migrates to new chain ASC stops supporting client auth EKU Remove certificate pinning if used Transition to alternative authentication if the certificate is used for mTLS Actions Checklist For All Users Review your use of App Service certificates. If you do not pin these certificates and do not use them for mTLS, no action is required. If You Pin Certificates (ASMC or ASC) Remove all certificate or chain pinning before their respective key change dates to avoid service disruption. See Best Practices: Certificate Pinning. If You Use Certificates for Client Authentication (mTLS) Switch to an alternative authentication method before their respective key change dates to avoid service disruption, as client authentication EKU will no longer be supported for these certificates. See Sunsetting the client authentication EKU from DigiCert public TLS certificates. See Set Up TLS Mutual Authentication - Azure App Service Details & Rationale Why Are These Changes Happening? These updates are required by major browser programs (e.g., Chrome) and apply to all public CAs. They are designed to enhance security and compliance across the industry. Azure App Service is automating updates to minimize customer impact. What’s Changing? 
New Certificate Chain Certificates will be issued from a new chain to maintain browser trust. Impact: Remove any certificate pinning to avoid disruption. Removal of Client Authentication EKU Newly issued certificates will not support client authentication EKU. This change aligns with Google Chrome’s root program requirements to enhance security. Impact: If you use these certificates for mTLS, transition to an alternate authentication method. Shortening of Certificate Validity Certificate validity is now limited to a maximum of 200 days. Impact: ASMC is already compliant; ASC will automatically issue two overlapping certificates to cover one year. No billing impact. Frequently Asked Questions (FAQs) Will I lose coverage due to shorter validity? No. For App Service Certificate, App Service will issue two certificates to span the full year you purchased. Is this unique to DigiCert and GoDaddy? No. This is an industry-wide change. Do these changes impact certificates from other CAs? Yes. These changes are an industry-wide change. We recommend you reach out to your certificates’ CA for more information. Do I need to act today? If you do not pin or use these certs for mTLS, no action is required. Glossary ASMC: App Service Managed Certificate (free, DigiCert-issued) ASC: App Service Certificate (paid, GoDaddy-issued) EKU: Extended Key Usage mTLS: Mutual TLS (client certificate authentication) CA/B Forum: Certification Authority/Browser Forum Additional Resources Changes to the Managed TLS Feature Set Up TLS Mutual Authentication Azure App Service Best Practices – Certificate pinning DigiCert Root and Intermediate CA Certificate Updates 2023 Sunsetting the client authentication EKU from DigiCert public TLS certificates Feedback & Support If you have questions or need help, please visit our official support channels or the Microsoft Q&A, where our team and the community can assist you.2.2KViews1like0CommentsFrom Local MCP Server to Hosted Web Agent: App Service Observability, Part 2
In Part 1, we introduced the App Service Observability MCP Server — a proof-of-concept that lets GitHub Copilot (and other AI assistants) query your App Service logs, analyze errors, and help debug issues through natural language. That version runs locally alongside your IDE, and it's great for individual developers who want to investigate their apps without leaving VS Code. A local MCP server is powerful, but it's personal. Your teammate has to clone the repo, configure their IDE, and run it themselves. What if your on-call engineer could just open a browser and start asking questions? What if your whole team had a shared observability assistant — no setup required? In this post, we'll show how we took the same set of MCP tools and wrapped them in a hosted web application — deployed to Azure App Service with a chat UI and a built-in Azure OpenAI agent. We'll cover what changed, what stayed the same, and why this pattern opens the door to far more than just a web app. Quick Recap: The Local MCP Server If you haven't read Part 1, here's the short version: We built an MCP (Model Context Protocol) server that exposes ~15 observability tools for App Service — things like querying Log Analytics, fetching Kudu container logs, analyzing HTTP errors, correlating deployments with failures, and checking logging configurations. You point your AI assistant (GitHub Copilot, Claude, etc.) at the server, and it calls those tools on your behalf to answer questions about your apps. That version: Runs locally on your machine via node Uses stdio transport (your IDE spawns the process) Relies on your Azure credentials ( az login ) — the AI operates with your exact permissions Requires no additional Azure resources It works. It's fast. And for a developer investigating their own apps, it's the simplest path. This is still a perfectly valid way to use the project — nothing about the hosted version replaces it. The Problem: Sharing Is Hard The local MCP server has a limitation: it's tied to one developer's machine and IDE. In practice, this means: On-call engineers need to clone the repo and configure their environment before they can use it Team leads can't point someone at a URL and say "go investigate" Non-IDE users (PMs, support engineers) are left out entirely Consistent configuration (which subscription, which resource group) has to be managed per-person We wanted to keep the same tools and the same observability capabilities, but make them accessible to anyone with a browser. The Solution: Host It on App Service The answer turned out to be straightforward: deploy the MCP server itself to Azure App Service, give it a web frontend, and bring its own AI agent along for the ride. Here's what the hosted version adds on top of the local MCP server: Local MCP Server Hosted Web Agent How it works Runs locally, your IDE's AI calls the tools Deployed to Azure App Service with its own AI agent Interface VS Code, Claude Desktop, or any MCP client Browser-based chat UI Agent Your existing AI assistant (Copilot, Claude, etc.) Built-in Azure OpenAI (GPT-5-mini) Azure resources needed None beyond az login App Service, Azure OpenAI, VNet Best for Individual developers in their IDE Teams who want a shared, centralized tool Authentication Your local az login credentials Managed identity + Easy Auth (Entra ID) Deploy npm install && npm run build azd up The key insight: the MCP tools are identical. Both versions use the exact same set of observability tools — the only difference is who's calling them (your IDE's AI vs. 
the built-in Azure OpenAI agent) and where the server runs (your laptop vs. App Service).

What We Built
Architecture

┌──────────────────────────────────────────────────────────────────────┐
│ Web Browser                                                          │
│ React Chat UI — resource selectors, tool steps, markdown responses   │
└──────────────────────────────────┬───────────────────────────────────┘
                                   │ HTTP (REST API)
                                   ▼
┌──────────────────────────────────────────────────────────────────────┐
│ Azure App Service (Node.js 20)                                       │
│ Express Server                                                       │
│   ├── /api/chat → Agent loop (OpenAI → tool calls → respond)         │
│   ├── /api/set-context → Set target app for investigation            │
│   ├── /api/resource-groups, /api/apps → Resource discovery           │
│   ├── /mcp → MCP protocol endpoint (Streamable HTTP)                 │
│   └── / → Static SPA (React chat UI)                                 │
│ VNet Integration (snet-app)                                          │
└──────────┬───────────────────────┬─────────────────────┬─────────────┘
           │                       │                     │
           ▼                       ▼                     ▼
 ┌──────────────────┐   ┌────────────────────┐  ┌────────────────────┐
 │ Azure OpenAI     │   │ Log Analytics /    │  │ ARM API / Kudu     │
 │ (GPT-5-mini)     │   │ KQL Queries        │  │ (app metadata,     │
 │ Private EP       │   └────────────────────┘  │ container logs)    │
 └──────────────────┘                           └────────────────────┘

The Express server does double duty: it serves the React chat UI as static files and exposes the MCP endpoint for remote IDE connections. The agent loop is simple — when a user sends a message, the server calls Azure OpenAI, which may request tool calls, the server executes those tools, and the loop continues until the AI has a final answer.

Demo
The following screenshots show how this app can be used. The first screenshot shows what happens when you ask about a functioning app. You can see the agent made 5 tool calls and was able to give a thorough summary of the current app's status, recent deployments, as well as provide some recommendations for how to improve observability of the app itself. I expanded the tools section so you could see exactly what the agent was doing behind the scenes and get a sense of how it was thinking. At this point, you can proceed to ask more questions about your app if there were other pieces of information you wanted to pull from your logs.

I then injected a fault into this app by initiating a deployment pointing to a config file that didn't actually exist. The goal here was to prove that the agent could correlate an application issue to a specific deployment event, something that currently involves manual effort and deep investigation into logs and source code. Having an agent that can do this for you in a matter of seconds saves so much time and effort that could be directed to more important activities and ensures that you find the issue the first time.

A few minutes after initiating the bad deployment, I saw that my app was no longer responding. Rather than going to the logs and investigating myself, I asked the agent "I'm getting an application error now, what happened?" I obviously know what happened and what the source of the error was, but let's see if the agent can pick that up. The agent was able to see that something was wrong and then point me in the direction to address the issue. It ran a number of tool calls following our investigation steps called out in the skills file and was successfully able to identify the source of the error.
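For a sense of what the agent is doing under the hood here, the deployment-to-error correlation boils down to log queries you could also run by hand. Below is a simplified, hypothetical sketch using the Azure CLI; it assumes the app's AppServiceHTTPLogs are sent to a Log Analytics workspace and that you already know the deployment time, values the hosted agent instead retrieves automatically from ARM and Kudu.

# Placeholder values; in the hosted agent these come from ARM and Kudu automatically
WORKSPACE_ID="<log-analytics-workspace-guid>"
DEPLOY_TIME="2026-01-30T19:50:00Z"

# Count server errors in 5-minute buckets around the deployment to spot a step change
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query "AppServiceHTTPLogs
    | where TimeGenerated between ((datetime($DEPLOY_TIME) - 30m) .. (datetime($DEPLOY_TIME) + 30m))
    | where ScStatus >= 500
    | summarize errors = count() by bin(TimeGenerated, 5m)"

The agent runs the equivalent of this, plus deployment history lookups, in a couple of tool calls, which is what makes the correlation feel instantaneous.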
And lastly, I wanted to confirm the error was associated with the recent deployment, something that our agent should be able to do because we built in the tools it needs to be able to correlate these kinds of events with errors. I asked it directly and here was the response, exactly what I expected to see.

Infrastructure (one command)
Everything is defined in Bicep and deployed with the Azure Developer CLI:

azd up

This provisions:
App Service Plan (P0v3) with App Service (Node.js 20 LTS, VNet-integrated)
Azure OpenAI (GPT-5-mini, Global Standard) with a private endpoint and private DNS zone
VNet (10.0.0.0/16) with dedicated subnets for the app and private endpoints
Managed Identity with RBAC roles: Reader, Website Contributor, Log Analytics Reader, Cognitive Services OpenAI User
No API keys anywhere. The App Service authenticates to Azure OpenAI over a private network using its managed identity.

The Chat UI
The web interface is designed to get out of the way and let you focus on investigating:
Resource group and app dropdowns — Browse your subscription, pick the app you want to investigate
Tool step visibility — A collapsible panel shows exactly which tools the agent called, what arguments it used, and how long each took
Session management — Start fresh conversations, with confirmation dialogs when switching context mid-investigation
Markdown responses — The agent's answers are rendered with full formatting, code blocks, and tables
When you first open the app, it auto-discovers your subscription and populates the resource group dropdown. Select an app, hit "Tell me about this app," and the agent starts investigating.

Security
Since this app has subscription-wide read access to your App Services and Log Analytics workspaces, you should definitely enable authentication. After deploying, configure Easy Auth in the Azure Portal:
Go to your App Service → Authentication
Click Add identity provider → select Microsoft Entra ID
Set Unauthenticated requests to "HTTP 401 Unauthorized"
This ensures only authorized members of your organization can access the tool. The connection to Azure OpenAI is secured via a private endpoint — traffic never traverses the public internet. The app authenticates using its managed identity with the Cognitive Services OpenAI User role.

What Stayed the Same
This is the part worth emphasizing: the core tools didn't change at all. Whether you're using the local MCP server or the hosted web agent, you get the same 15 tools. The Agent Skill (SKILL.md) from Part 1 also carries over. The hosted agent has the same domain expertise for App Service debugging baked into its system prompt — the same debugging workflows, common error patterns, KQL templates, and SKU reference that make the local version effective.

The Bigger Picture: It's Not Just a Web App
Here's what makes this interesting beyond our specific implementation: the pattern is the point. We took a set of domain-specific tools (App Service observability), wrapped them in a standard protocol (MCP), and showed two ways to use them:
Local MCP server → Your IDE's AI calls the tools
Hosted web agent → A deployed app with its own AI calls the same tools
But those are just two examples.
The same tools could power:
A Microsoft Teams bot — Your on-call channel gets an observability assistant that anyone can mention
A Slack integration — Same idea, different platform
A CLI agent — A terminal-based chat for engineers who live in the command line
An automated monitor — An agent that periodically checks your apps and files alerts
An Azure Portal extension — Observability chat embedded directly in the portal experience
A mobile app — Check on your apps from your phone during an incident
The MCP tools are the foundation. The agent and interface are just the delivery mechanism. Build whatever surface makes sense for your team. This is one of the core ideas behind MCP: write the tools once, use them everywhere. The protocol standardizes how AI assistants discover and call tools, so you're not locked into any single client or agent.

Try It Yourself
Both versions are open-source:
Local MCP server (Part 1): github.com/seligj95/app-service-observability-agent
Hosted web agent (Part 2): github.com/seligj95/app-service-observability-agent-hosted
To deploy the hosted version:

git clone https://github.com/seligj95/app-service-observability-agent-hosted.git
cd app-service-observability-agent-hosted
azd up

To run the local version, see the Getting Started section in Part 1.

What's Next?
This is still a proof-of-concept, and we're continuing to explore how AI-powered observability can become a first-class part of the App Service platform. Some things we're thinking about:
More tools — Resource health, autoscale history, certificate expiration, network diagnostics
Multi-app investigations — Correlate issues across multiple apps in a resource group
Proactive monitoring — Agents that watch your apps and alert you before users notice
Deeper integration — What if every App Service came with a built-in observability endpoint?
We'd love your feedback. Try it out, open an issue, or submit a PR if you have ideas for additional tools or debugging patterns. And if you build something interesting on top of these MCP tools — a Teams bot, a CLI agent, anything — we'd love to hear about it.

Strapi on App Service: Quick start
In this quick start guide, you will learn how to create and deploy your first Strapi site on Azure App Service Linux, using Azure Database for MySQL or PostgreSQL, along with other necessary Azure resources. This guide utilizes an ARM template to install the required resources for hosting your Strapi application.

Chat with Your App Service Logs Using GitHub Copilot
The Problem
We know that logs and observability on App Service can be tricky—it's feedback we hear from customers all the time. There are many sources of information: Kudu container logs, Log Analytics tables (AppServiceHTTPLogs, ConsoleLogs, PlatformLogs, AppLogs), deployment logs, metrics, and more. It's not always obvious where to look, especially when you're troubleshooting under pressure. Today, debugging production issues often involves:
Opening the Azure Portal
Figuring out which blade or log source has the information you need
Navigating to Log Analytics and remembering (or searching for) the right KQL syntax
Interpreting results and manually correlating across multiple log tables
Repeating this process until you find the root cause
This proof-of-concept is one of our attempts to simplify this experience, using AI to bridge the gap between your question and the answer—no matter where that answer lives. What if you could just ask: "Why did my app stop?" and get an answer without ever having to leave your IDE or terminal?

The Solution: App Service Observability MCP Server
We've built a proof-of-concept MCP (Model Context Protocol) server that exposes App Service observability tools directly to AI assistants like GitHub Copilot in VS Code. It can be accessed from your preferred IDE like VS Code, or directly in your CLI such as GitHub Copilot CLI or Claude Code. You now don't even have to leave your coding environment to get answers to your App Service issues. Key capabilities:
Query logs from Log Analytics (HTTP logs, Console logs, Platform logs)
Fetch container logs directly from Kudu (no setup required!)
Analyze HTTP errors grouped by status code and endpoint
Find slow requests exceeding latency thresholds
Diagnose deployment issues — automatically correlate deploys with startup failures and errors
Check logging setup — runtime-specific recommendations (Python, Node.js, .NET, Java)
View deployment history and correlate with issues
Investigate container restarts and identify root causes

Demo
To see how the tool works, here are a series of screenshots going over a real-world scenario where a deployment issue breaks an app. The first screenshot is from when the app was functioning with no issues. You can see the tool calls one of the MCP tools to get the app info. I then asked if there were any issues. It found a couple issues early on, but they were all transient and after reviewing logs and errors, it determined the app was functional.

I then introduced a bug into my app and redeployed. I changed the name of a reference to a config file to one that didn't exist, thereby causing the app to crash because it was trying to load a file that didn't exist. GitHub Copilot was easily able to pick up the issue based on the deployment logs tooling. Also, because GitHub Copilot has direct access to my codebase for the app it was analyzing, it was able to see that the file that I was referencing didn't exist and knew what to change it to in order to get the app working again. This is one of the benefits of running this tool locally alongside your codebase - in addition to the tooling, it can also keep an eye on your codebase.

Now if you've ever used GitHub Copilot to help deploy one of your apps, you may know that it can natively make calls using the Azure CLI for example to pull logs. But in my experience, these actions don't always work, take multiple attempts, and take a significant amount of time to complete.
With the tooling and skills here, this operation becomes seamless because the agent knows exactly what to do and how to do it. It takes away the guesswork and multiple attempts that an agent without this tool would have to go through. In the following screenshot, I asked why the deployment was taking so long. Usually a deployment with azd for an app like the demo app here takes under a minute, so I wanted it to help me understand what the delay was caused by. Within a couple seconds, it was able to pickup the bug I introduced. Lastly, I then asked it to correlate the error with a deployment. One of the major problem areas our customers have is determining which deployment led to a specific error. We've built a tool into this feature that can correlate deployements and errors. Here's what the tool found. You can see I did a couple deployments with config file names that didn't exist. The tool was able to pinpoint the exact deployments that caused the issue and also tell me what to change to remediate the issue. This is a major benefit that was only possible with the help of AI and it's reasoning capabilities. How It Works The MCP server sits between your AI assistant and Azure, translating natural language requests into API calls: Important: Some tools require Log Analytics diagnostic settings to be enabled, but `get_recent_logs` and `get_deployments` work out of the box by calling Kudu and ARM directly. Security: Your Credentials, Your Access A common question when connecting AI tools to cloud resources: "What access does this have?" The MCP server here uses `DefaultAzureCredential` from the Azure SDK, which means it leverages your existing Azure identity—typically from `az login`. It can only access resources you already have permission to access. There are no stored secrets, no service principals with elevated privileges, and no additional credentials required. In other words: if you could run the investigation yourself in the portal or via Azure CLI, the MCP server can do it. If you don't have access to a resource, neither does the server. This design ensures: No privilege escalation — The AI operates with your exact permissions Existing RBAC applies — Your organization's access controls are respected Audit trail intact — All API calls use your identity and appear in Azure activity logs Looking ahead: For production scenarios with fully autonomous agents (running unattended or on a schedule), you'd want to apply least-privilege principles—create a dedicated service principal or managed identity with only the specific permissions needed (e.g., Reader on the App Service, Log Analytics Reader on the workspace). This limits blast radius and follows security best practices for any automated system accessing cloud resources. The Agent Skill: Domain Expertise for Debugging Beyond just exposing tools, we've included an Agent Skill file (SKILL.md) that gives the AI domain expertise for App Service debugging. What's an Agent Skill? Agent Skills are markdown files that provide structured knowledge to AI assistants. 
When GitHub Copilot has access to our SKILL.md, it gains:
Debugging workflows — Step-by-step approaches for investigating issues
Common error patterns — OOM crashes, Always On issues, HTTP 500/503 errors
KQL query templates — Ready-to-use queries for each scenario
SKU reference — Which features are available on each pricing tier
Best practices — How to present findings with actionable recommendations

Tools + Knowledge = Better Debugging
The MCP tools and Agent Skill work together:
Component | Role
MCP Tools | Fetch real data from your Azure resources
Agent Skill | Interpret results and guide investigations
For example, when investigating a container restart, the Agent Skill teaches Copilot to:
Check for OOM patterns in platform logs
Look for crash loops vs. graceful terminations
Consider Always On settings based on the SKU
Provide the right `az` command to fix the issue

Example: Investigating a Container Stop
Here's a real conversation showing AI-assisted debugging:

User: "Show me the recent logs"

Copilot: calls get_recent_logs
2026-01-30T19:52:42Z Container start method called.
2026-01-30T19:52:43Z Container is running.
2026-01-30T19:52:57Z Site started.
2026-01-30T19:53:17Z Container is terminating. Grace period: 5 seconds.
2026-01-30T19:53:22Z Site stopped.
Copilot: The site started but stopped after 20 seconds. Want me to investigate?

User: "Please investigate"

Copilot: calls check_diagnostics, get_http_errors, query_logs, get_app_info
Root Cause: Your container is stopping because "Always On" is disabled. With low traffic (only 14 requests in 24 hours), App Service terminates idle containers to save resources.
Evidence:
✅ No errors in platform or application logs
✅ No HTTP 5xx errors
Container terminated gracefully (not a crash)
Fix:
az webapp config set --name your-app --resource-group your-rg --always-on true
Note: Requires Basic tier or higher.
No portal. No manual KQL. Just ask and get answers.

Getting Started
Prerequisites
Node.js 20+
Azure CLI (`az login` for authentication)
VS Code with GitHub Copilot

Installation

git clone https://github.com/seligj95/app-service-observability-agent.git
cd app-service-observability-agent
npm install && npm run build

Configure VS Code
Create `.vscode/mcp.json` in your workspace:

{
  "servers": {
    "appservice-logs": {
      "command": "node",
      "args": ["/path/to/app-service-observability-agent/dist/index.js"],
      "type": "stdio",
      "env": {
        "AZURE_SUBSCRIPTION_ID": "your-subscription-id",
        "AZURE_RESOURCE_GROUP": "your-resource-group",
        "AZURE_APP_NAME": "your-app-name"
      }
    }
  }
}

Reload VS Code, click "Start" next to the server, and start chatting with Copilot about your App Service!

What's Next?
This is a proof-of-concept demonstrating what an App Service AI observability tool could look like. We're exploring how this pattern could become a first-class feature of the platform. Imagine a future where every App Service has a built-in MCP endpoint for observability: https://my-app.scm.azurewebsites.net/mcp. Stay tuned for part 2 of this blog where we will show how to host this tool on App Service so that not just you, but your whole team can leverage this tool for your workloads.

Try It Out
GitHub Repo: github.com/seligj95/app-service-observability-agent
Agent Skill: Check out SKILL.md for the debugging knowledge base
We'd love your feedback! Use the comments below or open an issue or PR if you have ideas for additional tools or debugging patterns.

Announcing Conversational Diagnostics (Preview) on Windows App Services
We are pleased to announce Conversational Diagnostics (Preview), a new feature in Windows App Services that leverages the powerful capabilities of OpenAI to take your problem-solving to the next level. With this feature, you can engage in a conversational dialogue to accurately identify the issue with your Windows Web Apps, and receive a clear set of solutions, documentation, and diagnostics to help resolve the issue. How to access Conversational Diagnostics (Preview) Navigate to your Windows Web App on the Azure Portal. Select Diagnose and Solve Problems. Select AI Powered Diagnostics (preview) to open chat. What is the tool capable of? Type a question or select any of the sample questions in the options. Types of questions you can ask include but not limited to: Questions regarding a specific issue that your app is experiencing Questions about how-to instructions Questions about best practices The AI-powered Diagnostics runs relevant diagnostic checks and provides a diagnosis of the issue and a recommended solution. Additionally, it can provide documentation addressing your questions. Select View Details for a deep-dive view on the solution and reference to the suggested insight. Launch recommended solutions directly from the chat. Conversational Diagnostics (Preview) remembers the context of the conversation, so you can ask follow-up questions. Once you are done with troubleshooting, you can create a summary of your troubleshooting session by responding to the chat with the message that the issue has been resolved. If you want to start a new investigation, select New Chat. How to sign up for access To get started, sign up for early access to the Conversational Diagnostics (Preview) feature in the Diagnose and Solve Problems experience. Your access request may take up to 4 weeks to complete. Once access is granted, Conversational Diagnostics (Preview) will be available for all the Windows App Service on your subscription. Navigate to your Windows Web App on the Azure Portal. Select Diagnose and Solve Problems. Select Request Access in the announcement banner. Whether you're a seasoned developer or a newcomer to App Services, Conversational Diagnostics (Preview) provides an intuitive and efficient way to understand and tackle any issue you may encounter. You can easily access this feature whenever you need it without any extra configuration. Say goodbye to frustrating troubleshooting and hello to a smarter, more efficient way of resolving issues with App Services. Preview Terms Of Use | Microsoft Azure Microsoft Privacy Statement – Microsoft privacy5.7KViews4likes0CommentsAnnouncing Public Preview: ASEv3 Outbound Network Segmentation
🗣️ Update 2/3/2026: As of February 3, 2026, the Outbound Network Segmentation feature is now Generally Available. For more information, refer to the documentation. 🔍 What Is Outbound Network Segmentation? Outbound Network Segmentation allows you to define and control how outbound traffic is routed from your App Service Environment v3 apps. This means you can now segment outbound traffic at an app level, enabling fine-grained egress control that aligns with enterprise security policies and compliance requirements. Previously, all outbound traffic from an App Service Environment v3 originated from the full subnet range hosting the App Service Environment, making it difficult for networking teams to apply per-app restrictions, like what is available with the multi-tenant App Service offering. With this new capability, you can now: For each app, define the subnet all outbound traffic is routed through. Assign dedicated outbound IPs per app via NAT Gateways. Route traffic through custom firewalls or appliances. Apply Network Security Groups (NSGs) with greater precision. Improve auditability and compliance for regulated workloads. In an App Service Environment, each worker is assigned an IP from the subnet, but there is no way to group IPs from various apps/plans to allow for routing/blocking/allowing specific app traffic from a networking perspective. With outbound network segmentation, you can now direct various app traffic to the same subnet/virtual network and gain this level of control. For example, consider the following scenario where you would like to ensure that only App A is able to talk to Database A. To ensure only traffic from App A can reach Database A, you join App A to an alternate subnet (vnet-integration-subnet). The alternate subnet has network access to the private endpoint subnet via NSG. This means that only traffic from the virtual network integration subnet can reach the private endpoint subnet, which then gives access to the database. 🧪 What’s Included in the Public Preview? This feature is currently available in all public Azure regions. If you're interested in trying it out, you will need to create a new App Service Environment and enable the following cluster setting during creation. Cluster settings can be configured using an ARM/Bicep template. For guidance on configuring cluster settings, see Custom configuration settings for App Service Environments. "clusterSettings": [ { "name": "MultipleSubnetJoinEnabled", "value": "true" } ] Once the App Service Environment is created and this cluster setting is enabled, you will have access to join apps to alternate subnets at any time. However, if you don't set the cluster setting during creation, the App Service Environment will not support this feature. Enabling this feature on existing App Service Environments is not supported. Portal support for enabling this cluster setting as well as joining alternate subnets is not available at this time. To configure the cluster setting, use an ARM/Bicep template to create the App Service Environment. To join an alternate subnet, you can use the following Azure CLI command. The alternate subnet must be empty and be delegated to Microsoft.web/serverfarms prior to attempting to join it. Also ensure that application traffic routing is enabled for your app. This is key to ensure all traffic is routed through the alternate subnet and not the default route. 
az webapp vnet-integration add --resource-group <APP-RESOURCE-GROUP> --name <APP-NAME> --vnet <VNET-NAME> --subnet <ALTERNATE-SUBNET-NAME> If your alternate subnet is in a different resource group than your app, run "az webapp vnet-integration add -h" and see the help text to learn how to specify this resource id. 🔧 Tech Specs If you're familiar with the multi-plan subnet join feature available in the multi-tenant App Service offering, unfortunately, App Service Environments and the alternate subnet join feature are incompatible with multi-plan subnet join. For App Service Environments, each app from a given plan can only integrate with 1 alternate subnet. Similar to regular virtual network integration, however, a given plan can have multiple different connections and apps in the same plan can use either of the connections. For multi-tenant App Service, this is limited to 2 connections per plan. For App Service Environment v3, you can have up to 4 connections. If you need to remove or change the alternate subnet join for a specific app, you can do this at any time. First remove the existing join, and then add a new one following the same process as you did previously. 💡 Why This Feature Matters App Service Environment v3 has always been about isolation, scalability, and control. With outbound segmentation, we’re taking that control to the next level. Whether you're running high-scale web apps, handling sensitive data, or managing complex environments, this feature gives you the tools to secure outbound traffic without compromising performance. 📚 Learn More To dive deeper into App Service Environment v3 networking capabilities, check out the App Service Environment v3 networking overview. Have questions or feedback? Drop them in the comments below.543Views1like0CommentsAnnouncing the Public Preview of the New App Service Quota Self-Service Experience
Update 10/30/2025: The App Service Quota Self-Service experience is back online after a short period during which we incorporated your feedback and made needed updates. As this is a public preview, availability and features are subject to change as we receive and incorporate feedback.

What’s New?

The updated experience introduces a dedicated App Service Quota blade in the Azure portal, offering a streamlined and intuitive interface to:

- View current usage and limits across the various SKUs
- Set custom quotas tailored to your App Service plan needs

This new experience empowers developers and IT admins to proactively manage resources, avoid service disruptions, and optimize performance.

Quick Reference - Start here!

- If your deployment requires quota for ten or more subscriptions, file a support ticket with problem type Quota, following the instructions at the bottom of this post.
- If any subscription included in your request requires zone redundancy (note that most Isolated v2 deployments require ZR), file a support ticket with problem type Quota, following the instructions at the bottom of this post.
- Otherwise, use the new self-service experience to increase your quota automatically.

Self-service Quota Requests

For non-zone-redundant needs, quota alone is sufficient to enable App Service deployment or scale-out. Follow these steps to place your request.

1. Navigate to the Quotas resource provider in the Azure portal

2. Select App Service (Public Preview)

Navigating the primary interface:

- Each App Service VM size is represented as a separate SKU. If you intend to scale up or down within a specific offering (e.g., Premium v3), an equivalent number of VMs needs to be requested for each applicable size of that offering (e.g., request 5 instances for both P1v3 and P3v3).
- As with other quotas, you can filter by region, subscription, provider, or usage. Note that the portal now shows "App Service (Public Preview)" as the provider name. You can also group the results by usage, quota (App Service VM type), or location (region).
- Current usage is represented as App Service VMs. This allows you to quickly identify which SKUs are nearing their quota limits.
- Adjustments can be made inline, with no need to visit another page. This is covered in detail in the next section.

Total Regional VMs: There is a SKU in each region called Total Regional VMs, which summarizes your usage and available quota across all individual SKUs in that region. There are three key points about using Total Regional VMs:

1. You should never request Total Regional VMs quota directly; it increases automatically in response to your requests for individual SKU quota. If you are unable to deploy a given SKU, request more quota for that SKU to unblock the deployment.
2. For your deployment to succeed, you must have sufficient quota for the individual SKU as well as for Total Regional VMs. If either usage is at its respective limit, you will be unable to deploy and must request more of that individual SKU's quota to proceed.
3. In some regions, Total Regional VMs appears with a "0 of 0" usage and limit, and no individual SKU quotas are shown. This indicates that you should not interact with the portal to resolve quota issues in that region. Instead, try the deployment and observe any error messages that arise.
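For instance, a deployment attempt in such a region can be as simple as creating a plan and letting any quota error surface. This is only an illustrative sketch: the resource group and plan names are placeholders, and the SKU and region should match what you actually intend to deploy.

```
# Attempt the deployment directly; if quota is insufficient, the
# resulting error message indicates that more quota is needed
az appservice plan create \
  --resource-group <RESOURCE-GROUP> \
  --name <PLAN-NAME> \
  --location <REGION> \
  --sku P1V3
```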
If any error messages indicate more quota is needed, request it by filing a support ticket with problem type Quota, following the instructions at the bottom of this post, so that App Service can identify and fix any potential quota issues. In most cases this will not be necessary: wherever "0 of 0" is shown for Total Regional VMs and no individual SKU quotas are visible, your deployment should work without requesting quota.

3. Request quota adjustments

Clicking the pen icon opens a flyout window that captures the quota request. The quota type (App Service SKU) is already populated, along with current usage. Note that your request is not incremental: you must specify the new limit that you wish to see reflected in the portal. For example, to request two additional instances of P1v2 VMs when your current limit is 5, you would enter 7 as the new limit. Click submit to send the request for automatic processing.

How quota approvals work: Immediately upon submitting a quota request, you will see a processing dialog. If the quota request can be automatically fulfilled, no support request is needed, and you should receive confirmation within a few minutes of submission. If the request cannot be automatically fulfilled (for example, because the requested new limit exceeds what can be automatically granted for the region), you will be given the option to file a support request with the same information.

4. If applicable, create a support ticket

When creating a support ticket, you will need to repopulate the Region and App Service plan details; the new limit has already been populated for you. If you forget the region or SKU that was requested, you can find them in your notifications pane. If you choose to create a support ticket, you will interact with the capacity management team for that region. This is a 24x7 service, so requests may be created at any time. Once you have filed the support request, you can track its status via the Help + support dashboard.

Known issues

The self-service quota request experience for App Service is in public preview. Here are some caveats worth noting while the team finalizes the release for general availability:

- Closing the quota request flyout window stops meaningful notifications for that request. You can still view the outcome of your quota requests by checking actual quota, but if you want to rely on notifications for alerts, we recommend leaving the quota request window open for the few minutes that it is processing.
- Some SKUs are not yet represented in the quota dashboard. These will be added later in the public preview.
- The Activity Log does not currently provide a meaningful summary of previous quota requests and their outcomes. This will also be addressed during the public preview.
- As noted in the walkthrough, the new experience does not enable zone-redundant deployments. Quota is an inherently regional construct, and zone-redundant enablement requires a separate step that can only be taken in response to a support ticket.
- Quota API documentation is being drafted to enable bulk non-zone-redundant quota requests without requiring a support ticket.

Filing a Support Ticket

If your deployment requires zone redundancy or contains many subscriptions, we recommend filing a support ticket with issue type "Technical" and problem type "Quota".

We want your feedback!
If you notice any aspect of the experience that does not work as expected, or you have feedback on how to make it better, please use the comments below to share your thoughts!

Announcing the Public Preview of the New Hybrid Connection Manager (HCM)
Update May 28, 2025: The new Hybrid Connection Manager is now Generally Available. The download links shared in this post will give you the latest Generally Available version. Learn more

Key Features and Improvements

The new version of HCM introduces several enhancements aimed at improving usability, performance, and security:

- Cross-Platform Compatibility: The new HCM is now supported on both Windows and Linux clients, allowing for seamless management of hybrid connections across different platforms and giving users greater flexibility and control.
- Enhanced User Interface: We have redesigned the GUI to offer a more intuitive and efficient user experience. In addition to a new, more accessible GUI, we have introduced a CLI that includes all the functionality needed to manage connections, especially for our Linux customers who may rely solely on a CLI to manage their workloads.
- Improved Visibility: The new version offers enhanced logging and connection testing, which provides greater insight into connections and simplifies debugging.

Getting Started

To get started with the new Hybrid Connection Manager, follow these steps:

Requirements:

- Windows clients must have ports 4999-5001 available
- Linux clients must have port 5001 available

Download and Install: The new HCM can be downloaded from the links below. Ensure you download the version that corresponds to your client. If you are new to the HCM, check out the existing documentation to learn more about the product and how to get started. If you are an existing Windows user, installing the new Windows version will automatically upgrade your existing installation, and all your existing connections will be ported over automatically. There is no automated migration path from the Windows to the Linux version at this time.

Windows download

Download the MSI package and follow the installation instructions.

Linux download

From your terminal running as administrator, follow these steps:

```
sudo apt update
sudo apt install tar gzip build-essential
sudo wget "https://download.microsoft.com/download/HybridConnectionManager-Linux.tar.gz"
sudo tar -xf HybridConnectionManager-Linux.tar.gz
cd HybridConnectionManager/
sudo chmod 755 setup.sh
sudo ./setup.sh
```

Once that is finished, your HCM is ready to be used:

- Run `hcm help` to see the available commands.
- For interactive mode, you will need to install and log in to the Azure CLI; authentication from the HCM to Azure is done using this credential. Install the Azure CLI with `curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash`, then run `az login` and follow the prompts.
- Add your first connection by running `hcm add`.

Configure Your Connections: Use the GUI or the CLI to add hybrid connections to your local machine.

Manage Your Connections: Use the GUI or the CLI with the `hcm list` and `hcm remove` commands to manage your hybrid connections efficiently. Detailed help text is available for each command to assist you.

Join the Preview

We invite you to join the public preview and provide your valuable feedback. Your insights will help us refine and improve the Hybrid Connection Manager to better meet your needs.

Feedback and Support

If you encounter any issues or have suggestions, please reach out to hcmsupport@service.microsoft.com or leave a comment on this post. We are committed to ensuring a smooth and productive experience with the new HCM. Detailed documentation and guidance will be available in the coming weeks as we get closer to General Availability (GA).
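As a quick illustration of the CLI workflow described above, a first session on a Linux client might look like the following. This is only a sketch: `hcm add` may prompt interactively for the connection details, and the exact arguments accepted by each command are described in its built-in help.

```
# Sign in first; HCM authenticates to Azure using this credential
az login

# Add a hybrid connection to this machine (follow the prompts)
hcm add

# Confirm which hybrid connections are configured locally
hcm list

# Remove a connection that is no longer needed (see `hcm help` for arguments)
hcm remove
```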
Thank you for your continued support and collaboration. We look forward to hearing your thoughts and feedback on this exciting new release.

Exciting Updates Coming to Conversational Diagnostics (Public Preview)
Last year, at Ignite 2023, we unveiled Conversational Diagnostics (Preview), a tool with AI-powered capabilities that enhances problem-solving for Windows Web Apps. This year, we're thrilled to share what’s new and forthcoming for Conversational Diagnostics (Preview). Get ready to experience a broader range of functionalities and expanded support across various Azure products, making your troubleshooting journey even more seamless and intuitive.