Azure Application Gateway for Containers is a modern Azure service that helps you expose containerized applications in a secure and scalable way. By moving ingress, routing, and web protection outside the Kubernetes cluster, it simplifies operations while keeping traffic management fully Azure-managed. This article walks through how the service works, what problems it solves, and when it’s the right choice for your container platforms.
Introduction
Azure Application Gateway for Containers is a managed Azure service designed to handle incoming traffic for container-based applications. It brings Layer-7 load balancing, routing, TLS termination, and web application protection outside of the Kubernetes cluster and into an Azure-managed data plane. By separating traffic management from the cluster itself, the service reduces operational complexity while providing a more consistent, secure, and scalable way to expose container workloads on Azure.
All images and visualizations in this article have been generated using generative AI technologies.
Service Overview
What Application Gateway for Containers does
Figure: Overview of Application Gateway for Containers

Azure Application Gateway for Containers is a managed Layer-7 load balancing and ingress service built specifically for containerized workloads. Its main job is to receive incoming application traffic (HTTP/HTTPS), apply routing and security rules, and forward that traffic to the right backend containers running in your Kubernetes cluster.
Instead of deploying and operating an ingress controller inside the cluster, Application Gateway for Containers runs outside the cluster, as an Azure-managed data plane. It integrates natively with Kubernetes through the Gateway API (and Ingress API), translating Kubernetes configuration into fully managed Azure networking behavior.
In practical terms, it handles:
- HTTP/HTTPS routing based on hostnames, paths, headers, and methods
- TLS termination and certificate management
- Web Application Firewall (WAF) protection
- Scaling and high availability of the ingress layer
All of this is provided as a managed Azure service, without running ingress pods in your cluster.
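To make the routing capabilities above concrete, here is a minimal, hypothetical HTTPRoute sketch that combines hostname and header matching. It assumes a Gateway named agc-gateway (as in the appendix examples); the hostname, header name, and backend service names are illustrative only.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: header-routing
spec:
  parentRefs:
    - name: agc-gateway          # the Gateway managed by the platform team
  hostnames:
    - app.contoso.com
  rules:
    - matches:
        - headers:
            - name: x-beta-user  # requests carrying this header go to the beta backend
              value: "true"
      backendRefs:
        - name: app-beta
          port: 80
    - backendRefs:               # all other traffic falls through to the stable backend
        - name: app-stable
          port: 80
```

Rules are evaluated in order, so the header match takes precedence and the final rule acts as the default route.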
What problems it solves
Application Gateway for Containers addresses several common challenges teams face with traditional Kubernetes ingress setups:
- Operational overhead: Running ingress controllers inside the cluster means managing upgrades, scaling, certificates, and availability yourself. Moving ingress to a managed Azure service significantly reduces this burden.
- Security boundaries: By keeping traffic management and WAF outside the cluster, you reduce the attack surface of the Kubernetes environment and keep security controls aligned with Azure-native services.
- Consistency across environments: Platform teams can offer a standard, Azure-managed ingress layer that behaves the same way across clusters and environments, instead of relying on different in-cluster ingress configurations.
- Separation of responsibilities: Infrastructure teams manage the gateway and security policies, while application teams focus on Kubernetes resources like routes and services.
How it differs from classic Application Gateway
While both services share the “Application Gateway” name, they target different use cases and operating models.
In the traditional model, Azure Application Gateway is a general-purpose Layer-7 load balancer primarily designed for VM-based or service-based backends. It relies on centralized configuration through Azure resources and is not Kubernetes-native by design.
Application Gateway for Containers, on the other hand:
- Is designed specifically for container platforms
- Uses Kubernetes APIs (Gateway API / Ingress) instead of manual listener and rule configuration
- Separates control plane and data plane more cleanly
- Enables faster, near real-time updates driven by Kubernetes changes
- Avoids running ingress components inside the cluster
In short, classic Application Gateway is infrastructure-first, while Application Gateway for Containers is platform- and Kubernetes-first.
Architecture at a Glance
Figure: Shared responsibility model of Azure Application Gateway for Containers

At a high level, Azure Application Gateway for Containers is built around a clear separation between control plane and data plane. This separation is one of the key architectural ideas behind the service and explains many of its benefits.
Control plane and data plane
The control plane is responsible for configuration and orchestration. It listens to Kubernetes resources—such as Gateway API or Ingress objects—and translates them into a running gateway configuration. When you create or update routing rules, TLS settings, or security policies in Kubernetes, the control plane picks up those changes and applies them automatically.
The data plane is where traffic actually flows. It handles incoming HTTP and HTTPS requests, applies routing rules, performs TLS termination, and forwards traffic to the correct backend services inside your cluster. This data plane is fully managed by Azure and runs outside of the Kubernetes cluster, providing isolation and high availability by design.
Because the data plane is not deployed as pods inside the cluster, it does not consume cluster resources and does not need to be scaled or upgraded by the customer.
Managed components vs customer responsibilities
One of the goals of Application Gateway for Containers is to reduce what customers need to operate, while still giving them control where it matters.
Managed by Azure
- Application Gateway for Containers data plane
- Scaling, availability, and patching of the gateway
- Integration with Azure networking
- Web Application Firewall engine and updates
- Translation of Kubernetes configuration into gateway rules
Customer-managed
- Kubernetes resources (Gateway API or Ingress)
- Backend services and workloads
- TLS certificates and references
- Routing and security intent (hosts, paths, policies)
- Network design and connectivity to the cluster
This split allows platform teams to keep ownership of the underlying Azure infrastructure, while application teams interact with the gateway using familiar Kubernetes APIs. The result is a cleaner operating model with fewer moving parts inside the cluster.
In short, Application Gateway for Containers acts as an Azure-managed ingress layer, driven by Kubernetes configuration but operated outside the cluster. This architecture keeps traffic management simple, scalable, and aligned with Azure-native networking and security services.
Traffic Handling and Routing
This section explains what happens to a request from the moment it reaches Azure until it is forwarded to a container running in your cluster.
Figure: Traffic flow for Application Gateway for Containers

Traffic Flow: From Internet to Pod
Azure Application Gateway for Containers (AGC) acts as the specialized "front door" for your Kubernetes workloads. By sitting outside the cluster, it manages high-volume traffic ingestion so your environment remains focused on application logic rather than networking overhead.
The Request Journey
Once a request is initiated by a client—such as a browser or an API—it follows a streamlined path to your container:
1. Entry via public frontend: The request reaches AGC’s public frontend endpoint. (Note: while private frontends are the most requested feature and under high-priority development, the service currently supports public-facing endpoints.)
2. Rule evaluation: AGC evaluates the incoming request against the routing rules you’ve defined using standard Kubernetes resources (Gateway API or Ingress).
3. Direct pod proxying: Once a rule is matched, AGC forwards the traffic directly to the backend pods within your cluster.
4. Azure-native delivery: Because AGC operates as a managed data plane outside the cluster, traffic reaches your workloads via Azure networking. This removes the need to manage scaling or resource contention for in-cluster ingress pods.
Flexibility in Security and Routing
The architecture is designed to be as "hands-off" or as "hands-on" as your security policy requires:
- Optional TLS Offloading: You have full control over the encryption lifecycle. Depending on your specific use case, you can choose to perform TLS termination at the gateway to offload the compute-intensive decryption, or maintain encryption all the way to the container for end-to-end security.
- Simplified Infrastructure: By using AGC, you eliminate the "hop" typically required by in-cluster controllers, allowing the gateway to communicate with pods with minimal latency and high predictability.
Kubernetes Integration
Application Gateway for Containers is designed to integrate natively with Kubernetes, allowing teams to manage ingress behavior using familiar Kubernetes resources instead of Azure-specific configuration. This makes the service feel like a natural extension of the Kubernetes platform rather than an external load balancer.
Figure: Kubernetes integration options

Gateway API as the primary integration model
The Gateway API is the preferred and recommended way to integrate Application Gateway for Containers with Kubernetes.
With the Gateway API:
- Platform teams define the Gateway and control how traffic enters the cluster.
- Application teams define routes (such as HTTPRoute) to expose their services.
- Responsibilities are clearly separated, supporting multi-team and multi-namespace environments.
Application Gateway for Containers supports core Gateway API resources such as:
- GatewayClass
- Gateway
- HTTPRoute
When these resources are created or updated, Application Gateway for Containers automatically translates them into gateway configuration and applies the changes in near real time.
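As a minimal sketch of the platform-team side, the following Gateway exposes an HTTP listener through Application Gateway for Containers. It assumes a GatewayClass named azure-alb-external (the class name used with the ALB controller; verify the name your deployment registers), and routes from the same namespace only.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agc-gateway
spec:
  gatewayClassName: azure-alb-external  # GatewayClass registered by the ALB controller (assumed name)
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: Same                    # only HTTPRoutes in this namespace may attach
```

Application teams then attach HTTPRoutes (such as those in the appendix) to this Gateway via parentRefs, and the service applies the resulting configuration in near real time.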
Ingress API support
For teams that already use the traditional Kubernetes Ingress API, Application Gateway for Containers also provides Ingress support.
This allows:
- Reuse of existing Ingress manifests
- A smoother migration path from older ingress controllers
- Gradual adoption of Gateway API over time
Ingress resources are associated with Application Gateway for Containers using a specific ingress class. While fully functional, the Ingress API offers fewer capabilities and less flexibility compared to the Gateway API.
How teams interact with the service
A key benefit of this integration model is the clean separation of responsibilities:
- Platform teams
  - Provision and manage Application Gateway for Containers
  - Define gateways, listeners, and security boundaries
  - Own network and security policies
- Application teams
  - Define routes using Kubernetes APIs
  - Control how their applications are exposed
  - Do not need direct access to Azure networking resources
This approach enables self-service for application teams while keeping governance and security centralized.
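One way to implement this self-service model is to let application namespaces opt in to a centrally managed Gateway via a label selector. This is a hedged sketch using standard Gateway API semantics; the namespace platform-system and the label gateway-access are illustrative, not fixed names from the service.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agc-gateway
  namespace: platform-system      # owned by the platform team
spec:
  gatewayClassName: azure-alb-external
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: Selector          # accept routes only from labeled namespaces
          selector:
            matchLabels:
              gateway-access: allowed
```

Application teams in a namespace carrying the gateway-access=allowed label can then create HTTPRoutes whose parentRefs point at platform-system/agc-gateway, without touching any Azure resources.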
Why this matters
By integrating deeply with Kubernetes APIs, Application Gateway for Containers avoids custom controllers, sidecars, or ingress pods inside the cluster. Configuration stays declarative, changes are automated, and the operational model stays consistent with Kubernetes best practices.
Security Capabilities
Security is a core part of Azure Application Gateway for Containers and one of the main reasons teams choose it over in-cluster ingress controllers. The service brings Azure-native security controls directly in front of your container workloads, without adding complexity inside the cluster.
Web Application Firewall (WAF)
Application Gateway for Containers integrates with Azure Web Application Firewall (WAF) to protect applications against common web attacks such as SQL injection, cross-site scripting, and other OWASP Top 10 threats.
A key differentiator of this service is that it leverages Microsoft’s global threat intelligence. This provides an enterprise-grade layer of security that constantly evolves to block emerging threats, a significant advantage over many open-source or third-party WAF solutions.
Because the WAF operates within the managed data plane, it offers several operational benefits:
- Zero Cluster Footprint: No WAF-specific pods or components are required to run inside your Kubernetes cluster, saving resources for your actual applications.
- Edge Protection: Security rules and policies are applied at the Azure network edge, ensuring malicious traffic is blocked before it ever reaches your workloads.
- Automated Maintenance: All rule updates, patching, and engine maintenance are handled entirely by Azure.
- Centralized Governance: WAF policies can be managed centrally, ensuring consistent security enforcement across multiple teams and namespaces—a critical requirement for regulated environments.
TLS and certificate handling
TLS termination happens directly at the gateway. HTTPS traffic is decrypted at the edge, inspected, and then forwarded to backend services.
Key points:
- Certificates are referenced from Kubernetes configuration
- TLS policies are enforced by the Azure-managed gateway
- Applications receive plain HTTP traffic, keeping workloads simpler
This approach allows teams to standardize TLS behavior across clusters and environments, while avoiding certificate logic inside application pods.
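In Gateway API terms, TLS termination at the gateway can be sketched as an HTTPS listener referencing a Kubernetes TLS secret. The hostname and secret name below are illustrative; the secret is a standard kubernetes.io/tls secret containing the certificate and key.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agc-gateway
spec:
  gatewayClassName: azure-alb-external
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: app.contoso.com
      tls:
        mode: Terminate           # decrypt at the managed gateway
        certificateRefs:
          - kind: Secret
            name: tls-cert        # Kubernetes TLS secret with cert and private key
```

Traffic is decrypted at this listener, so HTTPRoutes attached to it forward plain HTTP to the backend services.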
Network isolation and exposure control
Because Application Gateway for Containers runs outside the cluster, it provides a clear security boundary between external traffic and Kubernetes workloads.
Common patterns include:
- Internet-facing gateways with WAF protection
- Private gateways for internal or zero-trust access
- Controlled exposure of only selected services
By keeping traffic management and security at the gateway layer, clusters remain more isolated and easier to protect.
Security by design
Overall, the security model follows a simple principle: inspect, protect, and control traffic before it enters the cluster.
This reduces the attack surface of Kubernetes, centralizes security controls, and aligns container ingress with Azure’s broader security ecosystem.
Scale, Performance, and Limits
Azure Application Gateway for Containers is built to handle production-scale traffic without requiring customers to manage capacity, scaling rules, or availability of the ingress layer. Scalability and performance are handled as part of the managed service.
Figure: Various aspects of the scaling capabilities

Interoperability: The Best of Both Worlds
A common hesitation when adopting cloud-native networking is the fear of vendor lock-in. Many organizations worry that using a provider-specific ingress service will tie their application logic too closely to a single cloud’s proprietary configuration.
Azure Application Gateway for Containers (AGC) addresses this directly by utilizing the Kubernetes Gateway API as its primary integration model. This creates a powerful decoupling between how you define your traffic and how that traffic is actually delivered.
Standardized API, Managed Execution
By adopting this model, you gain two critical advantages simultaneously:
- Zero Vendor Lock-In (Standardized API): Your routing logic is defined using the open-source Kubernetes Gateway API standard. Because HTTPRoute and Gateway resources are community-driven standards, your configuration remains portable and familiar to any Kubernetes professional, regardless of the underlying infrastructure.
- Zero Operational Overhead (Managed Implementation): While the interface is a standard Kubernetes API, the implementation is a high-performance Azure-managed service. You gain the benefits of an enterprise-grade load balancer—automatic scaling, high availability, and integrated security—without the burden of managing, patching, or troubleshooting proxy pods inside your cluster.
The "Pragmatic" Advantage
As highlighted in recent architectural discussions, moving from traditional Ingress to the Gateway API is about more than just new features; it’s about interoperability. It allows platform teams to offer a consistent, self-service experience to developers while retaining the ability to leverage the best-in-class performance and security that only a native cloud provider can offer.
The result is a future-proof architecture: your teams use the industry-standard language of Kubernetes to describe what they need, and Azure provides the managed muscle to make it happen.
Scaling model
Application Gateway for Containers uses an automatic scaling model. The gateway data plane scales up or down based on incoming traffic patterns, without manual intervention.
From an operator’s perspective:
- There are no ingress pods to scale
- No node capacity planning for ingress
- No separate autoscaler to configure
Scaling is handled entirely by Azure, allowing teams to focus on application behavior rather than ingress infrastructure.
Performance characteristics
Because the data plane runs outside the Kubernetes cluster, ingress traffic does not compete with application workloads for CPU or memory. This often results in:
- More predictable latency
- Better isolation between traffic management and application execution
- Consistent performance under load
The service supports common production requirements such as:
- High concurrent connections
- Low-latency HTTP and HTTPS traffic
- Near real-time configuration updates driven by Kubernetes changes
Service limits and considerations
Like any managed service, Application Gateway for Containers has defined limits that architects should be aware of when designing solutions. These include limits around:
- Number of listeners and routes
- Backend service associations
- Certificates and TLS configurations
- Throughput and connection scaling thresholds
These limits are documented and enforced by the platform to ensure stability and predictable behavior.
For most application platforms, these limits are well above typical usage. However, they should be reviewed early when designing large multi-tenant or high-traffic environments.
Designing with scale in mind
The key takeaway is that Application Gateway for Containers removes ingress scaling from the cluster and turns it into an Azure-managed concern. This simplifies operations and provides a stable, high-performance entry point for container workloads.
When to Use (and When Not to Use)
| Scenario | Use it? | Why |
|---|---|---|
| Kubernetes workloads on Azure | ✅ Yes | The service is designed specifically for container platforms and integrates natively with Kubernetes APIs. |
| Need for managed Layer-7 ingress | ✅ Yes | Routing, TLS, and scaling are handled by Azure without in-cluster components. |
| Enterprise security requirements (WAF, TLS policies) | ✅ Yes | Built-in Azure WAF and centralized TLS enforcement simplify security. |
| Platform team managing ingress for multiple apps | ✅ Yes | Clear separation between platform and application responsibilities. |
| Multi-tenant Kubernetes clusters | ✅ Yes | Gateway API model supports clean ownership boundaries and isolation. |
| Desire to avoid running ingress controllers in the cluster | ✅ Yes | No ingress pods, no cluster resource consumption. |
| VM-based or non-container backends | ❌ No | Classic Application Gateway is a better fit for non-container workloads. |
| Simple, low-traffic test or dev environments | ❌ Maybe not | A lightweight in-cluster ingress may be simpler and more cost-effective. |
| Need for custom or unsupported L7 features | ❌ Maybe not | Some advanced or niche ingress features may not yet be available. |
| Non-Kubernetes platforms | ❌ No | The service is tightly integrated with Kubernetes APIs. |
When to Choose a Different Path: Azure Container Apps
While Application Gateway for Containers provides the ultimate control for Kubernetes environments, not every project requires that level of infrastructure management.
For teams that don't need the full flexibility of Kubernetes and are looking for the fastest path to running containers on Azure without managing clusters or ingress infrastructure at all, Azure Container Apps offers a specialized alternative. It provides a fully managed, serverless container platform that handles scaling, ingress, and networking automatically "out of the box".
Key Differences at a Glance
| Feature | AGC + Kubernetes | Azure Container Apps |
|---|---|---|
| Control | Granular control over cluster and ingress. | Fully managed, serverless experience. |
| Management | You manage the cluster; Azure manages the gateway. | Azure manages both the platform and ingress. |
| Best For | Complex, multi-team, or highly regulated environments. | Rapid development and simplified operations. |
Appendix - Routing configuration examples
The following examples show how Application Gateway for Containers can be configured using both the Gateway API and the Ingress API for common routing and TLS scenarios. More examples are available in the official documentation.
HTTP listener
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: agc-gateway
  rules:
    - backendRefs:
        - name: app-service
          port: 80
```
Path routing logic
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: path-routing
spec:
  parentRefs:
    - name: agc-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api-service
          port: 80
    - backendRefs:
        - name: web-service
          port: 80
```
Weighted canary / rollout
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-route
spec:
  parentRefs:
    - name: agc-gateway
  rules:
    - backendRefs:
        - name: app-v1
          port: 80
          weight: 80
        - name: app-v2
          port: 80
          weight: 20
```
TLS Termination
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: azure-alb-external
  tls:
    - hosts:
        - app.contoso.com
      secretName: tls-cert
  rules:
    - host: app.contoso.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
```