Azure Architecture Blog

AKS cluster with AGIC hits the Azure Application Gateway backend pool limit (100)

kkaushal
Apr 02, 2026

I’m writing this article to document a real-world scaling issue we hit while exposing many applications from an Azure Kubernetes Service (AKS) cluster using the Application Gateway Ingress Controller (AGIC). The problem is easy to miss because Kubernetes resources keep applying successfully, but the underlying Azure Application Gateway has a hard platform limit of 100 backend pools. Once your deployment pattern requires the 101st pool, AGIC can no longer reconcile the gateway configuration, and traffic stops flowing for new apps. This post explains how the limit is triggered, how to reproduce and recognize it, and what practical mitigation paths exist as you grow.

A real-world scalability limit, reproduction steps, and recommended mitigation options:

  • AGIC typically creates one Application Gateway backend pool per Kubernetes Service referenced by an Ingress.
  • Azure Application Gateway enforces a hard limit of 100 backend pools.
  • When the 101st backend pool is required, Application Gateway rejects the update and AGIC fails reconciliation.
  • Kubernetes resources appear created, but traffic does not flow due to the external platform limit.
  • Gateway API–based application routing is the most scalable forward-looking solution.

Architecture Overview

The environment follows a Hub-and-Spoke network architecture, commonly used in enterprise Azure deployments to centralize shared services and isolate workloads.

Hub Network

  • Azure Firewall / Network security services
  • VPN / ExpressRoute Gateways
  • Private DNS Zones
  • Shared monitoring and governance components

Spoke Network

  • Private Azure Kubernetes Service (AKS) cluster
  • Azure Application Gateway with private frontend
  • Application Gateway Ingress Controller (AGIC)
  • Application workloads exposed via Kubernetes Services and Ingress

Ingress Traffic Flow

Client → Private Application Gateway → AGIC-managed routing → Kubernetes Service → Pod

Application Deployment Model

Each application followed a simple and repeatable Kubernetes pattern that ultimately triggered backend pool exhaustion.

  • One Deployment per application
  • One Service per application
  • One Ingress per application
  • Each Ingress referencing a unique Service

Kubernetes Manifests Used

Note: All Kubernetes manifests in this example are deployed into the demo namespace. Please ensure the namespace is created before applying the manifests.
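The namespace can be created imperatively with kubectl create namespace demo, or declaratively alongside the other manifests:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
```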

Deployment template

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-{{N}}
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-{{N}}
  template:
    metadata:
      labels:
        app: app-{{N}}
    spec:
      containers:
      - name: app
        image: hashicorp/http-echo:1.0
        args:
          - "-text=Hello from app {{N}}"
        ports:
        - containerPort: 5678

Service template

apiVersion: v1
kind: Service
metadata:
  name: svc-{{N}}
  namespace: demo
spec:
  selector:
    app: app-{{N}}
  ports:
  - port: 80
    targetPort: 5678

Ingress template

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-{{N}}
  namespace: demo
spec:
  ingressClassName: azure-application-gateway
  rules:
  - host: app{{N}}.example.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-{{N}}
            port:
              number: 80

Reproducing the Backend Pool Limitation

The issue was reproduced by deploying 101 applications using the same pattern. Each iteration resulted in AGIC attempting to create a new backend pool.

# Render and apply 101 copies of each {{N}}-templated manifest
for ($i = 1; $i -le 101; $i++) {
  (Get-Content deployment.yaml) -replace "{{N}}", $i | kubectl apply -f -
  (Get-Content service.yaml)    -replace "{{N}}", $i | kubectl apply -f -
  (Get-Content ingress.yaml)    -replace "{{N}}", $i | kubectl apply -f -
}
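For teams not running PowerShell, the same reproduction can be scripted in plain shell. This sketch renders the manifests locally with sed; a minimal inline one-resource template stands in for the full files above, and the kubectl apply step is left as a comment so the rendering can be run anywhere:

```shell
#!/bin/sh
# Render 101 copies of a {{N}}-templated manifest, mirroring the PowerShell loop.
mkdir -p rendered
cat > ingress-template.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-{{N}}
  namespace: demo
EOF
for i in $(seq 1 101); do
  sed "s/{{N}}/$i/g" ingress-template.yaml > "rendered/ing-$i.yaml"
  # Against a real cluster, each rendered file would then be applied:
  # kubectl apply -f "rendered/ing-$i.yaml"
done
ls rendered | wc -l
```

With the full deployment/service/ingress templates in place, the 101st applied Ingress is the one that pushes Application Gateway past its backend pool limit.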

Observed AGIC Error

Code="ApplicationGatewayBackendAddressPoolLimitReached"
Message="The number of BackendAddressPools exceeds the maximum allowed value.
The number of BackendAddressPools is 101 and the maximum allowed is 100."

Root Cause Analysis

Azure Application Gateway enforces a non-configurable maximum of 100 backend pools. AGIC creates one backend pool per unique Service referenced by Ingress resources, so a one-Service-per-application pattern exhausts the limit at the 101st application.
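Because pools scale with distinct backend Services, one interim mitigation (the consolidation mentioned in the conclusion) is to route multiple hosts or paths to a shared Service, which yields a single backend pool. A sketch, where svc-shared is a hypothetical shared Service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-consolidated
  namespace: demo
spec:
  ingressClassName: azure-application-gateway
  rules:
  - host: app1.example.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-shared   # one shared Service = one backend pool
            port:
              number: 80
  - host: app2.example.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-shared
            port:
              number: 80
```

This only helps when multiple hostnames can legitimately terminate at the same backend (for example, a shared reverse proxy or a multi-tenant application); unrelated workloads still need their own Services and therefore their own pools.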

Available Options After Hitting the Limit

Option 1: Azure Application Gateway for Containers (AGC)

AGC uses the Kubernetes Gateway API and avoids the legacy Ingress model along with its per-Service backend pool limit. However, it currently supports only public frontends and does not support private frontends, which rules it out for this private hub-and-spoke design.

Option 2: ingress-nginx via Application Routing

This option is supported only until November 2026 and is not recommended due to deprecation and lack of long-term viability.

Option 3: Application Routing with Gateway API (Preview)

Gateway API–based application routing is the strategic long-term direction for AKS. Although the managed implementation is currently in preview, the Gateway API itself has been stable upstream for several years, so it is suitable for onboarding new applications with appropriate risk awareness. In my environment, I run two controllers side by side.

[Screenshot: the two ingress controllers running in the cluster]

Reference Microsoft documents:

Azure Kubernetes Service (AKS) Managed Gateway API Installation (preview) - Azure Kubernetes Service | Microsoft Learn

Azure Kubernetes Service (AKS) application routing add-on with the Kubernetes Gateway API (preview) - Azure Kubernetes Service | Microsoft Learn

Secure ingress traffic with the application routing Gateway API implementation


Conclusion

The 100-backend pool limitation is a hard Azure Application Gateway constraint. Teams using AGIC must plan for scale early by consolidating services or adopting Gateway API–based routing to avoid production onboarding blockers.


Author: Kumar Shashi Kaushal (Sr. Digital Cloud Solutions Architect)

Updated Apr 03, 2026
Version 3.0