If you're running Ingress NGINX on AKS, you've probably seen the announcements by now. The community Ingress NGINX project is being retired, upstream maintenance ends in March 2026, and Microsoft's extended support for the Application Routing add-on runs out in November 2026. Migrating to another solution is inevitable.
There are a few places you can go. This post focuses on Application Gateway for Containers: what it is, why it's worth the move, and how to actually do it. Microsoft has also released a migration utility that handles most of the translation work from your existing Ingress resources, so we'll cover that too.
Ingress NGINX Retirement
Ingress NGINX has been the default choice for Kubernetes HTTP routing for years. It's reliable, well-understood, and it appears in roughly half the "getting started with AKS" tutorials on the internet. So the retirement announcement caught a lot of teams off guard.
In November 2025, the Kubernetes SIG Network and Security Response Committee announced that the community ingress-nginx project would enter best-effort maintenance until March 2026, after which there will be no further releases, bug fixes, or security patches. The project had been maintained by a small group of volunteers for years and had accumulated serious technical debt from its flexible annotation model; the maintainers simply couldn't sustain it.
For AKS, the timeline depends on how you're running it. If you self-installed via Helm, you're directly exposed to the March 2026 upstream deadline; after that, you're on your own for CVEs. If you're using the Application Routing add-on, Microsoft has committed to critical security patches until November 2026, but nothing beyond that. No new features, no general bug fixes.
Application Gateway for Containers
Application Gateway for Containers (AGC) is Azure's managed Layer 7 load balancer for AKS, and it's the successor to both the classic Application Gateway Ingress Controller and the Ingress API approach more broadly. It went GA in late 2024 and added WAF support in November 2025.
The architecture splits across two planes. On the Azure side, you have the AGC resource itself, a managed load balancer that sits outside your cluster and handles the actual traffic. It has child resources for frontends (the public entry points, each with an auto-generated FQDN) and an association that links it to a dedicated delegated subnet in your VNet. Unlike the older App Gateway Ingress Controller, AGC is a standalone Azure resource: you don't deploy an App Gateway instance at all.
On the Kubernetes side, the ALB Controller runs as a small deployment in your cluster. It watches for Gateway API resources (Gateways, HTTPRoutes, and the various AGC policy types) and translates them into configuration on the AGC resource. When you create or update an HTTPRoute, the controller picks it up and pushes the changes to the data plane.
AGC supports both Gateway API and the Ingress API, which means you don't have to convert everything to Gateway API resources in one shot. Gateway API is where the richer functionality lives, though, so it's worth considering that migration as part of this work.
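As a sketch of what the Gateway API side looks like in practice, here is a minimal Gateway and HTTPRoute pair. Names, namespace, and hostname are placeholders; the `azure-alb-external` gateway class and the `alb.networking.azure.io/alb-id` annotation follow the AGC documentation for BYO deployments, but verify them against the current docs for your setup:

```yaml
# Minimal sketch: a Gateway targeting AGC plus an HTTPRoute attached to it.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
  namespace: demo
  annotations:
    # BYO model: reference the AGC resource by its Azure resource ID
    alb.networking.azure.io/alb-id: <agc-resource-id>
spec:
  gatewayClassName: azure-alb-external
  listeners:
    - name: http
      port: 80
      protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
  namespace: demo
spec:
  parentRefs:
    - name: web-gateway
  hostnames:
    - app.example.com
  rules:
    - backendRefs:
        - name: web-svc   # existing Kubernetes Service
          port: 8080
```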
For deployment, you have two options:
- Bring Your Own (BYO) — you create the AGC resource, frontend, and subnet association in Azure yourself using the CLI, portal, Bicep, Terraform, or whatever tool you prefer. The ALB Controller then references the resource by ID. This gives you full control over the Azure-side lifecycle and fits well into existing IaC pipelines.
- Managed by ALB Controller — you define an ApplicationLoadBalancer custom resource in Kubernetes and the ALB Controller creates and manages the Azure resources for you. Simpler to get started, but the Azure resource lifecycle is tied to the Kubernetes resource — which some teams find uncomfortable for production workloads.
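For the managed model, the custom resource looks roughly like the sketch below, based on the AGC quickstart. The name, namespace, and subnet ID are placeholders; the controller uses the subnet in `associations` to create the Azure-side association for you:

```yaml
# Managed model sketch: the ALB Controller provisions the AGC resource
# and frontend in Azure from this Kubernetes custom resource.
apiVersion: alb.networking.azure.io/v1
kind: ApplicationLoadBalancer
metadata:
  name: demo-alb
  namespace: alb-infra
spec:
  associations:
    # Delegated subnet the AGC association will be created in (placeholder ID)
    - /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>
```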
One prerequisite worth flagging upfront: AGC requires Azure CNI or Azure CNI Overlay. Kubenet has been deprecated and will be fully retired in 2028, so if you're on Kubenet, you'll need to plan a CNI migration alongside this work. There is an in-place cluster migration process that lets you do this without rebuilding your cluster.
Why Choose AGC Over Other Alternatives?
AGC's architecture is different from running an in-cluster ingress controller, and the differences are worth understanding before you start.
The data plane runs outside your cluster entirely. With NGINX you're running pods that consume node resources, need upgrading, and can themselves become a reliability concern. With AGC, that's Azure's problem. You're not patching an ingress controller or sizing nodes around it. The ALB Controller does run a small number of pods in your cluster, but they're lightweight, watching Kubernetes resources and syncing configuration to the Azure data plane. They're not in the traffic path, and their resource footprint is minimal.
Ingress and HTTPRoute resources still reference Kubernetes Services as usual. Application Gateway for Containers runs an Azure-managed data plane outside the cluster and routes traffic directly to backend pod IPs using Kubernetes Endpoint/EndpointSlice data, rather than relying on in-cluster ingress pods. This enables faster convergence as pods scale and allows health probing and traffic management to be handled at the gateway layer.
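Gateway-layer health probing is configured through AGC's own policy CRDs rather than anything on the pods. The sketch below shows a HealthCheckPolicy targeting a Service; the field names follow the AGC documentation as I understand it, but treat them as illustrative and check them against the current `alb.networking.azure.io` API version before relying on them:

```yaml
# Illustrative sketch of an AGC HealthCheckPolicy (verify field names
# against the current AGC API reference before use).
apiVersion: alb.networking.azure.io/v1
kind: HealthCheckPolicy
metadata:
  name: web-health
  namespace: demo
spec:
  targetRef:
    group: ""
    kind: Service
    name: web-svc          # the backend Service the probes apply to
  default:
    interval: 5s
    timeout: 3s
    healthyThreshold: 1
    unhealthyThreshold: 3
    http:
      path: /healthz       # probe endpoint exposed by your pods (assumption)
```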
WAF is built in, using the same Azure WAF policies you might already have. If you're currently running a separate Application Gateway in front of your cluster purely for WAF, AGC removes that extra hop and leaves you one fewer resource to keep current.
Configuration changes push to the data plane near-instantly, without a reload cycle. NGINX reloads its config when routes change, which is mostly fine, but noticeable if you're in a high-churn environment with frequent deployments.
Building on Gateway API from the start also means you're not doing this twice. It's where Kubernetes ingress is heading, and AGC fully supports it. By taking advantage of the Gateway API you are defining your configuration once in a proxy agnostic manner, and can easily switch the underlying proxy if you need to at a later date, avoiding vendor lock-in.
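Traffic splitting is a good illustration of that portability. A weighted HTTPRoute like the hypothetical one below behaves the same on any conformant Gateway API implementation, whether the proxy underneath is AGC or something else:

```yaml
# Proxy-agnostic canary: 90/10 split between two Services using
# standard Gateway API weights (names are placeholders).
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-canary
  namespace: demo
spec:
  parentRefs:
    - name: web-gateway
  rules:
    - backendRefs:
        - name: web-stable
          port: 8080
          weight: 90
        - name: web-canary
          port: 8080
          weight: 10
```

Swapping the underlying implementation later means replacing the Gateway and its class, not rewriting routes like this one.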
Planning Your Migration
Before you run any tooling or touch any manifests, spend some time understanding what you actually have.
Start by inventorying your Ingress NGINX resources across all clusters and namespaces. You want to know how many Ingress objects you have, which annotations they're using, and whether there's anything non-standard such as custom snippets, lua configurations, or anything that leans heavily on NGINX-specific behaviour. The migration utility will flag most of this, but knowing upfront means fewer surprises.
Next, confirm your cluster prerequisites. AGC requires Azure CNI or Azure CNI Overlay, plus workload identity enabled on the cluster. If you're on Kubenet, that CNI migration needs to happen first.
Decide on your deployment model before generating any output. BYO gives you full control over the AGC resource lifecycle and slots into existing IaC pipelines cleanly, but requires you to pre-create Azure resources. Managed is simpler to get started with but ties the Azure resource lifecycle to Kubernetes objects, which can feel uncomfortable for production workloads.
Finally, decide whether you want to migrate from Ingress API to Gateway API as part of this work, or keep your existing Ingress resources and just swap the controller. AGC supports both. Doing both at once is more work upfront but gets you to the right place in a single migration. Keeping Ingress resources is lower risk in the short term, but you'll need to do the API migration later regardless.
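To make the scope of that choice concrete, here is the same route expressed both ways, using hypothetical names (the Ingress class value depends on which controller you point it at):

```yaml
# Ingress API version
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: <your-ingress-class>
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 8080
---
# Gateway API equivalent
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
    - name: web-gateway   # routing config moves to a separate Gateway object
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web-svc
          port: 8080
```

The structural difference is the split of responsibilities: listeners and TLS live on the Gateway, routing rules live on the HTTPRoute.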
Introducing the AGC Migration Utility
Microsoft released the AGC Migration Utility in January 2026 as an official CLI tool to handle the conversion of existing Ingress resources to Gateway API resources compatible with AGC. It doesn't modify anything on your cluster. It reads your existing configuration and generates YAML you can review and apply when you're ready.
One thing to be aware of is that the migration utility only generates Gateway API resources, so if you use it, you're moving off the Ingress API at the same time as moving off NGINX. There's no flag to produce Ingress resources for AGC instead. If you want to land on AGC but keep Ingress resources for now, you'll need to set that up manually.
There are two input modes. In files mode, you point it at a directory of YAML manifests and it converts them locally without needing cluster access. In cluster mode, it connects to your current kubeconfig context and reads Ingress resources directly from a live cluster. Both produce the same output.
Alongside the converted YAML, the utility produces a migration report covering every annotation it encountered. Each annotation gets a status: completed, warning, not-supported, or error. The warning and not-supported statuses are where you'll need to do some manual work. These represent annotations that either migrated with caveats or have no AGC equivalent at all.
The coverage of NGINX annotations is broad. URL rewrites, SSL redirects, session affinity, backend protocol, mTLS, WAF, canary routing by weight or header, permanent and temporary redirects, custom hostnames: most of the common patterns are covered. Before you run a full conversion, it's worth doing a --dry-run pass first to get a clear picture of what needs manual attention.
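As an example of the kind of translation involved, an NGINX rewrite annotation maps onto the standard Gateway API URLRewrite filter roughly like this (resource names are hypothetical; the exact output shape from the utility may differ):

```yaml
# Replaces a nginx.ingress.kubernetes.io/rewrite-target style annotation:
# requests to /api/* are forwarded to the backend with the /api prefix
# replaced by /.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
  namespace: demo
spec:
  parentRefs:
    - name: web-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - name: api-svc
          port: 8080
```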
Migrating Step by Step
With prerequisites confirmed and your deployment model chosen, here's how the migration looks end to end.
1. Get the utility
Pre-built binaries for Linux, macOS, and Windows are available on the GitHub releases page. Download the binary for your platform and make it executable. If you'd prefer to build from source, clone the repo and run ./build.sh from the root, which produces binaries in the bin folder.
2. Run a dry-run against your manifests
Before generating any output, run in dry-run mode to see what the migration report looks like. This tells you which annotations are fully supported, which need manual attention, and which have no AGC equivalent.
./agc-migration files --provider nginx --ingress-class nginx --dry-run ./manifests/*.yaml
If you'd rather read directly from your cluster, use cluster mode:
./agc-migration cluster --provider nginx --ingress-class nginx --dry-run
3. Review the migration report
Work through the report before proceeding. Anything marked not-supported needs a plan. The next section covers the most common gaps, but the report itself includes specific recommendations for each issue it finds.
4. Set up AGC and install the ALB Controller
Before applying any generated resources you need AGC running in Azure and the ALB Controller installed in your cluster. The setup process is well documented, so rather than reproduce it here, follow the official quickstart at aka.ms/agc. Make sure you note the resource ID of your AGC instance if you're using BYO deployment, as you'll need it in the next step.
5. Generate the converted resources
Run the utility again with your chosen deployment flag to generate output:
# BYO
./agc-migration files --provider nginx --ingress-class nginx \
  --byo-resource-id $AGC_ID \
  --output-dir ./output \
  ./manifests/*.yaml

# Managed
./agc-migration files --provider nginx --ingress-class nginx \
  --managed-subnet-id $SUBNET_ID \
  --output-dir ./output \
  ./manifests/*.yaml
6. Review and apply the generated resources
Check the generated Gateway, HTTPRoute, and policy resources against your expected routing behaviour before applying anything. Apply to a non-production cluster first if you can.
kubectl apply -f ./output/
7. Validate and cut over
Test your routes before updating DNS. Running both NGINX and AGC in parallel while you validate is a sensible approach; route test traffic to AGC while NGINX continues serving production, then update your DNS records to point to the AGC frontend FQDN once you're satisfied.
8. Decommission NGINX
Once traffic has been running through AGC cleanly, uninstall the NGINX controller and remove the old Ingress resources. Two ingress controllers watching the same resources will cause confusion sooner or later.
What the Migration Utility Doesn't Handle
The utility covers a lot of ground, but there are some gaps you should be clear on.
Annotations marked not-supported in the migration report have no direct AGC equivalent and won't appear in the generated output. The most common for NGINX users are custom snippets and Lua-based configurations, which allow arbitrary NGINX config to be injected directly into the server block. There's no equivalent in AGC or Gateway API. If you're relying on these, you'll need to work out whether AGC's native routing capabilities can cover the same requirements through HTTPRoute filters, URL rewrites, or header manipulation.
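Many snippet uses turn out to be simple header manipulation, which the standard RequestHeaderModifier filter covers. A hypothetical example (names and header values are placeholders):

```yaml
# Replaces a configuration-snippet that set/stripped request headers:
# Gateway API expresses this declaratively on the route.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
  namespace: demo
spec:
  parentRefs:
    - name: web-gateway
  rules:
    - filters:
        - type: RequestHeaderModifier
          requestHeaderModifier:
            set:
              - name: X-Forwarded-Env   # header added to upstream requests
                value: production
            remove:
              - X-Debug                 # header stripped before forwarding
      backendRefs:
        - name: web-svc
          port: 8080
```

Snippets doing genuinely arbitrary things (Lua logic, custom auth flows) have no declarative equivalent and need a redesign.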
The utility doesn't migrate TLS certificates or update certificate references in the generated resources. Your existing Kubernetes Secrets containing certificates should carry over without changes, but verify that the Secret references in your generated Gateway and HTTPRoute resources are correct before cutting over.
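The place to check is the HTTPS listener on the generated Gateway, which should reference your existing certificate Secret along these lines (names and hostname are placeholders):

```yaml
# HTTPS listener referencing an existing TLS Secret in the same namespace.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
  namespace: demo
spec:
  gatewayClassName: azure-alb-external
  listeners:
    - name: https
      port: 443
      protocol: HTTPS
      hostname: app.example.com
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            name: app-example-com-tls   # your existing cert Secret, unchanged
```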
DNS cutover is outside the scope of the utility entirely. Once your AGC frontend is provisioned it gets an auto-generated FQDN, and you'll need to update your DNS records or CNAME entries accordingly.
Any GitOps or CI/CD pipelines that reference your Ingress resources by name or apply them from a specific path will also need updating to reflect the new Gateway API resource types and output structure.
Conclusion
For many teams, the retirement of Ingress NGINX means unwanted complexity and extra work. If you have to migrate anyway, though, you can use it as an opportunity to land on a significantly better architecture: Gateway API as your routing layer, WAF and per-pod load balancing built in, and an ingress layer that's fully managed by Azure rather than running in your cluster.
The migration utility can take care of a lot of the mechanical conversion work. Rather than manually rewriting Ingress resources into Gateway API equivalents and mapping NGINX annotations to their AGC counterparts, the utility does that translation for you and produces a migration report that tells you exactly what it couldn't handle. Running a dry-run against your manifests is a good first step to get a clear picture of your annotation coverage and what needs manual attention before you commit to a timeline.
Full documentation for AGC is at aka.ms/agc and the migration utility repo is at github.com/Azure/Application-Gateway-for-Containers-Migration-Utility. Ingress NGINX retirement is coming up fast: the standalone implementation reaches end of life at the end of March 2026. Using the App Routing add-on for AKS gives you a little breathing room until November 2026, but that's still not long. Make sure you have a solution in place before these dates to avoid running unsupported and potentially vulnerable software on your critical infrastructure.