The Kubernetes Steering Committee has announced that the Nginx Ingress controller will be retired in March 2026. That's not far away, and once this happens Nginx Ingress will not receive any further updates, including security patches. Continuing to run the standalone Nginx Ingress controller past the end of March could open you up to security risks.
Azure Kubernetes Service (AKS) offers a managed routing add-on that also uses Nginx as the ingress controller. Microsoft has recently committed to supporting this version of Nginx Ingress until November 2026. There is also an updated version of the App Routing add-on in the works that will be based on Istio, allowing a transition off Nginx Ingress. This new App Routing add-on will only support Gateway API based ingress, so some migration will be required if you are using the Ingress API. There is tooling available to support migration from Ingress to Gateway API, such as the Ingress2Gateway tool.
If you are already using the App Routing add-on then you are supported until November 2026 and have extra time to either move to the new Istio-based solution when it is released or migrate to another solution such as App Gateway for Containers. However, if you are running the standalone version of Nginx Ingress, you may want to consider migrating to the App Routing add-on to give you some extra time.
To be very clear, migrating to the App Routing add-on does not solve the problem; it buys you some more time until November and sets you up for a transition to the future Istio-based App Routing add-on. Once you complete this migration you will need to plan to either move to the new Istio-based version, or migrate to another ingress solution, before November.
The rest of this article walks through migrating from BYO Nginx to the App Routing add-on without disrupting your existing traffic.
How Parallel Running Works
The key to a zero-downtime migration is that both controllers can run simultaneously. Each controller uses a different IngressClass, so Kubernetes routes traffic based on which class your Ingress resources reference.
Your BYO Nginx uses the nginx IngressClass and runs in the ingress-nginx namespace. The App Routing add-on uses the webapprouting.kubernetes.azure.com IngressClass and runs in the app-routing-system namespace. They operate completely independently, each with its own load balancer IP.
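If you want to check which classes exist on a cluster, listing the IngressClass resources shows each one and the controller that owns it. A quick check, assuming the default class names (a customised BYO install may have renamed its class):

```bash
# Once both controllers are installed you should see two classes:
# "nginx" (BYO) and "webapprouting.kubernetes.azure.com" (add-on)
kubectl get ingressclass -o wide
```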
This means you can:
- Enable the add-on alongside your existing controller
- Create new Ingress resources targeting the add-on (or duplicate existing ones)
- Validate everything works via the add-on's IP
- Cut over DNS or backend pool configuration
- Remove the old Ingress resources once you're satisfied
At no point does your production traffic go offline.
Enabling the App Routing add-on
Start by enabling the add-on on your existing cluster. This doesn't touch your BYO Nginx installation.
```bash
az aks approuting enable \
  --resource-group <resource-group> \
  --name <cluster-name>
```
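You can also confirm the add-on is registered on the cluster itself. A minimal sketch, assuming the add-on state is surfaced under ingressProfile.webAppRouting in the az aks show output (worth double-checking for your CLI version):

```bash
# Should return "true" once the add-on has been enabled
az aks show \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --query ingressProfile.webAppRouting.enabled
```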
Wait for the add-on to deploy. You can verify it's running by checking the app-routing-system namespace:
```bash
kubectl get pods -n app-routing-system
kubectl get svc -n app-routing-system
```
You should see the Nginx controller pod running and a service called nginx with a load balancer IP. This IP is separate from your BYO controller's IP.
```bash
# Get both IPs for comparison
BYO_IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
ADDON_IP=$(kubectl get svc nginx -n app-routing-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

echo "BYO Nginx IP: $BYO_IP"
echo "Add-on IP: $ADDON_IP"
```
Both controllers are now running. Your existing applications continue to use the BYO controller because their Ingress resources still reference ingressClassName: nginx.
Migrating Applications: The Parallel Ingress Approach
For production workloads, create a second Ingress resource that targets the add-on. This lets you validate everything before cutting over traffic.
Here's an example. Your existing Ingress might look like this:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress-byo
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx # BYO controller
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```
Create a new Ingress for the add-on:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress-add-on
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: webapprouting.kubernetes.azure.com # add-on controller
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```
Apply this new Ingress resource. The add-on controller picks it up and configures routing, but your production traffic still flows through the BYO controller because DNS (or your backend pool) still points to the old IP.
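Applying the manifest and checking the ADDRESS column is a quick way to confirm the add-on has admitted the new Ingress. A short sketch, assuming you saved it as myapp-ingress-add-on.yaml (a hypothetical filename):

```bash
# Apply the duplicate Ingress and confirm each controller has published its IP
kubectl apply -f myapp-ingress-add-on.yaml
kubectl get ingress -n myapp
# myapp-ingress-byo    should still show the BYO controller's IP under ADDRESS
# myapp-ingress-add-on should show the add-on's IP once it has been picked up
```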
Validating Before Cutover
Test the new route via the add-on IP before touching anything else:
```bash
# For public ingress with DNS
curl -H "Host: myapp.example.com" http://$ADDON_IP

# For private ingress, run the same check from a VM in the VNet
curl -H "Host: myapp.example.com" http://$ADDON_IP
```
Run your full validation suite against this IP. Check TLS certificates, path routing, authentication, rate limiting, custom headers, and anything else your application depends on. If you have monitoring or synthetic tests, point them at the add-on IP temporarily to gather confidence.
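If you terminate TLS at the ingress, the Host header trick above won't exercise SNI or the certificate. One way to test HTTPS against the add-on without touching DNS is curl's --resolve flag, which pins the hostname to a specific IP for that request:

```bash
# Pin myapp.example.com to the add-on IP so the TLS handshake uses the
# real hostname and serves the real certificate
curl --resolve myapp.example.com:443:$ADDON_IP https://myapp.example.com -v
```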
If something doesn't work, you can troubleshoot without affecting production. The BYO controller is still handling all real traffic.
Cutting Over to the App Routing add-on
If your ingress has a public IP and you're using DNS to route traffic, the cutover is straightforward.
- Lower your DNS TTL well in advance. Set it to 60 seconds at least an hour before you plan to cut over. This ensures changes propagate quickly and you can roll back fast if needed.
- When you're ready, update your DNS A record to point to the add-on IP (a sketch for Azure DNS follows below).
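If the zone is hosted in Azure DNS, both the TTL change and the record swap can be scripted with the CLI. A rough sketch using the IPs captured earlier, assuming a zone named example.com and a record set named myapp in a resource group called dns-rg (all hypothetical; other DNS providers have their own equivalents):

```bash
# Lower the TTL ahead of the cutover window
az network dns record-set a update \
  --resource-group dns-rg --zone-name example.com --name myapp --set ttl=60

# At cutover time, add the add-on IP and remove the old BYO IP
az network dns record-set a add-record \
  --resource-group dns-rg --zone-name example.com --record-set-name myapp \
  --ipv4-address $ADDON_IP
az network dns record-set a remove-record \
  --resource-group dns-rg --zone-name example.com --record-set-name myapp \
  --ipv4-address $BYO_IP
```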
If your ingress has a private IP and sits behind App Gateway, API Management, or Front Door, the cutover involves updating the backend pool instead of DNS.
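For the App Gateway case, this usually means swapping the IP in the relevant backend address pool. A sketch assuming a gateway named my-appgw and a pool named aks-ingress-pool in resource group net-rg (hypothetical names; Front Door and API Management have their own backend update commands):

```bash
# Point the backend pool at the add-on's ingress IP instead of the BYO one
az network application-gateway address-pool update \
  --resource-group net-rg --gateway-name my-appgw \
  --name aks-ingress-pool --servers $ADDON_IP
```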
In-Place Patching: The Faster But Riskier Option
If you're migrating a non-critical application or an internal service where some downtime is acceptable, you can patch the ingressClassName in place:
```bash
kubectl patch ingress myapp-ingress-byo -n myapp \
  --type='json' \
  -p='[{"op":"replace","path":"/spec/ingressClassName","value":"webapprouting.kubernetes.azure.com"}]'
```
This is atomic from Kubernetes' perspective: the BYO controller drops the route and the add-on controller picks it up. In practice, there is usually a gap of a few seconds while the add-on configures Nginx and reloads. Once the change is made, the application will not be reachable until you update your DNS or backend pool to point to the add-on's IP.
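Rolling back is the same operation in reverse: patch the class name back to nginx and the BYO controller picks the route up again.

```bash
# Roll back: hand the route back to the BYO controller
kubectl patch ingress myapp-ingress-byo -n myapp \
  --type='json' \
  -p='[{"op":"replace","path":"/spec/ingressClassName","value":"nginx"}]'
```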
Decommissioning the BYO Nginx Controller
Once all your applications are migrated and you're confident everything works, you can remove the BYO controller.
First, verify nothing is still using it:
```bash
kubectl get ingress --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.spec.ingressClassName' \
  | grep -v "webapprouting"
```
If that returns only the header row (or is empty), you're clear to proceed. If it shows any Ingress resources, you've still got work to do.
Remove the BYO Nginx Helm release:
```bash
helm uninstall ingress-nginx -n ingress-nginx
kubectl delete namespace ingress-nginx
```
The public IP and load-balancing rules that were provisioned for the BYO controller's LoadBalancer Service are cleaned up automatically when the Service is deleted (a bring-your-own static public IP is left in place for you to reuse or delete).
Verify only the add-on IngressClass remains:
```bash
kubectl get ingressclass
```
You should see only webapprouting.kubernetes.azure.com.
Key Differences Between BYO Nginx and the App Routing add-on
The add-on runs the same Nginx binary, so most of your configuration carries over. However, there are a few differences worth noting.
TLS Certificates: The BYO setup typically uses cert-manager or manual Secrets for certificates. The add-on supports this, but it also integrates natively with Azure Key Vault. If you want to use Key Vault, you need to configure the add-on with the appropriate annotations. Otherwise, your existing cert-manager setup continues to work.
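If you do want to move to Key Vault, the rough shape is to attach a Key Vault to the add-on and then reference the certificate from the Ingress with an annotation instead of a Secret. A sketch of that flow; verify the exact flags and annotation name against the current AKS documentation for your CLI version:

```bash
# Allow the add-on to read certificates from a Key Vault
az aks approuting update \
  --resource-group <resource-group> --name <cluster-name> \
  --enable-kv --attach-kv <key-vault-resource-id>

# Reference the certificate from the Ingress rather than a Kubernetes Secret
kubectl annotate ingress myapp-ingress-add-on -n myapp \
  kubernetes.azure.com/tls-cert-keyvault-uri=https://<key-vault-name>.vault.azure.net/certificates/<certificate-name>
```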
DNS Management: If you're using external-dns with your BYO controller, it works with the add-on too. The add-on also has native integration with Azure DNS zones if you want to simplify your setup. This is optional.
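The Azure DNS integration works by attaching one or more zones to the add-on, which then creates records for your Ingress hosts automatically. A sketch, again worth verifying against the current docs, assuming a zone example.com in resource group dns-rg (hypothetical):

```bash
# Attach an Azure DNS zone so the add-on manages records for Ingress hosts
ZONE_ID=$(az network dns zone show \
  --resource-group dns-rg --name example.com --query id --output tsv)
az aks approuting zone add \
  --resource-group <resource-group> --name <cluster-name> \
  --ids $ZONE_ID --attach-zones
```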
Custom Nginx Configuration: With BYO Nginx, you have full access to the ConfigMap and can customise the global Nginx configuration extensively. The add-on restricts this because it's managed by Azure. If you've done significant customisation (Lua scripts, custom modules, etc.), audit carefully to ensure the add-on supports what you need. Most standard configurations work fine.
Annotations: The nginx.ingress.kubernetes.io/* annotations work the same way. The add-on adds some Azure-specific annotations for WAF integration and other features, but your existing annotations should carry over without changes.
What Comes Next
This migration gets you onto a supported platform, but it's temporary. November 2026 is not far away, and you'll need to plan your next move.
Microsoft is building a new App Routing add-on based on Istio. This is expected later in 2026 and will likely become the long-term supported option. Keep an eye on Azure updates for announcements about preview availability and migration paths.
If you need something production-ready sooner, App Gateway for Containers is worth evaluating. It's built on Envoy and supports the Kubernetes Gateway API, which is the future direction for ingress in Kubernetes. The Gateway API is more expressive than the Ingress API and is designed to be vendor-neutral.
For now, getting off the unsupported BYO Nginx controller is the priority. The App Routing add-on gives you the breathing room to make an informed decision about your long-term strategy rather than rushing into something because you're running out of time.