
Microsoft Mission Critical Blog

Nginx Ingress controller integration with Istio Service Mesh

pratiksharma
Apr 10, 2025

Introduction 

Nginx (pronounced "engine x") is an HTTP web server, reverse proxy, content cache, load balancer, TCP/UDP proxy server, and mail proxy server. It is one of the most common ingress controllers (used to bring external traffic into the cluster) in Kubernetes. I discussed Istio service mesh in my previous article here: Istio Service Mesh Observability in AKS | Microsoft Community Hub. Setting up the nginx ingress controller with Istio service mesh requires custom configuration and is not as straightforward as using Istio's built-in ingress. One of my customers faced this issue, and I was able to resolve it using the configuration we will discuss in this article. Not all customers can migrate to Istio Ingress when enabling the service mesh, as they might already have many dependencies on existing ingress rules as well as enterprise agreements with ingress providers. The main problem with running both the nginx ingress controller and Istio service mesh in the same Kubernetes cluster appears when mTLS is strictly enforced by Istio.

TLS vs mTLS 

Usually when we communicate with a server, we use TLS, in which only the server's identity is verified using a certificate. The client is verified using secondary methods like username-password, tokens, etc. With distributed attacks increasing in the age of AI, it is critical to implement cryptographically verifiable identities for clients as well. Mutual TLS, or mTLS, is based on this zero-trust mindset: both client and server present a verifiable certificate, which makes man-in-the-middle attacks extremely difficult. Enabling mTLS is one of the primary use cases for running Istio service mesh in a Kubernetes cluster.

Sidecar in Istio 

Sidecars are secondary containers that are injected into a pod alongside the main application containers. The Istio sidecar acts as a proxy and intercepts all incoming and outgoing traffic to the application container unless explicitly configured otherwise. The sidecar is how Istio implements its traffic management functionality in the service mesh. In the future there will be an option to run Istio in a sidecarless fashion using ambient mode, which is still in development for the Istio add-on for AKS at the time of writing this article.
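
As a quick check, you can list the containers in a pod to see whether the istio-proxy sidecar has been injected alongside the application container (the pod and namespace names below are placeholders):

# List the container names in a pod; an injected pod shows istio-proxy next to the application container
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[*].name}'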

Root cause 

In the diagram above you can see that Istio sidecar injection is enabled in the application pod namespace but not in the ingress controller namespace. Traffic enters the ingress controller through an internal load balancer exposed by AKS. This traffic is HTTPS/TLS based and gets TLS-terminated at the ingress controller. This is usually done because nginx cannot perform many of its functions, such as path- and header-based routing, unless it decrypts the traffic. As a result, the traffic going towards the application pods is plain HTTP. Since mTLS is strictly enforced in the service mesh, it only accepts mTLS traffic; this traffic therefore gets dropped and the user receives a 502 Bad Gateway error thrown by nginx. Even if the traffic is re-encrypted before being sent to the application pods, which nginx supports, the request will still be dropped because Istio allows only mTLS, not plain TLS.

Solution 

To solve this problem, we follow these steps:

1. Enable sidecar injection in the ingress controller namespace: First we enable sidecar injection in the ingress controller namespace, so that traffic egressing from the ingress controller pods is mTLS.

2. Exempt external inbound traffic from the sidecar: Next, since mTLS is only understood within the AKS cluster, we have to let external traffic bypass the istio-proxy container and reach the nginx container directly. If we don't do this, Istio will expect the external traffic to also be mTLS and will drop it. Once the traffic reaches nginx, nginx decrypts it and sends it onwards; that outbound traffic is intercepted by the istio-proxy sidecar and encrypted with mTLS.

3. Send traffic to the application service instead of directly to the pods: By default, nginx sends traffic directly to the application pods, as you can see in the root cause diagram. If we continue doing that, Istio will not consider this traffic to be mesh traffic and will drop it. Therefore, for Istio to accept this traffic as part of the mesh, we have to direct it through the application service. After this is done, Istio allows the traffic to reach the application pods.

There are some additional configurations which we will discuss in the detailed steps below. 

Steps to integrate Nginx Ingress Controller with Istio Service mesh 

For details on setting up the AKS cluster, enabling Istio, and installing the demo application, check out steps 1 through 4 of my prior article: Istio Service Mesh Observability in AKS | Microsoft Community Hub. The steps below assume that you already have an AKS cluster set up with the Istio service mesh installed, and that the demo application is installed as discussed in that article.

1. Enable mTLS strict mode for the entire service mesh. This enforces mTLS in all namespaces where Istio sidecar injection is enabled.

# Enable mTLS for the entire service mesh 
kubectl apply -n aks-istio-system -f - <<EOF 
apiVersion: security.istio.io/v1 
kind: PeerAuthentication 
metadata: 
  name: global-mtls 
  namespace: aks-istio-system 
spec: 
  mtls: 
    mode: STRICT 
EOF 
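
To confirm the mesh-wide policy is in place, you can list the PeerAuthentication resources in the Istio system namespace:

# Verify the global mTLS policy was created
kubectl get peerauthentication -n aks-istio-system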

 

2. Install the nginx ingress controller if it is not already installed in your AKS cluster.

# Namespace where you want to install the ingress-nginx controller 
NAMESPACE=ingress-basic 

# Add nginx helm repo to your repositories 
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx 
helm repo update 

# Install Nginx Ingress Controller with annotation for Azure Load Balancer and externalTrafficPolicy set to Local 
# This is important for the health probe to work correctly with the Azure Load Balancer 
helm install ingress-nginx ingress-nginx/ingress-nginx \ 
  --create-namespace \ 
  --namespace $NAMESPACE \ 
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \ 
  --set controller.service.externalTrafficPolicy=Local 
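
Before moving on, it is worth confirming that the controller pod is running. At this point each controller pod should report a single container, since the namespace is not yet part of the mesh:

# Check the ingress controller pods (expect READY 1/1 for now)
kubectl get pods -n ingress-basic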

 

3. Create an Ingress object in the application namespace: You need to create an Ingress object so that nginx can route traffic to your pods. Kindly refer to nginx-ingress-before.yaml.
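
The linked file is not reproduced here, but a minimal Ingress along these lines would do the job. The service name and port below are placeholders; substitute whatever service the demo application actually exposes:

# Illustrative sketch of nginx-ingress-before.yaml - <app-service> is a placeholder
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: <app-service>
            port:
              number: 80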

# Apply Ingress Resource for the sample application 
kubectl apply -f ./nginx-ingress-before.yaml -n default 

 

4. Validate that you can access the sample app through the nginx ingress you just created: Get the external IP of the ingress controller service, which is of type LoadBalancer.

# Get external IP for the service 
kubectl get services -n ingress-basic 

From the output, note the EXTERNAL-IP assigned to the ingress-nginx-controller service of type LoadBalancer.

Now copy that IP and open http://<external-ip>/test in your browser. You will notice that nginx throws a 502 Bad Gateway error. This is because nginx could not reach the application pods to get a response: istio-proxy dropped the requests since they were not mTLS.
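
If you prefer the command line, the same failure can be reproduced with curl; the response status should be 502:

# Reproduce the error from the command line (replace <external-ip> with the IP noted above)
curl -i http://<external-ip>/test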

The following steps will fix this issue:

5. Enable sidecar injection in the ingress controller namespace: For the application pods to accept traffic from nginx, it has to be sent with mTLS from the Istio side. To make this possible, we enable sidecar injection in the nginx ingress controller namespace. After adding this label, restart the ingress controller deployment so that sidecars are injected into the nginx ingress controller pods:

# Get the istio version installed on the AKS cluster 
az aks show --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_AKS_CLUSTER_NAME  --query 'serviceMeshProfile.istio.revisions' 

# Label namespace with appropriate istio version to enable sidecar injection 
kubectl label namespace <ingress-controller-namespace> istio.io/rev=asm-1-<version> 

# Restart nginx ingress controller deployment so that sidecars can be injected into the pods 
kubectl rollout restart deployment/ingress-nginx-controller -n ingress-basic 
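
Once the rollout completes, you can confirm that injection worked by checking the container count; each controller pod should now report two containers, the nginx controller plus istio-proxy:

# Check the ingress controller pods again (expect READY 2/2 after injection)
kubectl get pods -n ingress-basic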

 

6. Exempt external inbound traffic from the sidecar: This is required because mTLS is only understood within the AKS cluster and is not meant for external traffic. Now that the sidecar is injected into nginx, we need to exempt external traffic from being redirected to istio-proxy; otherwise it will be dropped because it is only TLS, not mTLS. To do this we add the following annotations:

# Edit nginx controller deployment 
kubectl edit deployments -n ingress-basic ingress-nginx-controller 

# Disable all inbound port redirection to the proxy (setting this property to empty quotes achieves that)
traffic.sidecar.istio.io/includeInboundPorts: "" 

# Explicitly enable inbound ports on which the cluster is exposed externally to bypass istio-proxy redirection and take traffic directly to ingress controller pods 
traffic.sidecar.istio.io/excludeInboundPorts: "80,443" 

Kindly wait before exiting the edit mode as we have one more annotation to add below. 

 

7. Allow connection between the nginx ingress controller and the API server: Now that mTLS is enforced for nginx, it will not be able to communicate with the Kubernetes API server, which it needs in order to monitor and react to changes in Ingress resources and dynamically reconfigure NGINX. Therefore, we need to exempt the Kubernetes API server IP from mTLS traffic.

# Query kubernetes API server IP 
kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'

# Add annotation to ingress controller 
traffic.sidecar.istio.io/excludeOutboundIPRanges: "KUBE_API_SERVER_IP/32" 

The problem with this approach is that AKS doesn't guarantee a static IP for the API server, as it is managed by the platform. Usually the API server IP changes during a cluster restart or reprovisioning, but it is not guaranteed to change only in those cases; it can take any IP from the service CIDR, which is a /16 CIDR unless configured explicitly. One option is to have a dedicated subnet for the API server using the VNet integration feature, but this feature is currently in preview with a tentative GA in Q2 2025: API Server VNet Integration in Azure Kubernetes Service (AKS) - Azure Kubernetes Service. After enabling this feature, the API server will always take an IP from the allocated subnet, which can then be used in the annotation above.

This is how the final deployment yaml for the nginx ingress controller should look; note that the annotations are added under the pod template, not at the deployment level:
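
The sketch below shows only the relevant part of the deployment; the rest of the manifest generated by the Helm chart stays unchanged:

# Trimmed sketch of the ingress-nginx-controller deployment - only the relevant fields are shown
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-basic
spec:
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/includeInboundPorts: ""
        traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
        traffic.sidecar.istio.io/excludeOutboundIPRanges: "KUBE_API_SERVER_IP/32"
    spec:
      containers:
      - name: controller
        # ...remaining controller configuration unchanged...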

  

8. Route traffic to the Istio sidecar once it enters the ingress object: By default, nginx sends traffic to the upstream PodIP and port combination. With mTLS enabled, Istio will not recognize this as mesh traffic and will drop it. Therefore, it is important to change this behavior and send traffic to the exposed service instead of directly to the backend pods. This is done with the annotations below; you can check the sample here: nginx-ingress-after.yaml:

# Setup nginx to send traffic to upstream service instead of PodIP and port 
nginx.ingress.kubernetes.io/service-upstream: "true"  

# Specify the service fqdn where to route the traffic (this is the service that exposes the application pods) 
nginx.ingress.kubernetes.io/upstream-vhost: <service>.<namespace>.svc.cluster.local 
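
Putting it together, the linked nginx-ingress-after.yaml would look roughly like the sketch below, reusing the placeholder service from the earlier example:

# Illustrative sketch of nginx-ingress-after.yaml - <app-service> is a placeholder
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: <app-service>.default.svc.cluster.local
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: <app-service>
            port:
              number: 80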

# Apply Ingress Resource for the sample application 
kubectl apply -f ./nginx-ingress-after.yaml -n default 

 

9. Configure the ingress's sidecar to route traffic to services in the mesh: This is only needed if the ingress object is in a different namespace from the services it routes traffic to; we don't need it here, as our ingress and application service are in the same namespace. Sidecars know how to route traffic to services in the same namespace, but if you want them to route traffic to a different namespace, you need to allow it in the sidecar configuration, which can be done using the yaml here: Sidecar.yaml.

# Apply Sidecar yaml in the namespace where ingress object is deployed     
kubectl apply -f Sidecar.yaml -n <ingress-object-namespace> 
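
The linked Sidecar.yaml is not reproduced here, but an Istio Sidecar resource that permits egress to an additional application namespace could look roughly like this (the namespace names are placeholders):

# Illustrative sketch of a Sidecar resource - adjust namespaces to your environment
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: <ingress-object-namespace>
spec:
  egress:
  - hosts:
    - "./*"                        # services in the ingress object's own namespace
    - "<application-namespace>/*"  # services in the application namespace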

Validate that the application is accessible: the application should now load at http://<external-ip>/test in your browser.

 Conclusion 

That's it. Once the steps above are followed, traffic should flow as expected between the mTLS-enforced service mesh and the nginx ingress controller. You can find all the commands and yaml files from this article here. Let me know in the comments below if you have any questions or face any issues integrating the nginx ingress controller with Istio service mesh.

 

Updated Apr 10, 2025
Version 1.0