How client source IP preservation works for LoadBalancer Services in AKS
Published Dec 15 2021 06:25 PM



Packets sent to LoadBalancer Services are source NAT'd (the source IP is replaced by the IP of the node) by default, because all schedulable nodes in the "Ready" state are eligible for load-balanced traffic. So when your ingress controller routes a client's request to a container in your AKS cluster, the original source IP of that request is unavailable to the target container. You can preserve the source IP on requests to your containers in AKS by enabling client source IP preservation; the client source IP is then stored in the request header under X-Forwarded-For. One caveat: when using an ingress controller with client source IP preservation enabled, TLS pass-through to the destination container will not work. The following steps explain how to set up client source IP preservation.
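Once preservation is enabled, an application behind the ingress controller reads the original client IP from the X-Forwarded-For header. As a minimal sketch (the header semantics are standard, but the `Remote-Addr` fallback key here is a hypothetical name for this illustration, not a real header your framework necessarily exposes):

```python
def client_ip(headers):
    """Return the original client IP from the X-Forwarded-For header.

    X-Forwarded-For is a comma-separated list: the left-most entry is
    the original client, and each proxy appends the address of the
    peer it received the request from.
    """
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    # No proxy header present: fall back to the direct peer address.
    # ("Remote-Addr" is a hypothetical key used only in this sketch.)
    return headers.get("Remote-Addr")

# A request that passed through an ingress pod at 10.244.1.5
print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.244.1.5"}))  # 203.0.113.7
```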




  1. Create a private cluster.
  2. Create a jumpbox in the same subnet as the AKS cluster.
  3. Connect to the jumpbox and install kubectl and the Azure CLI.


    sudo az aks install-cli
  4. Configure kubectl to connect to your Kubernetes cluster using the az aks get-credentials command.


    az aks get-credentials --resource-group ftademo --name asurity-demo                                    


Deploy Application in AKS Cluster


We will create a small nginx-based echo server that reports the source IP of the requests it receives in its response.


  1.  Create a deployment for the app.


    kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
  2.  Expose the application through a loadbalancer.


    kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
  3. Get the IP of the loadbalancer.


    kubectl get svc loadbalancer


  4.  Send data to the application to see the client's IP address.


  5. The client IP reported is that of one of the cluster nodes, because the request was source NAT'd.




Enable client source IP preservation


  1. Edit the loadbalancer Service to set the service.spec.externalTrafficPolicy field to "Local".


    kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}'


    apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: "2021-12-08T09:10:05Z"
      labels:
        app: source-ip-app
      name: loadbalancer
      namespace: sourceip
      resourceVersion: "11870944"
      uid: f8e39f83-f205-4b0c-b74d-a3ab3dbc9659
    spec:
      externalTrafficPolicy: Local
      healthCheckNodePort: 30107
      ports:
      - nodePort: 30486
        port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: source-ip-app
      sessionAffinity: None
      type: LoadBalancer
    status:
      loadBalancer:
        ingress:
        - ip:


  2.   Send data to the application to see the client's IP address.   




  3. Now the client IP is the same as the source IP (srjumpbox).


How load is balanced when a client source IP is preserved


Setting the service.spec.externalTrafficPolicy field to "Local" forces nodes without Service endpoints to remove themselves from the list of nodes eligible for load-balanced traffic by deliberately failing health checks.
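The health-check behaviour behind this can be sketched as follows: a node hosting at least one endpoint of the Service answers the health probe on the healthCheckNodePort with HTTP 200, while a node with no local endpoint answers 503, so the cloud load balancer fails it out of rotation. A minimal simulation of that decision (an illustration of the semantics, not the real kube-proxy code):

```python
def healthz_status(local_endpoints: int) -> int:
    """Mimic the healthCheckNodePort response for a Service with
    externalTrafficPolicy: Local. A node with at least one local
    endpoint reports healthy (200); a node with none reports 503,
    which removes it from the load balancer's backend pool."""
    return 200 if local_endpoints > 0 else 503

# Node running the source-ip-app pod vs. a node without it
print(healthz_status(1))  # 200 -> stays in the backend pool
print(healthz_status(0))  # 503 -> dropped from the backend pool
```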


  1. Get the health check node port of the load balancer service.


    kubectl get svc loadbalancer -n sourceip -o jsonpath='{.spec.healthCheckNodePort}'


  2. Get the pod details to see which node the pod is running on.


    kubectl get pod -n sourceip -o wide -l app=source-ip-app
  3. Open a shell on the node running the pod using kubectl debug.


    kubectl debug node/aks-nodepool1-33498924-vmss00000c -it --image=ubuntu
  4. Curl to fetch the /healthz endpoint.


    curl localhost:30107/healthz
    There are Endpoints on this node.


  5. Open a shell on a node that is not running the pod.


    kubectl debug node/aks-nodepool1-33498924-vmss000001 -it --image=ubuntu
  6. Curl to fetch the /healthz endpoint.


    curl localhost:30107/healthz

    There are no Endpoints on this node.


  7. The load balancer therefore sends packets only to nodes that pass the HTTP health check at the port stored in the service.spec.healthCheckNodePort field of the Service.