Azure Architecture Blog

Static Egress Gateway in AKS: The Native Way to Control Multiple Outbound IPs

pjlewis
Jan 13, 2026

Introduction

Back in late 2023, I published a blog post outlining how to provision multiple egress IP addresses in AKS by running multiple node pools in multiple subnets.

In July 2024, the AKS product group released the open-source kube-egress-gateway project to help customers provide multiple egress paths from their AKS clusters, and in September 2025 AKS added preview support for Static Egress Gateway, making the whole process simpler.

In this post we will revisit the problem statement from the original blog post, solve it with the Static Egress Gateway feature built into AKS, and see how much simpler the modern solution is.

Problem Statement

To recap the original problem statement: some applications in a single AKS cluster need different outbound paths, so we need a simple way to apply custom egress routing per workload.

How Static Egress Gateway Improves on the 2023 Approach

Static Egress Gateway removes the architectural complexity of the 2023 design by eliminating the need for multiple node pools, user-defined routes, and custom routing logic. Instead of mapping workloads to different subnets and maintaining separate outbound paths, the gateway provides a simple, AKS-native way to allocate predictable egress IPs to selected workloads. It also reduces operational overhead, because the gateway node pool owns the public IP prefix and handles translation without relying on cluster scale events. This gives a cleaner separation between compute and egress that is easier to automate and reason about. The result is the same outcome as the original post, achieved with fewer moving parts and a far more supportable model.

Proposed Solution

This time, instead of brewing our own solution, we will use the Static Egress Gateway feature in AKS, which is much simpler to implement and more flexible too. This feature allows us to define multiple outbound egress pathways in a Kubernetes-native manner, without juggling node pools, UDRs (user-defined routes), or custom routing logic.

Before we start, please be aware of the limitations and considerations related to using Static Egress Gateway.

As always, we start by defining some environment variables and creating a resource group to deploy our Azure resources into:

rg=egress-gw
location=swedencentral
vnet_name=vnet-egress-gw
cluster=egress-gw
vm_size=Standard_D4as_v6

az group create -n $rg -l $location

We then create an AKS cluster, with the Static Egress Gateway feature enabled:

az aks create \
  --name $cluster \
  --resource-group $rg \
  --location $location \
  --enable-static-egress-gateway

az aks get-credentials -n $cluster -g $rg --overwrite-existing

It's also possible to enable the Static Egress Gateway feature on an existing cluster, using the `az aks update` command, e.g.:

az aks update \
  --name $cluster \
  --resource-group $rg \
  --enable-static-egress-gateway

Once the cluster is created (or the feature has been enabled), we need to create a dedicated gateway node pool to handle the egress traffic. The --gateway-prefix-size parameter sets the size of the public IP prefix (28-31) applied to the gateway node pool, and it limits how many gateway nodes you can scale out to: a /30 prefix contains four public IPs, so a pool created with --gateway-prefix-size 30 can scale to at most four gateway nodes.

az aks nodepool add \
  --cluster-name $cluster \
  --name gwpool1 \
  --resource-group $rg \
  --mode gateway \
  --node-count 2 \
  --gateway-prefix-size 30

Note: Gateway node pools should not be used for general-purpose workloads, and should be reserved for egress traffic only.
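You can confirm the pool was created in gateway mode by querying it with the Azure CLI:

az aks nodepool show \
  --cluster-name $cluster \
  --name gwpool1 \
  --resource-group $rg \
  --query "{mode:mode, count:count}" -o json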

With the gateway node pool created, we can connect to our cluster and deploy a StaticGatewayConfiguration custom resource. The StaticGatewayConfiguration CRD tells AKS which gateway node pool to use and which IP prefix to assign to the outbound traffic. A StaticGatewayConfiguration custom resource looks something like this:

apiVersion: egressgateway.kubernetes.azure.com/v1alpha1
kind: StaticGatewayConfiguration
metadata:
  name: egress-gw-1
  namespace: default
spec:
  gatewayNodepoolName: gwpool1
  excludeCidrs:  # Optional 
  - 10.0.0.0/8
  - 172.16.0.0/12
  - 169.254.169.254/32
#  publicIpPrefixId: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/publicIPPrefixes/<prefix-name> # Optional

Note: In this example I omitted the publicIpPrefixId field, so AKS will create a Public IP Prefix resource for me automatically. If you already have a Public IP Prefix you would like to use, uncomment the line and set the resource ID accordingly. The optional excludeCidrs list defines destination ranges that should bypass the gateway; here that covers private address ranges and the Azure Instance Metadata Service endpoint (169.254.169.254).
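If you do want to bring your own prefix, creating one with the Azure CLI looks something like this; the name egress-prefix-1 is just an illustrative placeholder, and --length 30 matches the /30 sizing used above:

# Create a Public IP Prefix and retrieve its resource ID for use in publicIpPrefixId
az network public-ip-prefix create \
  --name egress-prefix-1 \
  --resource-group $rg \
  --location $location \
  --length 30

az network public-ip-prefix show \
  --name egress-prefix-1 \
  --resource-group $rg \
  --query id -o tsv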

We can view the Public IP Prefix that has been provisioned for us by describing the StaticGatewayConfiguration resource we deployed. The Public IP Prefix can take a while to provision and configure, so you may need to run the command below a few times before you see similar output.

$ kubectl describe StaticGatewayConfiguration egress-gw-1 -n default
Name:         egress-gw-1
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  egressgateway.kubernetes.azure.com/v1alpha1
Kind:         StaticGatewayConfiguration
Metadata:
  Creation Timestamp:  2026-01-07T11:34:17Z
  Finalizers:
    static-gateway-configuration-controller.microsoft.com
  Generation:        2
  Resource Version:  15543
  UID:               de7642b6-c2fe-47cc-b478-f7c71c5db402
Spec:
  Default Route:  staticEgressGateway
  Exclude Cidrs:
    10.0.0.0/8
    172.16.0.0/12
    169.254.169.254/32
  Gateway Nodepool Name:  gwpool1
  Gateway Vmss Profile:
  Provision Public Ips:  true
Status:
  Egress Ip Prefix:  20.91.186.64/30
  Gateway Server Profile:
    Ip:    10.224.0.9
    Port:  6000
    Private Key Secret Ref:
      API Version:  v1
      Kind:         Secret
      Name:         sgw-de7642b6-c2fe-47cc-b478-f7c71c5db402
      Namespace:    aks-static-egress-gateway
    Public Key:     FgeNumtkWbWnIGebcY1C/Ul19AmI1mLGf5DSMze5KBE=
Events:
  Type    Reason                                  Age                From                                   Message
  ----    ------                                  ----               ----                                   -------
  Normal  Reconciling                             14m (x5 over 14m)  staticGatewayConfiguration-controller  StaticGatewayConfiguration provisioned with egress prefix 
  Normal  ReconcileGatewayLBConfigurationSuccess  13m (x4 over 14m)  gatewayLBConfiguration-controller      GatewayLBConfiguration reconciled
  Normal  Reconciled                              13m (x2 over 13m)  staticGatewayConfiguration-controller  StaticGatewayConfiguration provisioned with egress prefix 20.91.186.64/30
  Normal  ReconcileGatewayVMConfigurationSuccess  13m (x2 over 13m)  gatewayVMConfiguration-controller      GatewayVMConfiguration reconciled
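If you just need the prefix value for scripting, a jsonpath query against the status works, assuming the field is named egressIpPrefix as the describe output above suggests:

kubectl get StaticGatewayConfiguration egress-gw-1 -n default \
  -o jsonpath='{.status.egressIpPrefix}'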

With all the relevant prerequisites in place, we can now configure and deploy an app that uses this Static Egress Gateway. As in the previous blog post, we will deploy the API component of YADA (Yet Another Demo App) with a public-facing LoadBalancer (inbound) service, so that we can reach the YADA API and ask it to report its outbound IP address.

Copy the sample manifest from the GitHub repository, and update it to look like my two examples below.

YADA API app using the default cluster egress (yada-api-default.yaml):

cat <<EOF > yada-api-default.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: api-default
  name: api-default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: api-default
  template:
    metadata:
      labels:
        run: api-default
    spec:
      containers:
      - image: erjosito/yadaapi:1.0
        name: api-default
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: yada-default
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "false"
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    run: api-default
EOF

YADA API app using the Static Egress Gateway (yada-api-egressgw.yaml):

cat <<EOF > yada-api-egressgw.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: api-egressgw
  name: api-egressgw
spec:
  replicas: 1
  selector:
    matchLabels:
      run: api-egressgw
  template:
    metadata:
      labels:
        run: api-egressgw
      # This annotation defines which StaticGatewayConfiguration the workload should use for egress
      annotations:
        kubernetes.azure.com/static-gateway-configuration: egress-gw-1
    spec:
      containers:
      - image: erjosito/yadaapi:1.0
        name: api-egressgw
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: yada-egressgw
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "false"
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    run: api-egressgw
EOF

Now we deploy both apps into our AKS cluster:

kubectl apply -f yada-api-default.yaml
kubectl apply -f yada-api-egressgw.yaml

Unlike in the cluster from my 2023 blog post, if we view the pods we will find that they are both running in the same node pool:

$ kubectl get pods -o wide
NAME                            READY   STATUS    IP             NODE
api-default-7f4d8c5ccc-vcpxz    1/1     Running   10.244.0.166   aks-nodepool1-31740631-vmss000000
api-egressgw-6f45f5d8cc-xrfsm   1/1     Running   10.244.2.234   aks-nodepool1-31740631-vmss000002
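To double-check which gateway configuration the egressgw pod is bound to, inspect its annotations:

kubectl get pods -l run=api-egressgw -o yaml | grep static-gateway-configuration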

Now we can determine the inbound IP address of each yada service, and then use cURL to query the YADA API to see the outbound IP address of each pod we have deployed:

echo "default: svc IP=$(kubectl get svc yada-default -o jsonpath='{.status.loadBalancer.ingress[0].ip}'), egress IP=$(curl -s http://$(kubectl get svc yada-default -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):8080/api/ip | jq -r '.my_public_ip')"
echo "egressgw: svc IP=$(kubectl get svc yada-egressgw -o jsonpath='{.status.loadBalancer.ingress[0].ip}'), egress IP=$(curl -s http://$(kubectl get svc yada-egressgw -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):8080/api/ip | jq -r '.my_public_ip')"

You should see output like this, demonstrating that each yada service has a unique inbound IP address (as expected with LoadBalancer services) as well as a unique outbound IP address:

default: svc IP=135.116.221.253, egress IP=4.165.56.119
egressgw: svc IP=135.116.255.200, egress IP=20.91.186.64
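Equivalently, a small loop avoids repeating the jsonpath lookups:

for svc in yada-default yada-egressgw; do
  ip=$(kubectl get svc $svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo "$svc: svc IP=$ip, egress IP=$(curl -s http://$ip:8080/api/ip | jq -r '.my_public_ip')"
done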

Notice that the egressgw service's outbound IP (20.91.186.64) falls within the 20.91.186.64/30 prefix provisioned for our StaticGatewayConfiguration. If we were to add more egress gateways and configure additional workloads to use them, each Static Egress Gateway would provide its own unique set of outbound IP addresses.

Keeping Egress IPs Private

In some scenarios you may want, or need, to send egress traffic from private IP addresses. You can do this by enabling private IP support on the gateway node pool: specify the --vm-set-type VirtualMachines parameter when creating the node pool, e.g.:

az aks nodepool add \
  --cluster-name $cluster \
  --name privgwpool1 \
  --resource-group $rg \
  --mode gateway \
  --node-count 2 \
  --vm-set-type VirtualMachines \
  --gateway-prefix-size 30

Note: At the time of writing, the private gateway deployment path is not functioning correctly. This should be resolved by the end of January once a fix has been rolled out.

With this configuration, setting provisionPublicIps: false in the StaticGatewayConfiguration keeps the private IPs allocated to the gateway nodes for the lifetime of the resource.
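A minimal sketch of the matching StaticGatewayConfiguration, assuming the privgwpool1 node pool from above (the resource name egress-gw-private is illustrative):

apiVersion: egressgateway.kubernetes.azure.com/v1alpha1
kind: StaticGatewayConfiguration
metadata:
  name: egress-gw-private
  namespace: default
spec:
  gatewayNodepoolName: privgwpool1
  provisionPublicIps: false  # keep egress on the nodes' private IPs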

BYO Subnets

As you'll have noticed, so far we have used the default resources and configurations created automatically by AKS. For many customers this is fine. For others, the solution must integrate with their existing network architecture, and they will need to opt for Bring Your Own Networking. The Static Egress Gateway feature works well with BYO network configurations, but there are a few requirements to be aware of when the cluster runs inside a hub-and-spoke or centrally-managed IP space.

Network Requirements

When using BYO subnets there are some important constraints:

  • The gateway node pool must be deployed into a subnet with enough free IPs to host the Public IP Prefix assigned to the gateway.
  • The gateway subnet must not contain UDRs that redirect egress‑gateway traffic to a firewall before translation occurs.
  • Pod CIDRs, Service CIDRs, and the Public IP Prefix used by the gateway must not overlap with any internal ranges.
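As a quick sanity check along these lines, you can inspect a gateway subnet's address prefix and any associated route table; this sketch assumes the aks-egress subnet created in the next section:

az network vnet subnet show \
  --vnet-name $vnet_name \
  --resource-group $rg \
  --name aks-egress \
  --query "{prefix:addressPrefix, routeTable:routeTable.id}" -o json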

Deploying the Resources

Here is a minimal example using the Azure CLI to create a BYO network layout suitable for Static Egress Gateway. Start by creating the VNet and the subnets:

az network vnet create \
  --name $vnet_name \
  --resource-group $rg \
  --address-prefixes 10.240.0.0/16 \
  --subnet-name aks-system \
  --subnet-prefixes 10.240.0.0/22 \
  --location $location \
  --output none

az network vnet subnet create \
  --vnet-name $vnet_name \
  --resource-group $rg \
  --address-prefix 10.240.4.0/22 \
  --name aks-user \
  --output none

az network vnet subnet create \
  --vnet-name $vnet_name \
  --resource-group $rg \
  --address-prefix 10.240.8.0/24 \
  --name aks-egress \
  --output none
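Before deploying the cluster, you can verify the layout:

az network vnet subnet list \
  --vnet-name $vnet_name \
  --resource-group $rg \
  --query "[].{name:name, prefix:addressPrefix}" -o table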

Deploy an AKS cluster into the BYO subnets:

az aks create \
  --resource-group $rg \
  --name $cluster \
  --location $location \
  --enable-static-egress-gateway \
  --vnet-subnet-id $(az network vnet subnet show -g $rg --vnet-name $vnet_name -n aks-system --query id -o tsv) \
  --nodepool-name systempool1 \
  --node-count 2 \
  --node-vm-size $vm_size \
  --network-plugin azure \
  --network-dataplane azure \
  --output none

az aks get-credentials -n $cluster -g $rg --overwrite-existing

Add a node pool for user workloads:

az aks nodepool add \
  --cluster-name $cluster \
  --name userpool1 \
  --resource-group $rg \
  --node-count 2 \
  --vnet-subnet-id $(az network vnet subnet show -g $rg --vnet-name $vnet_name -n aks-user --query id -o tsv)

Grant the cluster's system-assigned managed identity the "Network Contributor" role so it can configure the customer-managed subnets:

principal_id=$(az aks show --name $cluster --resource-group $rg --query identity.principalId --output tsv)
rg_id=$(az group show --name $rg --query id -o tsv)
mc_rg_id=$(az group show --name MC_${rg}_${cluster}_${location} --query id -o tsv)
az role assignment create --assignee $principal_id --role "Network Contributor" --scope $rg_id
az role assignment create --assignee $principal_id --role "Network Contributor" --scope $mc_rg_id

Create the gateway node pool:

az aks nodepool add \
  --cluster-name $cluster \
  --name gwpool1 \
  --resource-group $rg \
  --mode gateway \
  --node-count 2 \
  --vnet-subnet-id $(az network vnet subnet show -g $rg --vnet-name $vnet_name -n aks-egress --query id -o tsv) \
  --gateway-prefix-size 30  

Once created, you can apply the StaticGatewayConfiguration as shown earlier in the post.

Conclusion

As you will have seen, the Static Egress Gateway feature removes almost all of the complexity of the design in the original blog post, while giving you predictable outbound IPs that scale with your cluster and workloads. It delivers the same outcome as before, but with fewer moving parts and a model that is simpler to operate and automate.
