Hi junpark1135, thanks for your comments. You're right: Azure CNI Overlay is actually part of the first solution, and the second solution is based on Azure CNI with Dynamic IP Address Allocation. I've updated the article to correct this and make things clearer; it may take some time for the updates to be published. Apologies for the typo and thanks for catching it.
With CNI Overlay, pods are assigned IP addresses from a separate CIDR range that sits outside the VNet address space. As per the docs:
Pod and node traffic within the cluster use an Overlay network. Network Address Translation (NAT) uses the node's IP address to reach resources outside the cluster.
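A quick way to see that NAT-through-the-node behaviour for yourself is to start a throwaway pod and ask an external IP-echo service which source address it sees; curlimages/curl and ifconfig.me below are just arbitrary choices, any image with curl and any IP-echo endpoint will do:
$ kubectl run egress-check --rm -it --restart=Never --image=curlimages/curl -- \
    curl -s ifconfig.me
The address printed is the outbound IP of the node's subnet (its NAT Gateway public IP once one is attached), never the pod's overlay address.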
With Overlay, each node gets a /24 block carved out of that pod CIDR for its pods, but pod traffic leaving the cluster is NATed through the node hosting the pod, which is why the nodes need to sit in separate subnets, each with its own NAT Gateway (those node pool and NAT Gateway commands are sketched after the cluster creation below). I deployed solution one with the CNI in Overlay mode and ended up with the following:
$ az network vnet create -n $vnet_name -g $rg --address-prefixes 10.240.0.0/16 \
--subnet-name default-subnet --subnet-prefixes 10.240.0.0/22 -l $location -o none
...
$ az aks create -g $rg -n $cluster -l $location --vnet-subnet-id $default_subnet_id \
--nodepool-name default --node-count 1 -s $vm_size --network-plugin azure \
--network-plugin-mode overlay --pod-cidr 192.168.0.0/16 --network-dataplane=azure -o none
...
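The steps I've elided above are where the extra subnets, NAT Gateways and node pools come in. Roughly, for one of the app pools, they look like this (the names and the 10.240.4.0/22 prefix are placeholders I've inferred from the outputs below; the article has the exact commands):
# dedicated egress IP + NAT Gateway for this pool's subnet
$ az network public-ip create -g $rg -n app-1-pip --sku Standard -o none
$ az network nat gateway create -g $rg -n app-1-natgw --public-ip-addresses app-1-pip -o none
# separate subnet with that NAT Gateway attached, and the node pool inside it
$ az network vnet subnet create -g $rg --vnet-name $vnet_name -n app-1-subnet \
    --address-prefixes 10.240.4.0/22 --nat-gateway app-1-natgw -o none
$ app_1_subnet_id=$(az network vnet subnet show -g $rg --vnet-name $vnet_name \
    -n app-1-subnet --query id -o tsv)
$ az aks nodepool add -g $rg --cluster-name $cluster -n app1pool --node-count 1 \
    -s $vm_size --vnet-subnet-id $app_1_subnet_id -o none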
$ kubectl get nodes -o wide
NAME                               STATUS   INTERNAL-IP
aks-default-12054227-vmss000000    Ready    10.240.0.4
aks-app1pool-31822835-vmss000000   Ready    10.240.4.4
aks-app2pool-16403591-vmss000000   Ready    10.240.8.4
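(You can also confirm which subnet each pool landed in straight from the CLI; I believe the relevant property is vnetSubnetId:)
$ az aks nodepool list -g $rg --cluster-name $cluster \
    --query "[].{Pool:name, Subnet:vnetSubnetId}" -o table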
$ kubectl get pods -o wide
NAME                            READY   STATUS    IP              NODE
yada-default-695f868d87-k95xz   1/1     Running   192.168.0.76    aks-default-12054227-vmss000000
yada-app-1-74b4dd6ddf-cdxkv     1/1     Running   192.168.1.22    aks-app1pool-31822835-vmss000000
yada-app-2-5779bff44b-rlqdk     1/1     Running   192.168.2.229   aks-app2pool-16403591-vmss000000
So you can see here that the nodes and the pods are being allocated IP addresses from different ranges. If we also run the echo
commands to display the service IP addresses and egress IP addresses, we get output similar to this:
default svc IP=20.103.74.74, egress IP=13.93.93.130
app1 svc IP=20.4.240.190, egress IP=40.118.54.48
app2 svc IP=20.4.241.37, egress IP=52.166.129.246
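Those echo commands essentially just pull each service's LoadBalancer IP with kubectl and exec a curl to an external IP-echo endpoint from one of the pods behind it. A rough sketch (the yada-* service names, ifconfig.me and curl being available in the image are all assumptions on my part):
for app in default app-1 app-2; do
  svc_ip=$(kubectl get svc yada-$app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  egress_ip=$(kubectl exec deploy/yada-$app -- curl -s ifconfig.me)
  echo "$app svc IP=$svc_ip, egress IP=$egress_ip"
done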
And when we cross-check this against the NAT Gateway IPs, we can see each service is egressing via a different IP address:
$ az network public-ip list -g $rg --query "[].{Name:name, Address:ipAddress}" -o table
Name Address
----------- --------------
default-pip 13.93.93.130
app-1-pip 40.118.54.48
app-2-pip 52.166.129.246
I hope this helps clarify how this solution works with CNI Overlay.