How to choose the right network plugin for your AKS cluster: A flowchart guide
Published Nov 01 2023

One of the important decisions you need to make when creating a new Azure Kubernetes Service (AKS) cluster is which network plugin to use.

The network plugin determines how the pods and nodes communicate with each other and with external resources.

Each plugin type has its own advantages and disadvantages, depending on your requirements and preferences.

 

In this blog post, I will explain the differences between Kubenet and the Azure CNI variants, and help you choose the best option for your AKS cluster.

 

Here is a short overview of the network plugin types available in Azure Kubernetes Service:

 

Kubenet

  • The default plugin for AKS clusters created with the Azure CLI.
  • Designed for clusters with fewer than 400 nodes (only nodes get a "real" VNet IP).
  • Pods receive an internal IP address from a logically different address space than the Azure virtual network subnet of the nodes.
  • Can't be used with Azure Network Policy, but Calico is supported.
  • A load balancer must be used for pods to reach resources outside the cluster.
  • Because user-defined routes (UDRs) are required, it can lead to increased management overhead, added complexity, and potential route collisions.
  • The additional NAT hop can cause latency issues.
  • You can only have one AKS cluster per subnet.
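
To make this concrete, here's a minimal Azure CLI sketch of creating a Kubenet cluster. The resource group and cluster names are placeholders of my own, and the Calico option simply illustrates the network policy note above:

    # Create a Kubenet cluster; Calico is the supported network policy engine here
    az aks create \
      --resource-group myResourceGroup \
      --name myKubenetCluster \
      --network-plugin kubenet \
      --network-policy calico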

Kubenet: dual-stack

  • Similar to Kubenet, but with IPv6 support.
  • Nodes receive both an IPv4 and an IPv6 address from the Azure virtual network subnet.
  • Pods receive both an IPv4 and an IPv6 address from a logically different address space than the Azure virtual network subnet of the nodes.
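
As a sketch (again with placeholder names), dual-stack Kubenet is enabled through the --ip-families flag:

    # Dual-stack Kubenet: nodes and pods get both IPv4 and IPv6 addresses
    az aks create \
      --resource-group myResourceGroup \
      --name myDualStackCluster \
      --network-plugin kubenet \
      --ip-families ipv4,ipv6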

Azure CNI

  • Advanced network plugin for AKS
  • Every pod gets an IP address from the subnet and can be accessed directly.
  • IP addresses are reserved up front for every node, based on its maximum pod count, which can lead to IP exhaustion.
  • Traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP
  • Requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
  • Pods have individual network identities and are directly accessible on the VNet, making network policy application and enforcement simpler. However, Azure CNI consumes more IP addresses, as each pod gets its own.
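
Here's a minimal sketch of an Azure CNI cluster attached to an existing subnet; replace the placeholder with the resource ID of your own VNet subnet:

    # Azure CNI: pods get IPs directly from the given VNet subnet
    az aks create \
      --resource-group myResourceGroup \
      --name myAzureCniCluster \
      --network-plugin azure \
      --vnet-subnet-id <subnet-resource-id>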

Azure CNI Overlay mode

  • Cluster nodes are deployed into the VNet's subnet.
  • Pods are assigned IPs from a private CIDR that is logically different from the VNet address space of the nodes, preserving your "real" VNet IPs.
  • Only supports Linux nodes currently.
  • Can't be used with the Application Gateway Ingress Controller (AGIC).
  • Requires less planning of IP allocation.
  • Enables scaling to large sizes.
  • Azure CNI translates the source IP (Overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination.
  • Can reuse the private CIDR in different AKS clusters, which extends the IP space available for containerized applications.
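
A minimal sketch of overlay mode follows; the pod CIDR value is an illustrative example and must not overlap with your VNet or any peered ranges:

    # Azure CNI Overlay: pods get IPs from a private CIDR outside the VNet
    az aks create \
      --resource-group myResourceGroup \
      --name myOverlayCluster \
      --network-plugin azure \
      --network-plugin-mode overlay \
      --pod-cidr 192.168.0.0/16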

Azure CNI Dynamic IP Allocation mode

  • You can segment traffic from pods and nodes separately, and dynamically add new subnets to expand the pod address space.
  • Each pod receives an IP from a dedicated pod subnet in the VNet, separate from the node subnet, so the two address ranges can be planned and scaled independently.
  • Exposing an application as a Private Link Service using a Kubernetes Load Balancer Service isn’t supported.
  • Better IP utilization.
  • Only supports Linux nodes currently.
  • Can use with AGIC.
  • Provides better flexibility.
  • Can assign separate VNet Policies for Pods.
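
Here's a sketch of dynamic IP allocation with placeholder subnet IDs; the node and pod subnets must live in the same virtual network:

    # Dynamic IP allocation: nodes and pods draw from separate subnets of one VNet
    az aks create \
      --resource-group myResourceGroup \
      --name myDynamicIpCluster \
      --network-plugin azure \
      --vnet-subnet-id <node-subnet-resource-id> \
      --pod-subnet-id <pod-subnet-resource-id>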

Azure CNI Powered by Cilium

  • It provides functionality equivalent to existing Azure CNI and Azure CNI Overlay plugins.
  • It offers more efficient service routing and network policy enforcement, better observability of cluster traffic, and support for larger clusters with increased numbers of nodes, pods, and services.
  • It takes advantage of Cilium’s direct routing mode inside guest virtual machines and combines it with the Azure native routing inside the Azure network.
  • Available only for Linux and not for Windows.
  • Cilium L7 policy enforcement is disabled.
  • Hubble is disabled.
  • Network policies cannot use ipBlock to allow access to node or pod IPs (Cilium issues #9209 and #12277).
  • Kubernetes services with internalTrafficPolicy=Local aren’t supported (Cilium issue #17796).
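
As a final sketch, the Cilium dataplane is selected with --network-dataplane; I'm pairing it with overlay mode here, though it can also be combined with dynamic pod-subnet allocation:

    # Azure CNI Powered by Cilium, on top of overlay networking
    az aks create \
      --resource-group myResourceGroup \
      --name myCiliumCluster \
      --network-plugin azure \
      --network-plugin-mode overlay \
      --network-dataplane cilium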

I've shared a decision flowchart to help you decide which is more suitable for your needs.

* I should mention that this flowchart reflects my personal recommendations. There are multiple ways to select a network plugin for your AKS clusters, and you should follow the updates in Microsoft's documentation.

 

AKS Network-Plugin decision flowchart

** This blog is accurate as of the time of publishing. Please bear in mind that cloud technology is an ever-changing world, and some nuances might change.

 

