Azure Infrastructure Blog

AKS Networking || Bring your own CNI plugin (BYOCNI)

Shyam_Yadav, Microsoft
Jun 12, 2024

Bring your own Container Network Interface (BYOCNI) plugin with Azure Kubernetes Service (AKS)

 

What is BYOCNI?
BYOCNI stands for Bring Your Own Container Network Interface. It allows advanced AKS users to deploy an AKS cluster with no CNI plugin preinstalled. Instead, you can install any third-party CNI plugin that works in Azure. This flexibility enables you to use the same CNI plugin used in on-premises Kubernetes environments or leverage advanced functionalities available in other CNI plugins.

 


Before diving into BYOCNI, ensure the following prerequisites are met:
- Use at least template version 2022-01-02-preview or 2022-06-01 for Azure Resource Manager (ARM) or Bicep.
- Have Azure CLI version 2.39.0 or later.
- The virtual network for the AKS cluster must allow outbound internet connectivity.
- Avoid using specific address ranges (e.g., 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24) for Kubernetes service, pod address range, or cluster virtual network address range.
- The identity used by the AKS cluster must have at least Network Contributor permissions on the subnet within your virtual network. Alternatively, you can use a custom role with the "Microsoft.Network/virtualNetworks/subnets/join/action" and "Microsoft.Network/virtualNetworks/subnets/read" permissions.
- The subnet assigned to the AKS node pool cannot be a delegated subnet.
- AKS doesn't apply NSGs to its subnet or modify any of the NSGs associated with that subnet. If you add custom NSGs to the subnet, ensure the security rules allow traffic within the node CIDR range.

 

Deploy AKS cluster with no CNI plugin preinstalled:

You can deploy the AKS cluster using the Azure CLI or various Infrastructure as Code (IaC) tools. In every case, you just need to set the network plugin to none, as shown below for each option.

 

1. Azure CLI:
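
The original screenshot is not reproduced here; a minimal sketch of the Azure CLI command follows. The resource group name, cluster name, and location are placeholders.

```shell
# Create an AKS cluster with no CNI plugin preinstalled.
# "myResourceGroup", "myAKSCluster", and "eastus" are placeholder values.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --location eastus \
    --network-plugin none \
    --generate-ssh-keys
```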

2. Terraform: 
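
A sketch of the equivalent Terraform resource using the azurerm provider; the cluster name, node pool sizing, and resource group references are placeholders. The key line is `network_plugin = "none"` in the `network_profile` block.

```terraform
# Sketch: AKS cluster with no CNI plugin preinstalled (placeholder names/sizes).
resource "azurerm_kubernetes_cluster" "example" {
  name                = "myAKSCluster"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "myakscluster"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin = "none"
  }
}
```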

3. ARM template: 
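
A sketch of the equivalent ARM template resource; names and node pool sizing are placeholders. The key setting is `"networkPlugin": "none"` under `networkProfile`.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2022-06-01",
      "name": "myAKSCluster",
      "location": "[resourceGroup().location]",
      "identity": { "type": "SystemAssigned" },
      "properties": {
        "dnsPrefix": "myakscluster",
        "agentPoolProfiles": [
          {
            "name": "nodepool1",
            "count": 2,
            "vmSize": "Standard_DS2_v2",
            "mode": "System"
          }
        ],
        "networkProfile": {
          "networkPlugin": "none"
        }
      }
    }
  ]
}
```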

4. Bicep:
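
A sketch of the same resource in Bicep; names and node pool sizing are placeholders, and `networkPlugin: 'none'` is again the key setting.

```bicep
// Sketch: AKS cluster with no CNI plugin preinstalled (placeholder names/sizes).
resource aks 'Microsoft.ContainerService/managedClusters@2022-06-01' = {
  name: 'myAKSCluster'
  location: resourceGroup().location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    dnsPrefix: 'myakscluster'
    agentPoolProfiles: [
      {
        name: 'nodepool1'
        count: 2
        vmSize: 'Standard_DS2_v2'
        mode: 'System'
      }
    ]
    networkProfile: {
      networkPlugin: 'none'
    }
  }
}
```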

 

Upon successful deployment, the AKS cluster is online, but the nodes are not ready. You can verify this in the Azure portal as well as by running kubectl commands, as shown below.
Azure portal:

kubectl:

We can clearly see the reason: NetworkPluginNotReady.
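
The screenshots are not reproduced here; the commands below sketch how to check node status and find the NetworkPluginNotReady condition. The node name is a placeholder for one of your cluster's nodes.

```shell
# List nodes; with no CNI plugin installed they report STATUS "NotReady".
kubectl get nodes

# Inspect a node's conditions; the Ready condition's message includes
# "NetworkPluginNotReady". The node name below is a placeholder.
kubectl describe node aks-nodepool1-12345678-vmss000000
```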

Now, to make the nodes ready, we need to install a network plugin. With BYOCNI you can use a third-party CNI plugin such as Cilium, Flannel, or Weave; many other third-party plugins are available as well. In my case, I used Flannel, which you can install with the commands below.

 

Deploying Flannel with kubectl

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

 

If you use a custom podCIDR (not 10.244.0.0/16), you first need to download the above manifest and modify the network to match yours.
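
For example, a sketch of that edit using sed; 10.100.0.0/16 stands in for your cluster's actual pod CIDR.

```shell
# Download the manifest, replace Flannel's default pod network
# (10.244.0.0/16) with your cluster's pod CIDR, then apply it.
# 10.100.0.0/16 is a hypothetical custom range.
curl -sLO https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
sed -i 's|10.244.0.0/16|10.100.0.0/16|g' kube-flannel.yml
kubectl apply -f kube-flannel.yml
```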

Deploying Flannel with helm

# Needs manual creation of namespace to avoid helm error
kubectl create ns kube-flannel
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged

helm repo add flannel https://flannel-io.github.io/flannel/
helm install flannel --set podCidr="10.244.0.0/16" --namespace kube-flannel flannel/flannel

 

After applying the above commands, the nodes are now in the Ready state, as you can see below:

 

Portal:


Using kubectl:

 

Note:

Remember that Microsoft support cannot assist with CNI-related issues in clusters deployed with BYOCNI. For CNI-related support, consider using a supported AKS network plugin or seek support from the third-party vendor of your chosen CNI plugin. Support is still provided for non-CNI-related issues.

BYOCNI empowers you to tailor your AKS networking to your specific requirements.

 

Updated Jun 12, 2024
Version 7.0