With the release of Kubernetes v1.14, Windows Server nodes are officially declared "stable". But how is network connectivity provided between the pods and services that make up a modern application? Whenever a DevOps team deploys a new Kubernetes (K8s) cluster or adds a Windows node to an existing cluster, they want networking to just work, with network management capabilities for containers that are equivalent to or better than those of their existing infrastructure. Windows Server 2019 now includes a simpler and more scalable overlay networking solution for Kubernetes clusters via Windows update KB4489899, including integration with the latest release of the Flannel network control plane, CNI plugins, and kube-proxy.
Overlay networking uses encapsulation to create a virtual network on top of the existing physical network without requiring any configuration changes to the physical network infrastructure. Overlay networking on Windows containers brings a number of benefits, described below.
Overlay networks work by encapsulating network packets inside (outer) packets to form a new network topology that is independent of the underlying network. This allows for simpler communication paths between endpoints that would otherwise be out of each other's reach. By tunneling network subnets between individual hosts, overlays let containers communicate with each other as if they were on the same machine, thereby creating one network that spans multiple hosts.
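To make the encapsulation idea concrete, here is a minimal sketch in Python using the scapy library that wraps an "inner" container-to-container packet inside an outer VXLAN/UDP/IP packet. All MAC addresses, IP addresses, and the VNI below are made-up values for illustration; they are not the actual values programmed by HNS or Flannel.

```python
# Illustrative only: shows the shape of VXLAN encapsulation, not the
# actual packets produced by HNS/Flannel on Windows.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner packet: traffic between two containers on the overlay (pod) network.
inner = (
    Ether(src="00:15:5d:00:00:01", dst="00:15:5d:00:00:02")  # container MACs (made up)
    / IP(src="10.244.1.10", dst="10.244.2.20")               # pod IPs (made up)
)

# Outer packet: node-to-node traffic on the physical (underlay) network.
outer = (
    Ether()
    / IP(src="192.168.1.11", dst="192.168.1.12")  # node addresses (made up)
    / UDP(dport=4789)                             # standard VXLAN UDP port
    / VXLAN(vni=4096)                             # virtual network identifier (example value)
)

encapsulated = outer / inner
encapsulated.show()  # prints the layered structure: outer headers wrapping the inner packet
```

The underlay only ever sees the outer headers, which is why no changes to the physical network are needed.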
Without overlays, network administrators and other users face cumbersome networking requirements on their underlying infrastructure when trying to adopt Kubernetes on Windows. These include requiring L2 adjacency between container hosts/nodes, route tables in other networking modes (l2bridge/l2tunnel) falling out of sync as clusters grow in size, and network glitches on transparent networks while switches learn the MAC addresses of sometimes short-lived containers.
Now, with improved overlay networking support for containers, these constraints no longer apply, meaning users have an elastic way to deploy Kubernetes on Windows that is decoupled from, and largely agnostic to, the underlying infrastructure and its network configuration.
Another particular source of excitement is that we've also revised the platform's overlay networking design specifically for multi-node clustering scenarios, in order to improve scalability as the number of nodes and subnets in a cluster grows.
In overlay networks, L3 IP connectivity between hosts is all that is required for containers scheduled on different machines to interact with each other as if they had direct L2 connectivity (or, in other words, lived on the same machine). Conceptually, this is achieved by encapsulating network packets coming from containers with an outer header. Each machine also programs HNS "RemoteSubnet" policies so that every node knows which container subnet is assigned to which host.
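As a rough illustration of what those policies capture, the sketch below models a per-node table that maps each remote pod subnet to the node ("provider") address that traffic for that subnet should be tunneled to. The field names are loosely modeled on the RemoteSubnet idea but simplified; this is not the actual HNS API or policy schema.

```python
# Illustrative sketch of the information a RemoteSubnet-style policy carries.
# Field names are simplified; this is not the real HNS policy format.
import ipaddress

# One entry per remote node's pod subnet (all values are made up).
remote_subnets = [
    {"DestinationPrefix": "10.244.1.0/24", "ProviderAddress": "192.168.1.11", "VNI": 4096},
    {"DestinationPrefix": "10.244.2.0/24", "ProviderAddress": "192.168.1.12", "VNI": 4096},
]

def tunnel_endpoint_for(pod_ip: str):
    """Return the (provider address, VNI) used to encapsulate traffic for pod_ip, if remote."""
    ip = ipaddress.ip_address(pod_ip)
    for entry in remote_subnets:
        if ip in ipaddress.ip_network(entry["DestinationPrefix"]):
            return entry["ProviderAddress"], entry["VNI"]
    return None  # local or unknown destination; no encapsulation policy applies

print(tunnel_endpoint_for("10.244.2.20"))  # -> ('192.168.1.12', 4096)
```

With a table like this on every node, traffic destined for a pod on another host is simply encapsulated and sent to that host's address on the underlay.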
Here is how the updated workflow using "RemoteSubnets" looks on Windows Server 2019:
Here is a short explanation of some of the fields used in the video:
Great! Building on top of the platform enhancements described above, we're also incredibly excited to announce Windows support for the popular open-source Flannel CNI plugin in overlay network mode using VXLAN encapsulation. This is the recommended way to get started with overlay networking on Kubernetes, as it offers the simplest management experience available on Windows today.
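For reference, the heart of a Flannel overlay deployment is a small net-conf.json that selects the vxlan backend; the Python snippet below writes one out. The pod network CIDR is just an example, and the VNI/Port values shown reflect what the getting started guide typically uses for Windows, so double-check them against the guide for your own deployment.

```python
# A sketch of the Flannel net-conf.json used for overlay (vxlan) mode.
# The pod network CIDR is an example; verify the VNI/Port values against the
# getting started guide for your Windows deployment.
import json

net_conf = {
    "Network": "10.244.0.0/16",   # cluster pod CIDR (example value)
    "Backend": {
        "Type": "vxlan",          # overlay networking via VXLAN encapsulation
        "VNI": 4096,
        "Port": 4789,
    },
}

with open("net-conf.json", "w") as f:
    json.dump(net_conf, f, indent=2)

print(json.dumps(net_conf, indent=2))
```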
The Kubernetes getting started guide has been updated with everything that you need. Here is how you can try it out yourself:
In short, the answer to this question really depends on a user's goals and requirements. There are many reasons why attaching containers to an underlay network via a network mode like "l2bridge" may be desired. For example, users may want containers to be able to talk to an existing on-premises service, rather than being isolated from the underlying network. One may also wish to fully onboard containers onto existing SDN features such as network security groups (NSGs), which may not be possible otherwise. In performance-sensitive scenarios, users should also keep in mind that overlay networks incur a performance hit: nodes need to set up tunnels between hosts, which reduces the available MTU, and the data path needs additional time and CPU cycles to encapsulate/decapsulate packets.
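To put the MTU point in numbers, here is a back-of-the-envelope calculation of the per-packet VXLAN overhead on a standard 1500-byte Ethernet MTU (IPv4, no VLAN tags); the exact figure in your environment depends on the underlay configuration.

```python
# Rough VXLAN overhead calculation for a standard 1500-byte Ethernet MTU (IPv4).
underlay_mtu = 1500

vxlan_overhead = (
    14 +  # outer Ethernet header
    20 +  # outer IPv4 header
    8 +   # outer UDP header
    8     # VXLAN header
)

effective_mtu = underlay_mtu - vxlan_overhead
print(f"VXLAN overhead: {vxlan_overhead} bytes")   # 50 bytes
print(f"Effective overlay MTU: {effective_mtu}")   # 1450 bytes
```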
The reasons why a user may prefer overlays are largely covered above already; users deploying using Flannel should also consult the Flannel backend docs for additional guidance.
Since overlay networking for Kubernetes has just launched, we are working on incremental improvements as well as overcoming a few limitations to make it more customizable and useful. Here is a short teaser of what’s next:
Finally, since overlay networking for Kubernetes on Windows is brand new, we'd love to hear any feedback from trying it out, either at SIG-Windows or in the comments below!
Thanks for reading,
David Schott
*Special thanks to Kalya Subramanian & Pradip Dhara for designing and implementing overlay networking for Windows containers, as well as providing materials to help create content for this blog!