Growing adoption of containers for workload deployments has seen the emergence of Kubernetes as the preferred container orchestration tool in the Cloud Native ecosystem. Azure Kubernetes Service (AKS), the managed Kubernetes offering from Azure, lets customers offload operational overhead to Azure so they can focus primarily on their workloads and business logic. However, as users migrate large-scale workloads to containers, they often create a large number of AKS clusters for various reasons: clusters in different environments (QA/test, staging, prod), clusters in different regions for high availability or localization/proximity, or separate clusters to achieve strong isolation between workloads. A large number of clusters can leave platform admins dealing with the following issues across their entire fleet:

- Managing the lifecycle of multiple clusters and ensuring they share the same configuration.
- Consistently scheduling the same app across multiple clusters.
- Exposing the same app from multiple clusters and multiple regions at scale.
- Checking the health of an app across multiple clusters, and the status of those clusters themselves.

To address the requirements of such large-scale AKS deployments, we are excited to announce a major step forward in Azure’s Kubernetes ecosystem: the public preview of Azure Kubernetes Fleet Manager.
Organize your AKS cluster inventory
You can create an Azure Kubernetes Fleet Manager resource to group and organize your AKS clusters. You can join any existing AKS cluster to the fleet as a member cluster and later use the metadata of these clusters to orchestrate multi-cluster scenarios like Kubernetes resource propagation and multi-cluster load balancing. Azure Kubernetes Fleet Manager supports joining existing AKS clusters from different resource groups, subscriptions, and regions; they just need to be part of the same Azure AD tenant.
With this, platform admins responsible for many AKS clusters now get a new landing spot in the form of Azure Kubernetes Fleet Manager, from which they can manage their fleet of clusters. One of the goals of Fleet is to let you treat a set of clusters more like a single cluster, so that you can reuse and extend your Kubernetes knowledge and constructs from a single cluster to a fleet of clusters. To achieve this, we chose to expose Fleet operations through Kubernetes itself: the Azure Kubernetes Fleet Manager resource is itself a Kubernetes cluster underneath, providing a hub Kubernetes API from which you can orchestrate scenarios like Kubernetes resource propagation to member clusters and multi-cluster load balancing.
Kubernetes Resource Propagation
Fleet provides ClusterResourcePlacement as a mechanism to control how cluster-scoped Kubernetes resources are propagated to member clusters. ClusterResourcePlacement has two parts:

- Resource selection: Pick which Kubernetes resources (Namespace, ClusterRoleBinding, …) to propagate from the fleet's hub cluster to member clusters, based on Kubernetes-native constructs like <group, version, kind, name> or labels and selectors.
- Target cluster selection: Pick which member clusters of the fleet to propagate the chosen resources to. The target clusters can be all clusters (no policy) or a specific subset of member clusters identified either by name or by labels and selectors.
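As a sketch of how these two parts fit together, here is a ClusterResourcePlacement that propagates a namespace to a labeled subset of member clusters. This assumes the preview fleet.azure.com/v1alpha1 API; the resource name, namespace name, and label used here are hypothetical.

```yaml
apiVersion: fleet.azure.com/v1alpha1
kind: ClusterResourcePlacement
metadata:
  name: web-app-placement          # hypothetical placement name
spec:
  resourceSelectors:
    # Resource selection: pick the cluster-scoped Namespace to propagate
    # via its <group, version, kind, name>
    - group: ""
      version: v1
      kind: Namespace
      name: web-app                # hypothetical namespace
  policy:
    # Target cluster selection: propagate only to member clusters carrying
    # this (hypothetical) label; omit the policy to target all members
    affinity:
      clusterAffinity:
        clusterSelectorTerms:
          - labelSelector:
              matchLabels:
                environment: prod
```

Applying this manifest against the fleet's hub Kubernetes API (for example, with kubectl) is what triggers the propagation to the selected member clusters.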
Before fleet, when platform admins received namespace requirements from their application teams, they would manually triage them and then figure out which AKS clusters in their inventory met those requirements, or whether they needed to create new AKS clusters to place those namespaces. Creating these namespaces across multiple clusters and handing them over to application teams was a manual process. Fleet's ClusterResourcePlacement is an important step toward enabling platform admins to automate placement of their namespaces across their cluster inventory based on the requirements shared by their application teams/tenants.
Multi-cluster load balancing
Large-scale AKS customers often deploy the same set of workloads and services across multiple clusters, potentially even in different regions, and then want to load balance incoming traffic across them. They do this to address problems originating from application or infrastructure failure in a single cluster or region. Other reasons for clusters deployed in multiple regions include localization and region proximity.
Azure Kubernetes Fleet Manager allows you to create a custom resource to indicate that you want to set up Layer 4 multi-cluster load balancing for incoming traffic across the endpoints of a service on multiple member clusters.
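As a sketch of that flow, assuming the preview networking.fleet.azure.com/v1alpha1 API: each member cluster running the workload exports its Service with a ServiceExport, and a MultiClusterService then load balances incoming traffic across the exported endpoints. The service and namespace names below are hypothetical.

```yaml
# Created in each member cluster that runs the Service;
# the name must match the Service being exported
apiVersion: networking.fleet.azure.com/v1alpha1
kind: ServiceExport
metadata:
  name: web-app-service
  namespace: web-app
---
# Sets up Layer 4 load balancing across the exported
# endpoints on all member clusters
apiVersion: networking.fleet.azure.com/v1alpha1
kind: MultiClusterService
metadata:
  name: web-app-service
  namespace: web-app
spec:
  serviceImport:
    name: web-app-service
```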
What’s next on fleet’s roadmap?
Now that the foundational building block of the fleet resource is in place, we have a rich roadmap we are working on over the coming few months:
Lifecycle management of member clusters: Getting your entire fleet of clusters (dev, staging, prod) safely to a new Kubernetes version can be challenging. We plan to provide an aggregated experience for orchestrating upgrades of all member clusters of the fleet in a controlled manner across different environments (dev, staging, prod, or region-based rings) as defined by the user. We also plan to allow declarative creation of AKS clusters from the fleet's Kubernetes API so that platform admins can fully automate their cluster inventory creation and management in response to new requirements they receive from their application teams.
Fully managed hub cluster: With fleet acting as the grouping control plane for scenarios like Kubernetes resource propagation and multi-cluster load balancing, we plan to further solidify its resiliency by adding region failover capability for business continuity and disaster recovery.
Multi-cluster networking: Fleet currently addresses Layer 4 north-south load balancing across services on multiple member clusters. We plan to provide the equivalent Layer 7 north-south HTTPS load balancing solution with Gateway API as the interface. East-west communication and multi-cluster service mesh are on the fleet roadmap too.
Arc-enabled Kubernetes as member clusters: Fleet is currently scoped to joining AKS clusters as member clusters, but it is designed in an extensible way so that the Arc-enabled Kubernetes resource type can be supported under fleet in the future. This will allow us to extend the benefits of fleet to edge and hybrid clusters.
The full roadmap for Azure Kubernetes Fleet Manager can be found here, where you can share your feature requests and scenarios.
Get Started and Next Steps
You can get started with Azure Kubernetes Fleet Manager from the Azure portal or the Azure CLI. Getting started with fleet is as easy as navigating to Azure Kubernetes Fleet Manager in the Azure portal and using the creation wizard to create the Fleet resource and pre-seed a few member clusters from existing AKS clusters you have access to. Visit the quickstart in our product documentation for more details.
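For the CLI path, the flow can be sketched as follows, assuming the preview fleet extension for the Azure CLI; the resource group, fleet, cluster names, and location below are placeholders.

```shell
# Install the (preview) fleet extension for the Azure CLI
az extension add --name fleet

# Create the Fleet resource, which acts as the hub
az fleet create --resource-group myResourceGroup \
  --name myFleet --location eastus

# Join an existing AKS cluster as a member cluster
CLUSTER_ID=$(az aks show --resource-group myResourceGroup \
  --name myAksCluster --query id --output tsv)
az fleet member create --resource-group myResourceGroup \
  --fleet-name myFleet --name member-1 \
  --member-cluster-id "$CLUSTER_ID"
```

Because `az fleet member create` takes the full Azure resource ID of the cluster, member clusters can come from different resource groups, subscriptions, or regions within the same tenant.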
You can learn more about the other AKS announcements at Ignite here.