These days, Kubernetes is one of the hottest conversation topics in the world of cloud and modern applications. For many organizations, the question is no longer if they will use Kubernetes, but when.
This post focuses on the need for standardization rather than on whether Kubernetes meets your business and technical requirements. So, with the assumption established that you already use it, let's talk about a couple of challenges.
Challenge #1 - Sprawl
If you are in the process of modernizing your applications and adopting cloud-native design patterns for "next-gen" scalability, availability, security, and so on, you probably have a good sense of why you need Kubernetes. The challenge, in this case, is the Kubernetes cluster sprawl that is about to hit you.
Whether you are building clusters yourself on-premises using one of the gazillion Kubernetes flavors out there, installing on bare metal, or deploying one of the managed Kubernetes offerings the cloud providers have to offer, the problem remains the same: all of a sudden you are in the business of managing Kubernetes clusters, well, all over the place. I like to call this "my fleet is out of control" :)
Challenge #2 - "I am drifting away…"
You got your clusters, good for you! But how do you keep all these clusters configured the way you left them? Don't you want them all to meet your configuration baseline? You do!
And it's not just the clusters that matter; after all, the applications are what drive the business!
Looking at both situations, you can see a recurring theme: making sure no configuration, whether on the cluster or on the applications deployed to it, drifts away from the baseline. You want this because otherwise you might be facing an outdated application, or a cluster that no longer meets, for example, your security requirements.
Azure Arc enabled Kubernetes + GitOps == Winning
Now that we have laid out these challenges, we can talk about a solution - enter Azure Arc enabled Kubernetes with GitOps configurations.
By extending or "stretching" the Azure Resource Manager (ARM) control plane, we are able to project Kubernetes clusters deployed OUTSIDE of Azure as first-class citizens inside Azure, right next to your existing resources. For example, as you can see in the figure below, Azure Kubernetes Service (AKS) clusters reside next to Azure Arc enabled ones.
By doing so, you get a single interface to rule them all, which is the start of the solution to challenge #1 - the sprawl of clusters.
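To make this concrete, here is a hedged sketch of connecting (projecting) a cluster with the Azure CLI. The resource group and cluster names below are placeholders, and the exact extension and command names have shifted across preview releases, so treat this as a sketch rather than a definitive reference:

```shell
# Add the Azure Arc enabled Kubernetes CLI extension (extension name may vary by release)
az extension add --name connectedk8s

# Connect (project) an existing cluster into Azure.
# "AzureArcTest" and "arc-demo-cluster" are placeholder names for this example.
az group create --name AzureArcTest --location eastus
az connectedk8s connect --name arc-demo-cluster --resource-group AzureArcTest
```

Once connected, the cluster shows up as an Azure resource (of the connected clusters type) that you can tag, query, and govern like any other ARM resource.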
Azure Arc enabled Kubernetes clusters alongside AKS clusters
Projecting the clusters is the fundamental building block; from there, you apply GitOps Configurations to these clusters. Azure Arc enabled Kubernetes with GitOps is not as scary as one might think; the concept and the flow are very straightforward.
Generally speaking, GitOps with Kubernetes is about deploying your applications based on a Git repository, which represents the "source of truth", or baseline, for the app deployment.
It relies on a Kubernetes operator, which in the Azure Arc enabled Kubernetes case is the Flux operator, to "listen" for changes made to the baseline, meaning the repo. When such changes occur, the operator initiates a rolling update of the Kubernetes Deployment, creating the new Pods and terminating the old ones.
This can be done against standard Kubernetes YAML manifests or a Helm chart release, the latter using the Helm Operator in conjunction with the Flux one (the Helm Operator also gets deployed automatically).
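For the Helm path, the Flux v1 Helm Operator watches HelmRelease custom resources in the cluster. A minimal sketch of such a resource, assuming a hypothetical chart living in your own Git repository, might look like this:

```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app            # hypothetical release name
  namespace: my-app
spec:
  releaseName: my-app
  chart:
    # the repo URL, ref, and path below are all placeholders for your own chart
    git: https://github.com/example-org/example-app
    ref: master
    path: charts/my-app
  values:
    replicaCount: 2       # chart values can be overridden inline
```

When the chart or values change in Git, the Helm Operator upgrades the release accordingly.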
Application deployment GitOps flow with Azure Arc enabled Kubernetes
1. An existing Kubernetes cluster is already deployed
2. An Azure Arc enabled Kubernetes connected cluster resource is created
3. The user creates a GitOps Configuration for the Azure Arc enabled Kubernetes cluster
4. The Flux operator (and optionally the Helm Operator) is deployed on the cluster and starts "listening" to the Git repository containing the user's application code
5. The Flux operator initiates the user's application deployment on the cluster, representing the current desired state
6. The user updates the application (creating a new app version) and merges the changes to the repository
7. The Flux operator picks up the change to the Git repository
8. The Flux operator initiates a deployment of the new application version on the cluster while removing the old version's Pods, resulting in a new desired state
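The "create a GitOps Configuration" step in the flow above can be sketched with the Azure CLI. Command and flag names changed a few times during the preview (for example, az k8sconfiguration vs. az k8s-configuration), and the repository URL and resource names here are placeholders, so take this as a hedged sketch:

```shell
# Create a GitOps Configuration on a connected cluster (all names are placeholders).
# This deploys the Flux operator on the cluster, pointed at the Git repository.
az k8s-configuration create \
  --name app-config \
  --cluster-name arc-demo-cluster \
  --resource-group AzureArcTest \
  --cluster-type connectedClusters \
  --repository-url https://github.com/example-org/example-app \
  --operator-instance-name app-config \
  --operator-namespace app-config \
  --operator-params "--git-readonly" \
  --enable-helm-operator
```

From here on, the cluster converges on whatever the repository describes; there is no kubectl apply step for the application itself.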
Cluster-level Configuration vs. Namespace-level Configuration
With a Cluster-level GitOps Configuration, the goal is to have a baseline for the "horizontal" or "management" components deployed on your Kubernetes cluster, which will then be used by your applications. Good examples are ingress controllers, service meshes, security products, monitoring solutions, etc. Having such deployments as part of your GitOps Configuration ensures your cluster meets the cluster baseline standards.
With a Namespace-level GitOps Configuration, the goal is to have the Kubernetes resources deployed only in the selected namespace. The most obvious use case here is simply your application and its respective Pods, Services, ingress routes, etc. Having such deployments as part of your GitOps Configuration ensures your applications meet the application baseline standards.
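In CLI terms, the difference between the two levels boils down to the scope flag on the configuration. The commands below elide the other parameters for brevity, and flag spellings may differ between preview releases:

```shell
# Cluster-level configuration: the operator may deploy resources cluster-wide,
# e.g. an ingress controller or monitoring agents
az k8s-configuration create ... --scope cluster

# Namespace-level configuration: the operator is restricted to the namespace
# it is deployed into, e.g. a single application's resources
az k8s-configuration create ... --scope namespace
```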
So, as you can see, by applying the same GitOps Configurations across all your Kubernetes clusters managed by Azure Arc, you solve challenge #2: you are able to govern potential cluster- and application-level configuration and versioning drift.
Get Started Today!
In this post we briefly touched on the power of using Azure Arc enabled Kubernetes alongside its native GitOps Configuration capabilities. Having all your Kubernetes clusters projected as Azure resources, and having the same GitOps Configurations applied to all of them, gives you much better control over both fleet management and deployment baselines, as well as drift avoidance.