“Kubernetes allows us to develop cloud-native, microservices applications” - how many times have you heard that?!
Yes, Kubernetes is the platform for platforms and today’s de-facto standard for container orchestration, but by itself, Kubernetes is not enough.
The sheer number of integrations that are not directly related to Kubernetes is, if you think about it, what actually makes the application function. Components, services, and processes such as data and state management, secret stores and certificates, traffic routing, version control, service discovery, secured communication, and application rollout are just a subset of the things one needs to consider when developing cloud-native, microservices applications.
The Multi-Cloud Kubernetes and portability challenge
The first challenge that comes to mind is the natural expectation that such applications can run in any cloud, or more accurately, on any cloud's Kubernetes distribution. Whether it’s on Azure Kubernetes Service (AKS) or any other distribution, in an ideal world, a developer should not need to factor this in when developing the app.
Kubernetes by itself is not really the issue; it is the set of integrations needed to make everything tick. The challenge is that every cloud provider has its own solutions for these peripheral integrations, not to mention that on-premises Kubernetes distributions do not have the breadth and scalability the hyperscale public cloud providers offer.
Portability == Hybrid
Being able to develop cloud-native, microservice applications and move those around easily is the holy grail of the promise that is application portability.
To overcome these challenges, the choice of cloud provider and Kubernetes distribution must be taken out of the equation, along with the application integration services that support the application.
Being able to “move around” really means being able to use existing development patterns and processes without being constrained by the underlying infrastructure. Whether the application is located on-premises or in a multi-cloud environment, the consistency of a Kubernetes-based application deployment and its services integrations is key. Enter Azure Arc…
Azure Arc extensibility model
The Azure Arc extensibility model allows for any Azure Arc-enabled Kubernetes cluster and both stateless and stateful applications running on it to take advantage of the Azure services integrations offered via Cluster extensions. You can think of extensions for Azure Arc-enabled Kubernetes as being split into two categories: extensions for Azure Arc-enabled infrastructure services and extensions for Azure Arc-enabled services.
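As a sketch of how an extension reaches the cluster, the Azure CLI's k8s-extension command deploys a cluster extension onto a connected cluster (the cluster and resource group names below are hypothetical; the Flux extension type is used as an example):

```shell
# Install the Flux (GitOps) cluster extension on an Azure Arc-enabled
# Kubernetes cluster. "bookstore-cluster" and "bookstore-rg" are
# illustrative placeholders.
az k8s-extension create \
  --name flux \
  --extension-type microsoft.flux \
  --cluster-type connectedClusters \
  --cluster-name bookstore-cluster \
  --resource-group bookstore-rg
```

The same command shape applies to the other extensions discussed below, with a different --extension-type value.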
To further illustrate the Azure Arc extensibility model, let’s use a simple “Bookstore” application, written in Golang. This microservice application has many components and services, which also means dependencies on external integrations.
One extension layer to rule them all
When developing these types of cloud-native, microservices applications, questions like these keep coming up:
“Where should the database be stored?”
“How should we handle secrets management?”
“How should we think about A/B testing and service observability?”
“What solution should we use for pub/sub?”
“What should be our CI/CD approach in a Kubernetes environment?”
This set of questions is just a fraction of what needs to be considered in a cloud-native, Kubernetes-based development pattern. Using the same extensions via Arc-enabled Kubernetes, these questions can be answered regardless of the cluster's backend infrastructure or where it is physically located.
GitOps with Flux
The GitOps (Flux) extension on Azure Arc-enabled Kubernetes implements Flux, a popular open-source tool set. Flux is an operator that automates GitOps configuration deployments in your cluster. Flux provides support for common file sources (Git repositories, Helm repositories, Buckets) and template types (YAML, Helm, and Kustomize). Flux also supports multi-tenancy and deployment dependency management, among other features. Read more about the design elements around GitOps with Azure Arc-enabled Kubernetes.
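As an illustrative sketch, a Flux configuration pairs a source with a reconciliation object. The resource names and repository URL below are hypothetical, and the API versions vary with the installed Flux version:

```yaml
# A Git source Flux watches for changes.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: bookstore-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/bookstore-manifests  # hypothetical repo
  ref:
    branch: main
---
# A Kustomization that applies manifests from that source to the cluster.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: bookstore
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: bookstore-repo
  path: ./deploy
  prune: true  # remove cluster objects deleted from the repo
```

With this in place, pushing a manifest change to the repository is all it takes to roll it out to every cluster carrying the same configuration.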
Observability with service mesh
Open Service Mesh (OSM) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
OSM runs an Envoy-based control plane on Kubernetes, can be configured with SMI APIs, and works by injecting an Envoy proxy as a sidecar container next to each instance of your application. OSM provides core service mesh features like:
mTLS traffic encryption between microservices
Traffic splitting for canary and blue/green deployments
Fine-grained access control policies for microservices communicating over HTTP, TCP, and gRPC
Observability for application performance
Traffic control for ingress with various tools such as Contour
Progressive delivery with Flagger
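Traffic splitting, for example, is expressed through the SMI TrafficSplit API. A hedged sketch shifting 10% of traffic to a new version of a hypothetical bookstore service (service names and the exact API version depend on your deployment):

```yaml
# Route 90% of requests to v1 and 10% to v2 of the "bookstore" service.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: bookstore-split
spec:
  service: bookstore        # the root (apex) service clients call
  backends:
  - service: bookstore-v1
    weight: 90
  - service: bookstore-v2
    weight: 10
```

Gradually increasing the v2 weight while watching the mesh's observability signals is the essence of a canary rollout.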
Secrets management with Azure Key Vault
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume. For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets.
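A sketch of what the integration looks like on the cluster side: a SecretProviderClass tells the CSI driver which vault and objects to fetch. The vault, tenant, and secret names below are hypothetical:

```yaml
# Fetch a single secret from an Azure Key Vault named "bookstore-kv"
# (illustrative names throughout).
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: bookstore-secrets
spec:
  provider: azure
  parameters:
    keyvaultName: "bookstore-kv"
    tenantId: "<tenant-id>"        # the vault's Azure AD tenant
    objects: |
      array:
        - |
          objectName: db-password  # secret name in the vault
          objectType: secret
```

A pod then mounts this class through a CSI volume with driver secrets-store.csi.k8s.io, and the secret appears as a file in the container; the secret itself never lives in the cluster's etcd.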
Azure Arc-enabled SQL Managed Instance
Azure Arc-enabled SQL Managed Instance is deployed on the cluster via the Azure Arc-enabled data services cluster extension. It is the same data service as Azure SQL Managed Instance, but created on Azure Arc-enabled Kubernetes clusters running on any cloud. It has close compatibility with the latest SQL Server database engine, and it enables existing SQL Server customers to lift and shift their applications to Azure Arc data services with minimal application and database changes while maintaining data sovereignty.
At the same time, SQL Managed Instance includes built-in management capabilities that drastically reduce management overhead. In the future, other database engines such as Azure PostgreSQL will also be available through the same extension model, making it possible to have a unified management API across Arc-enabled data services.
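Once the data services extension and a data controller are in place, creating an instance is a single CLI call. A hedged sketch with hypothetical instance and namespace names:

```shell
# Create an Arc-enabled SQL Managed Instance directly against the
# cluster (indirect mode). "bookstore-sql" and "arc-data" are
# illustrative placeholders.
az sql mi-arc create \
  --name bookstore-sql \
  --k8s-namespace arc-data \
  --use-k8s
```

The instance runs as pods on the cluster, so the same command works whether that cluster sits on-premises or in another cloud.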
Microservices development with Dapr
Distributed Application Runtime (Dapr) is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless, and stateful applications that run on the cloud and edge, while embracing the diversity of languages and developer frameworks.
Leveraging the benefits of a sidecar architecture, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic. It helps with solving problems around services calling other services reliably and securely, building event-driven apps with pub-sub, and building applications that are portable across multiple cloud services and hosts (e.g., Kubernetes vs. a VM).
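Because the sidecar exposes plain HTTP, application code stays free of cloud-specific SDKs. A minimal Go sketch publishing an event through Dapr's pub/sub HTTP endpoint; the component name "orders-pubsub" and topic "orders" are hypothetical, and the sidecar is assumed to listen on its default port 3500:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// daprPublishURL builds the Dapr sidecar's publish endpoint for a given
// pub/sub component and topic. The backing broker (Redis, Service Bus,
// etc.) is configured in the component, not in application code.
func daprPublishURL(port int, component, topic string) string {
	return fmt.Sprintf("http://localhost:%d/v1.0/publish/%s/%s",
		port, component, topic)
}

func main() {
	url := daprPublishURL(3500, "orders-pubsub", "orders")
	payload := []byte(`{"orderId": 42}`)

	// POST the event to the local sidecar, which forwards it to the broker.
	resp, err := http.Post(url, "application/json", bytes.NewBuffer(payload))
	if err != nil {
		fmt.Println("publish failed (is the Dapr sidecar running?):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("published, status:", resp.StatusCode)
}
```

Swapping the pub/sub broker, or moving the app from AKS to an on-premises cluster, requires no change to this code, only to the Dapr component definition.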
Developing cloud-native, microservice applications in a Kubernetes environment does not end with Kubernetes. The integration points and the dependency on external services present challenges that only grow bigger when developing against multiple cloud providers and Kubernetes distributions.
With Azure Arc-enabled Kubernetes and the extensibility model, a set of cluster extensions can be deployed to serve these application integration requirements regardless of the Kubernetes distribution and its location, whether on-premises or in a multi-cloud environment.