If you missed the previous parts they can be found here:
Building Microservices with Azure Kubernetes Service and Azure DevOps - Part 1
Building Microservices with Azure Kubernetes Service and Azure DevOps - Part 2
Building Microservices with Azure Kubernetes Service and Azure DevOps - Part 3
Clearly there are a couple of things we need to fix for our microservice to become a regular web page:
In this author's opinion, when architects' eyes turn glossy while waxing lyrical about microservices, they often omit minor details like these. It certainly isn't impossible to fix, but developers usually don't have to deal with it when deploying classic monoliths, and mentally it means more "boxes" to keep track of. It may also spark a discussion of its own: is this something the developers can be trusted to handle themselves, or should it be handled by the operations department?
In a larger organization where the cluster is already up and running, and developers are simply handed instructions on how to get going, it may indeed already have been taken care of. For the purposes of this guide, however, the reader could be running a one-man shop and need to do things without involving operations.
While there are network admins who have enough public IPv4 addresses to supply an entire town, the most common pattern is to have a limited number of external facing addresses and port openings exposed and have all traffic to the back-end flow through some sort of aggregator. (The term "aggregator" is used because it could be any combination of firewalls, load balancers, reverse proxies and routers.) Kubernetes is no different in this sense. You can assign a public IP for each service if your cluster provider doesn't limit you, but you probably don't have a good design if you set it up like that.
The Kubernetes construct for controlling inbound traffic is called an ingress controller, so the steps in this guide will look at ways to configure this component.
When the cluster was created the parameter --enable-addons http_application_routing was a part of the command. This sets up some DNS integration points automagically, and if you browse to the companion resource group you will have a DNS zone preconfigured.
Figure 70 AKS Resource Groups
Notice that Azure always creates a second resource group prefixed with MC_ that contains the individual resources for an AKS cluster.
Figure 71 AKS Companion Resource Group Contents
This means that services deployed can have an FQDN working across the internet without manually configuring DNS records.
There are two possible approaches you can take to acquire an external IP address and a DNS name.
The first, and probably best, approach if you stick to the CI/CD pipelines is editing the Helm chart that was checked in. In the initial check-in, ingress was set to false, so no ingress was created.
You can change values.yaml accordingly:
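As a sketch (the exact structure depends on how the chart was scaffolded, and the host name is a placeholder following the aksapp.io zone from the companion resource group), the ingress section of values.yaml could look something like this:

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
  path: /
  hosts:
    - aksdotnetcoder.dns-zone.aksapp.io
  tls: []
```

Re-running the CI/CD pipeline after checking this in will then create the ingress for you.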
If you want the more direct approach, and skip the Helm chart, you can create a file called aksdotnet-dns.yaml with the following contents:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aksdotnet
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aksdotnet
  template:
    metadata:
      labels:
        app: aksdotnet
    spec:
      containers:
      - name: aksdotnet
        image: acrname.azurecr.io/aksdotnet:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aksdotnet
spec:
  ports:
  - port: 80
  selector:
    app: aksdotnet
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aksdotnet
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: aksdotnetcoder.dns-zone.aksapp.io
    http:
      paths:
      - backend:
          serviceName: aksdotnet
          servicePort: 80
        path: /
Note that if you apply this in parallel with a deployment based on the Helm charts, make sure the names are unique so you don't get a conflict between the two.
Replace acrname with your Azure Container Registry name, and dns-zone based on the resource in the companion resource group.
Apply with kubectl create -f aksdotnet-dns.yaml.
Figure 72 Kubectl create aksdotnet-dns
Wait a couple of minutes for Azure to perform the task, then run kubectl get ingresses to get an output with an FQDN and an external IP address.
Figure 73 Kubectl get external IP address acquired by Application Routing
You can now test by opening the address in a browser, and you will hopefully see the AKS Web App.
Figure 74 AKS Web App
It should be noted that you can install a Helm chart without going through Azure DevOps, but it is not recommended to mix options in that way.
There is, however, one flaw with this method: it does not support custom domain names. URLs with random GUIDs aren't exactly user-friendly, so this will only work for testing.
Those familiar with DNS might suggest using CNAME records. Basically having www.contoso.com point to www.randomnumber.northeurope.aksapp.io. This will work, but only for HTTP. If you apply certificates and HTTPS this approach will break unless you take extra steps to ensure the certificate matches the CNAME record instead of the A record linked to the IP address. While this is a workaround it adds extra manual steps to make sure the DNS zones match up, so if possible avoid it.
It is possible to skip DNS handling within Azure and just acquire the necessary external IP address that you can point to in your existing DNS infrastructure, however this guide will demonstrate a solution for integrating custom domain names with Azure DNS.
If you followed the guide up to this point you need to do one of two things before proceeding with custom DNS handling:
The second approach is more time-consuming, but provides more learning by repeating the steps. As it also creates a "clean" cluster, it is the basis for the steps below. (In other words - use the first approach at your own peril.)
The Azure DNS zone should ideally be in a resource group separate from AKS.
az group create --name AKSCustomDNSrg --location northeurope
Figure 75 Creating Azure DNS resource group
For automatic creation of DNS records we need to create a service principal that the Kubernetes cluster can use for this purpose. Run the following command:
az ad sp create-for-rbac --role="Contributor"
Figure 76 Creating Azure DNS Service Principal
Make a note of the attributes returned and use these to create a file called azure.json with contents similar to this:
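Assuming the format expected by the ExternalDNS Azure provider, azure.json maps the appId from the service principal output to aadClientId and the password to aadClientSecret; the placeholders below must be replaced with your own values:

```json
{
  "tenantId": "<tenant from the sp output>",
  "subscriptionId": "<your subscription id>",
  "resourceGroup": "AKSCustomDNSrg",
  "aadClientId": "<appId from the sp output>",
  "aadClientSecret": "<password from the sp output>"
}
```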
Create the Azure DNS zone resource:
az network dns zone create -g AKSCustomDNSrg -n contoso.com
Figure 77 Creating Azure DNS Zone
Apply the configuration created in the azure.json file with the following command:
kubectl create secret generic azure-config-file --from-file=azure.json
Figure 78 Kubectl create dns config
When using application routing, an ingress controller was created automatically for us. When opting to deploy without this add-on, it is necessary to install one separately. This can be done with the following command:
helm install stable/nginx-ingress --namespace kube-system --set controller.replicaCount=2
While AKS is hosted in Azure, it doesn't automatically have the means to update Azure DNS, so this is enabled by deploying a component called ExternalDNS.
Create a file called external-dns.yaml with the following contents:
- apiGroups: [""]
- apiGroups: [""]
- apiGroups: ["extensions"]
- apiGroups: [""]
- kind: ServiceAccount
- name: external-dns
- name: azure-config-file
- name: azure-config-file
Apply with kubectl create -f external-dns.yaml.
Figure 79 Kubectl create external-dns
The ingress controller was installed without ever looking at its Helm chart, but the definition can be edited directly in the cluster. (The ingress controller is unlikely to be re-deployed as often as the microservices, so fully automating this may not be needed, but that is largely a design choice.)
Locate the nginx-controller service in the dashboard. (Names are assigned automagically, so your cluster will probably not match the screenshots.)
Figure 80 nginx controller in k8s dashboard
Hit the three dots to get a context menu and select View/edit YAML.
Figure 81 nginx controller View/edit YAML
Add the following annotation to the YAML:
Replace with the DNS name of your choice. You can use either contoso.com (your own domain name), or cloudapp.azure.com (supplied by Microsoft) here if you like.
It goes in the metadata section like this:
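Assuming the standard ExternalDNS hostname annotation, and with contoso.com as the placeholder domain, the metadata section could look something like this:

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: contoso.com
```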
Figure 82 nginx controller YAML definition
Once this is applied the DNS records in Azure DNS are automatically updated.
There is actually no requirement to use a domain you own or control, since by default Azure DNS is internal to Azure. You can delegate a domain you own by pointing its name servers to Azure, but as this is specific to your registrar the necessary steps are not included here. Azure DNS does not yet support DNSSEC, though, so there might be top-level domains where it's not so easy to delegate to Azure DNS.
It is highly recommended to enable support for TLS/SSL certificates for your web services. This is enabled through a Kubernetes plugin called Cert-Manager and uses Let’s Encrypt for issuing certificates. The steps will be similar whether you are using the auto-generated domain names or setting up a custom domain. For this guide it was tested with a custom domain name.
Creating DNS records isn’t the only thing required for having the right certificate issued. You also need to create a “link” between the IP address of your ingress controller and the desired FQDN. To make this work you need to look up your external IP address, and run the following CLI script:
# Public IP address of your ingress controller
IP="<external IP from kubectl get ingresses>"

# Name to associate with the public IP address
DNSNAME="<dns-name>"

# Get the resource ID of the public IP address
PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv)

# Update the public IP address with the DNS name
az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME
The next step is to install Cert-Manager through Helm.
helm install stable/cert-manager --set ingressShim.defaultIssuerName=letsencrypt-staging --set ingressShim.defaultIssuerKind=ClusterIssuer
In these samples, the staging environment of Let’s Encrypt is used, but this can be changed to letsencrypt-prod instead once everything has been verified working.
There are a couple of configuration components needed to issue the certificates. The first thing is an issuer of certificates within Kubernetes. Here a Cluster Issuer is used, which means this component can be used for all services in the cluster. It is also possible to add an issuer that is local to a specific namespace, but this guide hasn’t covered that aspect of k8s so a further elaboration of this is left as an exercise for the reader. Create a file called cluster-issuer.yaml with the following contents:
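Assuming the cert-manager API version current at the time of writing (certmanager.k8s.io/v1alpha1) and the Let's Encrypt staging endpoint, cluster-issuer.yaml could look something like this (user@contoso.com is a placeholder):

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: user@contoso.com
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}
```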
Fill in a working email address to be notified when the certificate is nearing expiration time.
Enable with kubectl create -f cluster-issuer.yaml
You also need to add the certificate object for a specific host name. Create a file called certificates.yaml with the following contents (replacing contoso.com with your own domain):

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: tls-secret
spec:
  secretName: tls-secret
  dnsNames:
  - contoso.com
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - contoso.com
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer

Apply with kubectl create -f certificates.yaml
Edit the helm-chart for the service:
# Default values for helm-charts.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

image:
  repository: acrname.azurecr.io/aksdotnet
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    # kubernetes.io/ingress.class: addon-http-application-routing
  path: /
  hosts:
    - contoso.com
  tls:
    - secretName: tls-secret
      hosts:
        - contoso.com
The important part that has changed is the annotations section, which is where you hook into the components that do the heavy lifting.
This guide covered a basic scenario for getting started with Azure Kubernetes Service along with Azure DevOps for Continuous Integration (CI) and Continuous Deployment (CD).
All the important core topics like simple networking, DNS, certificates, etc. were covered, and by following the guide one should have a proof-of-concept deployment working. This should enable the reader to further explore Kubernetes.
All instructions and screenshots were correct at the time of writing this publication, but the cloud moves at high speed so unfortunately it is possible that minor details have changed.
There are several topics that should be studied further by the reader:
For more information, see the following resources:
The scripts and yaml files used in this guide can be found at https://github.com/ahelland/AKS-Guide