Building Microservices with Azure Kubernetes Service and Azure DevOps - Part 4

If you missed the previous parts, they can be found here:
Building Microservices with Azure Kubernetes Service and Azure DevOps - Part 1
Building Microservices with Azure Kubernetes Service and Azure DevOps - Part 2
Building Microservices with Azure Kubernetes Service and Azure DevOps - Part 3

 

Fixing remaining Ops issues

Clearly, there are a few things we need to fix for our microservice to become a regular web page:

  • We need an external IP address accessible over the Internet.
  • We would like to assign a DNS name to the IP address.
  • We should assign a certificate to our service for encrypting the traffic.

In this author's opinion, when architects' eyes become glossy while waxing about microservices, they often omit minor details like these. It certainly isn't impossible to fix, but developers usually don't have to deal with this when deploying classic monoliths, and mentally it means more "boxes" to keep track of. It may also lead to a discussion of its own: is this something the developers can be trusted to handle themselves, or should it be handled by the operations department?

 

In a larger organization where the cluster is already up and running, and developers are just handed instructions on how to get going, these things may indeed already have been taken care of. But for the context of this guide the reader could be running a one-man shop and needs to do things without involvement from operations.

 

While there are network admins who have enough public IPv4 addresses to supply an entire town, the most common pattern is to expose a limited number of external-facing addresses and port openings, and have all traffic to the back end flow through some sort of aggregator. (The term "aggregator" is used because it could be any combination of firewalls, load balancers, reverse proxies and routers.) Kubernetes is no different in this sense. You can assign a public IP for each service if your cluster provider doesn't limit you, but you probably don't have a good design if you set it up like that.

 

The Kubernetes construct for controlling inbound traffic is called an ingress controller, so the steps in this guide will look at ways to configure this component.

 

HTTP Application Routing

When the cluster was created the parameter --enable-addons http_application_routing was part of the command. This sets up some DNS integration points automagically, and if you browse to the companion resource group you will find a DNS zone preconfigured.

 

Figure 70 AKS Resource Groups

 

Notice that Azure always creates a second resource group prefixed with MC_ that contains the individual resources for an AKS cluster.

Figure 71 AKS Companion Resource Group Contents

 

This means that services deployed can have an FQDN working across the internet without manually configuring DNS records.

 

There are two possible approaches you can take to acquire an external IP address and a DNS name.

The first, and probably the best approach if you stick to the CI/CD pipelines, is editing the Helm chart that was checked in. In the initial check-in ingress was set to false, so no ingress was created either.

 

You can change values.yaml accordingly:

 

ingress:
  enabled: true
  annotations: {
    kubernetes.io/ingress.class: addon-http-application-routing
  }
  path: /
  hosts:
    - aksdotnetcoder.dns-zone.northeurope.aksapp.io

 

If you want the more direct approach, and skip the Helm chart, you can create a file called aksdotnet-dns.yaml with the following contents:

 

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aksdotnetcoder
spec:
  template:
    metadata:
      labels:
        app: aksdotnet
    spec:
      containers:
      - name: aksdotnet
        image: "acrname.azurecr.io/aksdotnetcoder:latest"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aksdotnet
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: aksdotnet
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aksdotnet
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: aksdotnetcoder.dns-zone.northeurope.aksapp.io
    http:
      paths:
      - backend:
          serviceName: aksdotnet
          servicePort: 80
        path: /

 

Note that if you apply this in parallel with a deployment based on the Helm charts, make sure the names are unique so you don't get a conflict between the two.

 

Replace acrname with your Azure Container Registry name, and dns-zone with the name of the DNS zone in the companion resource group.
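If you don't want to copy the zone name from the portal, it can also be retrieved with the Azure CLI (assuming the cluster name and resource group used earlier in this guide):

az aks show --resource-group aksdotnetcoder --name aksdotnet --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName --output tsv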

 

Apply with kubectl create -f aksdotnet-dns.yaml.

Figure 72 Kubectl create aksdotnet-dns

 

Wait a couple of minutes for Azure to perform the task, and run kubectl get ingresses to get an output with an FQDN and an external IP address.

Figure 73 Kubectl get external IP address acquired by Application Routing

 

You can now test by opening the address in a browser, and you will hopefully see the AKS Web App.

Figure 74 AKS Web App
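If you prefer the command line, a quick curl against the same address should return an HTTP 200 (the host name below assumes the zone name from the earlier step):

curl -I http://aksdotnetcoder.dns-zone.northeurope.aksapp.io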

 

It should be noted that you can install a Helm chart without going through Azure DevOps, but it is not recommended to mix options in that way.

 

Custom Domain Names

There is, however, one flaw with this method: it does not support custom domain names. Having URLs with random GUIDs isn't exactly user-friendly, so this will only work for testing.

 

Those familiar with DNS might suggest using CNAME records: basically having www.contoso.com point to www.randomnumber.northeurope.aksapp.io. This will work, but only for HTTP. If you apply certificates and HTTPS, this approach will break unless you take extra steps to ensure the certificate matches the CNAME record instead of the A record linked to the IP address. While this is a workaround, it adds extra manual steps to make sure the DNS zones match up, so avoid it if possible.
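As a sketch of the problem, a lookup of a hypothetical www.contoso.com CNAME would resolve in two steps, while the certificate is requested for the host name typed into the browser:

dig +noall +answer www.contoso.com
# Hypothetical output:
# www.contoso.com.                         3600 IN CNAME www.randomnumber.northeurope.aksapp.io.
# www.randomnumber.northeurope.aksapp.io.  300  IN A     40.0.0.1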

 

It is possible to skip DNS handling within Azure and just acquire the necessary external IP address to point at from your existing DNS infrastructure; however, this guide will demonstrate a solution for integrating custom domain names with Azure DNS.

 

If you followed the guide up to this point you need to do one of two things before proceeding with custom DNS handling:

  • Uninstall the HTTP Application Routing add-on with the following command:
    az aks disable-addons --addons http_application_routing --name aksdotnet --resource-group aksdotnetcoder --no-wait
  • Delete the cluster and re-create it without the --enable-addons http_application_routing parameter. (Remember to re-apply the yaml files afterwards.)

The second approach is more time-consuming, but provides more learning by repeating the steps. As it also creates a "clean" cluster, it is the basis for the steps below. (In other words: use the first approach at your own peril.)

 

The Azure DNS zone should ideally be in a resource group separate from AKS.

az group create --name AKSCustomDNSrg --location northeurope

Figure 75 Creating Azure DNS resource group

 

For automatic creation of DNS records we need to create a service principal that the Kubernetes cluster can use for this purpose. Run the following command:

 

az ad sp create-for-rbac --role="Contributor" \
  --scopes="/subscriptions/subscription-guid/resourceGroups/AKSCustomDNSrg" \
  -n AKSCustomDnsServicePrincipal

 

Figure 76 Creating Azure DNS Service Principal

 

Make a note of the attributes returned and use these to create a file called azure.json with contents similar to this:

 

{
  "tenantId": "tenant-guid",
  "subscriptionId": "subscription-guid",
  "aadClientId": "appId",
  "aadClientSecret": "password",
  "resourceGroup": "AKSCustomDNSrg"
}
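If you need to look up the tenant and subscription IDs, the Azure CLI can supply them (tenantId and id are standard fields in the az account show output):

az account show --query "{tenantId:tenantId, subscriptionId:id}" --output json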

 

Create the Azure DNS zone resource:

az network dns zone create -g AKSCustomDNSrg -n contoso.com

 

Figure 77 Creating Azure DNS Zone

 

Apply the configuration created in the azure.json file with the following command:
kubectl create secret generic azure-config-file --from-file=azure.json
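You can verify that the secret was stored correctly; the output should show a single data entry named azure.json:

kubectl describe secret azure-config-file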

 

Figure 78 Kubectl create dns config

 

When using the HTTP application routing add-on, an ingress controller was created automatically for us. When opting to deploy without this add-on, it is necessary to install one separately. This can be done with the following command:
helm install stable/nginx-ingress --namespace kube-system --set controller.replicaCount=2
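It may take a few minutes for Azure to provision a public IP address for the controller. You can watch for it with the label selector the stable/nginx-ingress chart applies:

kubectl get service -l app=nginx-ingress --namespace kube-system --watch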

 

While AKS is hosted in Azure, it doesn't automatically have the means to update Azure DNS, so this is enabled by employing a component called ExternalDNS.

 

Create a file called externaldns.yaml with the following contents:

 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:v0.5.8
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=contoso.com
        - --provider=azure
        - --azure-resource-group=AKSCustomDNSrg
        volumeMounts:
        - name: azure-config-file
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: azure-config-file
        secret:
          secretName: azure-config-file

 

Apply with kubectl create -f externaldns.yaml.

 

Figure 79 Kubectl create external-dns
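To confirm that ExternalDNS starts and is able to authenticate with the service principal, you can tail the logs of the deployment defined above:

kubectl logs -f deployment/external-dns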

 

The ingress controller was installed without seeing the Helm chart for it, but the definition can be changed in the cluster. (The ingress controller is not likely to be re-deployed as often as the microservices, so it might not need to be fully automated, but that is largely a design choice.)

 

Locate the nginx-controller service in the dashboard. (Names are assigned automagically, so your cluster will probably not match the screenshots.)

Figure 80 nginx controller in k8s dashboard

 

Hit the three dots to get a context menu and select View/edit YAML.

Figure 81 nginx controller View/edit YAML

 

Add the following annotation to the YAML:

 

"annotations": {
  "external-dns.alpha.kubernetes.io/hostname": "aksdotnet.northeurope.cloudapp.azure.com"
}

 

Replace this with the DNS name of your choice. You can use either contoso.com (your own domain name) or cloudapp.azure.com (supplied by Microsoft) here if you like.

 

It goes in the metadata section like this:

Figure 82 nginx controller YAML definition

 

Once this is applied, the DNS records in Azure DNS are automatically updated.
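You can verify by listing the record sets in the zone; an A record and a TXT record for the annotated host name should appear after a few minutes:

az network dns record-set list --resource-group AKSCustomDNSrg --zone-name contoso.com --output table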

 

There is actually no requirement to use a domain you own or control, since by default Azure DNS is internal to Azure. You can delegate a domain you own by pointing its name servers to Azure; as this is specific to your registrar, the necessary steps are not included here. Note that Azure DNS does not yet support DNSSEC, so there might be top-level domains where delegating to Azure DNS is not so easy.

 

Certificate support

It is highly recommended to enable support for TLS/SSL certificates for your web services. This is done through a Kubernetes add-on called cert-manager, which uses Let's Encrypt for issuing certificates. The steps are similar whether you are using the auto-generated domain names or setting up a custom domain; for this guide it was tested with a custom domain name.

 

Creating DNS records isn’t the only thing required for having the right certificate issued. You also need to create a “link” between the IP address of your ingress controller and the desired FQDN. To make this work you need to look up your external IP address, and run the following CLI script:

 

# Public IP address of the ingress controller
IP="<ingress-external-ip>"

# Name to associate with the public IP address
DNSNAME="aksdotnet"

# Get the resource ID of the public IP address
PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv)

# Update the public IP address with the DNS name
az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME

 

The next step is to install cert-manager through Helm.

 

helm install stable/cert-manager --set ingressShim.defaultIssuerName=letsencrypt-staging --set ingressShim.defaultIssuerKind=ClusterIssuer
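Before proceeding, check that the cert-manager pod is running (the label below is an assumption based on what the stable/cert-manager chart typically applies):

kubectl get pods -l app=cert-manager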

 

In these samples the staging environment of Let's Encrypt is used, but this can be changed to letsencrypt-prod once everything has been verified as working.

 

There are a couple of configuration components needed to issue the certificates. The first is an issuer of certificates within Kubernetes. Here a ClusterIssuer is used, which means this component can be used by all services in the cluster. It is also possible to add an issuer that is local to a specific namespace, but this guide hasn't covered that aspect of k8s, so further elaboration is left as an exercise for the reader. Create a file called cluster-issuer.yaml with the following contents:

 

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: user@contoso.com
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}

 

Fill in a working email address to be notified when the certificate is nearing expiration time.

 

Enable with kubectl create -f cluster-issuer.yaml
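To confirm the issuer registered an account with the ACME server, inspect it and look for a ready/registered status in the output:

kubectl describe clusterissuer letsencrypt-staging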

 

You also need to add the certificate object for a specific host name. Create a file called certificate.yaml with the following contents:

 

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: tls-secret
spec:
  secretName: tls-secret
  dnsNames:
  - aksdotnet.northeurope.cloudapp.azure.com
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - aksdotnet.northeurope.cloudapp.azure.com
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
 

Apply with kubectl create -f certificate.yaml
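Issuance can take a few minutes since Let's Encrypt needs to complete the http01 challenge. Progress can be followed with:

kubectl describe certificate tls-secret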

 

Edit values.yaml to add the host name and the TLS section:

 

  hosts:
    - aksdotnet.contoso.com
  tls:
    - secretName: tls-secret
      hosts:
        - aksdotnet.contoso.com

 

The complete values.yaml for the service's Helm chart should then look like this:

 

# Default values for helm-charts.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: aksdotnetacr.azurecr.io/aksdotnetcoder
  tag: latest
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 443

ingress:
  enabled: true
  annotations: {
    nginx.ingress.kubernetes.io/rewrite-target: /,
    kubernetes.io/ingress.class: nginx,
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    # kubernetes.io/ingress.class: addon-http-application-routing
  }
  path: /
  hosts:
    #- chart-example.local
    #- aksdotnetcoder.3f38909be204470da03e.northeurope.aksapp.io
    - aksdotnet.contoso.com
  tls:
    - secretName: tls-secret
      hosts:
        - aksdotnet.contoso.com

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

 

The important change is the annotations section, which is where you hook into the components that do the heavy lifting.
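Once the pipeline has deployed the updated chart, you can test the HTTPS endpoint. Note that the Let's Encrypt staging environment issues certificates from an untrusted test CA, so use -k with curl (or expect a browser warning); the host name assumes the values above:

curl -kv https://aksdotnet.contoso.com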

 

Conclusion

This guide covered a basic scenario for getting started with Azure Kubernetes Service along with Azure DevOps for Continuous Integration (CI) and Continuous Deployment (CD).

 

Core topics like basic networking, DNS, and certificates were covered, and by following the guide one should have a working proof-of-concept deployment. This should enable the reader to explore Kubernetes further.

 

All instructions and screenshots were correct at the time of writing this publication, but the cloud moves at high speed so unfortunately it is possible that minor details have changed.

 

There are several topics the reader may want to study further.

Learn more

For more information, see the following resources:

The scripts and YAML files used in this guide can be found at https://github.com/ahelland/AKS-Guide
