Building Microservices with Azure Kubernetes Service and Azure DevOps - Part 2

If you missed the first part it can be found here: 
Building Microservices with Azure Kubernetes Service and Azure DevOps - Part 1

 

After going through part 1 the preparatory steps are in place, and you should be ready to move on to creating a k8s cluster and doing some initial configuration of permissions.

Creating the AKS cluster

The first step for using the Azure CLI is logging in:

az login


Figure 26 az login

 

If you have multiple subscriptions in Azure you might need to use

az account list
az account set --subscription xyz

to make sure you're working on the right one. (Check the isDefault attribute to see which one is currently the default.)
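As a convenience, the CLI's built-in JMESPath support can show just the subscription currently marked as the default (the column names below are only for readability):

az account list --query "[?isDefault].{Name:name, SubscriptionId:id}" --output table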

 

A resource group to contain the AKS instance is also needed. Technically it doesn't matter which location you deploy the resource group to, but the suggestion is to pick a location that supports AKS clusters and stick with it throughout the setup.

az group create --name aksdotnetcoder --location northeurope

Next comes creation of the AKS cluster:

 

az aks create --name aksdotnet --resource-group aksdotnetcoder --node-count 1 --generate-ssh-keys --enable-addons http_application_routing --aad-server-app-id x --aad-server-app-secret x --aad-client-app-id x --aad-tenant-id x --kubernetes-version 1.x.y --node-vm-size Standard_DS1_v2

 

AZ_CLI_02.png

Figure 27 az aks create

Note: Some of the documentation articles online will include --enable-rbac as an argument. This has been deprecated, and by default all AKS clusters created with the Azure CLI have RBAC enabled unless you specifically disable it with the inverse parameter --disable-rbac.

 

A specific version of Kubernetes is provided in the parameter list here even though that seems like a low-level detail when dealing with a managed service. Kubernetes is a fast-moving target, and sometimes it is necessary to be on a specific version for feature or compatibility reasons. 1.12.6 is the newest version at the time of writing.

 

To get a list of versions run the following command:


az aks get-versions --location "location"

If this parameter is omitted a default version will be chosen, but it is not guaranteed to be the latest version. Since AKS is a managed service there may also be a delay between a new release from the Kubernetes team and the time it becomes available in Azure.
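For a more readable overview of the versions and their available upgrade paths, the table output format helps (assuming the same North Europe location used for the resource group):

az aks get-versions --location northeurope --output table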

 

Existing clusters can be upgraded to a newer version with the click of a button in the Azure Portal or through the Azure CLI:

az aks upgrade --kubernetes-version 1.x.y --name foo --resource-group bar
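If you want to see which versions an existing cluster can move to before running the upgrade, the CLI can list them; a minimal sketch using the same placeholder names:

az aks get-upgrades --resource-group bar --name foo --output table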

To keep costs down for a test environment only one node is used, but in production you should ramp up to at least 3 for high availability and scale. The VM size is specified as DS1_v2. (This is also the default if you omit the parameter.) It is possible to choose lower-performing SKUs, but the performance takes a severe hit, which you will notice when pulling and deploying images later in this guide, so it is not advised. For production use you should take a closer look at the available VM SKUs to decide which one fits your workload. (Note that availability might vary between regions.)
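Should the single test node become limiting later, the cluster can be scaled without recreating it, and the CLI can also list the VM sizes available in a region; a sketch using the names from this guide:

az aks scale --resource-group aksdotnetcoder --name aksdotnet --node-count 3
az vm list-sizes --location northeurope --output table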

 

This brings out another value-adding feature of AKS. In a Kubernetes cluster you have management nodes and worker nodes. Just like you need more than one worker to distribute the load, you need multiple managers for high availability. AKS takes care of the management plane, and not only does the service abstract away the management nodes - you don't pay for them either. You pay for the worker nodes, and that's it. (Disclaimer: the author cannot guarantee the current pricing model will be in effect in perpetuity.)

 

The creation of the cluster should take 10-15 minutes.

 

To make sure things are in a good state, verify that the cluster is working before proceeding. Run the following command to retrieve the credentials the Kubernetes tools need:

 

az aks get-credentials --resource-group aksdotnetcoder --name aksdotnet --admin

Run kubectl get nodes, which should look similar to this:


Figure 28 kubectl get nodes
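The exact node name, age, and version will differ, but the output should resemble something like this:

NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-12345678-0   Ready    agent   10m   v1.12.6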

 

YAML – The Configuration Language of Kubernetes

The subsequent sections will make heavy use of YAML (originally "Yet Another Markup Language", now a recursive acronym for "YAML Ain't Markup Language") for configuring k8s. XML used to be a commonly used format for configuration files, and JSON has become more frequent in the past couple of years, but Kubernetes uses YAML for most common configurations. (Both options are demonstrated in this guide.)

 

YAML is not a format specific to Kubernetes, but if you aren't familiar with it, the most important thing to understand is that indentation matters.

Setting: foo
Value: bar

Is entirely different from

Setting: foo
  Value: bar

Use spaces for this indentation, not tabs. You might get informative error messages, and in Visual Studio Code there are plugins that can provide hints, but if you get errors even though you copied and pasted the snippets from this guide, check that auto-formatting hasn't reorganized things.
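A quick way to catch indentation mistakes before they reach the cluster is a client-side dry run; foo.yaml is a hypothetical file name here, and depending on your kubectl version the flag is spelled --dry-run or --dry-run=client:

kubectl create -f foo.yaml --dry-run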

 

Enabling RBAC for users

The --admin parameter gave us admin access, but RBAC has not yet been configured permanently for your user. To do so, create a file called user-rbac.yaml and paste in the following content:

 

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aksdotnet-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user@contoso.com

Replace user@contoso.com with your user name.

 

This file needs to be applied by running kubectl create -f user-rbac.yaml:


Figure 29 kubectl create User RBAC

In practice, you want to apply this at the group level. To do so replace the last two lines with the following (using the objectId of the group):

  kind: Group
  name: "guid"
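If you don't have the objectId of the group at hand, it can be looked up with the Azure CLI. The group name below is a hypothetical example, and newer CLI versions expose the property as id rather than objectId:

az ad group show --group "AKS Admins" --query objectId --output tsv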

 

Enabling RBAC for the Kubernetes Dashboard

These steps make sure you can use the kubectl command line options, but there is also a web-based dashboard one can use for managing an AKS cluster. However, this involves a service account which we have not assigned permissions to yet.

 

To assign these permissions create a file called dashboard-admin.yaml with the following contents:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Apply with

kubectl create -f dashboard-admin.yaml
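If you want to confirm the binding exists before opening the dashboard, kubectl can show it back to you:

kubectl get clusterrolebinding kubernetes-dashboard -o yaml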

You can now verify that it is possible to open the Kubernetes dashboard by running

az aks browse --resource-group aksdotnetcoder --name aksdotnet


Figure 30 az aks browse

This will launch a browser with the graphical representation of your cluster:


Figure 31 Kubernetes Dashboard

 

Kubectl also allows connecting to the dashboard (kubectl proxy), but by using the Azure CLI everything is automatically piggybacked onto the current authenticated Azure session. Notice that the address is 127.0.0.1 even though the dashboard isn't running locally; the traffic is simply tunneled through a proxy to Azure.
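For completeness, the kubectl-only route looks roughly like this. Treat the URL as an assumption; the exact path depends on the dashboard version and the namespace it runs in:

kubectl proxy

Then browse to http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/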

 

It could be argued that enabling the dashboard opens another attack vector and that it would be preferable to stick to kubectl. There are valid arguments for this school of thought, and it's also true that certain actions can only be carried out through kubectl. However, it is often faster to get an overview from a web browser, and when learning Kubernetes it is very helpful to have a graphical representation.

 

RBAC for Helm and installation of the binaries into the cluster

It was mentioned previously that Helm is a sort of package manager for Kubernetes. This applies both to the software packages you build yourself and to those supplied by other parties. With the RBAC model we have in place, support for Helm needs to be enabled specifically for it to work properly. This can be done by creating a file called helm-rbac.yaml with the following YAML:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

 

Apply with kubectl create -f helm-rbac.yaml.

 

It might not be apparent from the definition, but Tiller is the server-side component and Helm is the client-side component. (Thus, we use tiller in the naming even though this is not a hardcoded requirement.)

 

Helm also needs to be initialized to be ready for later. Since the previous step verified a working cluster, helm will automatically work out which cluster to apply its logic to based on the current kubectl context. You can have multiple clusters, so part of the point of verifying the cluster is OK is to make sure you're connected to the right one.

 

Apply the following:

helm init --service-account tiller
helm repo update


Figure 32 helm init and repo update
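Before moving on you can check that Tiller came up properly. helm version should report both a client and a server version once the Tiller pod is running; the label selector below is an assumption based on the labels helm init typically applies to the Tiller deployment:

helm version
kubectl get pods --namespace kube-system -l app=helm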

 

The cluster should now be ready to have images deployed. Which leads to another topic - management of images.

 

Much like one refers to images when building virtual machines, Docker uses the same concept, although the implementation is slightly different. To get running containers inside a Kubernetes cluster you need a registry for these images. The default public registry is Docker Hub, and images stored there are entirely suitable for an AKS cluster. (In fact, third-party images will be pulled from Docker Hub to complete this guide's setup.) However, you may not want your own images to be available on the Internet, which requires a private registry instead of Docker Hub. In the Azure ecosystem this is delivered by Azure Container Registry (ACR).

 

This setup can be done in the Azure Portal, but for consistency the CLI is used here. The registry could be placed in the same resource group as the AKS cluster, but since a registry is logically a separate entity, a new group will be created for it. This also makes it more obvious that the registry can be reused across clusters.

 

Setting up Azure Container Registry (ACR)

Create a resource group:

az group create --name aks-acr-rg --location northeurope

Create an Azure Container Registry with the Basic SKU:

az acr create --resource-group aks-acr-rg --name aksdotnetacr --sku Basic

Since the registry name needs to be globally unique you need to come up with your own moniker for the name parameter. The SKU choice is largely driven by the need for scalability. More details can be found here: https://azure.microsoft.com/en-us/pricing/details/container-registry/
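You will need the registry's login server name later when tagging and pushing images. It can be retrieved like this (using the example registry name, which you will have replaced with your own):

az acr show --name aksdotnetacr --query loginServer --output tsv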

 

While you can now browse the contents of the registry in the portal, that does not mean your cluster can do so. As indicated by the message upon successful creation of the ACR component, the service principal used by Kubernetes needs to be given access to ACR. If you're new to the concept of authentication in Azure AD this doesn't explain what a service principal is, but in this context you can think of it as a user account for applications.

 

This is one of those things that are easier to do in the PowerShell ISE:

# Get the id of the service principal configured for AKS
$AKS_RESOURCE_GROUP = "aksdotnetcoder"
$AKS_CLUSTER_NAME = "aksdotnet"
$CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)


# Get the ACR registry resource id
$ACR_NAME = "aksdotnetacr"
$ACR_RESOURCE_GROUP = "aks-acr-rg"
$ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)


#Create role assignment
az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID


Figure 33 PowerShell ISE
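To verify that the Reader role landed where intended, you can list the assignments scoped to the registry (reusing the variables from the script above):

az role assignment list --assignee $CLIENT_ID --scope $ACR_ID --output table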

The AKS cluster should now be in a state where it is ready for deployment of services, and the next steps are to create the following recipes for Azure DevOps:

  • How Azure DevOps should build the code.
  • How Azure DevOps should push the resulting artifacts to Azure Container Registry.
  • How Azure DevOps should deploy containers.

 

We're on a roll here, but it feels like a natural point to take a little break before going back to Azure DevOps in the next part.

 
