Note: This series is an updated version of a guide from last year which used the preview version of AKS and the "old" Visual Studio Team Services. Consider the previous guide deprecated.
If you are about to build a new application and you bump into an architect, odds are they will tell you it is very important that it supports a "microservices architecture". While this will not be true for every project, it is fair to say that it is a valid architectural choice for many of today's applications.
Architecture is important, but implementing it in code requires a new set of tools compared to "classic services". The name Kubernetes comes up frequently in such conversations, having become almost a de facto standard for orchestrating containers. It is often shortened to k8s, and this guide will use both terms interchangeably.
A broader discussion of containers in general is outside the scope of this guide, but for this context we will assume that one or more containers form the logical boundary of a microservice. As an orchestrator, Kubernetes does not concern itself with the contents of each individual container. Instead, Kubernetes ensures that containers get deployed correctly, continue to run if something breaks, etc. – this is what the term orchestration refers to, and it is an important part of being able to run containers at a larger scale than a single developer's computer. To achieve this, k8s runs as a cluster of machines, providing both redundancy and additional compute capacity.
For more information on the what part of Kubernetes, the official site has a nice intro:
There are many options available for the how part of building a Kubernetes cluster. It can run on-premises on your own equipment, and it can run on different cloud platforms as well. With Azure as your cloud platform, you can run it on "raw" virtual machines that you provision and configure yourself, or use a managed service called Azure Kubernetes Service (AKS). Being managed, it takes away some of the grunt work involved in setting up the cluster from scratch and keeping it running.
Microsoft has a product called Service Fabric which in some ways is a competitor to k8s, in as much as it is also a platform for handling microservices. There are however differences between the two, so it is hard to do an apples-to-apples comparison. As this guide focuses on AKS and Kubernetes, those points are not relevant for the instructions provided here.
Manual handling of microservices can be painful, and while Kubernetes is good at making sure the code runs, it does not handle the transition from the code a developer produces to a binary in a container. This is usually taken care of by a separate system that automates compilation and deployment. For this guide, Azure DevOps will host the repository and provide the pipelines for build and deployment.
The high-level agenda of what this guide will cover:
We will use different Azure services for achieving these tasks:
This walkthrough is about how you get your code from your IDE to a production-like state. For production use you should consider scaling, monitoring and further automation. The developer "inner loop" (what you do before you push your code) is not covered extensively here.
There are several ways you can architect and implement a Kubernetes cluster, and this guide is not the ultimate or necessarily best approach for you. It is however an approach that should bring you to a working setup and give you an understanding of the basics of using Kubernetes for your microservices.
The decision was also made to deliberately keep some features out of scope. Networking is an important architectural point for microservices in general, and Azure Kubernetes Service has both basic and advanced network configurations on offer. These deserve a more in-depth treatment than possible in this guide, and as such the implementation here is on the basic end in this area.
The focus is on the steps needed to get things working, and less on the architectural decisions behind them. Many parts could have been elaborated on, sometimes in minute detail, but that would have taken the focus away from the walkthrough experience and would be more suited for a separate treatment.
You will need an Azure subscription, and an Azure DevOps account to follow the instructions. Azure DevOps is free for a single developer team, and you should be able to run the Azure services with the credits included in a trial account.
Visual Studio 2017 is used in this guide for handling the code – the free Community edition will suffice. The steps do not have a hard requirement on Visual Studio, so it is possible to use other tooling if you prefer.
This scenario employs a DevOps methodology to move the code from one phase to another, but the guide does not assume intimate knowledge of DevOps practices.
The walkthrough has been created on a Windows installation, but the primary tools used are available on Linux and MacOS as well, so it should be possible to use these operating systems instead with minor changes.
Since this guide isn't about how to build a web app in general you can technically just step through a wizard in Visual Studio to get a Hello World type web app. For the sake of simplicity, the samples here will be using an app which can be downloaded here:
Whether you use the sample solution or create your own, make sure the solution builds and runs locally before moving things to containers and the cloud. (Restore NuGet packages, build, and hit F5 to debug.)
If the code builds as expected and you are able to test things on localhost, you will want to add Docker support for containerization. If you don't already have it up and running, download and install Docker for Windows. You will need Windows 10, and you will need virtualization support in your CPU to enable Hyper-V. (If you don't have this you can still follow along and get parts of the guide working; you will however not be able to test the image locally.)
Docker Stable vs Docker Edge
There are two types of Docker builds: Stable and Edge (beta). If you run the standard builds of Windows 10 you can go with Stable. If you run builds from the Windows Insider Program, things occasionally break Docker. When the fix arrives (assuming the fault is in Docker), it comes to the Docker Edge release first. This means that with beta Windows you should go for beta Docker. (Edge can be installed on non-Insider Windows builds too.)
Visual Studio 2017 supports several options for deploying code directly from your laptop to a hosted environment, be it on-premises or cloud. However, even for your hobby projects, it is easier to check code into a repository and handle things from there. GitHub is perhaps the most well-known and popular repository solution, but this guide will use Azure DevOps to do more than “hosting a repo”. Azure DevOps offers some integration points to Azure that GitHub currently doesn’t so the story is slightly more streamlined for some use cases.
It is possible to use GitHub to deploy to Azure as well, but it is not in scope here. For more advanced scenarios you can implement part of the infrastructure in GitHub and other parts in Azure DevOps.
It is assumed that you already have an account for Azure DevOps; if not you can create one at https://azure.microsoft.com/en-us/services/devops/
Note that it will be easier in subsequent steps if you are using the same account for signing into Azure DevOps as your Azure subscription, and the guide assumes this to be the case.
Once you have created your Azure DevOps account, or have signed in to your existing account you need to create a new project for this walkthrough.
Figure 1 Create a new project in Azure DevOps
There are different approaches for the initial setup depending on whether you're starting from scratch or not. If you downloaded the zip file with the sample code from GitHub, unzip it to a directory on your hard drive and use that as the basis for the initial check-in. (You can also clone from GitHub, but the approach taken here is to start fresh.)
Azure DevOps will also provide you with instructions for this.
Figure 2 Instructions for importing code into Azure DevOps
Go to the command line and initialize the repository in the root directory.
Figure 3 Git init
Execute the following instruction on the command line:
git remote add origin https://contosoRepos.visualstudio.com/_git/AKSdotnetcoder
(Replace "contosoRepos" with the name of your Azure DevOps organization.)
Figure 4 Adding git repo to Azure DevOps
Heading back to Visual Studio and the Team Explorer tab, selecting Changes should indicate that there are changes to your files:
Figure 5 Initial commit
Commit these changes.
Figure 6 Syncing repo
Choose to Sync, followed by Push.
Figure 7 Pushing repo
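If you prefer to stay on the command line instead of using Team Explorer, the commit and push can be done there as well. A sketch, assuming the remote was added as origin in the earlier step:

```shell
# Stage all files and create the first commit
git add .
git commit -m "Initial commit"

# -u sets the upstream branch so subsequent pushes can use a plain "git push"
git push -u origin master
```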
Returning to Azure DevOps in the browser, and going to the Code tab there should now be a folder along with all the files available:
Figure 8 Initial commit in Azure DevOps
There are things that could differ between the computer used for capturing these instructions and the computer used for following them. Git could have been installed before Visual Studio 2017, be in a different path, etc.; if so, adjust accordingly.
If everything went to plan there should now be code both locally and in the cloud. This means the next step is adding Docker support for the project. To do this, right-click the project name in Visual Studio 2017 and choose Add->Container Orchestrator Support.
Figure 9 Adding container orchestration support
Choose Linux as the operating system. (Windows containers are not supported in AKS at the time of writing. The code in the sample project has no dependencies on the platform so this should not be a problem.)
Figure 10 Selecting operating system
This should add a Dockerfile and a docker-compose project to your solution:
Figure 11 Docker files in Visual Studio
Note: Visual Studio 2017 15.8.3 introduced a slight change, compared to previous versions, in how support for deploying to a service like Kubernetes is added. Previously, support was added by right-clicking the project in Visual Studio 2017 and selecting Add->Docker Support. This is still available as an option, but it will only give you the Dockerfile, not the docker-compose part.
To verify that everything is still good on the development box, re-run the F5 experience. By default the code was probably deployed to IIS Express, and the deploy target should now have changed to Docker. (The default image contains the necessary pieces to host web components without any adjustments.)
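For reference, the Dockerfile that Visual Studio generates looks roughly like the sketch below. The exact base image tags depend on your .NET Core version, and the project name WebFrontend is an assumption; substitute the name of your own project:

```dockerfile
# Runtime image; only needs the ASP.NET Core runtime
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

# Build stage; uses the full SDK image
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY WebFrontend/WebFrontend.csproj WebFrontend/
RUN dotnet restore WebFrontend/WebFrontend.csproj
COPY . .
WORKDIR /src/WebFrontend
RUN dotnet publish -c Release -o /app

# Final image contains only the published output, not the SDK
FROM base AS final
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "WebFrontend.dll"]
```

The multi-stage approach means the SDK is only used while building; the image you ship contains just the runtime and the published output, which keeps it small.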
This doesn't technically mean we have built a microservice yet. What we have is a monolithic app that has been primed for promotion to microservices when the time is right. For a simple sample app this is not likely to happen, as it makes no sense to break it up into smaller pieces – the takeaway is that a container does not equate to a microservice, but it does make it easier to achieve a microservice architecture.
Let's pause for a moment here. This image is a basic building block for proceeding. If you move this code to a different computer where Docker is installed, you can have this up and running in a matter of minutes. (Visual Studio 2017 isn't needed either.) This means the "it works on my machine" hurdle that often prevents mass deployment has been removed.
Note: There are still things that might trip you up, like a proxy component on the network level, different Docker version on the host, etc. In general, you have a module that can be moved across environments with a more reliable result than previous deployment models.
If you start splitting up your code into different images you could have www.contoso.com in one, and api.contoso.com in another. This leads you to setting up a shared Docker host that developers push images to, and you can replace the www part without affecting the api part. This is great for separating components, but you might run into new challenges.
Host goes down => services go down.
Service A needs to communicate with Service B => how do they do that?
Who takes ownership of the Docker host, opening firewall ports, mapping DNS and the like => developers, or operations?
A Dockerfile describes just one individual service, so if you have 5 services you have 5 Dockerfiles to maintain. Docker provides another abstraction layer where you define docker-compose files, which describe the relationships between containers. These will be used for the remaining setup in this guide.
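A minimal docker-compose.yml for a single service could look like this sketch (again assuming the project is named WebFrontend):

```yaml
version: '3.4'

services:
  webfrontend:
    # DOCKER_REGISTRY is empty for local builds; set it when pushing to a registry
    image: ${DOCKER_REGISTRY-}webfrontend
    build:
      context: .
      dockerfile: WebFrontend/Dockerfile
```

Adding a second service, say an API, would just be another entry under services, and docker-compose would then build and start both containers together.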
The new concern is that while managing 10 monolithic services can be daunting, managing 100 microservices isn't necessarily less work, which is why you need something to "herd" your services. This is often referred to as an orchestrator, and Kubernetes is just one of several options. At the risk of repeating things: it has already been decided that AKS is the choice for this purpose, so there will be no comparison of orchestrators or of different Kubernetes configurations.
Make sure the code is saved and checked in to Azure DevOps before proceeding.
Creating resources in Azure is the next step before building and releasing code. It may feel intuitive to create these components in the Azure Portal, but with k8s it is very useful to become familiar with the command line.
The first step is to install the Azure CLI:
Note: If you have an old version installed already it is recommended to upgrade.
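Once installed, a quick sanity check and sign-in from the command line could look like this (the subscription name is a placeholder):

```shell
# Show the installed version to confirm the CLI is on the path
az --version

# Opens a browser window for authentication
az login

# List available subscriptions and select the one to use
az account list --output table
az account set --subscription "<subscription-name-or-id>"
```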
You should also download two important Kubernetes tools as well:
Kubectl - https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md
Note: Go to the newest "Client Binaries" section and grab the binary for your platform.
Helm - https://github.com/kubernetes/helm/releases
They are plain executables with no installer; just copy them into a folder of your choosing, such as C:\k8s.
For the sake of simplicity, you should add this folder to your path in Windows:
Figure 12 Adding directory to path
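If you prefer the command line over the System Properties dialog, the directory can be appended to the user PATH from a cmd prompt. Note that setx truncates values longer than 1024 characters, so the GUI approach is safer if your PATH is already long:

```shell
setx PATH "%PATH%;C:\k8s"
```

Open a new command prompt afterwards for the change to take effect.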
Kubectl is the "control app" for Kubernetes, and Helm is the package manager (or the equivalent of NuGet in the .NET world if you like) for Kubernetes. More on the usage of these tools later.
You might ask yourself why you need Kubernetes-native tools when you are using a managed Kubernetes service. That is a valid question. A lot of managed services put an abstraction on top of the underlying service and hide the original in various ways. An important thing to understand about AKS is that while it certainly abstracts parts of the k8s setup away from you, it does not hide the fact that it is k8s. This means you can interact with the cluster just as if you had set it up from scratch, which also means that if you are already a k8s ninja you can still feel at home, and vice versa. If you know nothing about k8s, it is recommended to learn at least some of the tooling.
For a complete reference on what kubectl can do you can add the kubectl Cheat Sheet to your favorites:
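As a taste of what is to come, these are some of the kubectl commands you will find yourself using; they will not return anything useful until kubectl has been pointed at a cluster, and <pod-name> is a placeholder:

```shell
# Confirm the client binary works
kubectl version --client

# Inspect the cluster once connected
kubectl cluster-info
kubectl get nodes
kubectl get pods --all-namespaces

# Drill into a single pod
kubectl describe pod <pod-name>
```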
Visual Studio 2017 is perfect for writing the C# code, but it is not suited for all the tasks needed so Visual Studio Code, PowerShell ISE and the command line will also be used.
The default and recommended configuration for securing access to a Kubernetes cluster is a Role-based Access Control (RBAC) model. To make this work with AKS there are a few preparatory steps needed before creating the cluster.
You will need two app registrations in Azure Active Directory. These can be registered in the Azure Portal:
Azure Active Directory=>App registrations=>New application registration
Figure 13 Azure AD Server App
Select Web App / API as the application type and fill in any valid URI for the Sign-On URL. It does not have to be a working URL.
After the app has been created click Settings and Edit manifest.
Figure 14 Azure AD Server App Settings
In the manifest you need to make sure groupMembershipClaims is set to “All”. Hit Save after making the change to persist it.
Figure 15 Azure AD Server App Manifest
Next you need to generate a key – Settings => Keys. Copy it from the UI before navigating away from the view, since you will not be able to retrieve it afterwards.
Figure 16 Azure AD Server App Client Secret
Next, we need to grant permissions for the app. Navigate to
Settings=>Required permissions=> Add => Select an API and select the Microsoft Graph.
Figure 17 Azure AD Server App Select API
Check Read directory data under Application Permissions.
Figure 18 Azure AD Server App API Application Permissions
Check Sign in and read user profile and Read directory data under Delegated Permissions.
Figure 19 Azure AD Server App API Delegated Permissions
Click Grant permissions to make these modifications go live.
Note: you need to be a Global Admin in Azure Active Directory to be able to do this. (Subscription Owner does not suffice.)
Figure 20 Azure AD Server App Grant Permissions
The output of this step should be the server application id and the server application secret.
Create a new app registration – this time of type Native – and name it to indicate that it is a client app.
Figure 21 Azure AD Client App
This app requires permissions to the app we created in the previous step, so AKSAADServer should be in the list.
Figure 22 Azure AD Client App Select API
Check Access AKSAADServer.
Figure 23 Azure AD Client App Delegated Permissions
Figure 24 Azure AD Client App Grant Permissions
You should make a note of the application id. A native app does not have a client secret.
The final piece of information needed here is the tenant id. This can be acquired in several ways, but since you’re already in the AAD section of Azure Portal it can be found under
Azure Active Directory => Properties under the name Directory ID.
Figure 25 Azure AD Tenant Id
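If you installed the Azure CLI in an earlier step, the tenant id can also be fetched from the command line after az login:

```shell
# Prints the tenant (directory) id of the current subscription
az account show --query tenantId --output tsv
```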
With the underpinnings of RBAC in place, the next part proceeds to the creation of the cluster.