Welcome to this series, where you will learn best practices for deploying Azure Virtual Machines and managing them at scale. To get started, we are going to look at the various deployment options: how you scale out from your initial deployment, and finally how you arrive at a complete self-service solution for your enterprise IT customers.
Azure Resource Manager and the Azure Portal
The Azure Portal is the deployment method you are most likely already familiar with. It is the manual way to deploy; however, the portal includes a wizard to guide you through the various items you need to answer, to ensure the deployment meets your configuration.
Behind the scenes, the portal deployment is actually constructing an ARM template, which leverages the exact same resource model you will use for automated deployments. This is known as Azure Resource Manager (Azure Resource Manager overview - Azure Resource Manager | Microsoft Docs).
Azure Resource Manager overview
As you can see, no matter how you deploy something in Azure, at some point you go through ARM. This makes it extremely powerful and allows all your favorite third-party automation tools to be used as well. As you work in the Azure portal, you will also see that right before you hit Create, you have the option to review your deployment as an ARM template. You can use this to get familiar with ARM by building a resource manually and then looking at the code Microsoft Azure generates.
You can see where the code is assigning variables you have typed in through the portal wizard.
ARM Code
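To make the "everything goes through ARM" point concrete: whether the request comes from the portal, PowerShell, or the CLI, it ends up as a call against the same ARM REST endpoints. Below is a minimal illustrative sketch (the subscription ID and names are placeholders, and a real call would also need an Azure AD bearer token) of the single REST request that creating a resource group boils down to:

```python
# Sketch: the ARM REST call a resource group deployment ultimately becomes.
# Subscription ID and names are placeholders; a real request needs an auth token.
def build_arm_request(subscription_id, resource_group, location, api_version="2021-04-01"):
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"?api-version={api_version}"
    )
    body = {"location": location}  # minimal payload for a resource group PUT
    return "PUT", url, body

method, url, body = build_arm_request(
    "00000000-0000-0000-0000-000000000000", "myResourceGroup", "eastus"
)
print(method, url)
```

Every tool covered below is, in the end, a different way of generating requests like this one.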
Azure Automation Tools
At this point we can deploy a VM, but it isn't very automated and isn't going to work at scale. The next step is to consider the host of Azure-provided automation tools that you can leverage; you may find use cases for each of them.
Azure PowerShell
Azure PowerShell Documentation | Microsoft Docs
Azure PowerShell can be installed locally on your machine or used in the Cloud Shell, accessed through the Azure portal. Below, we create a new resource group in East US:
New-AzResourceGroup -ResourceGroupName "myResourceGroup" -Location "EastUS"
Learn Module to get started: Automate Azure tasks using scripts with PowerShell - Learn | Microsoft Docs
Azure CLI
Azure Command-Line Interface (CLI) - Overview | Microsoft Docs
The power of Azure CLI is that it works across all platforms: Mac, Linux, and Windows. If you aren't familiar with PowerShell, learning Azure CLI might be what you're looking for. It can also be installed locally or used through the Cloud Shell, and it runs in Windows PowerShell, Cmd, Bash, and other Unix shells. The commands are shorter and may be easier to remember. The command below creates the same resource group in East US; only the syntax differs.
az group create --name myresourcegroup --location eastus
Learn Module to get started: Create Azure resources by using Azure CLI - Learn | Microsoft Docs
Azure ARM Templates
ARM template documentation | Microsoft Docs
As mentioned before, when you deploy resources in the portal you are essentially using ARM. Templates can be created and saved for future deployments. When you deploy from a template, Resource Manager converts the template into REST API operations. You can use Azure PowerShell or Azure CLI to call upon your template, saved as JSON. There are many quickstarts available, and below is an example of deploying a template with PowerShell:
$projectName = Read-Host -Prompt "Enter the same project name"
$templateFile = Read-Host -Prompt "Enter the template file path and file name"
$resourceGroupName = "${projectName}rg"

New-AzResourceGroupDeployment `
  -Name DeployMyTemplate `
  -ResourceGroupName $resourceGroupName `
  -TemplateFile $templateFile `
  -projectName $projectName `
  -verbose
The actual template file is written in JSON, as you saw earlier in the screenshot where you could download your VM template before deployment. Below is a snippet of code for a template. You can find quickstart templates here: Tutorial - Deploy a local Azure Resource Manager template - Azure Resource Manager | Microsoft Docs
You can also create and deploy ARM templates through the portal by following this tutorial: Deploy template - Azure portal - Azure Resource Manager | Microsoft Docs
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "metadata": {
    "_generator": {
      "name": "bicep",
      "version": "0.6.18.56646",
      "templateHash": "4523590120167697900"
    }
  },
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": [
        "Premium_LRS",
        "Premium_ZRS",
        "Standard_GRS",
        "Standard_GZRS",
        "Standard_LRS",
        "Standard_RAGRS",
        "Standard_RAGZRS",
        "Standard_ZRS"
      ],
      "metadata": {
        "description": "Storage Account type"
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "Location for the storage account."
      }
    },
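Because the template is just JSON, your automation can inspect it before deploying. As a quick illustrative sketch (using a trimmed version of the parameters shown above), a script can list each parameter with its type and default to sanity-check what a deployment will ask for:

```python
import json

# Illustrative: inspect an ARM template's parameters before deploying it.
# The template text below is a trimmed stand-in for a real template file.
template = json.loads("""
{
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": ["Premium_LRS", "Standard_GRS", "Standard_LRS"]
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]"
    }
  }
}
""")

for name, spec in template["parameters"].items():
    print(f"{name}: type={spec['type']}, default={spec.get('defaultValue')}")
```

The same idea works against a template file on disk with `json.load(open(path))`.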
Azure Bicep
Bicep documentation | Microsoft Docs
Bicep offers the same capabilities as ARM templates but with a syntax that's easier to use. You can define how your Azure resources should be deployed and configured. Each Bicep file is automatically converted to an ARM template during deployment. Instead of JSON, Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources, which makes it much easier to utilize. In most cases, Bicep syntax is less verbose than JSON. You can use PowerShell or Azure CLI to call upon Bicep templates. Below is a snippet of a Bicep template (which can also be written in VS Code with this extension):
resource runPowerShellInline 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
  name: 'runPowerShellInline'
  location: resourceGroup().location
  kind: 'AzurePowerShell'
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myID': {}
    }
  }
  properties: {
    forceUpdateTag: '1'
    containerSettings: {
      containerGroupName: 'mycustomaci'
    }
    storageAccountSettings: {
      storageAccountName: 'myStorageAccount'
      storageAccountKey: 'myKey'
    }
    azPowerShellVersion: '6.4' // or azCliVersion: '2.28.0'
    arguments: '-name \\"John Dole\\"'
    environmentVariables: [
      {
        name: 'UserName'
        value: 'jdole'
      }
      {
        name: 'Password'
        secureValue: 'jDolePassword'
      }
    ]
    scriptContent: '''
      param([string] $name)
      $output = \'Hello {0}. The username is {1}, the password is {2}.\' -f $name,\${Env:UserName},\${Env:Password}
      Write-Output $output
      $DeploymentScriptOutputs = @{}
      $DeploymentScriptOutputs[\'text\'] = $output
    ''' // or primaryScriptUri: 'https://raw.githubusercontent.com/Azure/azure-docs-bicep-samples/main/samples/deployment-script/inlineScript.ps1'
    supportingScriptUris: []
    timeout: 'PT30M'
    cleanupPreference: 'OnSuccess'
    retentionInterval: 'P1D'
  }
}
Learn Module to get started: Build your first Bicep template - Learn | Microsoft Docs
3rd Party Tools
Of course, beyond all of these Azure-native tools, there are 3rd party options to automate your deployments:
Ansible - Red Hat
Chef, and so many more. One may be more suitable for your needs than another, and it's great to have so many choices for Infrastructure as Code.
For a deeper dive into IaC tools, check out April Yoho's blog post.
CI/CD & Other Orchestration Options
As you scale up, even the aforementioned automation tools, while useful on their own, may require stitching together with other tools in the ecosystem.
Before I go into detail here, you need to understand the personas of the people requesting IT services:
Traditional and DevOps Users.
Traditional users typically consume IT services in a Shared Services model; i.e., they haven't adopted DevOps workflow principles and are used to requesting something from IT and then deploying their software on it. Perhaps they only do one deployment a year and the workloads are steady state. In many cases, they simply want to deploy a COTS (Commercial Off-the-Shelf) application supplied and managed by a vendor, but they need it on a VM connected to your enterprise network because it must access other systems. These Traditional, or Shared Services, consumers need to be accounted for.
DevOps users have API, scripting, and CI/CD experience and don't want the slowdown of having to put in a request and then wait. They are eager to consume services via an API and will often leverage tools like Azure DevOps to form a complete release cycle. These users also need to be accounted for.
So how do we provide services to each at scale?
Shared Services:
Consider the requestors for Shared Services, who are choosing their VMs based on size, disk, network location, etc. They may still require items to be filled out, such as change control (ServiceNow, Cherwell, etc.) and other 3rd party security requirements that need to happen after the VM is deployed. Our goal is to get IT out of the way and automate this end to end. This means that if they have required tagging, we automatically create these tags for them based on a series of business questions.
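As a sketch of that idea (the question names and tag taxonomy here are invented for illustration, not an official standard), the requestor's answers to the business questions can be translated directly into the tag set the VM is created with:

```python
# Hypothetical mapping from business-question answers to required Azure tags.
# The question keys and tag names are illustrative, not an official taxonomy.
def answers_to_tags(answers):
    return {
        "CostCenter": answers["cost_center"],
        "Environment": answers["environment"],
        "Owner": answers["owner_email"],
        "ChangeTicket": answers.get("change_ticket", "none"),  # optional question
    }

tags = answers_to_tags({
    "cost_center": "CC-1234",
    "environment": "prod",
    "owner_email": "requestor@contoso.com",
})
print(tags)
```

The resulting dictionary is exactly the shape a tags parameter in an ARM or Bicep deployment expects, so the requestor never types a tag by hand.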
Options:
Leverage a combination of these tools to run a script based on a series of inputs, which then automates the downstream deployment of the ARM template as well as any other 3rd party systems. Azure DevOps can be used outside of the release cycle here, as an orchestrator for your deployments, where you have pre-canned workflows that create VMs based on a set of criteria.
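One way to picture that orchestration step (simplified, with hypothetical input names and an example size catalog): the script validates the requestor's inputs against the approved criteria and emits the parameters object an ARM template deployment expects:

```python
# Simplified orchestration step: turn validated request inputs into the
# parameters body of an ARM template deployment. Sizes and names are examples.
ALLOWED_SIZES = {"Standard_B2s", "Standard_D2s_v5", "Standard_D4s_v5"}

def build_deployment_parameters(vm_name, vm_size, location):
    if vm_size not in ALLOWED_SIZES:
        raise ValueError(f"size {vm_size} is not in the approved catalog")
    # ARM deployments expect each parameter wrapped in a {"value": ...} object.
    return {
        "vmName": {"value": vm_name},
        "vmSize": {"value": vm_size},
        "location": {"value": location},
    }

params = build_deployment_parameters("app01-vm", "Standard_B2s", "eastus")
print(params["vmSize"]["value"])
```

Gating the inputs up front is what keeps the pre-canned workflow self-service: a bad request fails before anything is deployed, not after.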
Example:
You want to create an automated workflow that generates the name and tags, deploys the VM, and then runs a security scan with Qualys before finally closing out a change record.
We will cover this in more detail in a future post, but customers who automate these steps end to end see reduced errors, improved speed, and a better experience for requestors all around.
- DevOps Users
We can't end without going back to talk about our DevOps users. They want to deploy their VMs (or containers and other platforms) leveraging their pipelines. Using GitOps flows, they will define their entire stacks as code, from the application down to the infrastructure, and we need to assist them in a way that doesn't slow them down. In talking with customers, many will shift their attention to instead providing sets of modules/code with organizational standards already applied that meet governance policies. Combine these with code inspection techniques and Azure Policy to ensure that workloads are compliant but can still be deployed as needed.
- Stitch these both together!
What if you have a development team that wants the complete automated workflow you built, with integrated security and so on? The answer: have their Azure DevOps pipeline call the infrastructure pipeline. As long as they pass the correct variables in and hit the Azure DevOps pipeline via its API, the entire service can be deployed.
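Azure DevOps exposes pipeline runs through its Runs REST API for exactly this scenario. As a rough sketch (organization, project, and pipeline ID are placeholders, the api-version shown is my assumption so check the current REST docs, and a real call needs a PAT or Azure AD token), the calling pipeline only has to build a request like this:

```python
import json

# Sketch of triggering an Azure DevOps pipeline via its Runs REST API.
# Organization, project, and pipeline ID are placeholders; authentication
# (a PAT or Azure AD token header) is omitted for brevity.
def build_pipeline_run_request(organization, project, pipeline_id, parameters):
    url = (
        f"https://dev.azure.com/{organization}/{project}"
        f"/_apis/pipelines/{pipeline_id}/runs?api-version=7.1-preview.1"
    )
    body = {"templateParameters": parameters}  # variables the pipeline consumes
    return url, json.dumps(body)

url, body = build_pipeline_run_request(
    "contoso", "infrastructure", 42,
    {"vmName": "app01-vm", "environment": "prod"},
)
print(url)
```

POSTing that body to that URL queues the infrastructure pipeline with the caller's variables, so the development team's pipeline never needs to know how the VM workflow is implemented.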
- Service Catalog Option
Last but certainly not least: once you reach a point of maturity and have multiple workflows available, publish these into a catalog and present them to the business via forms or via API. Let them decide the way they want to consume, knowing they now meet the standards required to manage your environment at scale.
NEXT: In the next post, I will dive a little deeper into automating management and how you can leverage tags, Azure Policy, etc. in more detail to get a handle on your environment.
Questions or comments? Feel free to leave below!
Updated Aug 10, 2022 (Version 4.0)
Amy Colyer, Microsoft
ITOps Talk Blog