Azure Stack Hub Infrastructure as Code using Terraform

In the “Start your Infrastructure as Code journey with AzStackHub” post, we explored how to use Azure Resource Manager (ARM) templates to capture existing workloads running on Azure Stack Hub and start an Infrastructure as Code approach that takes advantage of the benefits of Azure Stack IaaS. As the previous post focused mainly on our native solution, ARM templates, we have invited Heyko Oelrichs, a Microsoft Azure Customer Engineer, to explore what this approach could look like using the widely used open-source Infrastructure as Code tool Terraform.

 

Terraform, created by our partner HashiCorp, uses the same ARM REST APIs as a foundation, but instead of describing deployments and configurations as ARM templates in JSON, it uses the HashiCorp Configuration Language (HCL).

 

HCL templates are easy for operators to get started with. They are human-readable, and the extensible provider model allows us to address a broad set of infrastructure types, including Azure, Azure Stack, Kubernetes and on-premises infrastructure. This provider model is one of Terraform’s major value-adds: a single toolset can configure and deploy infrastructure, configuration and applications across different platforms and layers.

 

To show you the power of Azure Stack Hub in combination with Terraform, we will start with an example similar to the one in the “Start your Infrastructure as Code journey with AzStackHub” post.

 

Our example will use:

  • Three virtual machines (VMs) of different sizes, each with its own network security groups (NSGs) and rules
  • One virtual network that all of these VMs are connected to
  • One storage account used to host the boot diagnostics for the three VMs

We will not cover the example in full detail, but the rest of this post should give you a good understanding of how to implement a scenario like this using Terraform.

 

Where to start?

First, you’ll need Terraform. Terraform ships as a single binary, available for a wide variety of platforms, that can be downloaded from https://www.terraform.io/downloads.html. For ease of use, make sure the terraform binary is in your $PATH.

 

That is it. Now that Terraform is installed, we can call it using the ‘terraform’ command.

 

Give it a try. Open a command prompt like ‘cmd’ on Windows or ‘bash’ on Linux and run ‘terraform’.

 

[Screenshot: output of the ‘terraform’ command listing the available subcommands]

 

This gives us a list of all available options for the ‘terraform’ binary. The next step is to create a directory that will contain our Terraform configuration files:

  • Open a cmd or PowerShell window (or your Linux shell)
  • Run ‘mkdir terraform’ to create a new working directory
  • Enter the directory: ‘cd terraform’

In this directory we are going to create a main.tf file that will contain our configuration. HCL is human-readable, and you can use an editor of your choice to create and modify your .tf files.

 

The first thing we must define is how Terraform can access our Azure Stack environment. This is done in the “provider” configuration:

 

# Configure the Azure Stack Provider
provider "azurestack" {
  version = "=0.9.0"

  # ASDK tenant endpoint; on an integrated system, use your region's management endpoint
  arm_endpoint    = "https://management.local.azurestack.external"

  client_id       = ""
  client_secret   = ""
  subscription_id = ""
  tenant_id       = ""
}

 

 

Go to terraform.io/docs to learn more about the Terraform Azure Stack provider, or to the terraform-provider-azurestack repository on GitHub, as the provider itself is open source as well.

 

What you can see in the example above is the minimal configuration to access a subscription on our Azure Stack Hub instance (in this example we are using an Azure Stack Development Kit):

  • version specifies the version of the Azure Stack Terraform provider. You can find the latest version and its release notes on GitHub.
  • arm_endpoint specifies the tenant or admin management endpoint. To deploy into the tenant space, you have to use the “management.*” endpoint.
  • client_id and client_secret are the ID and secret of the service principal (SP) used to access the subscription. The SP needs appropriate permissions in your subscription.
  • subscription_id is the ID of your target subscription on your Azure Stack Hub.
  • tenant_id is the ID of the Azure Active Directory tenant your SP resides in.

You can either specify the values here, which is not ideal as the secrets are then stored in plain text, or provide them using environment variables.
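For example, in a bash shell you could export the credentials before running Terraform. A minimal sketch, assuming the azurestack provider reads the same ARM_* environment variables as its azurerm counterpart (check the provider documentation for the exact names):

# Export credentials so they never appear in main.tf
export ARM_ENDPOINT="https://management.local.azurestack.external"
export ARM_CLIENT_ID="<service principal app id>"
export ARM_CLIENT_SECRET="<service principal secret>"
export ARM_SUBSCRIPTION_ID="<subscription id>"
export ARM_TENANT_ID="<tenant id>"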

 

Now that we have configured the connection to our Azure Stack Hub instance we can validate our configuration:

  • Save your main.tf
  • Run ‘terraform init’ (in the same directory)

[Screenshot: ‘terraform init’ initializing the working directory and downloading the azurestack provider plugin]

 

‘terraform init’ will check our configuration, download all required provider plugins (in our case only the Azure Stack provider, in the version we have defined in main.tf) and initialize Terraform.

 

The next task is to add real configuration to our deployment. Let us start with a resource group and a virtual network. First, the resource group:

 

# Create a resource group
resource "azurestack_resource_group" "deployment" {
  name     = "terraformrg"
  location = "local"
}

 

We are specifying a resource of type ‘azurestack_resource_group’, giving it the local name “deployment”, and configuring it with the name “terraformrg” and the location “local”. Now that we have a resource group, we are going to create a virtual network:

 

# Create a virtual network within the resource group
resource "azurestack_virtual_network" "deployment" {
  name                = "terraform-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurestack_resource_group.deployment.location
  resource_group_name = azurestack_resource_group.deployment.name
}

 

Same procedure as before: we specify a resource of type ‘azurestack_virtual_network’, call it ‘deployment’ and configure it with the name ‘terraform-vnet’ and an address space. New in this case is that we reference the previously defined resource group for both the location and the resource group name.

 

Let us now save our ‘main.tf’ and run ‘terraform plan’ to validate our configuration. ‘terraform plan’ uses Terraform’s built-in state management and provides us with a detailed execution plan of our deployment: which resources will be created, destroyed or changed:

[Screenshot: ‘terraform plan’ output showing the execution plan for the two new resources]

 

You can see that applying our configuration would create (+) two new resources: a resource group and a virtual network. Terraform modifies only what is necessary to reach your desired state. This allows you to version control not only your configurations but also your state, so you can see how the infrastructure evolved over time.

Before we proceed, let us now apply our configuration and check the result.

  • Run ‘terraform apply’
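‘terraform apply’ computes a plan of its own and asks for confirmation before changing anything. If you want to be sure that exactly the plan you reviewed earlier is executed, you can write the plan to a file and apply that file:

# Save the execution plan and apply exactly that plan
terraform plan -out=tfplan
terraform apply tfplan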

 

When you now go to your Azure Stack Hub portal, you will see that Terraform has created a resource group and a virtual network for us:

[Screenshot: Azure Stack Hub portal showing the new resource group and virtual network]

 

This was easy, right? Next, let us complete the example for the first virtual machine.

We are now going to add a few more resources to our deployment; I will not cover all of them in the same detail as before. Let us continue with a subnet that we want to add to our previously created virtual network:

 

# Azure Stack Virtual Network Subnet
resource "azurestack_subnet" "default" {
  name                 = "default"
  resource_group_name  = azurestack_resource_group.deployment.name
  virtual_network_name = azurestack_virtual_network.deployment.name
  address_prefix       = "10.0.1.0/24"
}

 

Now a public IP address for our first VM:

 

# Public IP Address
resource "azurestack_public_ip" "terraform-vm1-pip" {
  name                         = "terraform-vm1-pip"
  location                     = azurestack_resource_group.deployment.location
  resource_group_name          = azurestack_resource_group.deployment.name
  public_ip_address_allocation = "static"
}
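Because the platform allocates the address, it can be handy to print it after the deployment. A minimal sketch of an output value, assuming the azurestack resource exports the same ‘ip_address’ attribute as its azurerm counterpart:

# Show the allocated public IP address after 'terraform apply'
output "vm1_public_ip" {
  value = azurestack_public_ip.terraform-vm1-pip.ip_address
}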

 

And a NIC for our first virtual machine:

 

resource "azurestack_network_interface" "terraform-vm1-nic" {

    name                = "terraform-vm1-nic"

    location            = azurestack_resource_group.deployment.location

    resource_group_name = azurestack_resource_group.deployment.name

 

    ip_configuration {

        name                          = "testconfiguration1"

        subnet_id                     = azurestack_subnet.default.id

        private_ip_address_allocation = "dynamic"

        public_ip_address_id          = azurestack_public_ip.terraform-vm1-pip.id

    }

}

 

And finally, we are tying all the components together by deploying a virtual machine using the previously created NIC and public IP:

 

resource "azurestack_virtual_machine" "terraform-vm1" {

    name                  = "terraform-vm1"

    location              = azurestack_resource_group.deployment.location

    resource_group_name   = azurestack_resource_group.deployment.name

    network_interface_ids = [

        azurestack_network_interface.terraform-vm1-nic.id

        ]

    vm_size               = "Standard_F2"

  

    storage_image_reference {

      publisher = "Canonical"

      offer     = "UbuntuServer"

      sku       = "18.04-LTS"

    }

    storage_os_disk {

      name              = "terraform-vm1-osdisk"

      create_option     = "FromImage"

      managed_disk_type = "Standard_LRS"

    }

    os_profile {

      computer_name  = "hostname"

      admin_username = "testadmin"

      admin_password = "Password1234!"

    }

}

 

Please keep in mind that these examples contain only the minimum set of parameters. We recommend looking into the Terraform documentation for each of these resources and providers to see what is available for you to configure.
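One example is the storage account for boot diagnostics that we mentioned at the beginning of this post. A minimal sketch of what this could look like, assuming the azurestack resources mirror the schema of their azurerm 1.x counterparts (check the provider documentation for the exact arguments):

# Storage account to host boot diagnostics for our VMs
resource "azurestack_storage_account" "diagnostics" {
  name                     = "terraformdiagsa"
  resource_group_name      = azurestack_resource_group.deployment.name
  location                 = azurestack_resource_group.deployment.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

Inside the ‘azurestack_virtual_machine’ resource, a boot_diagnostics block would then point to this account:

# Enable boot diagnostics, stored in the account created above
boot_diagnostics {
  enabled     = true
  storage_uri = azurestack_storage_account.diagnostics.primary_blob_endpoint
}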

 

Let us now apply the modified configuration.

  • Run ‘terraform apply’

The deployment itself will take some time, and after a few minutes you will see a fully featured Azure VM in your resource group on Azure Stack Hub:

[Screenshot: Azure Stack Hub portal showing the deployed virtual machine and its resources]

 

As mentioned at the beginning of this post, we would also like to have a specific network security group (NSG) in place for our VM. Let us add one now and attach it to our VM:

resource "azurestack_network_security_group" "terraform-vm1-nsg" {

    name                = "terraform-vm1-nsg"

    location            = azurestack_resource_group.deployment.location

    resource_group_name = azurestack_resource_group.deployment.name

  

    security_rule {

      name                       = "RuleAllowRDP"

      priority                   = 100

      direction                  = "Inbound"

      access                     = "Allow"

      protocol                   = "Tcp"

      source_port_range          = "*"

      destination_port_range     = "3389"

      source_address_prefix      = "*"

      destination_address_prefix = "*"

    }

}

 

This is enough to create a new NSG with a single rule ‘RuleAllowRDP’.
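Since terraform-vm1 is a Linux VM, in practice you would more likely allow SSH instead of (or in addition to) RDP. Additional rules can simply be added as further security_rule blocks inside the same ‘azurestack_network_security_group’ resource; a sketch of an SSH rule following the same schema:

# Allow inbound SSH (port 22) from anywhere
security_rule {
  name                       = "RuleAllowSSH"
  priority                   = 110
  direction                  = "Inbound"
  access                     = "Allow"
  protocol                   = "Tcp"
  source_port_range          = "*"
  destination_port_range     = "22"
  source_address_prefix      = "*"
  destination_address_prefix = "*"
}

To attach the NSG to our VM, we have to update our NIC configuration.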

 

resource "azurestack_network_interface" "terraform-vm1-nic" {

    name                = "terraform-vm1-nic"

    location            = azurestack_resource_group.deployment.location

    resource_group_name = azurestack_resource_group.deployment.name

 

    network_security_group_id = azurestack_network_security_group.terraform-vm1-nsg.id

 

    ip_configuration {

        name                          = "testconfiguration1"

        subnet_id                     = azurestack_subnet.default.id

        private_ip_address_allocation = "dynamic"

        public_ip_address_id          = azurestack_public_ip.terraform-vm1-pip.id

    }

}

 

The important piece here is the new ‘network_security_group_id’ argument, which links the NSG to the NIC.

Let us now run ‘terraform plan’ to see what happens:

[Screenshot: ‘terraform plan’ output showing the in-place update of the network interface]

 

First, our resource ‘azurestack_network_interface’ will be updated (~) in place. Second, a new NSG will be created (+):

[Screenshot: ‘terraform plan’ output showing the new NSG to be created]

 

Let us now apply our configuration... et voilà, we have a new NSG attached to our NIC:

 

[Screenshot: Azure Stack Hub portal showing the new network security group]

 

[Screenshot: Azure Stack Hub portal showing the NSG associated with the network interface]
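By the way, Terraform can tear all of this down again just as easily. When you are done experimenting, a single command destroys every resource tracked in the state, after showing you the plan and asking for confirmation:

# Destroy all resources managed by this configuration
terraform destroy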

 

 

We hope this helps you get started on Infrastructure as Code with Azure Stack Hub and Terraform. You can find the Terraform code snippets we have used above here on GitHub.

 
