Apps on Azure Blog

Case Study: Data Driven DevOps with ADO

jimlane
Microsoft
Feb 26, 2025

Customer Scenario:

One of Microsoft's manufacturing partners provides services to a large number of customers, and those services require recurring deployments of cloud-based artifacts, both to onboard new customers and to ship updates and new releases of existing applications. The partner approached Microsoft to assist in developing a new system with the following attributes:

  • Automate releases
  • Provide a templatized approach to deploying and configuring new releases
  • Allow for reuse of existing releases
  • Provide for metadata tagging of deployed artifacts
  • Utilize a data-driven approach to artifact generation
  • Be easy to use for non-IT employees

After researching numerous commercially available deployment management products, the partner decided to develop a custom system in-house with Microsoft's assistance. They wanted to get out of the "script management business" and envisioned a new system with a web-based interface that employees outside of IT could use to deploy applications and systems into Azure for new and existing subscriptions.

The last two requirements meant that BICEP and ARM could not be the user-facing components of the new system; the partner wanted a deployment management system that would abstract away the intricacies of producing deployment scripts.

 

Solution:

With this in mind, it was decided that a consulting partner would build the new deployment management system and that Microsoft would provide architectural guidance. To obtain management approval and funding, a POC was initiated to prove the architectural approach and demonstrate the user-friendly functionality.

 

Architecture:

The architecture developed for the POC is depicted in the following diagram:

 

Figure 1 - System Architecture

 

Main components of the POC architecture:

  • Azure Web App – simple SPA providing a web interface to initiate a sample deployment
  • Azure SQL Database – provides data structures to represent all data objects behind a sample deployment
  • Azure DevOps – a build pipeline was created to receive the deployment request, along with a release pipeline to complete the requested deployment. Azure DevOps was chosen over GitHub based on the customer's preference

 

Architectural Details:

Web App Front-End – a simple ASP.NET SPA providing a web page to initiate a sample deployment. A REST API call is made to Azure DevOps to initiate the build pipeline. The body of the HTTP response is parsed to show the state of the pipeline to the user. The body also includes a URL, displayed to the user as a link, that lets them jump over to ADO and inspect the status of the release in more detail.

For authentication between the web page and ADO, a Personal Access Token (PAT) was employed, since from a POC perspective this was the quickest and easiest way to configure security.

Figure 2 below provides a code snippet from this REST API call:

 

Figure 2 - ADO REST API call code snippet

 

To properly authorize the REST call that initiates the build pipeline, line 51 retrieves the PAT string from the project's config file and adds it to the HTTP request headers, while line 55 retrieves the URL of the ADO build pipeline for the HTTP request. Note on line 54 that all REST API calls that initiate an ADO pipeline must use the POST verb.
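A minimal C# sketch of this pattern, assuming the standard Azure DevOps pipeline Runs REST endpoint and hypothetical names for the helper and its arguments, might look like this:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class AdoPipelineClient
{
    // Queues a run of an ADO pipeline and returns the raw JSON response body.
    // In the POC, pat and pipelineUrl were read from the project's config file.
    public static async Task<string> QueuePipelineRunAsync(string pat, string pipelineUrl)
    {
        using var client = new HttpClient();

        // PAT-based Basic auth: the username is left empty and the PAT is the password.
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}"));
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", token);

        // Pipeline runs must be initiated with the POST verb; an empty JSON
        // body queues the run with the pipeline's default branch and variables.
        var body = new StringContent("{}", Encoding.UTF8, "application/json");
        var response = await client.PostAsync(pipelineUrl, body);
        response.EnsureSuccessStatusCode();

        return await response.Content.ReadAsStringAsync();
    }
}
```

Here pipelineUrl would take the form https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=7.1.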

Figure 3 provides a code snippet demonstrating how the HTTP response body is parsed to provide the initial status of the pipeline to the user:

 

Figure 3 - HTTP response parsing

 

The HTTP response is parsed into a JSON object, which allows for easy indexing by element name. Line 66 retrieves the initial pipeline state for display, and line 68 retrieves the URL of the run's status page in ADO for the same purpose.
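A sketch of the same parsing, using System.Text.Json (the POC may have used a different JSON library) and the state and _links.web.href elements that the Runs API returns:

```csharp
using System.Text.Json;

public static class RunResponseParser
{
    // Extracts the initial pipeline state and the run's web URL from the
    // JSON body returned when the run is queued.
    public static (string State, string WebUrl) Parse(string json)
    {
        using var doc = JsonDocument.Parse(json);
        var root = doc.RootElement;

        // "state" is typically "inProgress" immediately after queuing.
        var state = root.GetProperty("state").GetString();

        // "_links.web.href" points at the run's page in Azure DevOps,
        // which the POC displayed to the user as a clickable link.
        var webUrl = root.GetProperty("_links")
                         .GetProperty("web")
                         .GetProperty("href")
                         .GetString();

        return (state, webUrl);
    }
}
```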

 

Database Backend – The partner specifically requested a data-driven approach for their new release system. They wanted to easily share releases and deployments between customers while keeping each customer's deployments unique and customized. They viewed a relational backend as the best fit for these requirements.

The general approach for the backend was to store a generic BICEP template for each type of Azure resource. Customization is accomplished with placeholder tags in each template, which are replaced at runtime with the values specific to each customer deployment. While parameter files are a possible alternative, challenges were encountered with that approach during the POC, which is what drove the architecture described here.
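For illustration, a hypothetical generic template for a storage account might look like the following; the double-underscore delimiter is an assumption, since the post does not show the actual placeholder format:

```bicep
// Hypothetical generic template stored as a row in the database. The
// double-underscore tokens are placeholder tags that the build pipeline
// replaces at runtime with customer-specific values.
resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: '__storageAccountName__'
  location: '__location__'
  sku: {
    name: '__skuName__'
  }
  kind: 'StorageV2'
  tags: {
    __tagName__: '__tagValue__'
  }
}
```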

In addition to customer-specific parameters, the partner wanted the ability to apply one or more metadata tags to each deployed artifact to assist with post-production billing and reporting. So in addition to a table relating parameters to artifacts, an additional table is needed to relate tags to artifacts.

The following figure displays the relational data model employed for the POC:

 

Figure 4 - Database Schema

 

As shown in Figure 4, the Customers and CustomerModels tables represent the relationship where customers can have one to many models that they may want to deploy, with each model identified by both a name and a unique number.

Each model would contain one or more Azure artifacts to be deployed. This relationship is represented by the CustomerModelComponents table.

For each component to be deployed there would be an entry in the OptraMetaModel table. Each row in this table would contain the generic BICEP script needed to generate a specific Azure artifact, along with any desired tags to be applied to the Azure artifact during deployment. The Azure artifacts are identified by the ResourceType column, which is a foreign key from the AzureResources table.

And lastly, the CustomerModelComponentParms table provides the values to be used for each parameter found within the scripts stored in the OptraMetaModel table, allowing models to be reused across customers and deployments, each with a unique set of parameters.

It should be noted that this database schema would need to be expanded for production scenarios. For instance, the current schema only allows a single tag to be applied to each deployed artifact; this was done for expediency. For production, an additional table would be required to allow multiple tags per artifact.
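To make the relationships concrete, here is a rough DDL sketch of the POC schema as described above; column names and types beyond those mentioned in the prose and Figure 4 are assumptions:

```sql
CREATE TABLE Customers (
    CustomerId   INT IDENTITY PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL
);

CREATE TABLE CustomerModels (
    ModelId     INT IDENTITY PRIMARY KEY,
    CustomerId  INT NOT NULL REFERENCES Customers(CustomerId),
    ModelName   NVARCHAR(100) NOT NULL,
    ModelNumber INT NOT NULL
);

CREATE TABLE AzureResources (
    ResourceType NVARCHAR(100) PRIMARY KEY
);

CREATE TABLE OptraMetaModel (
    MetaModelId  INT IDENTITY PRIMARY KEY,
    ResourceType NVARCHAR(100) NOT NULL REFERENCES AzureResources(ResourceType),
    BicepScript  NVARCHAR(MAX) NOT NULL,  -- generic template with placeholder tags
    TagName      NVARCHAR(100) NULL,      -- single tag per artifact in the POC
    TagValue     NVARCHAR(100) NULL
);

CREATE TABLE CustomerModelComponents (
    ComponentId INT IDENTITY PRIMARY KEY,
    ModelId     INT NOT NULL REFERENCES CustomerModels(ModelId),
    MetaModelId INT NOT NULL REFERENCES OptraMetaModel(MetaModelId)
);

CREATE TABLE CustomerModelComponentParms (
    ComponentId INT NOT NULL REFERENCES CustomerModelComponents(ComponentId),
    ParmName    NVARCHAR(100) NOT NULL,
    ParmValue   NVARCHAR(200) NOT NULL,
    PRIMARY KEY (ComponentId, ParmName)
);
```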

Build & Release Pipelines – As mentioned previously, the release process is implemented by both a build pipeline and a release pipeline within Azure DevOps. We'll examine both closely in the following paragraphs.

The build pipeline performs two main functions. The first is retrieving all necessary BICEP templates based on parameters from the user request. The second is compiling those templates into a deployable template for use by the release pipeline. The overall flow of this process is shown in Figure 5.

 

Figure 5 - ADO build and release process

 

The following figure displays a snippet of YAML from the build pipeline:

 

Figure 6 - build pipeline YAML

 

As you can see, the majority of processing within the build pipeline is accomplished via inline PowerShell script. Within this snippet, lines 28 through 35 show the SQL SELECT command used to retrieve the BICEP script indicated by the customer number passed to the pipeline. In a real-world scenario, the user would have selected a customer name and model number to deploy from an on-screen selection within the web app. To operate within the time constraints of the POC, a single customer and a single model were created within the database, and a hard-coded customer number was passed to the pipeline from the REST API call.
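A sketch of this pattern, with assumed task structure, table names, and variable names, might look like this:

```yaml
steps:
- task: PowerShell@2
  displayName: 'Retrieve BICEP template from database'
  inputs:
    targetType: inline
    script: |
      # $(customerNumber) arrives as a pipeline variable from the REST API call;
      # the connection string would come from a secret pipeline variable.
      $query = @"
      SELECT mm.BicepScript
      FROM   OptraMetaModel mm
      JOIN   CustomerModelComponents cmc ON cmc.MetaModelId = mm.MetaModelId
      JOIN   CustomerModels cm ON cm.ModelId = cmc.ModelId
      WHERE  cm.CustomerId = $(customerNumber)
      "@
      $row = Invoke-Sqlcmd -ConnectionString "$(sqlConnectionString)" -Query $query
      # Persist the raw template for the later token-replacement step.
      Set-Content -Path "$(Build.SourcesDirectory)\template.bicep" -Value $row.BicepScript
```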

 

Figure 7 – build pipeline YAML

 

In a similar fashion, the SQL on lines 43 through 50 in Figure 7 retrieves the parameters associated with this component for the desired customer model.
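The query shape described would be roughly as follows; the joins and names follow the schema sketch above and are partly assumptions:

```sql
SELECT p.ParmName, p.ParmValue
FROM   CustomerModelComponentParms p
JOIN   CustomerModelComponents c ON c.ComponentId = p.ComponentId
JOIN   CustomerModels m ON m.ModelId = c.ModelId
WHERE  m.CustomerId = $(customerNumber);
```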

 

Figure 8 - build pipeline YAML

 

As the last part of this pipeline's first task, Figure 8 depicts how the PowerShell script parses each line of the BICEP script. On line 67 it attempts to match a parameter name within the script to a parameter name pulled from the CustomerModelComponentParms table. If a match is found, then on line 68 the placeholder tag is replaced with the parameter value from the table.
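The replacement loop, as it might appear in the inline PowerShell (variable names are assumptions; $parms is taken to hold the ParmName/ParmValue rows from the previous query):

```powershell
# Walk the template line by line and swap each placeholder tag
# for its value from CustomerModelComponentParms.
$lines = Get-Content "$(Build.SourcesDirectory)\template.bicep"
$resolved = foreach ($line in $lines) {
    foreach ($parm in $parms) {
        # If the placeholder tag appears on this line, swap in the value.
        if ($line -match [regex]::Escape($parm.ParmName)) {
            $line = $line -replace [regex]::Escape($parm.ParmName), $parm.ParmValue
        }
    }
    $line
}
Set-Content -Path "$(Build.SourcesDirectory)\template.bicep" -Value $resolved
```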

 

Figure 9 - build pipeline YAML

 

As depicted in Figure 9, the next task in the pipeline invokes an Azure CLI call to build the BICEP script produced in the first task. Line 91 invokes the CLI command; note the locations of both the input and output files.
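A sketch of this step; the paths and service connection name are assumptions:

```yaml
- task: AzureCLI@2
  displayName: 'Compile BICEP template'
  inputs:
    azureSubscription: 'poc-service-connection'
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |
      # Compile the customized BICEP file into a deployable ARM template.
      az bicep build --file "$(Build.SourcesDirectory)/template.bicep" `
        --outfile "$(Build.ArtifactStagingDirectory)/template.json"
```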

 

Figure 10 - build pipeline YAML

 

The last task in the build pipeline, depicted in Figure 10, takes the compiled BICEP file from the prior task and publishes it for consumption by the release pipeline.
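Assuming pipeline artifacts are used (the artifact name here is hypothetical), this step might look like:

```yaml
- task: PublishPipelineArtifact@1
  displayName: 'Publish compiled template'
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)/template.json'
    artifact: 'deploymentTemplate'
```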

 

Figure 11 - release pipeline YAML

 

The release pipeline is quite simple compared to the build pipeline, consisting of only a single step and task.

Figure 11 depicts the YAML from the release pipeline, where the compiled BICEP script from the build pipeline is picked up by the ARM template deployment task. Note that the Azure resource group and location are hard coded for the POC; these values would need to be parameterized for a production environment.
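A sketch of that single task, assuming the release pipeline has already downloaded the published artifact; the hard-coded resource group, location, and service connection values stand in for the POC's actual values:

```yaml
- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'Deploy compiled template'
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'poc-service-connection'
    subscriptionId: '$(subscriptionId)'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'poc-rg'          # hard coded for the POC
    location: 'eastus'                   # hard coded for the POC
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/deploymentTemplate/template.json'
    deploymentMode: 'Incremental'
```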

 

Conclusion – The POC was able to demonstrate how a deployment of Azure resources could be data-driven, customizable, and repeatable. It delivered a set of Azure resources to a given resource group after being initiated by user action from a web page.

Updated Feb 26, 2025
Version 2.0