In my previous article, I demonstrated how to take advantage of CI/CD to automate the delivery of your Synapse workspace artifacts across multiple environments. In this new tutorial, you will learn how to complement that automation by adding the delivery of your Synapse SQL pool objects across multiple environments.
In this article, I describe a scenario where I use Visual Studio and Azure DevOps to automate the delivery of database changes from a lower environment to a target environment.
This article is divided into three stages:
First stage - The Visual Studio side of things -
This is the stage where you will set up your development environment using Visual Studio and prepare the foundations for the CI/CD processes.
Open Visual Studio and click "Create a new Project" when prompted to Get started.
When prompted to "Create a new project" search for the keyword "database" and select the "SQL Server Database Project" from your search results.
Configure your new project and select "Create" when finished.
After creating your database project, the first thing you need to do is integrate your Visual Studio solution with your Git repository. In this case, select the "Git" menu and then the "Create Git Repository..." menu option (you can integrate with an existing repository if you already have one). Here I'm integrating with a new Azure DevOps Git repo.
Select "Create and Push" to start the synchronization between your Visual Studio solution and your Git repository.
You can open a new browser tab and explore your Git repository. Check the VS solution and the project files hosted in your master branch.
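As a side note, the same integration can be sketched from the command line with plain git. Everything below uses placeholder values (the remote URL, identity, and commit message are assumptions), so adjust them to your own setup:

```shell
# Sketch of the equivalent git CLI steps (run from your solution folder).
git init
git config user.name "Your Name"          # placeholder identity
git config user.email "you@example.com"   # placeholder identity
git add --all
# --allow-empty keeps this sketch runnable even in an empty folder:
git commit --allow-empty -m "Initial import of the SQL Server Database Project"
# Placeholder remote; replace with your own Azure DevOps Git repo URL:
git remote add origin https://dev.azure.com/yourorg/yourproject/_git/yourrepo
# git push -u origin master   # uncomment once the remote points at a real repo
```

The push itself is left commented out because it only succeeds once the remote URL points at a repository you can write to.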
Once you get your VS solution integrated with your Git repo, you will start importing your source database. From the Solution Explorer, right click on your project name and select Import --> Database…
Hit the "Select Connection..." button to connect to your source database. You can check/uncheck the import settings options according to your requirements.
Note: you can uncheck the "Import application-scoped objects only" option to make sure you import server-level objects as well.
Select "Start" to start importing the database objects into your VS solution. Once the import is finished, you can go to the "Git Changes" blade in Visual Studio and see all imported objects listed as pending changes waiting to be pushed to your Git repository.
To push these pending changes to your Git repository, add a brief description for this commit, click the down arrow next to the "Commit All" button, and select the "Commit All and Push" option.
Once the Commit All and Push is completed, go to your Git repo and check the new folder structure containing the SQL script files for all your database objects. Once you have confirmed this folder structure, you can move forward to the next stage.
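Incidentally, the import that Visual Studio performs has a rough command-line counterpart in SqlPackage's Extract action, which pulls a .dacpac straight from a live database. Since that requires a real connection, the sketch below only assembles and prints the command, using placeholder server, database, and credential values:

```shell
# Placeholder connection details; replace with your source SQL pool.
SOURCE_SERVER="yourserver.sql.azuresynapse.net"
SOURCE_DB="YourSourceSqlPool"

# SqlPackage's Extract action creates a .dacpac from a live database.
EXTRACT_CMD="SqlPackage /Action:Extract \
  /SourceServerName:${SOURCE_SERVER} \
  /SourceDatabaseName:${SOURCE_DB} \
  /SourceUser:myuser /SourcePassword:MyP@ssw0rd \
  /TargetFile:${SOURCE_DB}.dacpac"

echo "${EXTRACT_CMD}"
```

This route skips the Visual Studio project structure entirely, so the VS import remains the better fit for the source-controlled workflow in this article.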
Second stage - The Continuous Integration side of things -
Let's start this second stage, where you will use Azure DevOps Pipelines to build the Continuous Integration process.
From your Azure DevOps project menu, select the "Pipelines" menu option to create a new CI Pipeline.
Create a new Pipeline and select the "Use the classic editor" option to configure your own Pipeline.
Select the source that hosts your Git repo; in my case, I'm selecting "Azure Repos Git".
After selecting "Continue", select the ".NET Desktop" template and hit the "Apply" button.
Name your pipeline (In my case: "Dacpac Automation - CI") and make sure you remove all template tasks except these below:
Select "Save & Queue" and when prompted to run the pipeline, select "Save and Run".
Wait for the pipeline run to finish, then click on your agent job name (in my case, "Agent job 1") under the "Jobs" blade to get more details about the pipeline execution.
From the left menu, you can see the pipeline tasks; click on the "Build solution" task to get more details about this build. The highlighted code represents the .dacpac file that was generated.
Once you confirm that the .dacpac file has been generated, you can move forward to the third and final stage: Continuous Delivery.
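For context, the "Build solution" task is essentially running MSBuild against your solution, and compiling a database project is what produces the .dacpac. The project name below is a placeholder, and the sketch only prints the command it would run:

```shell
# Placeholder project name; replace with your own .sqlproj (or .sln).
BUILD_CMD="msbuild DacpacAutomation.sqlproj /p:Configuration=Release /t:Build"

# A successful build drops the package under the bin/Release output folder.
echo "${BUILD_CMD}"
```

Knowing this makes it easier to reproduce (and debug) the CI build locally before queueing the pipeline.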
Third and final stage - The Continuous Delivery side of things -
Once you have completed all the steps above to generate your dacpac file, you will use the Azure DevOps Release Pipeline to set up the Continuous Delivery process and publish the generated Dacpac file to your target SQL pool.
From your Azure DevOps project menu, select "Pipelines" and then "Releases" to create a new release pipeline.
When prompted to select a template, select "Empty job".
Name your release pipeline (in this case: Dacpac Automation CD) and then select "+ Add an artifact". This will define a source location to be used by your release tasks.
When configuring your artifact, select "Build" as the source type. Then select the build pipeline that was created in the previous stage (in this case, Dacpac Automation - CI). Hit the "Add" button to create your artifact.
Configure your release stage by selecting the "1 job, 0 task" link under your stage name.
From your Agent Job task, click on the plus (+) sign to add a new task to your Release Pipeline.
Search for "Data Warehouse" and select "Add" to add the Azure SQL Data Warehouse deployment task to your release.
Before configuring this task, you can create some variables to avoid exposing some task parameter values. Select the "Variables" tab and add the following variables to your release pipeline:
Note: you can secure any password in DevOps by using the lock icon when adding the variable. This will change the variable type to secret.
You can use these variables when configuring your task; just type $(variable_name). Choose "SQL DACPAC file" as your deploy type and browse to your .dacpac file in your artifact location.
When browsing your dacpac file you should be looking into a path similar to this one below:
Save your release and then select the "Create Release" option. Hit the "Create" button to trigger a new release.
You can check the release progress by selecting the "Logs" option.
Once the release is successfully finished, you can log into your target database to confirm the objects that have been deployed.
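Behind the scenes, the deployment task performs the equivalent of SqlPackage's Publish action, applying the .dacpac schema to the target database. As before, this sketch uses placeholder values and only prints the command rather than running it:

```shell
# Placeholder values; replace with your target SQL pool details.
TARGET_SERVER="yourserver.sql.azuresynapse.net"
TARGET_DB="YourTargetSqlPool"
DACPAC_FILE="DacpacAutomation.dacpac"

# SqlPackage's Publish action deploys the .dacpac to the target database.
PUBLISH_CMD="SqlPackage /Action:Publish \
  /SourceFile:${DACPAC_FILE} \
  /TargetServerName:${TARGET_SERVER} \
  /TargetDatabaseName:${TARGET_DB} \
  /TargetUser:myuser /TargetPassword:MyP@ssw0rd"

echo "${PUBLISH_CMD}"
```

Running the equivalent command by hand can be a useful troubleshooting step when a release fails, since it surfaces SqlPackage's error output directly.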
Conclusion
When using Azure Synapse Analytics across multiple environments, you can take advantage of Azure DevOps capabilities to automate the integration and delivery of your work, either resulting from Synapse Studio (Workspace artifacts) or from your SQL pools (database objects). With regard to the latter, you learned how to use Visual Studio capabilities to improve your Continuous Integration process by integrating all the database changes in Azure DevOps. Finally, and adding to this, you also learned how to improve your Continuous Delivery process by using Release Pipelines to deliver these changes to a target environment.