Azure Synapse Analytics Blog

Deploying Synapse SQL Serverless objects across environments using SSDT

RuiCunha
Mar 03, 2023

At Microsoft, we love to hear your voice! The Synapse community asked for SqlPackage’s long-awaited support for Azure Synapse Analytics Serverless SQL Pools, and we delivered! Now you can automate the deployment of your Synapse serverless SQL objects along with your Synapse dedicated SQL objects!

 

SqlPackage release 161.8089.0, dated February 13, 2023, brings a new feature: support for Synapse serverless SQL pools. According to the release notes, you can use SqlPackage to extract and publish both external and internal objects from serverless SQL pools. Note that the T-SQL surface area of serverless SQL pools is limited by design to external tables, external views, stored procedures, functions, statistics, and credentials.

 

Support for serverless SQL pools includes the following features:

  • Delta external file format
  • OPENROWSET provider CosmosDB with PROVIDER, CONNECTION, OBJECT, and CREDENTIAL or SERVER_CREDENTIAL specified
  • OPENROWSET format SStream
  • WITH () clause on OPENROWSET
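To illustrate the last three items, a serverless query can now combine the CosmosDB OPENROWSET provider with an explicit WITH () schema clause. A minimal sketch follows; the account, database, container, column, and credential names are placeholders, not values from this article:

```shell
# A hypothetical serverless query exercising the CosmosDB OPENROWSET
# provider together with the WITH () schema clause.
# Account, database, container, and credential names are placeholders.
query="
SELECT TOP 10 *
FROM OPENROWSET(
    PROVIDER = 'CosmosDB',
    CONNECTION = 'Account=myaccount;Database=mydb',
    OBJECT = 'orders',
    SERVER_CREDENTIAL = 'my_cosmos_cred'
) WITH (id VARCHAR(100), total FLOAT) AS rows;"

# You would submit this to the serverless SQL endpoint (for example with
# sqlcmd or Synapse Studio); here we only print it.
echo "$query"
```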

 

Quick demonstration

In this article, I will demonstrate how you can use the Azure SQL Data Warehouse deployment task in a DevOps pipeline to replicate your SQL serverless objects from a lower environment (DEV) to a target environment (UAT). If you don't have this task installed in your DevOps organization, you can check out Azure SQL Data Warehouse deployment for more information.

 

Step 1: Configuring the Release Pipeline in Azure DevOps

After accessing your Azure DevOps project, from the left navigation bar, select "Pipelines" and then "Releases".

Select "New" and then "+ New release pipeline" to start configuring your new release pipeline.

When prompted for a template, select "Empty job".

I'm renaming the stage to "SQL OD Deployment".

Step 2: Configuring the Azure SQL Data Warehouse deployment tasks

After renaming the stage, select the task link to start configuring the tasks.

You need to add two tasks to your stage:

  1. The first runs the SqlPackage utility with the EXTRACT action to generate a DACPAC file from the source database.
  2. The second runs the SqlPackage utility with the PUBLISH action to deploy the DACPAC to the target database.
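Under the hood, these two tasks boil down to two SqlPackage invocations. As a rough sketch, the equivalent command lines look like this; the server names, database names, credentials, and output path are placeholders, not values from this article:

```shell
# Sketch of the SqlPackage command lines the two tasks run under the hood.
# Server names, database names, credentials, and paths are placeholders.
DACPAC_PATH="./serverless.dacpac"

# Task 1: EXTRACT a DACPAC from the source (DEV) serverless database.
extract_cmd="SqlPackage /Action:Extract \
/SourceServerName:devworkspace-ondemand.sql.azuresynapse.net \
/SourceDatabaseName:mydb /SourceUser:sqladmin /SourcePassword:*** \
/TargetFile:${DACPAC_PATH}"

# Task 2: PUBLISH the DACPAC to the target (UAT) serverless database.
publish_cmd="SqlPackage /Action:Publish \
/SourceFile:${DACPAC_PATH} \
/TargetServerName:uatworkspace-ondemand.sql.azuresynapse.net \
/TargetDatabaseName:mydb /TargetUser:sqladmin /TargetPassword:***"

# The pipeline tasks assemble and run these for you; here we only print them.
echo "$extract_cmd"
echo "$publish_cmd"
```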

 

Let's start by adding the first (EXTRACT) task:

 

Hit the plus "+" sign to add a new task to the stage.

 

Search for "sql data warehouse" to select the "Azure SQL Data Warehouse deployment" task.

Note: If you haven't installed the task yet, it will show under the "Marketplace" section. You just need to hit the "Get it free" button to install this extension in your Azure DevOps organization.

After adding this task to your stage, you need to populate the required connection settings to complete the task configuration.

 

Notice that I'm not exposing the user credentials directly on the task. Instead, I'm using two pipeline variables, which can be defined by selecting the "Variables" tab. You can reference these variables in your task by using $(variable_name).

 

Also notice that the "pass" variable type was changed to secret instead of plain text. You can switch the variable type by clicking on the padlock button.

Back to the EXTRACT task, after configuring the items above, you must configure the task's Deployment Package section. Select the "SQL DACPAC file" deploy type and the "Extract" action. Under "Output Variables", don't forget to add a reference name to the generated output file. This variable will be used by the next task, as it will require the DACPAC file location.

Now that you have completed this task configuration, you must add a second task, which will be responsible for publishing this DACPAC to your target Serverless instance.

 

When configuring this task, you must provide the target server name (the serverless SQL endpoint) and the target database name (if the target database does not exist on your server, it will be created automatically).

 

Select the "SQL DACPAC file" deploy type and the "Publish" action. You will use the "dacpac" reference name (output variable) from the first task, as it contains the file path to the generated DACPAC.

After completing the tasks configuration, you can "Save" the release pipeline and hit the "Create release" button.

Select the "Create" button to start your deployment. To track the release progress, click on the release name and then mouse over the stage to select the "Logs" button.

When viewing the "EXTRACT the DACPAC" task log, you can see the SqlPackage command-line utility being executed with the "Extract" action, and your output variable (dacpac) pointing to the DACPAC file location.

When viewing the "PUBLISH the DACPAC" task log, you can see the SqlPackage command-line utility being executed with the "Publish" action and the DACPAC file path referenced by the output variable "dacpac".

And that's it! This is how you can easily deploy your Synapse SQL Serverless objects across different environments/workspaces.

 

Conclusion

You can now take advantage of these new SSDT features to improve the CI/CD lifecycle for your Synapse SQL pools by automating the deployment of Synapse serverless SQL pool objects.

 

In this article, we have covered a simple approach using Azure DevOps release pipelines to replicate your SQL serverless pool objects from a lower environment (DEV) to a target environment (UAT). Keep in mind that, as of the time of writing, the T-SQL surface area of serverless SQL pools is limited by design to external tables, external views, stored procedures, functions, statistics, and credentials.

Updated Mar 01, 2023
Version 1.0

13 Comments

  • rocketporg
    Brass Contributor

    This is kind of useful, but unfortunately only in a limited way... if you literally want to just replicate your serverless objects then this is great. However, for me serverless is cool but out of consideration for real production work and only useful as a kind of adhoc data query tool mainly in dev. I really don't want to script out all the objects and faff around with deploying them to other environments like others have done...

     

I agree with the questions posed by jacovg91; there are still 2 issues that hold serverless back from being 'perfect' for me:

     

    1) Still can't use SSDT, Visual Studio 2022 or other tools to keep this under source control, e.g. in a database project

2) Only works if your development external data sources point to the same place as your production (or other environment) external data sources do. If your organisation is like ours then this is highly unlikely to be the same place, for example we have a dev datalake and a prod datalake. So in this case you'd have to run some sort of dynamic SQL after deployment to drop/create the external data sources so they're pointing to the correct places, but this defeats the point of replicating them from dev in the first place. Note that this is a challenge even in dedicated SQL pool, but there we use some dynamic SQL with 'COPY INTO' to get around this problem instead of using external tables, but that's not possible in serverless.

     

If these could be sorted out, we'd use serverless in production, which would be great... So near, but so far, sorry...

  • swake
    Copper Contributor

    When can we expect that this tooling will be available in VS2022? Would love to create new database projects in VS2022 for Synapse SQL Serverless and be able to Publish/Sync as I do with all of my other SQL Server DB projects today. 

  • jacovg91
    Copper Contributor

    Hi RuiCunha

     

    Thanks for this post!

    I've got 2 questions:

    • How would you go about overwriting things in, for example, external data sources? The external data sources differ between environments. For example, the one on local points to a different ADLS than the one on UAT.
    • I know SSDT support (using a database project) is on the roadmap for Serverless SQL pools. Can you shine some light on how long it'll take before this is released? Extracting an already deployed database in my opinion defeats the purpose of having a nice CI/CD setup with triggers and having all your databases' content in Git.

    Thank you!