<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Azure Synapse Analytics Blog articles</title>
    <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/bg-p/AzureSynapseAnalyticsBlog</link>
    <description>Azure Synapse Analytics Blog articles</description>
    <pubDate>Sat, 18 Apr 2026 09:03:27 GMT</pubDate>
    <dc:creator>AzureSynapseAnalyticsBlog</dc:creator>
    <dc:date>2026-04-18T09:03:27Z</dc:date>
    <item>
      <title>Preview: Azure Synapse Runtime for Apache Spark 3.5</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/preview-azure-synapse-runtime-for-apache-spark-3-5/ba-p/4418118</link>
      <description>&lt;P&gt;We’re thrilled to announce that Azure Synapse Runtime for Apache Spark 3.5 is now available in preview for our Azure Synapse Spark customers, while they prepare to migrate to Microsoft Fabric Spark.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What Does This Mean for You?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;You can now create Apache Spark pools that use the Azure Synapse Runtime for Apache Spark 3.5.&amp;nbsp;The essential changes include the features that come from upgrading to Apache Spark 3.5 and Delta Lake 3.2.&amp;nbsp;Please review the official&amp;nbsp;&lt;U&gt;&lt;A href="https://github.com/microsoft/synapse-spark-runtime/tree/main/Synapse/spark3.5" target="_blank" rel="noopener"&gt;release notes for Apache Spark 3.5&lt;/A&gt;&lt;/U&gt; for the complete list of fixes and features. In addition, review the&amp;nbsp;&lt;A href="https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-34-to-35" target="_blank" rel="noopener"&gt;migration guidelines between Spark 3.4 and 3.5&lt;/A&gt;&amp;nbsp;to assess potential changes to your applications, jobs and notebooks.&lt;/P&gt;
&lt;P&gt;For additional details, check the&amp;nbsp;&lt;U&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-35-runtime" target="_blank" rel="noopener"&gt;Azure Synapse Runtime for Apache Spark 3.5&lt;/A&gt;&lt;/U&gt; documentation.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What is next?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We offer Azure Synapse Runtime for Apache Spark 3.5 to our Azure Synapse Spark customers. However, we strongly recommend that customers plan to migrate to Microsoft Fabric Spark to benefit from the latest innovations and optimizations exclusive to Microsoft Fabric Spark. For example, the &lt;STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/native-execution-engine-overview?tabs=sparksql" target="_blank" rel="noopener"&gt;Native Execution Engine (NEE)&lt;/A&gt;&lt;/STRONG&gt; significantly enhances query performance at no additional cost. &lt;STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/spark-compute" target="_blank" rel="noopener"&gt;Starter pools&lt;/A&gt;&lt;/STRONG&gt; allow the creation of a Spark session within seconds, and unified security in the lakehouse enables the definition of RLS (Row-Level Security) and CLS (Column-Level Security) for objects in the lakehouse. Additionally, newly announced Materialized Views and many other features are available.&lt;/P&gt;</description>
      <pubDate>Mon, 02 Jun 2025 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/preview-azure-synapse-runtime-for-apache-spark-3-5/ba-p/4418118</guid>
      <dc:creator>ArshadAliTMMBA</dc:creator>
      <dc:date>2025-06-02T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Enhancing Team Collaboration in Azure Synapse Analytics using a Git Branching Strategy – Part 2 of 3</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/enhancing-team-collaboration-in-azure-synapse-analytics-using-a/ba-p/4414921</link>
      <description>&lt;H2 class="lia-align-justify"&gt;Introduction&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;In the &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/azuresynapseanalyticsblog/enhancing-team-collaboration-in-azure-synapse-analytics-using-a-git-branching-st/4405882" target="_blank" rel="noopener" data-lia-auto-title="first part of this blog series" data-lia-auto-title-active="0"&gt;first part of this blog series&lt;/A&gt;, we introduced a Git branching strategy designed to enhance collaboration within Azure Synapse Studio. By enabling multiple teams to work in parallel within a shared Synapse workspace, this approach can accelerate not only the development cycle of Synapse code but also the entire Synapse CI/CD flow.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In this second part of this blog series, we take a practical step forward by demonstrating how to implement a CI/CD flow that supports this Git branching strategy. This flow will help streamline the Synapse code development cycle for our Data Engineering and Data Science teams, accelerating code releases across different environments without interfering with their respective work.&lt;BR /&gt;Although this article series demonstrates a scenario where different teams working on separate projects share the same Synapse development workspace, you can adapt this CI/CD flow to fit your own Git branching strategy.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Whether you're managing a single team or coordinating across multiple projects, this guide will help you build a scalable and efficient deployment workflow tailored for Azure Synapse Analytics.&lt;/P&gt;
&lt;H2&gt;Prerequisites:&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;- An&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/devops/organizations/projects/create-project?view=azure-devops" target="_blank" rel="noopener"&gt;Azure DevOps project&lt;/A&gt;.&lt;BR /&gt;- An ability to run pipelines on Microsoft-hosted agents. You can either purchase a &lt;A href="https://learn.microsoft.com/en-us/azure/devops/pipelines/licensing/concurrent-jobs?view=azure-devops" target="_blank" rel="noopener"&gt;parallel job&lt;/A&gt; or you can request a free tier.&lt;BR /&gt;- Basic knowledge of YAML and Azure Pipelines. For more information, see &lt;A href="https://learn.microsoft.com/en-us/azure/devops/pipelines/create-first-pipeline?view=azure-devops" target="_blank" rel="noopener"&gt;Create your first pipeline&lt;/A&gt;.&lt;BR /&gt;- Permissions:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; To add environments, the &lt;A href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/add-resource-protection?view=azure-devops#environments" target="_blank" rel="noopener"&gt;Creator role for environments&lt;/A&gt; in your project. By default, members of the B&lt;STRONG&gt;uild Administrators&lt;/STRONG&gt;, &lt;STRONG&gt;Release Administrators&lt;/STRONG&gt;, and &lt;STRONG&gt;Project Administrators&lt;/STRONG&gt; groups can also create environments.&lt;BR /&gt;- The appropriate assigned user roles to create, view, use, or manage a service connection. For more information, see &lt;A href="https://learn.microsoft.com/en-us/azure/devops/pipelines/policies/service-connection-permissions?view=azure-devops" target="_blank" rel="noopener"&gt;Service connection permissions&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;To learn more about setting up Azure DevOps Environments for pipelines and setting up Service Connections, please refer to these documents:&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops#prerequisites" target="_blank" rel="noopener"&gt;Create and target Azure DevOps environments for pipelines - Azure Pipelines | Microsoft Learn&lt;BR /&gt;&lt;BR /&gt;&lt;/A&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops#prerequisites" target="_blank" rel="noopener"&gt;Service connections - Azure Pipelines | Microsoft Learn&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H2&gt;Defining Azure DevOps Environments&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;An environment represents a logical target where your pipeline deploys software. Common environment names include Dev, Test, QA, Staging, and Production. You can learn more about environments &lt;A href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Since our Git branching strategy is based on environment-specific branches, we’ll leverage this Azure DevOps environments feature to monitor and track Synapse code deployments by environment/team. From a security perspective, this also ensures that pipeline execution can be authorized and approved by specific users per environment.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;⚠️ Note: Azure DevOps environments are not available in Classic pipelines. For Classic pipelines, &lt;A href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&amp;amp;tabs=classic" target="_blank" rel="noopener"&gt;Release Stages&lt;/A&gt; offer similar functionality.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Let’s begin by creating the necessary Azure DevOps environments for our Synapse CI/CD flow.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In this first step, we’ll create four environments:&amp;nbsp;&lt;STRONG&gt;DEV&lt;/STRONG&gt;,&amp;nbsp;&lt;STRONG&gt;UAT&lt;/STRONG&gt;,&amp;nbsp;&lt;STRONG&gt;PRD&lt;/STRONG&gt;, and&amp;nbsp;&lt;STRONG&gt;EMPTY&lt;/STRONG&gt;. Each environment will be associated with its corresponding environment branch. The purpose of the&amp;nbsp;&lt;STRONG&gt;EMPTY&lt;/STRONG&gt;&amp;nbsp;environment is to ensure that the deployment job only runs when the branch is recognized as valid (e.g.,&amp;nbsp;environments/&amp;lt;team&amp;gt;/dev,&amp;nbsp;environments/&amp;lt;team&amp;gt;/uat, or&amp;nbsp;environments/&amp;lt;team&amp;gt;/prd). Even if someone modifies the trigger or manually runs the pipeline from another branch, the job will be automatically skipped.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;To create these environments, follow these steps:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Sign in to your Azure DevOps organization at&amp;nbsp;https://dev.azure.com/{yourorganization}&amp;nbsp;and open your project.&lt;/LI&gt;
&lt;LI&gt;Go to&amp;nbsp;&lt;STRONG&gt;Pipelines &amp;gt; Environments &amp;gt; Create environment&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/OL&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;Figure 1: How to create your pipeline Environments&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Once you’ve created all four environments, your environment list should resemble the one shown in the figure below.&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 2: All environments for this tutorial created&lt;/PRE&gt;
&lt;P class="lia-align-justify"&gt;&lt;BR /&gt;To add an extra layer of security to each of these environments, we can configure an approval step and specify the user(s) authorized to approve pipeline execution in each environment.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;After selecting your environment, go to the Approvals and checks tab, then click the + icon to add a new check.&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 3: Adding approvers to your pipeline environments&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Select Approvals, and then select Next.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Add users or groups as your designated Approvers, and, if desired, provide instructions for the approvers. Specify if you want to permit or restrict approvers from approving their own runs, and specify your desired Timeout. If approvals aren't completed within the specified Timeout, the stage is marked as skipped.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 4: Adding an approver to your pipeline environment&lt;/PRE&gt;
&lt;H2&gt;Creating the Pipeline for Synapse Code Deployment&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;With the Azure DevOps environments defined, we can now create the pipeline that will drive the CI/CD flow.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;From the left Navigation menu, Go to "&lt;STRONG&gt;Pipelines&lt;/STRONG&gt;" and select "&lt;STRONG&gt;New pipeline&lt;/STRONG&gt;"&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; The following images correspond to the native Azure DevOps pipeline configuration experience.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Since we are using Azure DevOps, we will select the first option presented.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 5: Selecting your git provider&lt;BR /&gt;&lt;BR /&gt;&lt;/PRE&gt;
&lt;P&gt;Select your repository&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 6: Selecting your repository&lt;/PRE&gt;
&lt;P&gt;&lt;BR /&gt;and then select the “Starter pipeline” option&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 7: Configuring your pipeline&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Now it’s time to define the code that our pipeline will use to deploy Synapse code to the corresponding environments.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;Configuring your Pipeline&lt;/H2&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 8: Reviewing your YAML pipeline&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Replace the existing sample code with the code below.&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;trigger:
- environments/data_eng/dev
- environments/data_eng/uat
- environments/data_eng/prd
- environments/data_sci/dev
- environments/data_sci/uat
- environments/data_sci/prd

variables:
- name: workspaceEnv
  ${{ if endsWith(variables['Build.SourceBranch'], '/uat') }}:
    value: 'UAT'
  ${{ elseif endsWith(variables['Build.SourceBranch'], '/dev') }}:
    value: 'DEV'
  ${{ elseif endsWith(variables['Build.SourceBranch'], '/prd') }}:
    value: 'PRD'
  ${{ else }}:
    value: 'EMPTY'

jobs:
- deployment: deploy_workspace
  displayName: Deploying to ${{ variables.workspaceEnv }}
  environment: $(workspaceEnv)
  condition: and(succeeded(), not(eq(variables['workspaceEnv'], 'EMPTY')))
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - template: /adopipeline/deploy_template.yml
          parameters:
            serviceConnection: 'Service Connection name goes here'
            resourceGroup: 'Target workspace resource group name goes here'
            ${{ if endsWith(variables['Build.SourceBranch'], '/dev') }}:
              workspace: 'Development workspace name goes here'
            ${{ elseif endsWith(variables['Build.SourceBranch'], '/uat') }}:
              workspace: 'UAT workspace name goes here'
            ${{ elseif endsWith(variables['Build.SourceBranch'], '/prd') }}:
              workspace: 'Production workspace name goes here'
            ${{ else }}:
              workspace: ''&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;⚠️Important notes:&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;In case you don’t have a service connection created yet, you can refer to this document &lt;A href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops" target="_blank" rel="noopener"&gt;ARM service connection&lt;/A&gt; and create one. &lt;BR /&gt;&lt;BR /&gt;Because &lt;STRONG&gt;the Synapse Workspace Deployment task does not support the “Workload Identity Federation” credential type&lt;/STRONG&gt;, you must select the “&lt;STRONG&gt;Secret&lt;/STRONG&gt;” credential type.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE class="lia-align-center"&gt;Figure 9: Setting the credential type for your Azure Service Connection&lt;BR /&gt;&lt;BR /&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In the YAML pipeline provided above, you should replace the highlighted placeholder, with your service connection name.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 10: Configuring the serviceConnection parameter&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The service connection is the resource used to provide the credentials on the task execution allowing it to connect to the workspace for deployment. In our example, the same service connection is allowing access to all of our workspaces. You may need to provide a different service connection depending on the workspace and the pipeline will need to be adjusted for this use case. Same logic should apply to the resourceGroup parameter. If your workspaces belong to different resource groups, you can adapt the if condition in the parameters section, including the resource group parameter on each if clause to assign a different value to the resourceGroup parameter depending on the environment branch that is triggering the YAML pipeline.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Creating a service connection in Azure DevOps, using automatic App registration, will trigger the provisioning of a new service principal in your Microsoft Entra ID.&lt;BR /&gt;Before starting the CI/CD flow to promote Synapse code across different workspaces, this service principal must be granted the appropriate Synapse RBAC role — either&amp;nbsp;&lt;STRONG&gt;Synapse Administrator&lt;/STRONG&gt;&amp;nbsp;or&amp;nbsp;&lt;STRONG&gt;Synapse Artifact Publisher&lt;/STRONG&gt;, depending on whether your Synapse deployment task is configured to deploy&amp;nbsp;&lt;STRONG&gt;Managed Private Endpoints.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;How can you identify the service principal associated with the service connection?&lt;/STRONG&gt;&lt;BR /&gt;In your DevOps project settings, go to&amp;nbsp;&lt;STRONG&gt;Service Connections&lt;/STRONG&gt;&amp;nbsp;and select your service connection. On the&amp;nbsp;&lt;STRONG&gt;Overview&lt;/STRONG&gt;&amp;nbsp;tab, click the&amp;nbsp;&lt;STRONG&gt;"Manage App registration"&lt;/STRONG&gt;&amp;nbsp;link. This will take you to the Azure Portal, specifically to Microsoft Entra ID, where you can copy details such as the display name of the service principal.&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 11: Service connection details - selecting the Manage App registration &lt;/PRE&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Then, in the destination Synapse Studio environment, you can assign the appropriate Synapse RBAC role to this service principal.&lt;BR /&gt;If you skip this step, the Synapse code deployment will fail with an authorization error (&lt;STRONG&gt;HTTP 403 – Forbidden&lt;/STRONG&gt;).&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-clear-both lia-align-center"&gt;Figure 12: Granting Synapse RBAC to the SPN associated to your DevOps service connection&lt;/PRE&gt;
&lt;P class="lia-clear-both lia-align-center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-clear-both lia-align-justify"&gt;&lt;BR /&gt;Once you're done, don’t forget to rename your pipeline and save it in your preferred branch location.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In this example, I’m saving the pipeline.yaml file inside the “adopipeline” folder.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;After renaming the file, &lt;STRONG&gt;save your pipeline — but do not run it yet&lt;/STRONG&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 13: Saving your YAML pipeline&lt;/PRE&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-left"&gt;Configuring the Synapse Deployment Task&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;You may have noticed that this pipeline uses another file as a template, named deploy_template.yml. Templates allow us to create steps, jobs, stages and other resources that we can re-use across multiple pipelines for easier management of shared pipeline components.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let’s go ahead and create that file.&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 14: Saving your template files in your branch&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We’ll start by adding the following content to our new file:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;parameters:
- name: workspace
  type: string
- name: resourceGroup
  type: string
- name: serviceConnection
  type: string

steps:
- task: AzureSynapseWorkspace.synapsecicd-deploy.synapse-deploy.Synapse workspace deployment@2
  displayName: 'Synapse deployment task for workspace: ${{ parameters.workspace }}'
  inputs:
    operation: validateDeploy
    ArtifactsFolder: '$(System.DefaultWorkingDirectory)/workspace'
    azureSubscription: '${{ parameters.serviceConnection }}'
    ResourceGroupName: '${{ parameters.resourceGroup }}'
    TargetWorkspaceName: '${{ parameters.workspace }}'
  condition: and(succeeded(), not(eq(length('${{ parameters.workspace }}'), 0)))&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;This template is responsible for adding the Synapse Workspace Deployment Task, which handles deploying Synapse code to the target environment.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;We configure this task using the “Validate and Deploy” operation — a key enabler of our Git branching strategy. It allows Synapse code to be deployed from any user branch, not just the publish branch.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Previously, Synapse users could only deploy code that existed in the publish branch. This meant they had to manually publish their changes in Synapse Studio to ensure those changes were reflected in the ARM templates generated in that branch. With the new “Validate and Deploy” operation, users can now automate this publishing process — as described in [&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/azuresynapseanalyticsblog/automating-the-publishing-of-workspace-artifacts-in-synapse-cicd/3603042" target="_blank" rel="noopener" data-lia-auto-title="this article" data-lia-auto-title-active="0"&gt;this article&lt;/A&gt;].&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;⚠️ &lt;STRONG&gt;Important note about the ArtifactsFolder input:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The specified path must match the Root Folder defined in the Git repository information associated with your Synapse Workspace.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 15: The Git configuration in your Development Synapse workspace&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
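&lt;P class="lia-align-justify"&gt;For example, as a minimal sketch (assuming, hypothetically, that the Root Folder in your workspace Git configuration is /synapse instead of /workspace), the ArtifactsFolder input in deploy_template.yml would change accordingly:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;    # Hypothetical example: Root Folder in the Synapse Git configuration is /synapse
    ArtifactsFolder: '$(System.DefaultWorkingDirectory)/synapse'&lt;/LI-CODE&gt;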
&lt;P class="lia-align-justify"&gt;Once this file is saved, your Azure DevOps setup is complete and ready to support the development and promotion of Synapse code across multiple environments leveraging our Git branching strategy!&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In the next and final blog post of this series, we’ll walk through an end-to-end demonstration of the Synapse CI/CD flow using our Git branching strategy.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;In this second part of our blog series, we demonstrated how to implement a CI/CD flow for Azure Synapse Analytics that fully leverages our Git branching strategy.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;With this CI/CD flow in place, teams are now equipped to develop, test, and promote Synapse artifacts across environments in a streamlined, secure, and automated manner.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In the final post of this series, we’ll walk through a complete end-to-end demonstration of this CI/CD flow in action — showcasing how our Git branching strategy empowers collaborative work in Synapse Studio and turbo-charges your code release cycles.&lt;/P&gt;</description>
      <pubDate>Tue, 27 May 2025 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/enhancing-team-collaboration-in-azure-synapse-analytics-using-a/ba-p/4414921</guid>
      <dc:creator>RuiCunha</dc:creator>
      <dc:date>2025-05-27T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Enhancing Team Collaboration in Azure Synapse Analytics using a Git Branching Strategy – Part 1 of 3</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/enhancing-team-collaboration-in-azure-synapse-analytics-using-a/ba-p/4405882</link>
      <description>&lt;H1&gt;Introduction&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;&lt;BR /&gt;Over the past few years of working with numerous &lt;A class="lia-external-url" href="https://web.azuresynapse.net/en/" target="_blank" rel="noopener"&gt;Synapse Studio&lt;/A&gt; users, many have asked&amp;nbsp; how to make the most of &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/synapse-analytics/cicd/source-control" target="_blank" rel="noopener"&gt;collaborative work in Synapse Studio&lt;/A&gt; —especially in complex development scenarios where developers work on different projects in parallel within a single Synapse workspace.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Based on our experience and internal feedback from other Synapse experts, our general recommendation is that each development team or project should have its own Synapse workspace. This approach is particularly effective when the maturity level of the teams—both in&amp;nbsp; &lt;A class="lia-external-url" href="https://azure.microsoft.com/en-us/products/synapse-analytics" target="_blank" rel="noopener"&gt;Synapse &lt;/A&gt;and &lt;A class="lia-external-url" href="https://git-scm.com/" target="_blank" rel="noopener"&gt;Git&lt;/A&gt;, is still developing. In such cases, having separate workspaces simplifies the CI/CD journey.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;However, in scenarios where teams demonstrate greater maturity (especially in Git) and the number or complexity of Synapse projects is relatively low, it is possible for multiple teams and projects to coexist within a single Synapse development workspace. In these cases, evaluating your team’s maturity in both Synapse and Git is crucial. Teams must honestly assess their comfort level with these technologies. For example, expecting success from teams that are just beginning their Synapse journey and have limited Git experience—or planning to develop more than five projects in parallel within a single workspace—would likely lead to challenges. Managing even a single project in Synapse can be complex; doing so for multiple projects without sufficient expertise in both Synapse and Git could be a recipe for disaster.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;That said, the main objective of this article is to demonstrate how a simple Git branching strategy can enhance collaborative work in Synapse Studio, enabling different projects to be developed in parallel within a single Synapse workspace. This guide can help teams at the beginning of their Synapse journey assess their current maturity level (in both Synapse and Git) and understand what level they should aim for to adopt this approach confidently. For teams with a reasonable level of maturity, this article can help validate whether this strategy can further improve their collaborative efforts in Synapse.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;This is the first of three articles, where we’ll show how to implement a simple branching strategy that allows two development teams working on separate projects to share a single Synapse workspace. The strategy supports isolated code promotion through various environments without interfering with each team’s work. While we use&amp;nbsp;&lt;A class="lia-external-url" style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://azure.microsoft.com/en-us/products/devops/?nav=min" target="_blank" rel="noopener"&gt;Azure DevOps&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; as our Git provider throghout these articles, the approach is also applicable to GitHub &lt;/SPAN&gt;&lt;A class="lia-external-url" style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://github.com/" target="_blank" rel="noopener"&gt;GitHub&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Start elevating your collaborative work in Synapse Studio by implementing a simple and effective Git branching strategy&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Let’s begin by outlining our scenario: two development teams—&lt;STRONG&gt;Data Engineering&lt;/STRONG&gt;&amp;nbsp;and&amp;nbsp;&lt;STRONG&gt;Data Science&lt;/STRONG&gt;—are about to start their projects in Synapse. Both teams have substantial experience with Synapse and Git. Together, they’ve agreed on a simple Git branching strategy that will enable them to collaborate effectively in Synapse Studio while supporting a CI/CD flow designed to automate the promotion of their code from the development environment to higher environments.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The Git branching strategy involves creating&amp;nbsp;&lt;STRONG&gt;feature branches&lt;/STRONG&gt;&amp;nbsp;and&amp;nbsp;&lt;STRONG&gt;environment branches&lt;/STRONG&gt;, organized by team, as illustrated in the following diagram.&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 1: A Simple Git Branching Strategy&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG class="lia-align-justify"&gt;Important note on governance of the branching strategy:&lt;/STRONG&gt;&lt;BR /&gt;The first branches that should be created are the&amp;nbsp;&lt;STRONG class="lia-align-justify"&gt;environment branches&lt;/STRONG&gt;. Once these are in place, any time a developer needs to create a feature branch, it must always be based on the&amp;nbsp;&lt;STRONG class="lia-align-justify"&gt;production environment branch&lt;/STRONG&gt;&amp;nbsp;of their respective team. In this strategy, the production branch serves as the team’s&amp;nbsp;&lt;STRONG class="lia-align-justify"&gt;collaboration branch&lt;/STRONG&gt;, ensuring consistency and alignment across development efforts.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 2: Creating a Feature Branch Based on the Production Environment Branch&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In the initial phase of implementing this strategy, environment branches can be created using the&amp;nbsp;&lt;STRONG&gt;"Branches"&lt;/STRONG&gt;&amp;nbsp;feature in Azure DevOps, or locally in a developer’s repository and then pushed to the remote repository. Alternatively, teams can use the&amp;nbsp;&lt;STRONG&gt;branch selector&lt;/STRONG&gt;&amp;nbsp;functionality within Synapse Studio. The team should choose the method they are most comfortable with.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Below is an example of the branch structure that will be developed throughout this article:&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 3: Example of Branching Structure Visualization from DevOps&lt;/PRE&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Start at the feature branch level...&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;With the branching strategy defined, we can now demonstrate how the two teams will carry out their respective developments within a single Synapse development workspace.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Let’s begin with Mary from the Data Engineering team, who will develop a new pipeline. She creates this pipeline in her feature branch: &lt;EM&gt;&lt;STRONG&gt;features/data_eng/mary/mktetl&lt;/STRONG&gt;&lt;/EM&gt;.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE class="lia-align-center"&gt;Figure 4: Creating a Pipeline in a Feature Branch of the Data Engineering Team&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Meanwhile, Anna, a developer from the Data Science team, also begins working on a new feature for the Data Science project.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 5: Creating a Notebook in a Feature Branch of the Data Science Team&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Both teams are ready to start their unit testing independently, at different times, and with distinct code executions.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;This is where the&amp;nbsp;&lt;STRONG&gt;Environment Branches&lt;/STRONG&gt; come into play.&lt;/P&gt;
&lt;H1&gt;…and end at the Environment branch level!&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;After completing the development of her feature, Anna promotes her changes to the&amp;nbsp;&lt;STRONG&gt;development environment&lt;/STRONG&gt;. It’s important to note that the code has only been committed to Git—it has&amp;nbsp;&lt;STRONG&gt;not&lt;/STRONG&gt;&amp;nbsp;been published to&amp;nbsp;&lt;STRONG&gt;Live Mode&lt;/STRONG&gt;&amp;nbsp;yet.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;You might wonder why Anna didn’t simply use the&amp;nbsp;&lt;STRONG&gt;Publish&lt;/STRONG&gt; button in Synapse Studio to push her changes live. That would be a valid question—if both teams were sharing a single collaboration branch&amp;nbsp;(as described &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/synapse-analytics/cicd/source-control#version-control" target="_blank" rel="noopener"&gt;here&lt;/A&gt;).&amp;nbsp; In such a setup, the collaboration branch would contain code from both the Data Engineering and Data Science teams. However, that’s not the goal of our branching strategy.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Our strategy is designed to ensure&amp;nbsp;&lt;STRONG&gt;segregation at both the source control and CI/CD levels&lt;/STRONG&gt;&amp;nbsp;for all teams working within a shared Synapse development workspace. Instead of using a single collaboration branch for everyone, each team uses its own&amp;nbsp;&lt;STRONG&gt;production environment branch&lt;/STRONG&gt;&amp;nbsp;as its collaboration branch. In this context, using the&amp;nbsp;&lt;STRONG&gt;Publish&lt;/STRONG&gt;&amp;nbsp;button in Synapse Studio is not appropriate.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Instead, we leverage a feature of the&amp;nbsp;&lt;STRONG&gt;Synapse public extension&lt;/STRONG&gt;—specifically, the &lt;A href="https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy" target="_blank" rel="noopener"&gt;the Synapse Workspace Deployment Task&lt;/A&gt; in Azure DevOps (or the &lt;A href="https://github.com/Azure/Synapse-workspace-deployment" target="_blank" rel="noopener"&gt;GitHub Action for Synapse Workspace Artifacts Deployment&lt;/A&gt;, if using GitHub). This extension allows us to publish Synapse artifacts to any environment from any user branch—in this case, from the&amp;nbsp;&lt;STRONG&gt;environment branches&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Therefore, when configuring Git for your Synapse development workspace under this strategy, you can set the collaboration branch to any placeholder (e.g.,&amp;nbsp;main,&amp;nbsp;master, or&amp;nbsp;develop), as it will be&amp;nbsp;&lt;STRONG&gt;ignored&lt;/STRONG&gt;. This approach ensures that each team maintains&amp;nbsp;&lt;STRONG&gt;code isolation&lt;/STRONG&gt;&amp;nbsp;throughout the development and deployment lifecycle.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;It’s important to understand that the decision&amp;nbsp;&lt;STRONG&gt;not&lt;/STRONG&gt;&amp;nbsp;to use the Publish functionality in Synapse Studio is intentional and directly tied to our strategy of supporting&amp;nbsp;&lt;STRONG&gt;multiple teams and multiple projects&lt;/STRONG&gt; within a single Synapse workspace.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 6: Data Science Team: Creating a Pull Request from the Feature Branch to an Environment Branch in Synapse Studio&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 7: Data Science Team: Configuring the Pull Request in DevOps, Indicating the Source (Feature Branch) and Destination (DEV Environment Branch)&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Meanwhile, Mary, our Data Engineer, has also completed the development of her feature and is now ready to publish her pipeline to the development environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 8: Data Engineering Team: Creating a Pull Request from the Feature Branch to an Environment Branch in Synapse Studio&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;PRE class="lia-align-center"&gt;Figure 9: Data Engineering Team: Configuring the Pull Request in DevOps, Indicating the Source (Feature Branch) and Destination (DEV Environment Branch)&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Conclusion&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;In conclusion, this article has demonstrated how different development teams can effectively leverage a Git branching strategy to develop their code within a single Synapse development workspace. By creating both &lt;STRONG&gt;feature branches&lt;/STRONG&gt;&amp;nbsp;and&amp;nbsp;&lt;STRONG&gt;environment branches&lt;/STRONG&gt;, the teams are able to work in parallel without interfering with each other’s development processes. This approach ensures proper isolation and enables smooth code promotion across environments.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;As we move forward, the next article in this series will explore how this strategy helps both teams accelerate their development lifecycle and streamline the CI/CD flow in Synapse.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 15 May 2025 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/enhancing-team-collaboration-in-azure-synapse-analytics-using-a/ba-p/4405882</guid>
      <dc:creator>RuiCunha</dc:creator>
      <dc:date>2025-05-15T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Runtime 1.1, based on Apache Spark 3.3, will be retired and disabled as of March 31, 2025</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/runtime-1-1-based-on-apache-spark-3-3-will-be-retired-and/ba-p/4396044</link>
      <description>&lt;P&gt;Microsoft Fabric Runtime 1.1 will be retired and disabled as of March 31, 2025. End of support for Microsoft Fabric Runtime 1.1 was announced on July 12, 2024. We recommend that you upgrade your Fabric workspace and environments to use&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/runtime-1-3" target="_blank"&gt;Runtime 1.3 (Apache Spark 3.5 and Delta Lake 3.2)&lt;/A&gt;. This latest Fabric Runtime 1.3 also offers a Native Execution Engine to boost performance and flexibility at no additional&amp;nbsp;cost. Learn more about the Native Execution Engine&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/native-execution-engine-overview" target="_blank"&gt;here&lt;/A&gt;. For the complete lifecycle and support policies of Apache Spark runtimes in Fabric, refer to&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/lifecycle" target="_blank"&gt;&lt;STRONG&gt;Lifecycle of Apache Spark runtimes in Fabric&lt;/STRONG&gt;&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Wed, 26 Mar 2025 05:32:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/runtime-1-1-based-on-apache-spark-3-3-will-be-retired-and/ba-p/4396044</guid>
      <dc:creator>JeanC750</dc:creator>
      <dc:date>2025-03-26T05:32:44Z</dc:date>
    </item>
    <item>
      <title>We're moving!</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/we-re-moving/ba-p/4275903</link>
      <description>&lt;P&gt;We’re moving to the &lt;A href="https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/bg-p/AnalyticsonAzure" target="_self"&gt;Analytics on Azure Tech Community&lt;/A&gt;! All new Azure Synapse Analytics content will be published there. In the next few days all existing content will be migrated over.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you, and we look forward to seeing you all at Analytics on Azure!&lt;/P&gt;</description>
      <pubDate>Mon, 21 Oct 2024 21:23:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/we-re-moving/ba-p/4275903</guid>
      <dc:creator>ryanmajidi</dc:creator>
      <dc:date>2024-10-21T21:23:20Z</dc:date>
    </item>
    <item>
      <title>Upgrade to Azure Synapse runtimes for Apache Spark 3.4 &amp; previous runtimes deprecation</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/upgrade-to-azure-synapse-runtimes-for-apache-spark-3-4-amp/ba-p/4177758</link>
      <description>&lt;P class="lia-align-left"&gt;&lt;SPAN class="TextRun SCXW124775924 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW124775924 BCX8"&gt;It is important to stay ahead of the curve and keep services up to date.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW124775924 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW124775924 BCX8"&gt;That's&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW124775924 BCX8"&gt; why we encourage all Azure Synapse customers with Apache Spark workloads to migrate to the newest GA version, &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;A class="Hyperlink SCXW124775924 BCX8" href="https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-34-runtime" target="_blank" rel="noreferrer noopener"&gt;&lt;SPAN class="TextRun Underlined SCXW124775924 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW124775924 BCX8" data-ccp-charstyle="Hyperlink"&gt;Azure Synapse Runtime for Apache Spark 3.4&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN class="TextRun SCXW124775924 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW124775924 BCX8"&gt;. The update brings Apache Spark to version 3.4 and Delta Lake to version 2.4, introduces Mariner as the new operating system, and updates Java from version 8 to 11.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW124775924 BCX8" data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;SPAN class="EOP SCXW124775924 BCX8" data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:279}"&gt;Within a few days/weeks, we are &lt;STRONG&gt;disabling &lt;SPAN data-contrast="auto"&gt;Apache Spark&lt;/SPAN&gt;&amp;nbsp;2.4, 3.1, 3.2 job execution&lt;/STRONG&gt;. If you are affected you have already been notified. Using the runtime after EOS date is at one's &lt;STRONG&gt;own risk&lt;/STRONG&gt;, and with the agreement and acceptance of the risks that jobs will eventually stop executing.&amp;nbsp;All &lt;STRONG&gt;support tickets will be auto-resolved&lt;/STRONG&gt;.&amp;nbsp;&lt;SPAN data-contrast="auto"&gt;Learn more about the &lt;A href="https://learn.microsoft.com/en-us/lifecycle/" target="_blank" rel="noopener"&gt;Microsoft Lifecycle Policy&lt;/A&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;&lt;SPAN class="TextRun SCXW95647915 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW95647915 BCX8" data-ccp-parastyle="heading 1"&gt;Migrate to the latest GA version of Azure Synapse runtimes for Apache Spark 3.4 before the deprecation and disablement of &lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW95647915 BCX8" data-ccp-parastyle="heading 1"&gt;previous&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW95647915 BCX8" data-ccp-parastyle="heading 1"&gt; versions.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW95647915 BCX8" data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:80,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Please refer to the following article for more information on the lifecycle and supportability of our runtimes:&amp;nbsp;&lt;A href="http://&amp;nbsp;https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-version-support#migration-between-apache-spark-versions---support" target="_blank" rel="noopener"&gt;Azure Synapse runtimes.&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Go to &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Synapse runtime for Apache Spark lifecycle and supportability - Azure Synapse Analytics | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://spark.apache.org/docs/3.4.1/core-migration-guide.html" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Migration Guide: Spark Core - Spark 3.4.1 Documentation (apache.org)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; for more details on how to migrate and how to change Apache Spark-based runtime in Azure Synapse Analytics.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 27 Jun 2024 22:38:23 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/upgrade-to-azure-synapse-runtimes-for-apache-spark-3-4-amp/ba-p/4177758</guid>
      <dc:creator>eskot</dc:creator>
      <dc:date>2024-06-27T22:38:23Z</dc:date>
    </item>
    <item>
      <title>ADF\Synapse Analytics - Replace Column names using Rule-based mapping in Mapping data flows</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/adf-synapse-analytics-replace-columns-names-using-rule-based/ba-p/4039551</link>
      <description>&lt;P&gt;In real-world data, the column names from a source might not be uniform: some columns will have a space in the name, while others will not.&lt;/P&gt;
&lt;P&gt;For example,&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Sales Channel&lt;/LI&gt;
&lt;LI&gt;Item Type&lt;/LI&gt;
&lt;LI&gt;Region&lt;/LI&gt;
&lt;LI&gt;Country&lt;/LI&gt;
&lt;LI&gt;Unit Price&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;It is good practice to remove all the spaces from column names before doing any transformations, for easier handling. This also helps with auto-mapping when the sink column names do not contain spaces!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Select transformation in a mapping data flow makes it simple to automatically detect spaces in column names and remove them for the rest of the data flow.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Consider the below source, with the given column names.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As we can see here, a few columns have spaces in their names, while columns like Region and Country do not.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using the below configuration in select transformation, we can get rid of the spaces in the column names with a simple expression.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Under Input columns, click the Add mapping button and choose Rule-based mapping.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;Then enter the following expressions:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;on Source1's column: true()&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;on Name as column: replace($$,' ','')&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What does it do?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The matching condition true() selects every incoming column, and the name expression replace($$,' ','') renames each matched column by replacing every ' ' (space) with '' (nothing), so columns without spaces simply keep their names.&lt;/P&gt;
&lt;P&gt;In the data preview, we see the result below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As shown here, all the columns that had spaces now appear without them.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Without rule-based mapping, you would have to remove the spaces from every column manually, which would be a nightmare when the number of columns is large. Thanks to rule-based mapping!&lt;/P&gt;</description>
      <pubDate>Fri, 23 Feb 2024 07:06:25 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/adf-synapse-analytics-replace-columns-names-using-rule-based/ba-p/4039551</guid>
      <dc:creator>Subashri_Vasu</dc:creator>
      <dc:date>2024-02-23T07:06:25Z</dc:date>
    </item>
    <item>
      <title>Interpreting Script activity output json with Azure Data Factory\Synapse analytics</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/interpreting-script-activity-output-json-with-azure-data-factory/ba-p/4030594</link>
      <description>&lt;P&gt;The Script activity in Azure Data Factory\Synapse Analytics is very helpful for running queries against the data sources mentioned in &lt;A title="Script activity" href="https://learn.microsoft.com/en-us/azure/data-factory/transform-data-using-script" target="_self"&gt;this&lt;/A&gt; document.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;When we use two or more queries in the Script activity, it is important to understand its output JSON so that we can write expressions based on that output in subsequent activities.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Consider the pipeline design below:&lt;/P&gt;
&lt;P&gt;We have the following two select queries in the Script activity, each of which will return a result set.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;select top 2 * from tbl_adf;&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Select * from tbladf&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;When debugged, it will give output as below.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
"resultSetCount": 2,
"recordsAffected": 0,
"resultSets": [
{
"rowCount": 2,
"rows": [
{
"Order Date": null,
"Order ID": "271735084",
"Ship Date": "2020-01-24T00:00:00Z",
"Region": "AUSTRALIA AND OCEANIA",
"Unit Cost": "524.96",
"Total Revenue": "849829.05"
},
{
"Order Date": null,
"Order ID": "252502572",
"Ship Date": "2018-12-29T00:00:00Z",
"Region": "MIDDLE EAST AND NORTH AFRICA",
"Unit Cost": "152.58",
"Total Revenue": "1522290.66"
}
]
},
{
"rowCount": 1,
"rows": [
{
"lastmodified": "2024-01-12T08:29:48.1508043Z"
}
]
}
]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So, as per line #2 of the output, resultSetCount = 2, because we have two select queries in the Script activity.&lt;/P&gt;
&lt;P&gt;If we want to get the Total Revenue value from row #1 of the first result set, we write the expression below.&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;STRONG&gt;&lt;EM&gt;@activity('Script1_copy1').output.resultSets[0].rows[0]['Total Revenue']&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;where, resultSets[0]: First select query result&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;rows[0]: first row in resultSets[0]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the same way, to get the Total Revenue value from row #2, we write the expression below.&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;STRONG&gt;&lt;EM&gt;@activity('Script1_copy1').output.resultSets[0].rows[1]['Total Revenue']&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;where, resultSets[0]: First select query result&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;rows[1]: second row in resultSets[0]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;And the expressions below return the row count of each result set.&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;STRONG&gt;&lt;EM&gt;@activity('Script1_copy1').output.resultSets[0].rowCount&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;&amp;nbsp;@activity('Script1_copy1').output.resultSets[1].rowCount&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So, by understanding the structure of the output JSON, we can write expressions that access individual elements of the output of any activity in ADF\Synapse.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 01 Feb 2024 06:20:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/interpreting-script-activity-output-json-with-azure-data-factory/ba-p/4030594</guid>
      <dc:creator>Subashri_Vasu</dc:creator>
      <dc:date>2024-02-01T06:20:20Z</dc:date>
    </item>
    <item>
      <title>Synapse Connectivity Series Part #4 - Advanced network troubleshooting and network trace analysis</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-4-advanced-network/ba-p/3945481</link>
      <description>&lt;P&gt;&lt;FONT size="3"&gt;Continuing the series of this blog posts I would like to go more advanced on troubleshooting connectivity issues. I would like to thank also&amp;nbsp;&lt;STRONG&gt;Salam Al Hasan (&lt;a href="javascript:void(0)" data-lia-user-mentions="" data-lia-user-uid="1371597" data-lia-user-login="Salamalhasan" class="lia-mention lia-mention-user"&gt;Salamalhasan&lt;/a&gt;)&amp;nbsp;&lt;/STRONG&gt;that helped me with some real case scenarios from our customers.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This is part 4 of a series related to Synapse Connectivity - check out the other blog articles:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-1-inbound-sql-dw-connections-on/ba-p/3589170" target="_blank" rel="noopener"&gt;Synapse Connectivity Series Part #1 - Inbound SQL DW connections on Public Endpoints&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-2-inbound-synapse-private/ba-p/3705160" target="_blank" rel="noopener"&gt;Synapse Connectivity Series Part #2 - Inbound Synapse Private Endpoints&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-3-synapse-managed-vnet-and/ba-p/3706983" target="_blank" rel="noopener"&gt;Synapse Connectivity Series Part #3 - Synapse Managed VNET and Managed Private Endpoints&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;In this post I will speak about how to &lt;STRONG&gt;capture a network trace&lt;/STRONG&gt; and how to do some &lt;STRONG&gt;basic troubleshooting&lt;/STRONG&gt; using Wireshark to &lt;STRONG&gt;investigate connection and disconnection issues&lt;/STRONG&gt;, not limited to samples error messages below:&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;EM&gt;&lt;FONT size="2"&gt;An existing &lt;STRONG&gt;connection was forcibly closed by the remote host&lt;/STRONG&gt;, The specified network name is no longer available, The semaphore timeout period has expired.&lt;/FONT&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI style="margin-top: 0; margin-bottom: 0; vertical-align: middle;"&gt;&lt;EM&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;Connection Timeout Expired&lt;/STRONG&gt;. The timeout period elapsed while attempting to consume the &lt;STRONG&gt;pre-login handshake&lt;/STRONG&gt; acknowledgement. This could be because the&amp;nbsp;pre-login handshake failed&amp;nbsp;or the server was unable to respond back in time. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=5895; handshake=29;&lt;/FONT&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI style="margin-top: 3pt; margin-bottom: 0pt; vertical-align: middle;"&gt;&lt;EM&gt;&lt;FONT size="2"&gt;A connection was successfully established with the server, but then an&amp;nbsp;error occurred during the &lt;STRONG&gt;pre-login handshake&lt;/STRONG&gt;. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)&lt;/FONT&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI style="margin-top: 3pt; margin-bottom: 0pt; vertical-align: middle;"&gt;&lt;EM&gt;&lt;FONT size="2"&gt;A connection was successfully established with the server, but then an&amp;nbsp;&lt;STRONG&gt;error occurred during the login process&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI style="margin-top: 3pt; margin-bottom: 0pt; vertical-align: middle;"&gt;&lt;EM&gt;&lt;FONT size="2"&gt;Failed to copy to SQL Data Warehouse from blob storage. A connection was successfully established with the server, but then an&amp;nbsp;&lt;STRONG&gt;error occurred during the login process&lt;/STRONG&gt;. (provider: SSL Provider, error: 0 - An &lt;STRONG&gt;existing connection was forcibly closed by the remote host&lt;/STRONG&gt;.) An existing connection was forcibly closed by the remote host&lt;/FONT&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;FONT size="4"&gt;Index&lt;/FONT&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;0 - Scoping&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;1 - How to CAPTURE a network trace&lt;/STRONG&gt;&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;1.1 - Input&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;1.1.1 - Capture filter&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;1.2 - Output&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;1.2.1 - Reproducible error?&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;1.2.2 - Transient error?&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;1.3 - Options&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;2 - Capture Alternatives&lt;/STRONG&gt;&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;2.1 - NETSH - Native on Windows&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;2.2 - TCPDUMP - Native on Linux / MacOs&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;3 - Tool configuration - Wireshark&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;4 - Analysis&lt;/STRONG&gt;&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.1 - SCENARIO 1 - Success path&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.1.1 - Simple sample on Wireshark&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.1.2 - Direction of the packages&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.1.3 - Redirect&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.2 - SCENARIO 2 - The server was not found or was not accessible&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.3 - SCENARIO 3 - Redirect port 11xxx range closed&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.4 - SCENARIO 4 - Connection timeout (Middle connection) / PreLogin handshake&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.4.1 - SCENARIO 4.1 - TLS Blocked&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.4.2 - SCENARIO 4.2 - Long connection&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.4.3 - SCENARIO 4.3 - Azure firewall&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;4.5 - SCENARIO 5 - Connection Dropped&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="4"&gt;0 - Scoping&lt;/FONT&gt;&lt;/H2&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3"&gt;Before we can even start thing about a connection or disconnection issue we need to better understand the scenario and we need to explore the following questions:&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3"&gt;&amp;nbsp;&lt;BR /&gt;1. &lt;STRONG&gt;Which tool&lt;/STRONG&gt; is being utilized?&lt;BR /&gt;2. Is the client machine situated on an &lt;STRONG&gt;Azure VM&lt;/STRONG&gt; or an &lt;STRONG&gt;on-premises machine&lt;/STRONG&gt;?&lt;BR /&gt;3. What &lt;STRONG&gt;type of firewall&lt;/STRONG&gt; is in use: Azure Firewall or a third-party application?&lt;BR /&gt;4. What is the &lt;STRONG&gt;operating system&lt;/STRONG&gt; of the host machine for the client?&lt;BR /&gt;5. Is the connection type &lt;STRONG&gt;private or public (Using Private endpoints)&lt;/STRONG&gt;?&lt;BR /&gt;6. What type of user is the customer employing (&lt;STRONG&gt;SQL User, AAD User&lt;/STRONG&gt;)?&lt;BR /&gt;7. Are &lt;STRONG&gt;connection retry policies&lt;/STRONG&gt; in place?&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT size="3"&gt;-&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/troubleshoot-common-connectivity-issues?view=azuresql#retry-logic-for-transient-errors" target="_self"&gt;Retry logic for transient errors&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT size="3" color="#DF0000"&gt;- IMPORTANT: &lt;EM&gt;Transient failure are a normal occurrence and should be expected from time to time. They can occur for many reasons such as balance and deployments in the region your server is in, network issues. A transient failure can takes some seconds or minutes, when this takes more time we can look to see if there was a larger underlying reason. &lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3"&gt;8. What is the &lt;STRONG&gt;timeout&lt;/STRONG&gt; value set for the connection?&lt;BR /&gt;9. Are you using &lt;STRONG&gt;OLEDB,&lt;/STRONG&gt;&amp;nbsp;&lt;STRONG&gt;ODBC, JDBC, other&lt;/STRONG&gt;? Neglecting to use the &lt;STRONG&gt;latest version&lt;/STRONG&gt; can sometimes lead to disconnections.&lt;/FONT&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;FONT size="4"&gt;1 - How to CAPTURE a network trace&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;First we need to know how to start a capture and what tools to use. The tool selection depends on your personal taste and the OS being used. Remember that "&lt;STRONG&gt;&lt;EM&gt;the best tool is always the one you know&lt;/EM&gt;&lt;/STRONG&gt;".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First I will explain my preferred tool, &lt;A href="https://www.wireshark.org/download.html" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Wireshark&lt;/STRONG&gt;&lt;/A&gt;. It is a &lt;STRONG&gt;free tool&lt;/STRONG&gt; that works on multiple platforms, which makes it easy to support different kinds of clients.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When you open it, you could just start collecting the trace right away. My suggestion is to always start with the "&lt;STRONG&gt;config data capture settings&lt;/STRONG&gt;" button.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="4"&gt;&lt;img /&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;1.1 - Input&lt;/H3&gt;
&lt;P&gt;&lt;FONT size="3"&gt;First you need to &lt;STRONG&gt;select the network cards&lt;/STRONG&gt; that will be monitored.&amp;nbsp;&lt;/FONT&gt;&lt;FONT size="3"&gt;The selection of network cards is just the &lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;blue&lt;/STRONG&gt; &lt;/FONT&gt;or white rows that you select with shift and/or control commands, as you can see on image above.&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3"&gt;&lt;img /&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Notice in this interface there are some checkboxes (&lt;STRONG&gt;Promiscuous Mode&lt;/STRONG&gt;), ignore them. They are not selection box, you can actually unselect them.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;FONT size="3" color="#DF0000"&gt;IMPORTANT: If you select default (first card) you may not collect anything useful. Make sure you select the correct interface. Or just select all if not sure&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;FONT size="4"&gt;1.1.1 - Capture filter&lt;/FONT&gt;&lt;/H4&gt;
&lt;P&gt;&lt;FONT size="3"&gt;In the bottom you can also filter the capture, ideally you &lt;STRONG&gt;should not filter&lt;/STRONG&gt;, but if need to&amp;nbsp;&lt;STRONG&gt;save resources / space in disk&lt;/STRONG&gt; might be useful, so you capture traffic only of specific ports and IP addresses. You can add as sample&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;SQL port 1433 + 11xxx range needed for redirect + 53 for DNS requests&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;PRE&gt;&lt;FONT size="2"&gt;tcp portrange 11000-12000 or tcp port 1433 or tcp port 53&lt;/FONT&gt;&lt;/PRE&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;img /&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;1.2 - Output&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;How you configure the output area depends on whether you can easily repro the issue or whether it is more of a transient issue.&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;1.2.1 - Reproducible error?&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&lt;FONT size="3"&gt;If the error you are &lt;/FONT&gt;facin&lt;FONT size="3"&gt;g is repeatable, you can capture &lt;STRONG&gt;one single file&lt;/STRONG&gt;, repro the issue and stop data collection, like sample below&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3"&gt;&lt;img /&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;1.2.2 - Transient error?&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&lt;FONT size="3"&gt;If the error is intermittent, you might need to caputure what we call a &lt;STRONG&gt;CIRCULAR&lt;/STRONG&gt; network trace capture. That means that trace will be capturing in a &lt;STRONG&gt;loop&lt;/STRONG&gt; until you stop the data collection, it will keep as many files of the specified size you define.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Check some important information below:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Do &lt;STRONG&gt;NOT&lt;/STRONG&gt; create files over 2GB, will be a problem to open it, instead, increase number of files. Just need more disk&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;As soon as error happens, try to stop collection or else you might lose telemetry, you can also increase number of files to keep more data&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;It is hard to estimate how much time each 2GB file can handle. It will depend on volume of data you are sending&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;1.3 - Options&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&lt;FONT size="3"&gt;The last part you could create some stop condition like stop after some time. Keep default options here&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3"&gt;&lt;img /&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Then just click to &lt;STRONG&gt;start collection&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="4"&gt;2 - Capture Alternatives&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Before we start analyzing it, we need to talk about alternatives&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;2.1 - NETSH - Native on Windows.&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;No need to install new tool on environment&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;ETL file output does not open on Wireshark nativelly&lt;/STRONG&gt;, you need to convert it. Can use application (&lt;A href="https://github.com/microsoft/etl2pcapng" target="_blank" rel="noopener"&gt;https://github.com/microsoft/etl2pcapng&lt;/A&gt;)&amp;nbsp;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;H4&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;NETSH Short capture&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Only 1 file of 2GB&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Start CMD (Run as Admin)&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Start the process:&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;netsh trace start persistent=yes capture=yes tracefile=%temp%\%computername%_nettrace.etl maxsize=2048&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Reproduce the issue.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Stop the process:&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;netsh trace stop&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;H4&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;NETSH Circular capture&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Only 1 file of 2GB you can get more data truncating packages (packettruncatebytes=512)&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Start CMD (Run as Admin)&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Start the process:&amp;nbsp;&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;netsh trace start capture=yes packettruncatebytes=512 tracefile=%temp%\%computername%_nettrace.etl maxsize=2048 filemode=circular overwrite=yes report=no&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Reproduce the issue.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Stop the process:&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;netsh trace stop&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
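&lt;P&gt;&lt;FONT size="3"&gt;As mentioned above, the ETL file produced by NETSH needs to be converted before Wireshark can open it. A hedged example using the etl2pcapng tool (file names are placeholders, matching the trace file from the commands above):&lt;/FONT&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;etl2pcapng.exe %temp%\%computername%_nettrace.etl %temp%\%computername%_nettrace.pcapng&lt;/FONT&gt;&lt;/PRE&gt;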
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;2.2 -&amp;nbsp;&lt;/STRONG&gt;&lt;STRONG&gt;TCPDUMP - Native on&amp;nbsp;&lt;/STRONG&gt;&lt;STRONG&gt;Linux / MacOs&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H4&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;TCPDUMP Short capture&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;tcpdump -n -w /var/tmp/traffic.pcap&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;FONT size="3"&gt;TCPDUMP Circular capture&lt;/FONT&gt;&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;10 files of 2GB&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;tcpdump -n -w /var/tmp/trace.pcap -W 10 -C 2000&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;FONT size="3"&gt;TCPDUMP permission&lt;/FONT&gt;&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;If needed can also run with sudo&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;SUDO tcpdump -n -w /var/tmp/traffic.pcap&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;H4&gt;&lt;STRONG&gt;&lt;FONT size="3"&gt;TCPDUMP Filter interface&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;For linux servers, you can use "netstat -i" to list the available INTERFACE devices as well.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;sudo netstat -i&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Kernel Interface table&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="3"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; Iface&amp;nbsp; &amp;nbsp; &amp;nbsp; MTU&amp;nbsp; ........&amp;nbsp; &amp;nbsp;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="3"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; eth0&amp;nbsp; &amp;nbsp; &amp;nbsp; 9001&amp;nbsp; ........&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;tcpdump &lt;STRONG&gt;-i&lt;/STRONG&gt; &lt;STRONG&gt;eth0&lt;/STRONG&gt; -n -w /var/tmp/traffic.pcap&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;FONT size="2"&gt;References&lt;/FONT&gt;&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;A href="https://www.tcpdump.org/" target="_blank" rel="noopener"&gt;https://www.tcpdump.org/&amp;nbsp;&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;A href="https://www.tcpdump.org/manpages/tcpdump.1.html" target="_blank" rel="noopener"&gt;https://www.tcpdump.org/manpages/tcpdump.1.html&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;A href="https://superuser.com/questions/904786/tcpdump-rotate-capture-files-using-g-w-and-c" target="_blank" rel="noopener"&gt;https://superuser.com/questions/904786/tcpdump-rotate-capture-files-using-g-w-and-c&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;A href="https://community.boomi.com/s/article/How-To-Capture-TCP-Traffic-Continuously-For-Intermittent-Issues" target="_blank" rel="noopener"&gt;https://community.boomi.com/s/article/How-To-Capture-TCP-Traffic-Continuously-For-Intermittent-Issues&amp;nbsp;&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;A href="https://developer.apple.com/documentation/network/recording_a_packet_trace" target="_blank" rel="noopener"&gt;https://developer.apple.com/documentation/network/recording_a_packet_trace&amp;nbsp;&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;FONT size="4"&gt;3 - Tool configuration - Wireshark&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Now that capture is done, we can start looking into the capture, just need to set something on Wireshark to make sure that the interface is ready to analyze and is showing the information needed to analyse it.&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H2&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Date time format to UTC&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Make sure to set date time format correct, so this way won't exist confusion with timezone (from server side, client side vs client machine timezone). Use UTC&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-90px"&gt;&lt;FONT size="3"&gt;&lt;img /&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;H2&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Source and destination ports &lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Source and Destination IP might not be enough. To simplify analysis, add also&amp;nbsp;&lt;STRONG&gt;source and destination ports as columns.&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;In the package (below), look for the &lt;STRONG&gt;Transmission Control Protocol (TCP)&lt;/STRONG&gt; and get Source and Destination ports. Add them as columns. Like image below&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-90px"&gt;&lt;FONT size="3"&gt;&lt;img /&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="4"&gt;4 - Analysis&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&lt;FONT size="3"&gt;With the file captured you will use &lt;STRONG&gt;display filters&lt;/STRONG&gt; that will depend on the error that you are looking for. The most common used are below&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Get SQL Comunication + DNS resolution&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;(tcp.port in {1433, 11000..11999} or (dns.qry.name contains "database.windows.net") or (dns.qry.name contains "sql.azuresynapse.net"))&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Look for SQL connections that were reseted (Disconnection scenarios)&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;(tcp.port in {1433, 11000..11999}) and (tcp.flags.reset == 1)&lt;/FONT&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
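&lt;P&gt;&lt;FONT size="3"&gt;As an extra example (not part of the original list), Wireshark's expert info fields can highlight unhealthy connections, for instance retransmissions on the SQL ports:&lt;/FONT&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;FONT size="3"&gt;(tcp.port in {1433, 11000..11999}) and tcp.analysis.retransmission&lt;/FONT&gt;&lt;/PRE&gt;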
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Find below some scenarios that will help you get some ideas:&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;4.1 - SCENARIO 1 - Success path&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Let's firs understand the success path. What is expected on a success connection&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Ref to&amp;nbsp;&lt;/FONT&gt;&lt;FONT size="2"&gt;&lt;A href="https://mermaid.live/edit#pako:eNqdVVFvmzAQ_isnP60a6QIk0KCpEgLaoqQ0KtG0Tnlx4JJYIjYDZ2tW9b_PQJsuaxrS-gHB3Xef7z4fvgeSiBSJQ0r8uUaeoM_ooqCrKQe16FoKvl7NsHj6TqQoALyMIZeNKaeFZAnLKZcQY_HrGRoJiSDU5xNaa5wO6DDxxnDlRn585Q6DBl1gIqFYzD7pA10DwzDVo98_aZzVakg65-fPNIrEgfgueoE0HgVpsC8Q-AyuN2zj2kKQpy0lVAV4gnOVNBMcYklnGSuXmMLXWfHl_NNYFBJYCSJHrkEpWZYBFxISscozlJietGxgVGm74ziA8W0wurkMow_L5McOjAvsjMSC8QVcY1nSBR5SrYporHCFWSbez_4eLU0YBncQfPdUR1we2w2vkx79l7QGHqrGnLOESjxUQRXYWGGIGwjukyXlhwWqQ2oUeCxfqi3jHJO2TS4Yr3vkWGH-abBt38BvJpdQrpNECd0S3wPfnbgwuXWj-DqM4_Dmw00U8KTY5NX-PpX0gDT7gO2V9iH0RwF4N1EUeJPj88yEyCFQFBswu1BiInhavrjf_tWHQTDuuKPwW7CL3n-D7Nwd22ray7LAD-N3F0UzWR3w9nwPV3IRRtpugkfW8TZsP2XLnVkrkpUIc8oydfz6MdG38WR_tHFMnrvRr45FLaKRFRYrylI13x4q85TIJa5wShz1muKcrjM5JVP-qKDVrIs3PCGOLNaokXWeqqvjaRw-GzFlagJeNyOznpwaUaOPOA_knji6NTg1B7bR61lndtc2DI1siNOx-qe6bZpnva511rf7A_1RI3-EUKT6qW4Ylm13LXNgDEzLtGu6H7VzTpUkj38B-jw9ng" target="_blank" rel="noopener"&gt;Mermaid script&lt;/A&gt;&amp;nbsp;used to create above diagram if you want to reuse this on different context&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;TCP Handshake&lt;/STRONG&gt;: This is the first step when a client (for example, your computer) wants to establish a connection with a server (like Synapse). It involves three steps (&lt;STRONG&gt;three-way handshake)&lt;/STRONG&gt;&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;The client sends a SYN (synchronize) packet to the server.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;The server acknowledges this by sending back a SYN-ACK (synchronize-acknowledge) packet.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;Finally, the client sends an ACK (acknowledge) packet back to the server.&amp;nbsp;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;Now, the connection is established and ready for data transfer.&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;Prelogin to Synapse&lt;/STRONG&gt;: When connecting to Synapse or Azure SQL DB, there’s a pre-login handshake that happens. This is where the client and server agree on certain settings for the connection.&amp;nbsp;If this handshake fails in the middle, you might see pre-login handshake errors as connection to gateway completed, but not able to complete login process.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;Key Exchange&lt;/STRONG&gt;: After the pre-login handshake, there’s a key exchange process. This is where the client and server agree on a secret key that they’ll use for encrypting and decrypting the data they send to each other.&amp;nbsp;This involves exchanging random numbers and a special number called the Pre-Master Secret&amp;nbsp;.&amp;nbsp;These numbers are combined with additional data permitting client and server to create their shared secret, called the Master Secret.&amp;nbsp;The Master Secret is used by client and server to generate the session keys used for hashing and encryption.&amp;nbsp;Once the session keys are established, the client sends a “Change cipher spec” notification to server to indicate that it will start using the new session keys for hashing and encrypting messages&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;Data Transfer&lt;/STRONG&gt;: Now, client and server can exchange application data over the secured channel they have established.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;Idle connection / Keep Alive&lt;/STRONG&gt;:&amp;nbsp;Sometimes, a connection between two devices might be open, but no data is being sent. This could be because the user is not performing any action that requires data to be sent or received like a SQL Update command.&lt;/FONT&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Now, if one device doesn’t hear from the other for a long time, it might think that the connection has been lost. To prevent this from happening, devices send small packets called “Keep-Alive” packets. These packets are like a small nudge or a ping saying “&lt;STRONG&gt;Hey, I’m still here!&lt;/STRONG&gt;”. They help in maintaining the connection alive even when no actual data is being transferred. &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;Idle connections that keeps for long time (30 min) can still be disconnected&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;Microsoft DOC&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;A href="https://learn.microsoft.com/en-us/troubleshoot/azure/synapse-analytics/dedicated-sql/dsql-conn-dropped-connections#connection-policy-configuration-for-gateway-issues" target="_blank"&gt;https://learn.microsoft.com/en-us/troubleshoot/azure/synapse-analytics/dedicated-sql/dsql-conn-dropped-connections#connection-policy-configuration-for-gateway-issues&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;"&lt;EM&gt;The Azure SQL Database gateway terminates sessions that are idle for more than 30 minutes. This scenario frequently affects pooled, idle connections. For&amp;nbsp;dedicated SQL pool (formerly SQL DW), you can switch the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/connectivity-architecture?WT.mc_id=pid:13491:sid:32745426#connection-policy" data-linktype="absolute-path" target="_blank"&gt;connection policy&lt;/A&gt;&amp;nbsp;for your server from&amp;nbsp;proxy&amp;nbsp;to&amp;nbsp;redirect. The redirect setting bypasses the gateway after it's connected. This eliminates the issue.&lt;/EM&gt;"&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;Check also post from&amp;nbsp;&lt;a href="javascript:void(0)" data-lia-user-mentions="" data-lia-user-uid="1081558" data-lia-user-login="hugo_sql" class="lia-mention lia-mention-user"&gt;hugo_sql&lt;/a&gt;&amp;nbsp;&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-idle-sessions-are-killed-after-about-30-minutes/ba-p/3268601" target="_blank"&gt;https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-idle-sessions-are-killed-after-about-30-minutes/ba-p/3268601&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;Finish with success:&amp;nbsp;&lt;/STRONG&gt;The FIN flag in a TCP packet stands for FINish. It is used to indicate that the sender has finished sending data and wants to terminate the TCP connection.&amp;nbsp;This process is known as the &lt;STRONG&gt;four-way handshake&lt;/STRONG&gt; and is used to gracefully terminate a TCP connection. It ensures that both sides have received all the data before the connection is closed&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Here’s a simple explanation of how it works:&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;FONT size="2"&gt;When an application is done sending data, it sends a TCP packet with the FIN flag set.&lt;/FONT&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;FONT size="2"&gt;The receiving side acknowledges this by sending back a packet with an ACK (acknowledgement) flag.&lt;/FONT&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;FONT size="2"&gt;The receiver then sends its own FIN packet when it’s done sending data.&lt;/FONT&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Finally, the original sender acknowledges this with another ACK.&lt;/FONT&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;Finish without success&lt;/STRONG&gt;:&amp;nbsp;&lt;/FONT&gt;&lt;FONT size="2"&gt;In a network, when a device sends a TCP packet to another device, it expects an acknowledgement in return. However, there might be situations where the receiving device cannot process the packet properly or the connection is not valid anymore. In such cases, the receiving device sends a TCP packet with the Reset (&lt;STRONG&gt;RST&lt;/STRONG&gt;) flag set.&amp;nbsp;&lt;/FONT&gt;&lt;FONT size="2"&gt;The RST packet is like a message saying “&lt;STRONG&gt;I can’t process this, let’s terminate this connection&lt;/STRONG&gt;”. It’s a way for a device to signal that &lt;STRONG&gt;something has gone wrong in the communication process&lt;/STRONG&gt;. Even though a reset coming from Server to client does not mean server is down. It could be dropped because of some security reason like taking long time in the login process. Check more info later on scenarios&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Here are some reasons why an RST packet might be sent:&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;The receiving device was restarted and forgot about the connection.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;The packet was sent to a closed port.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;The packet was unexpected or not recognized by the receiving device.&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Additional ref:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="1 2 3 4 5 6 7"&gt;&lt;A href="https://learn.microsoft.com/en-us/windows/win32/secauthn/tls-handshake-protocol" target="_self"&gt;https://learn.microsoft.com/en-us/windows/win32/secauthn/tls-handshake-protocol&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="1 2 3 4 5 6 7"&gt;&lt;A href="https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/three-way-handshake-via-tcpip" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/three-way-handshake-via-tcpip&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="1 2 3 4 5 6 7"&gt;&lt;A href="https://en.wikipedia.org/wiki/Transmission_Control_Protocol" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/Transmission_Control_Protocol&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="1 2 3 4 5 6 7"&gt;&lt;A href="https://cabulous.medium.com/tcp-3-way-handshake-and-how-it-works-8c5f8d6ea11b" target="_blank" rel="noopener"&gt;https://cabulous.medium.com/tcp-3-way-handshake-and-how-it-works-8c5f8d6ea11b&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="1 2 3 4 5 6 7"&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/connect/jdbc/connection-resiliency" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/sql/connect/jdbc/connection-resiliency&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;4.1.1 - Simple sample on Wireshark&lt;/H4&gt;
&lt;P&gt;And here we can see a real communication in a Wireshark trace. The numbers on the right match the explanation above.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;It was a simple SQLCMD session with a wait of 30 seconds before executing the command, so we could also see a keep-alive packet.&lt;/P&gt;
&lt;PRE&gt;PS C:&amp;gt; sqlcmd -S Servername.sql.azuresynapse.net -d master -U ******** -P ********&lt;BR /&gt;1&amp;gt; select 1&lt;BR /&gt;2&amp;gt; go&lt;BR /&gt;-----------&lt;BR /&gt;1&lt;BR /&gt;(1 rows affected)&lt;BR /&gt;1&amp;gt; exit&lt;/PRE&gt;
&lt;P&gt;Or it could have ended in a bad way; in this scenario, an RST packet was forced by &lt;STRONG&gt;killing the PowerShell terminal&lt;/STRONG&gt;, so the connection could not end properly.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;4.1.2 - Direction of the packages&lt;/H4&gt;
&lt;P&gt;One important thing to notice here is the &lt;STRONG&gt;direction of the packages&lt;/STRONG&gt;. The destination port for Synapse or Azure SQL DB will always be 1433 and/or 11000-11999 in case of redirect, and the source port will be a high port number,&amp;nbsp;from the range of so-called &lt;STRONG&gt;ephemeral ports&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;&lt;EM&gt;Ephemeral ports are temporary communication points used for internet connections. They’re like temporary phone numbers that your computer uses to talk to other computers on the internet.&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;When you visit a website or use an app, your computer will automatically pick an available ephemeral port from a specific range of numbers. This port is used for that specific connection only and once the conversation is over, the port is closed and can be reused for another connection&lt;/EM&gt;.&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;&lt;EM&gt;The range of these ports can vary depending on your operating system. For example, many Linux systems use ports 32768-60999, while Windows systems use ports 49152-65535.&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;H4&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4&gt;4.1.3 - Redirect&lt;/H4&gt;
&lt;P&gt;For more information on Proxy vs Redirect check&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-1-inbound-sql-dw-connections-on/ba-p/3589170" target="_self"&gt;Part 1 - Inbound SQL DW connections on Public Endpoints&lt;/A&gt;; here I just want to show this communication from the network trace point of view. Below we can see that:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;- Source / Destination port = 1433 means you are speaking with the &lt;STRONG&gt;Synapse Gateway&lt;/STRONG&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;- a port in the 11xxx range means you are using &lt;STRONG&gt;redirect&lt;/STRONG&gt;, that is, you are communicating directly with the host server (called the &lt;STRONG&gt;Tenant Ring&lt;/STRONG&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;To better understand this scenario, check the doc below:&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/connectivity-architecture?view=azuresql#connectivity-from-within-azure" target="_blank" rel="noopener"&gt;Azure Synapse Analytics connectivity architecture&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;4.2 - SCENARIO 2 - The server was not found or was not accessible&lt;/H3&gt;
&lt;P&gt;Let's imagine you are looking at a simple error:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;&lt;FONT size="2"&gt;===================================&lt;/FONT&gt;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;&lt;FONT size="2"&gt;Cannot connect to WRONGSERVENAME.sql.azuresynapse.net.&lt;/FONT&gt;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;&lt;FONT size="2"&gt;===================================&lt;/FONT&gt;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;&lt;FONT size="2"&gt;A network-related or instance-specific error occurred while establishing a connection to SQL Server. &lt;/FONT&gt;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;&lt;FONT size="2"&gt;&lt;STRONG&gt;The server was not found or was not accessible&lt;/STRONG&gt;. Verify that the instance name is correct and &lt;/FONT&gt;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;&lt;FONT size="2"&gt;that SQL Server is configured to allow remote connections. &lt;/FONT&gt;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;&lt;FONT size="2"&gt;(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) &lt;/FONT&gt;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;&lt;FONT size="2"&gt;(Framework Microsoft SqlClient Data Provider)&lt;/FONT&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For "&lt;STRONG&gt;Server not found&lt;/STRONG&gt;" error you should be looking for &lt;A href="https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-1-inbound-sql-dw-connections-on/ba-p/3589170" target="_self"&gt;Part 1 - Inbound SQL DW connections on Public Endpoints&lt;/A&gt;&amp;nbsp;and &lt;A href="https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-2-inbound-synapse-private/ba-p/3705160" target="_self"&gt;Part 2 - Inbound Synapse Private Endpoints&lt;/A&gt;&amp;nbsp; of my blog posts series. You can easily check it with &lt;STRONG&gt;NSLOOKUP &lt;/STRONG&gt;command. There is &lt;STRONG&gt;NO need to capture network trace in this kind of error&lt;/STRONG&gt;. But anyway to show the network trace troubleshooting, I used the display filters below, That can be used to filter Synapse connection related&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;(tcp.port in {1433, 11000..11999} or (dns.qry.name contains "database.windows.net") or (dns.qry.name contains "sql.azuresynapse.net"))&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;TIP: If you are NOT seeing DNS requests, it may be because you already have them in the cache. Try to clear it before capturing:&lt;/P&gt;
&lt;PRE&gt;ipconfig /flushdns&lt;/PRE&gt;
&lt;P&gt;&lt;BR /&gt;In the sample above we can see that I sent requests to my 2 DNS servers (using Google DNS servers) and for both I got the same answer back from DNS saying "No such name A XXXXXXXX", which means that&amp;nbsp;&lt;STRONG&gt;this server does not exist&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;If the name was correct, then you could have a DNS problem. This is a&amp;nbsp;&lt;STRONG&gt;common issue when using Private Endpoints&lt;/STRONG&gt;. Check &lt;A href="https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-2-inbound-synapse-private/ba-p/3705160" target="_self"&gt;Part 2 - Inbound Synapse Private Endpoints&lt;/A&gt; of this blog post series, as mentioned above.&lt;/P&gt;
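&lt;P&gt;For reference, the NSLOOKUP check mentioned earlier is as simple as the command below (reusing the server name from the error message). If the name resolves, DNS is fine and you can move on to checking the ports.&lt;/P&gt;
&lt;PRE&gt;nslookup WRONGSERVENAME.sql.azuresynapse.net&lt;/PRE&gt;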
&lt;P&gt;&lt;BR /&gt;Another common issue for same error message could be &lt;STRONG&gt;port is closed&lt;/STRONG&gt;. Best option is to use a &lt;STRONG&gt;&lt;A href="https://github.com/microsoft/Azure-Synapse-Connectivity-Checker" target="_blank" rel="noopener"&gt;Azure Synapse Connectivity Checker&lt;/A&gt;&lt;/STRONG&gt;. This script helps us verify various aspects.&amp;nbsp;Follow the instructions on Git Main Page to execute script.&amp;nbsp;Upon executing the script, you can check if you have &lt;STRONG&gt;name resolution working fine&lt;/STRONG&gt; and&amp;nbsp;&lt;STRONG&gt;all needed ports open,&amp;nbsp;&lt;/STRONG&gt;illustrated in the sample below:&lt;BR /&gt;&amp;nbsp;&lt;BR /&gt;&amp;nbsp;&lt;FONT size="2"&gt;&lt;EM&gt;PORTS OPEN (Used CX DNS or Host File entry listed above)&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp;- TESTS FOR ENDPOINT - XXX.sql.azuresynapse.net - CX DNS IP (XXXXX)&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- PORT 1433 - RESULT: &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;CLOSED&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- PORT 1443 - RESULT: &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;CLOSED&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- PORT 443&amp;nbsp; - RESULT: CONNECTED&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp;- TESTS FOR ENDPOINT - XXX.sql.azuresynapse.net - CX DNS IP (XXXXX)&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- PORT 1433 - RESULT: &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;CLOSED&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- PORT 1443 - RESULT: &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;CLOSED&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- PORT 443&amp;nbsp; - RESULT: CONNECTED&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp;- TESTS FOR ENDPOINT - XXX.database.windows.net - CX DNS IP (XXXXX)&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- PORT 1433 - RESULT: &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;CLOSED&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- PORT 1443 - RESULT: &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;CLOSED&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- PORT 443&amp;nbsp; - RESULT: CONNECTED&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp;- TESTS FOR ENDPOINT - XXX.dev.azuresynapse.net - CX DNS IP (XXXXX )&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- PORT 443&amp;nbsp; - RESULT: CONNECTED&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&amp;nbsp;&amp;nbsp;&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;After analyzing the information provided above, we observed that &lt;STRONG&gt;port 1433 is closed&lt;/STRONG&gt;. This port is essential for establishing connections from Power BI, SSMS other clients.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;If your issue is with &lt;STRONG&gt;Synapse Studio&lt;/STRONG&gt;, make sure to check &lt;STRONG&gt;443&lt;/STRONG&gt; and &lt;STRONG&gt;1443&lt;/STRONG&gt; as documented at &lt;A href="https://learn.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-ip-firewall#connect-to-azure-synapse-from-your-own-network" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-ip-firewall#connect-to-azure-synapse-from-your-own-network&lt;/A&gt;&lt;/P&gt;
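&lt;P&gt;A quick manual probe of a single port from the client gives the same kind of answer as the script; a hedged PowerShell example (host name is a placeholder, use 1433 for SSMS / Power BI and 443 / 1443 for Synapse Studio):&lt;/P&gt;
&lt;PRE&gt;Test-NetConnection -ComputerName yourworkspace.sql.azuresynapse.net -Port 1433&lt;/PRE&gt;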
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;4.3 - SCENARIO 3 - Redirect port 11xxx range closed&lt;/H3&gt;
&lt;P&gt;Another error similar to the one above, but with a different conclusion. The customer had a Synapse workspace with &lt;STRONG&gt;public network access enabled&lt;/STRONG&gt; and was attempting to connect to the SQL endpoint using SSMS. However, they were unable to complete the login process due to the error below:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;&lt;EM&gt;Cannot Connect to Server.sql.azuresynapse.net&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt; A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: D - A&amp;nbsp;connection attempt failed because the connected party did not properly respond after a period of&amp;nbsp;time, or established connection failed because connected host has failed to respond.) (Microsoft&amp;nbsp;SQL server, Error: 10060)&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Troubleshooting&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;If we run the NSLOOKUP and port test commands as suggested above, we can see that the connection to the gateway is OK.&amp;nbsp;So we look further at the network trace, using a filter similar to the one above:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;(tcp.port in {1433, 11000..11999} or (dns.qry.name contains "database.windows.net") or (dns.qry.name contains "sql.azuresynapse.net"))&lt;/PRE&gt;
&lt;P&gt;&lt;BR /&gt;In the following example, the traffic indicates that communication TO the redirect port (11008) was unsuccessful. The client is trying to establish a TCP connection (SYN / SYN-ACK / ACK) but there is no reply back.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Solution&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;The customer should enable &lt;STRONG&gt;outbound communication&lt;/STRONG&gt;&amp;nbsp;on the CX-side firewall, from the client to &lt;STRONG&gt;ALL Azure SQL IP&lt;/STRONG&gt; &lt;STRONG&gt;addresses&lt;/STRONG&gt; in the region, on ports in the range 11000 to 11999, when using the public endpoint with redirect mode. Utilizing the &lt;STRONG&gt;Service Tags for SQL&lt;/STRONG&gt; can simplify the management of this process; a sketch of such a rule follows the links below.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/connectivity-architecture?view=azuresql#connection-policy" target="_blank" rel="noopener"&gt;&lt;FONT size="2"&gt;https://learn.microsoft.com/en-us/azure/azure-sql/database/connectivity-architecture?view=azuresql#connection-policy&lt;/FONT&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-network/service-tags-overview#service-tags-on-premises" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/virtual-network/service-tags-overview#service-tags-on-premises&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
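&lt;P&gt;A hedged sketch of what such an outbound rule could look like on the CX firewall (subnet, region and names are placeholders):&lt;/P&gt;
&lt;PRE&gt;Name              : Allow-Synapse-SQL-Redirect
Protocol          : TCP
Source            : 10.0.0.0/24        (client subnet)
Destination       : Sql.WestEurope     (SQL service tag for your region)
Destination ports : 1433, 11000-11999
Action            : Allow&lt;/PRE&gt;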
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;4.4 - SCENARIO 4 - Connection timeout (Middle connection) / PreLogin handshake&lt;/H3&gt;
&lt;P&gt;Here are some samples of error messages you might receive:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;EM&gt;Connection Timeout Expired. The timeout period elapsed while attempting to consume the pre-login handshake acknowledgement. This could be because the &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;pre-login handshake failed&lt;/STRONG&gt;&lt;/FONT&gt; or the server was unable to respond back in time. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=5895; handshake=29;&lt;/EM&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;EM&gt;A connection was successfully established with the server, but then an &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;error occurred during the pre-login handshake&lt;/STRONG&gt;&lt;/FONT&gt;. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)&lt;/EM&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;EM&gt;A connection was successfully established with the server, but then an &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;error occurred during the login process&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="2"&gt;&lt;EM&gt;Failed to copy to SQL Data Warehouse from blob storage. A connection was successfully established with the server, but then an &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;error occurred during the login process&lt;/STRONG&gt;&lt;/FONT&gt;. (provider: SSL Provider, error: 0 - An existing connection was forcibly closed by the remote host.) An existing connection was forcibly closed by the remote host&lt;/EM&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;BR /&gt;&lt;FONT color="#0000FF"&gt;This error means that &lt;STRONG&gt;client WAS able to reach Synapse gateway&lt;/STRONG&gt;, or else I would get "Server not found" error as mentioned above, but I still could not complete the connection&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This usually indicates some issue at the network level. Below you can find some related scenarios for this type of issue:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;4.4.1 - SCENARIO 4.1 - TLS Blocked&lt;/H4&gt;
&lt;P&gt;Something that we occasionally see at a customer is that &lt;STRONG&gt;TCP communication is OK&lt;/STRONG&gt;, so the &lt;STRONG&gt;PORT IS OPEN&lt;/STRONG&gt;. If you do a simple test, the &lt;STRONG&gt;port looks open&lt;/STRONG&gt;. But the encrypted&amp;nbsp;&lt;STRONG&gt;TDS / TLS messages do not get through&lt;/STRONG&gt;. From the client point of view you get a pre-login timeout, because you reach the server but the connection is not able to complete the login.&lt;/P&gt;
&lt;P&gt;In this scenario the client network team needs to review the firewall configuration. In the real CX scenario, their company firewall was blocking this communication.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is a sample&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;4.4.2 - SCENARIO 4.2 - Long connection&lt;/H4&gt;
&lt;P&gt;Another possible scenario is when the client machine has some issue, for example &lt;STRONG&gt;high CPU&lt;/STRONG&gt; for a long time, which is &lt;STRONG&gt;delaying package receive / send&lt;/STRONG&gt;. We have an internal security measure: if a connection is taking a long time, we might eventually disconnect you. From the client point of view, this is a pre-login handshake failure.&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;In this scenario it is not the server's fault; it was just the client with CPU so HIGH that it cannot handle network packages fast enough. After a long time Synapse will disconnect this connection because it did not complete in the expected time.&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;4.4.3 - SCENARIO 4.3 - Azure firewall&lt;/H4&gt;
&lt;P&gt;In this other scenario the customer had a Synapse workspace with public network access and attempted to connect to the Dedicated SQL pool using SSMS through an Azure VM. However, they were unable to complete the login process due to the error below:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;&lt;EM&gt;Cannot connect to Server.sql.azuresynapse.net. &lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;A was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - An existing connection was forcibly closed by the&amp;nbsp;&lt;/EM&gt;&lt;/FONT&gt;&lt;FONT size="2"&gt;&lt;EM&gt;remote host.) (Microsoft 10054) &lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;&lt;EM&gt;An existing connection was forcibly closed by the remote host &lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt;&lt;/P&gt;
&lt;H4&gt;&amp;nbsp;&lt;BR /&gt;Troubleshooting&lt;/H4&gt;
&lt;P&gt;The error indicates that the connection has been successfully established, but the login process was not completed. Therefore, the ports are open. However, now we need to check the communication traffic by capturing the network trace.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
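&lt;P&gt;To narrow the trace down to the connection teardown, you can use a display filter along the same lines as the reset filter shown later in this article (a sketch, assuming the default 1433 port and the 11000-11999 redirect range):&lt;/P&gt;
&lt;PRE&gt;(tcp.port in {1433, 11000..11999}) and (tcp.flags.fin == 1)&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;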
&lt;P&gt;Now, let's have a look at the trace below. Note the FIN packets (&lt;STRONG&gt;for an explanation of FIN packets, check above&lt;/STRONG&gt;) coming from port 1433 to the destination port 63417. The communication starts at 10:38:42, and the &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;FIN occurs at practically the same time as the start. Something in the middle is breaking the communication.&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After some troubleshooting, we noted that there was an &lt;STRONG&gt;Azure Firewall &lt;/STRONG&gt;that was part of the &lt;STRONG&gt;customer (CX) network&lt;/STRONG&gt;. Therefore, the flow of communication was as follows:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;VM Machine ---&amp;gt; Azure FW ---&amp;gt; Synapse workspace.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Solution&lt;/H4&gt;
&lt;P&gt;In this scenario we involved the Azure Firewall support team, which explained the following:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;&lt;EM&gt;Azure Firewall supports &lt;STRONG&gt;rules&lt;/STRONG&gt; and &lt;STRONG&gt;rule collections&lt;/STRONG&gt;. A rule collection is a set of rules that share the same order and priority. Rule collections are executed in order of their priority. &lt;STRONG&gt;Network rule collections are higher priority than application rule collections&lt;/STRONG&gt;, and all rules are terminating.&lt;/EM&gt; &lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;&lt;EM&gt;There are three types of rule collections:&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&lt;STRONG&gt;Application rules&lt;/STRONG&gt;: Configure fully qualified domain names (FQDNs) that can be accessed from a Virtual Network.&lt;/EM&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&lt;STRONG&gt;Network rules&lt;/STRONG&gt;: Configure rules that contain source addresses, protocols, destination ports, and destination addresses.&lt;/EM&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;&lt;EM&gt;&lt;STRONG&gt;NAT rules&lt;/STRONG&gt;: Configure DNAT rules to allow incoming Internet connections.&lt;/EM&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;Ref:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/firewall/firewall-faq#what-are-some-azure-firewall-concepts" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/firewall/firewall-faq#what-are-some-azure-firewall-concepts&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Accordingly, if network rules and application rules have both been configured, network rules are applied in priority order before application rules.&lt;BR /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this scenario the customer had an &lt;STRONG&gt;Application rule&lt;/STRONG&gt; to allow the client to reach the&amp;nbsp;FQDN (server.sql.azuresynapse.net), but this rule does not work for the &lt;STRONG&gt;redirect&lt;/STRONG&gt; connections used in this scenario. It was recommended to create a &lt;STRONG&gt;Network Rule&lt;/STRONG&gt; to allow port&amp;nbsp;&lt;STRONG&gt;1433&lt;/STRONG&gt; plus the range of redirect ports (&lt;STRONG&gt;11000-11999&lt;/STRONG&gt;) using &lt;STRONG&gt;Service tags&lt;/STRONG&gt;. Service tags are available for &lt;STRONG&gt;Network Rules&lt;/STRONG&gt;, and the Firewall Policy needs to be configured to allow this communication.&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&lt;BR /&gt;After allowing this communication, the login process completed successfully.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;4.5 - SCENARIO 5 - Connection Dropped&lt;/H3&gt;
&lt;P&gt;Let's suppose that the customer is loading data using ETL tools such as SSMS, ADF, Synapse ADF, Databricks, or any other third-party tool. This loading process has failed due to a disconnection. However, this disconnect does not occur in a predefined manner but happens in a &lt;STRONG&gt;transient&lt;/STRONG&gt; way, making it &lt;STRONG&gt;challenging&lt;/STRONG&gt; to identify or figure out how to reproduce the issue or determine the exact cause behind this problem.&lt;BR /&gt;&amp;nbsp;&lt;BR /&gt;As long as the data loading process proceeds smoothly, there is no need to conduct &lt;STRONG&gt;connectivity tests&lt;/STRONG&gt; since the connection is initially established (that means&amp;nbsp;&lt;STRONG&gt;WE CAN CONNECT&lt;/STRONG&gt;) but it's subsequently interrupted. But it is essential to gain insights into how the connection is established and the location of the client machine.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H4&gt;Troubleshooting&lt;/H4&gt;
&lt;P&gt;Troubleshooting begins with the collection of &lt;STRONG&gt;network traces&lt;/STRONG&gt; in a &lt;STRONG&gt;circular manner (as explained above)&lt;/STRONG&gt;. The network trace collection remains active until the issue occurs, at which point the customer needs to stop the trace as soon as possible.&lt;BR /&gt;&amp;nbsp;&lt;BR /&gt;Within the network trace, we focus on the communication occurring between the Synapse database and the client machine. When dealing with disconnection issues, it's crucial to examine the &lt;STRONG&gt;RESET packets&amp;nbsp;(RST)&lt;/STRONG&gt; in the network trace, as outlined below:&lt;BR /&gt;&amp;nbsp;&lt;BR /&gt;To filter for resets, use this display filter:&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;(tcp.port in {1433, 11000..11999}) and (tcp.flags.reset == 1)&lt;/PRE&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Notice that in these scenarios the RESET is coming &lt;STRONG&gt;from SERVER to CLIENT&lt;/STRONG&gt; (1433 -&amp;gt; Ephemeral Ports).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Even though it came from the server, we might need additional investigation into &lt;STRONG&gt;why the SERVER sent a reset&lt;/STRONG&gt;. &lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;That does not necessarily mean a health issue on the server&lt;/STRONG&gt;&lt;/FONT&gt;. It could be a connection reset because you performed a scale operation, which causes all existing connections to be dropped.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;To better understand what happened before the connection dropped, you can follow the TCP stream for that connection.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As the data is encrypted there is not much to see on the screen below. You can just &lt;STRONG&gt;close&lt;/STRONG&gt; it.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now the trace is filtered down to a single conversation using the tcp.stream filter.&lt;/P&gt;
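&lt;P&gt;For example, following the stream from the screen above applies a display filter similar to this one (the stream index depends on which conversation you selected):&lt;/P&gt;
&lt;PRE&gt;tcp.stream eq 7&lt;/PRE&gt;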
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this sample we can see that the connection completed and eventually the connection was reset &lt;STRONG&gt;by the Server side (1433 -&amp;gt; Ephemeral port)&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this timestamp you can investigate for &lt;STRONG&gt;some issue that happened at the same time&lt;/STRONG&gt; that could explain the disconnection, or you can &lt;STRONG&gt;open a support case&lt;/STRONG&gt; for further investigation, sharing this network trace.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Another sample&lt;/H4&gt;
&lt;P&gt;I ran a command that does not actually move data back and forth:&lt;/P&gt;
&lt;PRE&gt;WAITFOR DELAY '01:00:00'&lt;/PRE&gt;
&lt;P&gt;&lt;BR /&gt;This is &lt;STRONG&gt;similar to an update&lt;/STRONG&gt; that &lt;STRONG&gt;takes a long time on the server&lt;/STRONG&gt; &lt;STRONG&gt;side&lt;/STRONG&gt; but &lt;STRONG&gt;does not need to send data to the client for the whole query duration&lt;/STRONG&gt;. To avoid the connection being treated as idle, the Client or Server keeps sending Keep-Alive packets, and the other side must respond with an ACK to confirm that the connection should be kept alive.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#DF0000"&gt;&lt;STRONG&gt;This keep-alive packages does not mean any error !!! (Check detailed explanation above)&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;It just means that for some time there was no communication between client and server. If the connection stays idle for a long time, it can be disconnected.&lt;/P&gt;
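&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to spot these packets in the trace, a display filter like the one below can help (a sketch; tcp.analysis.keep_alive and tcp.analysis.keep_alive_ack mark the packets Wireshark classifies as keep-alives and their acknowledgements):&lt;/P&gt;
&lt;PRE&gt;(tcp.port in {1433, 11000..11999}) and (tcp.analysis.keep_alive or tcp.analysis.keep_alive_ack)&lt;/PRE&gt;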
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the sample below we can see:&lt;BR /&gt;• Client and server exchange some data&lt;BR /&gt;• Client stops requesting, and at the network level we can see some keep-alive packets&lt;BR /&gt;• Some more data is exchanged&lt;BR /&gt;• Client is forced to close (in this case by forcefully killing the application). &lt;STRONG&gt;A connection reset is sent from client to server to notify the server that this client is gone&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Summary&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hope this guide is useful to increase your knowledge of how connections work behind the scenes and how you can go deep into troubleshooting network traces.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;More info at:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/troubleshoot/azure/synapse-analytics/dedicated-sql/dsql-conn-dropped-connections" target="_blank" rel="noopener"&gt;Troubleshoot connectivity issues on a dedicated SQL pool&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 14 Nov 2023 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-4-advanced-network/ba-p/3945481</guid>
      <dc:creator>FonsecaSergio</dc:creator>
      <dc:date>2023-11-14T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Boost your CICD automation for Synapse SQL Serverless by taking advantage of SSDT and SqlPackage CLI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/boost-your-cicd-automation-for-synapse-sql-serverless-by-taking/ba-p/3922851</link>
      <description>&lt;H2&gt;&lt;FONT size="4"&gt;Introduction&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/synapse-analytics/sql/on-demand-workspace-overview" target="_blank" rel="noopener"&gt;Azure Synapse Analytics Serverless SQL&lt;/A&gt; is a query service mostly used over the data in your data lake, for data discovery, transformation, and exploration purposes. It is, therefore, normal to find in a Synapse Serverless SQL pool many objects referencing external locations,&amp;nbsp; using disparate external data sources, authentication mechanisms, file formats, etc. In the context of CICD,&amp;nbsp; where automated processes are responsible for propagating the database code across environments, one can take advantage of database oriented tools like &lt;A href="https://learn.microsoft.com/en-us/sql/ssdt/sql-server-data-tools?view=sql-server-ver16" target="_blank" rel="noopener"&gt;SSDT&lt;/A&gt; and &lt;A href="https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver16" target="_blank" rel="noopener"&gt;SqlPackage CLI&lt;/A&gt;&amp;nbsp;, ensuring that this code is conformed with the targeted resources.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;In this article I will demonstrate how you can take advantage of thee tools when implementing the CICD for the Azure Synapse Serverless SQL engine. We will leverage SQL projects in SSDT to define our objects and implement deploy-time variables (SQLCMD variables). &amp;nbsp;Through CICD pipelines, we will build the SQL project to a dacpac artifact, which enables us to deploy the database objects one or many times with automation.&lt;/FONT&gt;&lt;/P&gt;
&lt;H2&gt;&lt;BR /&gt;&lt;FONT size="4"&gt;Pre-Requisites&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;Before you run this lab, make sure that you are using the latest version of Visual Studio, since the support for Synapse Serverless was recently introduced in the 17.x version. The one that I've used in this lab was&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/visualstudio/releases/2022/release-notes" target="_blank" rel="noopener"&gt;Microsoft Visual Studio Community 2022 (64-bit)&amp;nbsp; Version 17.7.3&lt;/A&gt;.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;I've used&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/products/devops" target="_blank" rel="noopener"&gt;Azure DevOps Git&lt;/A&gt;&amp;nbsp;to setup the automated processes to build and deploy the Dacpac. In case you are using your own infrastructure to run these processes, ensure that you have installed the latest SqlPackage version in your agent runner machine. In this lab, I've used a&amp;nbsp;Microsoft-Hosted Agent runner using the latest Windows image (running SqlPackage v162.0.52).&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="4"&gt;TOC&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To facilitate navigating through this lab, I'm breaking this article down into several steps:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H5 lang="en-US" style="margin: 0in; font-family: Calibri; font-size: 16.0pt; color: #1e4e79;"&gt;&lt;FONT size="3"&gt;&lt;A href="#community--1-step1" target="_self"&gt;Step 1: Adding a new database project to Visual Studio and importing the serverless pool&lt;/A&gt;&amp;nbsp;&lt;/FONT&gt;&lt;/H5&gt;
&lt;H5 lang="en-US" style="margin: 0in; font-family: Calibri; font-size: 16.0pt; color: #1e4e79;"&gt;&lt;FONT size="3"&gt;&lt;A href="#community--1-step2" target="_self"&gt;Step 2: Taking advantage of SQL CMD Variables in your Visual Studio code&lt;/A&gt;&lt;/FONT&gt;&lt;/H5&gt;
&lt;H3 lang="en-US" style="margin: 0in; font-family: Calibri; font-size: 16.0pt; color: #1e4e79;"&gt;&lt;FONT size="3"&gt;&lt;A href="#community--1-step3" target="_self"&gt;Step 3: Integrating your Visual Studio solution with a Git Repository&lt;/A&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;H3 lang="en-US" style="margin: 0in; font-family: Calibri; font-size: 16.0pt; color: #1e4e79;"&gt;&lt;FONT size="3"&gt;&lt;A href="#community--1-step4" target="_self"&gt;Step 4: Creating a DevOps Pipeline and building the Dacpac file&lt;/A&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;H3 lang="en-US" style="margin: 0in; font-family: Calibri; font-size: 16.0pt; color: #1e4e79;"&gt;&lt;FONT size="3"&gt;&lt;A href="#community--1-step5" target="_self"&gt;Step 5: Creating a Release Pipeline and deploying the Dacpac&lt;/A&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Starting the Lab&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 lang="en-US" style="margin: 0in; font-family: Calibri; font-size: 16.0pt; color: #1e4e79;"&gt;&lt;A id="step1" target="_blank"&gt;&lt;/A&gt;Step 1: Adding a new database project to Visual Studio and importing the serverless pool&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Create a new project and select the "&lt;STRONG&gt;SQL Server Database Project&lt;/STRONG&gt;" template.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Select "&lt;STRONG&gt;Next&lt;/STRONG&gt;" to start configuring your new project.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Select "&lt;STRONG&gt;Create&lt;/STRONG&gt;" to finish the project configuration.&lt;/P&gt;
&lt;P lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Navigate to the &lt;STRONG&gt;Solution Explorer&lt;/STRONG&gt; blade and double-click "&lt;STRONG&gt;Properties&lt;/STRONG&gt;". This will open a new window displaying the database project properties.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;In "&lt;STRONG&gt;Project Settings&lt;/STRONG&gt;", select the "&lt;STRONG&gt;Azure Synapse Analytics Serverless SQL Pool&lt;/STRONG&gt;" target platform as shown in the figure below. If you don't see this option available in the dropdown list, most likely you don't have the latest SSDT/ Visual Studio version installed.&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; You can refer to these links in case you need to update SSDT and Visual Studio:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-ver16" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Download SQL Server Data Tools (SSDT) - SQL Server Data Tools (SSDT) | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;"&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/ssdt/sql-server-data-tools?view=sql-server-ver16#release-notes" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;SQL Server Data Tools - SQL Server Data Tools (SSDT) | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P lang="en-US"&gt;&lt;BR /&gt;After selecting the target platform, you can start importing the Synapse Serverless pool to your project.&lt;/P&gt;
&lt;P lang="en-US"&gt;To do this, from the "&lt;STRONG&gt;Solution Explorer&lt;/STRONG&gt;" blade, right-click the project name and then select the "&lt;STRONG&gt;Import&lt;/STRONG&gt;" --&amp;gt; "&lt;STRONG&gt;Database…&lt;/STRONG&gt;" option.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Hit the&amp;nbsp;"&lt;STRONG&gt;Select Connection&lt;/STRONG&gt;" button to specify your Synapse Serverless pool and from the "&lt;STRONG&gt;Import Settings&lt;/STRONG&gt;" section, make sure to uncheck the "Import application-scoped objects only" in case you need to import any server-scoped objects as well.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Select "&lt;STRONG&gt;Start&lt;/STRONG&gt;" to begin importing the Serverless SQL pool objects to your project.&lt;/P&gt;
&lt;P lang="en-US"&gt;When the import is finished, check the Solution Explorer as it will show the sql files containing your database objects:&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;A id="step2" target="_blank"&gt;&lt;/A&gt;Step 2: Taking advantage of SQL CMD Variables in your Visual Studio&amp;nbsp;code&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this lab, I'm using an external table that is pointing to a specific external location in my development environment. This external table, named "&lt;STRONG&gt;userData&lt;/STRONG&gt;", is targeting a delimited file, named "&lt;STRONG&gt;eds_mapping.csv&lt;/STRONG&gt;", saved in a storage container named "&lt;STRONG&gt;csv&lt;/STRONG&gt;".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The data source being used by this external table is named "&lt;STRONG&gt;eds_storagecicd&lt;/STRONG&gt;" and it is targeting a storage account named "&lt;STRONG&gt;stgsyncicddev&lt;/STRONG&gt;".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I've decided to use the Shared Access Signature authentication method, when accessing my storage account. That's why I'm defining a database scoped credential with this kind of authentication.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;IMPORTANT NOTE:&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In the screenshot above you are not seeing the full T-SQL statement, as the &lt;STRONG&gt;SECRET&lt;/STRONG&gt; argument is missing from the CREATE DATABASE SCOPED CREDENTIAL statement. However, a SECRET value was provided at creation time. This is by-design behavior (for security reasons) in Visual Studio: when importing your database objects from the database into the database project, sensitive information is not exposed in the code.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;So, make sure you revise any code that uses sensitive information, like database scoped credentials, validating that the object definition is consistent with what you have in the database.&lt;/P&gt;
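&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For reference, here is a minimal sketch of what the complete definitions could look like once you add the secret back (the credential name, file format name, column list and token value are illustrative placeholders, not the exact code from my project):&lt;/P&gt;
&lt;PRE&gt;CREATE DATABASE SCOPED CREDENTIAL [sas_credential]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = 'your-sas-token-here';

CREATE EXTERNAL DATA SOURCE [eds_storagecicd]
WITH (
    LOCATION   = 'https://stgsyncicddev.blob.core.windows.net',
    CREDENTIAL = [sas_credential]
);

CREATE EXTERNAL FILE FORMAT [csv_file_format]
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2)
);

CREATE EXTERNAL TABLE [dbo].[userData]
(
    [userId]   INT,
    [userName] VARCHAR(100)
)
WITH (
    LOCATION    = 'csv/eds_mapping.csv',
    DATA_SOURCE = [eds_storagecicd],
    FILE_FORMAT = [csv_file_format]
);&lt;/PRE&gt;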
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here's an example of an error message that can result from deploying an external table to a target environment using a missing or invalid credential:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Msg 16562, Level 16, State 1, Line 27&lt;/P&gt;
&lt;P&gt;External table 'dbo.userData' is not accessible because location does not exist or it is used by another process.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Usually, when deploying database objects like external tables from a source environment to a target environment, you require these objects to reference different resources. For example, the files that are referenced by external tables might be stored in a different storage location or storage path (and eventually stored with a different filename).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;So, how can you ensure that your database objects are referencing the right resources when being deployed to a target environment?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The answer is using &lt;A href="https://learn.microsoft.com/en-us/sql/ssdt/database-project-settings?view=sql-server-ver16#bkmk_sqlcmd_variables" target="_blank" rel="noopener"&gt;SQLCMD Variables&lt;/A&gt;. These variables can be used in SQL Server Database Projects, providing dynamic substitution to be used for publishing of Dacpac files, for example. By entering these variables in project properties, they will automatically be offered in publishing and stored in publishing profiles.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Visual Studio, you can add these variables to your project, from the "&lt;STRONG&gt;Project Properties&lt;/STRONG&gt;" window, by selecting the "&lt;STRONG&gt;SQLCMD Variables&lt;/STRONG&gt;" menu option:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Important note&lt;/STRONG&gt;: by adding these variables to your code in Visual Studio, you are &lt;U&gt;not changing anything at database level&lt;/U&gt;, your &lt;U&gt;changes will be reflected in your project files only&lt;/U&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this example, I'm creating three new variables , setting the values for the &lt;STRONG&gt;storage location&lt;/STRONG&gt;, the &lt;STRONG&gt;file path&lt;/STRONG&gt;, and the &lt;STRONG&gt;SAS key &lt;/STRONG&gt;that will be used by the external table&lt;STRONG&gt; "userData"&amp;nbsp;&lt;/STRONG&gt;&lt;U&gt;in my development environment&lt;/U&gt;:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the figures below you can see how the&amp;nbsp;hardcoded values have been replaced by these SQLCMD variables in my database project files:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
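&lt;P&gt;In text form, the substitution looks roughly like this (same illustrative names as the earlier sketch; the $(...) tokens are the SQLCMD variables defined above):&lt;/P&gt;
&lt;PRE&gt;CREATE DATABASE SCOPED CREDENTIAL [sas_credential]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '$(sas_key)';

CREATE EXTERNAL DATA SOURCE [eds_storagecicd]
WITH (
    LOCATION   = '$(storage_location)',
    CREDENTIAL = [sas_credential]
);

CREATE EXTERNAL TABLE [dbo].[userData]
(
    [userId]   INT,
    [userName] VARCHAR(100)
)
WITH (
    LOCATION    = '$(file_path)',
    DATA_SOURCE = [eds_storagecicd],
    FILE_FORMAT = [csv_file_format]
);&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;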
&lt;P&gt;From the "&lt;STRONG&gt;File&lt;/STRONG&gt;" menu, you can save all the changes, and before moving on to the next step, I'd recommend building your solution, from the "&lt;STRONG&gt;Build&lt;/STRONG&gt;" menu, ensuring that your code is error free.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;And with this last action, we complete the lab's second step.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 lang="en-US" style="margin: 0in; font-family: Calibri; font-size: 16.0pt; color: #1e4e79;"&gt;&lt;A id="step3" target="_blank"&gt;&lt;/A&gt;Step 3: Integrating your Visual Studio solution with a Git Repository&lt;/H3&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;By having your Visual Studio solution integrated with a Git repository, you are leveraging source control in your project and improving your Continuous Integration process, as part of the CICD automation for your Synapse Serverless pool.&lt;/P&gt;
&lt;P lang="en-US"&gt;As part of this process, the goal is to push the changes from your Visual Studio project to your Git branch, as the Dacpac file will be built on top of these files. This Dacpac file will represent the outcome of this Continuous Integration process.&amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;&lt;BR /&gt;To integrate your Visual Studio solution with your Git provider,&amp;nbsp; switch from the "&lt;STRONG&gt;Solution Explorer&lt;/STRONG&gt;" tab to the "&lt;STRONG&gt;Git Changes&lt;/STRONG&gt;" tab (you can access this tab from the "&lt;STRONG&gt;View&lt;/STRONG&gt;" menu).&lt;/P&gt;
&lt;P lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;From "&lt;STRONG&gt;Git Changes&lt;/STRONG&gt;", select the "&lt;STRONG&gt;Create Git Repository…&lt;/STRONG&gt;" option to initialize a local Git repository.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;To create a new remote repository to push your changes, select the "&lt;STRONG&gt;Push to a new remote&lt;/STRONG&gt;" option, otherwise, in case you prefer to use an existing remote repository, select&amp;nbsp;&lt;SPAN&gt;"&lt;/SPAN&gt;&lt;STRONG style="font-family: inherit;"&gt;Existing remote&lt;/STRONG&gt;&lt;SPAN&gt;" and provide your repository URL.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Select "&lt;STRONG&gt;Create and Push&lt;/STRONG&gt;" to complete the Git integration. During this integration, your project files will be automatically pushed to your remote repository. You can check&amp;nbsp;the master branch in your remote repository, as it should contain all your project files:&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 lang="en-US" style="margin: 0in; font-family: Calibri; font-size: 16.0pt; color: #1e4e79;"&gt;&lt;A id="step4" target="_blank"&gt;&lt;/A&gt;Step 4: Creating a DevOps Pipeline and building the Dacpac file&lt;/H3&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;After integrating your Visual Studio database project with your Git repository, it's time to setup a DevOps Pipeline to build the &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/data-tier-applications/data-tier-applications?view=sql-server-ver16#dacpac" target="_blank" rel="noopener"&gt;Dacpac&lt;/A&gt;.&lt;/P&gt;
&lt;P lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;From the left navigation menu, select "&lt;STRONG&gt;Pipelines&lt;/STRONG&gt;" and then "&lt;STRONG&gt;New pipeline&lt;/STRONG&gt;" to create a new pipeline.&lt;/P&gt;
&lt;P lang="en-US"&gt;Select "&lt;STRONG&gt;Use the classic editor&lt;/STRONG&gt;" to create a new pipeline without YAML.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Select "&lt;STRONG&gt;Azure Repos Git&lt;/STRONG&gt;" as the source type, and then specify your project name, repository and branch.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Select the .NET desktop template.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;To simply your pipeline, you can just keep these tasks below:&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Select "&lt;STRONG&gt;Save &amp;amp; Queue&lt;/STRONG&gt;" to save your changes and then select "&lt;STRONG&gt;Save and Run&lt;/STRONG&gt;" to run your pipeline.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;Once the job run is finished, you can validate the list of published artifacts by selecting the link below:&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P lang="en-US"&gt;The link will take you to the Dacpac file:&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;" lang="en-US"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Now that the Dacpac file has been published, it's time to configure the Continuous Delivery process.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;A id="step5" target="_blank"&gt;&lt;/A&gt;Step 5: Creating a Release Pipeline and deploying the Dacpac&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this step I'm creating a new release pipeline to deploy the Dacpac file to a target Synapse Serverless SQL pool.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From the left navigation menu, select "&lt;STRONG&gt;Pipelines&lt;/STRONG&gt;" and then select "&lt;STRONG&gt;Releases&lt;/STRONG&gt;".&lt;/P&gt;
&lt;P&gt;To start configuring your new release pipeline, select "&lt;STRONG&gt;+New&lt;/STRONG&gt;" and then "&lt;STRONG&gt;New release pipeline&lt;/STRONG&gt;".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When prompted to select a template, select "&lt;STRONG&gt;Empty Job&lt;/STRONG&gt;".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can name your stage and then close this blade.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let's start by adding our Dacpac file as a pipeline artifact. Select "&lt;STRONG&gt;+Add&lt;/STRONG&gt;" to add a new artifact.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Under "&lt;STRONG&gt;Source Type&lt;/STRONG&gt;" select "&lt;STRONG&gt;Build&lt;/STRONG&gt;". you must specify your project name and the build pipeline name.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Select "&lt;STRONG&gt;Add&lt;/STRONG&gt;" to add this artifact to your pipeline.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Select the "&lt;STRONG&gt;Tasks&lt;/STRONG&gt;" tab to start configuring your pipeline. Click the "&lt;STRONG&gt;+&lt;/STRONG&gt;" button in the Agent Job bar to add a new task.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can type "&lt;STRONG&gt;data warehouse&lt;/STRONG&gt;" in the search bar , as you're looking to add the "&lt;A href="https://marketplace.visualstudio.com/items?itemName=ms-sql-dw.SQLDWDeployment" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Azure SQL Datawarehouse deployment&lt;/STRONG&gt;&lt;/A&gt;" task to your release pipeline. This task will allow deploying a Dacpac file to the target environment.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Select "&lt;STRONG&gt;Add&lt;/STRONG&gt;" to add this task to your pipeline.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let's start by configuring the authentication related inputs in this task. Instead of using hardcoded values, I'll take advantage of the user defined Variables in my DevOps pipeline.&lt;BR /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In order to define and set the values for your variables, you must select the "&lt;STRONG&gt;Variables&lt;/STRONG&gt;" tab. I'm using these variables below, defining values for my target Synapse Serverless server, database and user credentials.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Back to the "&lt;STRONG&gt;Tasks&lt;/STRONG&gt;" tab, let's continue configuring our task , in particular the "&lt;STRONG&gt;Deployment Package&lt;/STRONG&gt;" section.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When you select the "&lt;STRONG&gt;SQL DACPAC file&lt;/STRONG&gt;" deploy type, the deployment task will execute the SqlPackage CLI to deploy (publish) the Dacpac file. The &lt;A href="https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver16" target="_blank" rel="noopener"&gt;SqlPackage&lt;/A&gt; is a command line utility built on top of the &lt;A href="https://www.microsoft.com/en-US/download/details.aspx?id=55255" target="_blank" rel="noopener"&gt;Data-Tier Application Framework&lt;/A&gt; (DacFx) framework , and it exposes some of the public DacFx APIs like the Extract, Publish and Script. Since we want to deploy a dacpac file, the action that we are interested in is the &lt;A href="https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage-publish?view=sql-server-ver16" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;PUBLISH&lt;/STRONG&gt;&lt;/A&gt;&amp;nbsp;action.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To specify the "DACPAC file" location, hit the "&lt;STRONG&gt;Browse&lt;/STRONG&gt;" button&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Specify the Dacpac file location from the linked artifact:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There's a final step that you need to take before saving and running your release pipeline: replacing the SQLCMD variables values with new values pointing to your target environment, as these variables are still referencing the resources in your source environment.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Any valid SQLCMD variable existing in the Dacpac can be overridden by adding the /v: (short form for /Variables:) property to the arguments list.&lt;BR /&gt;You can refer to this link to get more details on how to use SQLCMD variables in SqlPackage:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage-publish?view=sql-server-ver15#sqlcmd-variables" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;SqlPackage Publish - SQL Server | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this example, because I'm using three SQLCMD variables in Visual Studio (&lt;I&gt;storage_location&lt;/I&gt;, &lt;I&gt;file_path&lt;/I&gt; and &lt;I&gt;sas_key&lt;/I&gt;), I'm adding three user-defined variables to my pipeline to override the SQLCMD variables.&lt;BR /&gt;&lt;BR /&gt;My external table "&lt;STRONG&gt;userData&lt;/STRONG&gt;" will be pointing to a different storage account (&lt;STRONG&gt;stgsyncicduat&lt;/STRONG&gt; instead of stgsyncicddev)&amp;nbsp;and to a different file path (&lt;STRONG&gt;target-csv/eds_mapping.csv&lt;/STRONG&gt; instead of csv/eds_mapping.csv). Obviously, I'll be &lt;STRONG&gt;replacing the storage account SAS key&lt;/STRONG&gt;&amp;nbsp;as well.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After defining the pipeline variables, return to the task configuration, as you need to configure the SQLCMD variable replacement. This is done via SqlPackage arguments, when using the variables property. Using variables will instruct SqlPackage to override the SQLCMD variables being used in the Dacpac file with the new values defined in your DevOps variables.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;This is how I'm overriding my SQLCMD variables (storage_location, file_path, and sas_key).&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
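&lt;P&gt;In text form, the additional arguments passed to the deployment task look roughly like this (the $(...) references are the DevOps pipeline variables defined in the previous step):&lt;/P&gt;
&lt;PRE&gt;/v:storage_location=$(storage_location) /v:file_path=$(file_path) /v:sas_key=$(sas_key)&lt;/PRE&gt;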
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After the configuration is complete, hit the "&lt;STRONG&gt;Save&lt;/STRONG&gt;" button and then select "&lt;STRONG&gt;Create Release&lt;/STRONG&gt;" to run your release pipeline.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can track the release progress by selecting the release number link or by selecting the "&lt;STRONG&gt;View Releases&lt;/STRONG&gt;" button.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once in the release, you can mouse over the stage name, and select the "&lt;STRONG&gt;Logs&lt;/STRONG&gt;" button to get more details about the actions being performed during the job run.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After the execution is completed, the task output should look similar to this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To validate that the deployment went well, and all the objects are now pointing to the target environment resources, you can use a client tool such as SSMS. &lt;BR /&gt;&lt;BR /&gt;Et voila! My Synapse serverless objects were successfully deployed to the target environment and they are now pointing to a different external location&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;&lt;BR /&gt;By completing this lab, you should have learned how to take advantage of database-oriented tools (like SSDT or SqlPackage) to boost your CICD automation for Azure Synapse Serverless SQL pools. These tools facilitate the deployment of database changes across environments by providing deploy-time variables (SQLCMD variables) that are particularly helpful in the context of CICD for an Azure Synapse Serverless SQL pool, where you must adapt your database objects to the target environment resources.&lt;/P&gt;</description>
      <pubDate>Thu, 14 Sep 2023 14:25:38 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/boost-your-cicd-automation-for-synapse-sql-serverless-by-taking/ba-p/3922851</guid>
      <dc:creator>RuiCunha</dc:creator>
      <dc:date>2023-09-14T14:25:38Z</dc:date>
    </item>
    <item>
      <title>Metadata-Based Ingestion in Synapse with Delta Lake</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/metadata-based-ingestion-in-synapse-with-delta-lake/ba-p/3844900</link>
      <description>&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:3,&amp;quot;335551620&amp;quot;:3,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Overview&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The crucial first step in any ETL (extract, transform, load) process or data engineering program is ingestion, which involves dealing with multiple data sources and entities or datasets. Managing ingestion flow for multiple entities is therefore essential. This article outlines a Metadata-based approach for ingestion that allows for convenient modification of datasets whenever necessary. It also shows how a Delta Lake can be accessed by different forms of compute, such as Spark pools and SQL serverless, in Synapse and how these computes can be utilized in a single Synapse pipeline.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Scenarios:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Metadata-based ingestion is a frequent practice used by various teams. In the past, ASQL was used just for storing Metadata, which resulted in additional resource maintenance only for Metadata. However, with modern architecture and migration to Synapse for analytics, there is an opportunity to move away from other Metadata stores such as ASQL database.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;To address this, we explored multiple options to find a new and more effective way to store and read Metadata. After careful consideration, we introduced a new method for Metadata storage that would eliminate the need for ASQL and allow for easier maintenance of Metadata resources.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;This novel approach to Metadata storage is based on modern technology and is designed to meet the needs of a Data Engineering team. With this novel approach, the team can better manage Metadata resources as it is easily configurable with Delta tables.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Overall, the introduction of this new Metadata-based ingestion approach represents a significant step forward for every team.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Metadata Store Options&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;ASQL&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Dedicated SQL Pool&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Delta Lake tables&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Excel/csv files&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Azure Table Storage&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Serverless SQL Pool&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Among the above options&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;, Delta Lake tables&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="none"&gt; are the best fit for storing Metadata, as Delta Lake offers ACID like properties, schema enforcement and data versioning, which makes it more useful.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Pros and cons of Delta Lake tables are as below:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Pros:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Delta Lake tables are used heavily in all reporting solutions.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Delta Lake tables support bulk update SQL commands such as update, merge, etc.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Delta Lake tables support audit and restore previous snapshot of data.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Delta Lake provides several features such as ACID transactions, schema enforcement, upsert and delete operations, unified stream and batch data processing, and time travel (data versioning) that are incredibly useful for analytics on Big Data.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Cons:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Synapse pipelines cannot directly read Delta tables. They can only be read through compute like Spark pool.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;How to use Delta tables for Metadata-based ingestion&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="4" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Using Serverless SQL pool&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="4" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Using Synapse dataflows&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="4" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Using Spark notebook&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;From the above listed options, Serverless SQL pool seems to be the optimal way to read Metadata from Delta tables in Synapse.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;SQL serverless pool benefits&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;SQL serverless pool can now read Delta tables directly from Delta Lake, just like csv files. This can be leveraged in Synapse pipelines to read Metadata stored in Delta tables.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;SQL serverless pool is always available and part of every Synapse workspace.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;SQL serverless pool does not require any spin up time to become active unlike Spark pools.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;The cost of the serverless pool is low, at $5 per TB of data processed. In this scenario, only a few Metadata rows are read, which is cost-efficient.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Costing Model (a small worked cost example follows this list):&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Cost is $5 per TB of data processed by the query.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Cost depends on the amount of data processed:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Data read/write into Delta Lake.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Data shuffled in intermediate nodes.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Creation of statistics.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Minimum data processed will be 10 MB.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
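&lt;P&gt;&lt;SPAN data-contrast="none"&gt;For a rough sense of scale, the small worked example below applies the pricing above ($5 per TB, 10 MB minimum per query) to a typical Metadata lookup; the byte counts are assumed values for illustration only.&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;# Illustrative cost estimate for a serverless SQL query.
# Pricing from this article: $5 per TB processed, 10 MB minimum billed per query.
PRICE_PER_TB_USD = 5.0
MIN_BILLED_BYTES = 10 * 1024 * 1024  # 10 MB minimum billed per query

def estimated_query_cost_usd(bytes_processed):
    billed = max(bytes_processed, MIN_BILLED_BYTES)
    return billed / float(1024 ** 4) * PRICE_PER_TB_USD

# A Metadata lookup scanning roughly 2 MB is billed at the 10 MB minimum:
print(estimated_query_cost_usd(2 * 1024 * 1024))  # about 0.0000477 USD&lt;/PRE&gt;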
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Metadata (Delta table) based ingestion using Serverless SQL pool.&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Using Serverless SQL pool, we can query Delta tables to read our Metadata and then start our orchestration process using pipelines. For sample Metadata, please refer to the GitHub repository mentioned in appendix.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;How to use Serverless SQL pool to read Metadata:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;Create a linked service:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Prerequisite: Create at least one database under serverless pool for Metadata.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Connection String: Integrated Security=False; Encrypt=True; Connection Timeout=30; Data Source=synapseserverlesspoolname-ondemand.sql.azuresynapse.net ; Initial Catalog=DBName&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;2. &lt;SPAN data-contrast="none"&gt;Create a new dataset using the linked service created in step 1 and keep the table name empty.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;3. As shown in below snapshot, Create a pipeline that uses Look-up activity to read Metadata from Delta Lake. In this Look-up activity we are connecting to dataset (from point 2) to fire user customized query on Delta table. Once the Look-up activity retrieves all rows from the Metadata table, the For-loop activity is used to iterate through each Metadata row from Look-up activity output. And within For-loop activity, a copy activity is used to read data from source to sink using query/table option for given column SourceQuery/SourceEntity from Metadata row, respectively.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Overall, this pipeline efficiently leverages the Look-up and For-loop activities to retrieve Metadata and iterate through it, while the copy activity facilitates data movement from the source to the sink using the appropriate query or table options based on the Metadata information.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Look-up activity to fire user customized query on Delta table as shown below: &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Query:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;SELECT * FROM OPENROWSET (BULK 'filepath', FORMAT = 'delta') as rows&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;NOTE:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Use &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;filepath&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="none"&gt; above in format as given: &lt;/SPAN&gt;&lt;A href="https://datalakegen2.b/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://datalakegen2.b&lt;/SPAN&gt;&lt;/A&gt;&lt;A href="http://lob.core.windows.net/eehrsisynapsefs/data/common/metadata/entitymetadata/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;lob.core.windows.net/syanpseworkspace/data/common/metadata/entitymetadata/&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;With Metadata-based ingestion, Synapse pipeline is simplified and it contains only a few pipeline items like copy activity, for-loop, and look-up, reducing the need of adding 100 of copy activities if you are processing data for 100 entities each day, which makes pipeline huge and difficult to maintain.  Metadata-based pipeline makes orchestration simple and clean.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Process to merge raw data into gold table&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Once we have data copied into raw layer, we call a Synapse notebook which reads Metadata and merge raw parquet files data into gold tables in parallel threads as shown below:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Steps to make notebook in parallel call fashion:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI data-leveltext="%1." data-font="Lato,Times New Roman" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559684&amp;quot;:-1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Create methods which will have logic to merge data to existing tables:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
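&lt;P&gt;&lt;SPAN data-contrast="none"&gt;As a hedged sketch of what such a merge method might look like (assuming each gold table is a Delta table keyed on an Id column and that the raw data lands as Parquet; the method and column names are placeholders, not the exact implementation):&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;from delta.tables import DeltaTable

def merge_entity(spark, raw_path, gold_path, key_col="Id"):
    """Upsert one entity's raw Parquet data into its gold Delta table."""
    source_df = spark.read.parquet(raw_path)
    gold = DeltaTable.forPath(spark, gold_path)
    (gold.alias("t")
         .merge(source_df.alias("s"), "t.{k} = s.{k}".format(k=key_col))
         .whenMatchedUpdateAll()
         .whenNotMatchedInsertAll()
         .execute())&lt;/PRE&gt;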
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;2. Read Metadata into a data frame:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
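&lt;P&gt;&lt;SPAN data-contrast="none"&gt;For step 2, a minimal sketch (using the same illustrative Metadata path as earlier; the Metadata table is small, so collecting it to the driver is safe):&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;# Illustrative path; point this at your own Metadata Delta table.
metadata_path = "abfss://data@datalakegen2.dfs.core.windows.net/common/metadata/entitymetadata/"
meta_df = spark.read.format("delta").load(metadata_path)
entities = meta_df.collect()  # list of Row objects, one per entity&lt;/PRE&gt;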
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;&lt;SPAN class="TextRun SCXW89112504 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW89112504 BCX8"&gt;Call method created above in parallel fashion (async calls):&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW89112504 BCX8" data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;img /&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
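&lt;P&gt;&lt;SPAN data-contrast="none"&gt;For step 3, one way to fan the merges out is with Python's concurrent.futures on the driver, so several merge jobs run concurrently inside the same Spark session. This sketch reuses the hypothetical merge_entity method and the RawPath, GoldPath, and SourceEntity columns from the earlier sketches.&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;from concurrent.futures import ThreadPoolExecutor, as_completed

# Submit one merge per Metadata row; Spark schedules the jobs concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {
        pool.submit(merge_entity, spark, row["RawPath"], row["GoldPath"]): row["SourceEntity"]
        for row in entities
    }
    for future in as_completed(futures):
        future.result()  # re-raise any failure for this entity
        print("Merged", futures[future])&lt;/PRE&gt;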
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Advantages:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI data-leveltext="%1." data-font="Lato,Times New Roman" data-listid="9" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559684&amp;quot;:-1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;As the Synapse session takes 2-3 mins to start, it will save a lot of time by processing multiple entities in parallel executions.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Lato,Times New Roman" data-listid="9" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559684&amp;quot;:-1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;With Metadata driven approach if there is a change in any attribute, like entities addition/removal, Delta Lake path, then you just need to change Metadata and code will take care of the rest.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Lato,Times New Roman" data-listid="9" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559684&amp;quot;:-1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Minimal cost: cost is $5 per TB data processed by the query.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Conclusion&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Delta Lake is a valuable resource for persisting data in storage, offering ACID-like properties that are particularly useful for analytics and reporting. It provides several other features such as schema enforcement, upsert and delete operations, unified stream and batch data processing, and time travel (data versioning) that are incredibly useful for analytics on Big Data.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;A streamlined ingestion process is critical for ETL. Metadata-based ingestion provides an effective solution for achieving a smooth ingestion flow, with only a one-time setup required. Pipeline activities can be easily extended, to add new entities or retire existing datasets.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;In this article, we have explored how Delta Lake can be accessed by different forms of compute in Synapse such as Spark pools and SQL serverless. And how these compute steps can be orchestrated in a single Synapse pipeline. This, combined with the other benefits of Delta Lake mentioned above, make it an incredibly useful format for data ingestion and storage in an ETL process and in Synapse pipelines.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Appendix&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Code Repository:&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;A href="https://github.com/microsoft/SynapseGenie/tree/main/utilities/Metadata-IngestionUsing-ServerlessSQLPool" target="_blank" rel="noreferrer noopener"&gt;Metadata-IngestionUsing-ServerlessSQLPool&lt;/A&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Other Helpful articles:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;A href="https://github.com/microsoft/SynapseGenie/tree/main/utilities" target="_blank" rel="noopener"&gt;Synapse Genie&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/strengthen-delta-lake-in-synapse-with-auto-maintenance-job/ba-p/3737161" target="_blank" rel="noopener"&gt;strengthen-delta-lake-in-synapse-with-auto-maintenance-job&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/data-factory/control-flow-for-each-activity" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/data-factory/control-flow-for-each-activity&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-what-is-delta-lake" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-what-is-delta-lake&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;A href="https://docs.delta.io/latest/delta-intro.html" target="_blank" rel="noopener"&gt;Introduction — Delta Lake Documentation&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 19 Jul 2023 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/metadata-based-ingestion-in-synapse-with-delta-lake/ba-p/3844900</guid>
      <dc:creator>madhuvigupta</dc:creator>
      <dc:date>2023-07-19T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Missing Fields Added to Dedicated SQL pool Diagnostic Settings Logs</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/missing-fields-added-to-dedicated-sql-pool-diagnostic-settings/ba-p/3844011</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Over the past year, customers have informed the team there were a set of key columns missing in the &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;standalone Dedicated SQL pools (formerly SQL DW) and Synapse workspace Dedicated SQL pool Diagnostic Setting logs. After receiving a considerable amount of customer feedback, nine fields have been added to the Dedicated SQL pool logs and are generally available. These values will provide a more &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;effective monitoring experience as customers are now able to identify which session the query belongs to along with other fundamental insights.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The fields added into the Synapse Diagnostic Settings Logs are the following:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/azure/azure-monitor/reference/tables/synapsesqlpoolexecrequests" target="_blank" rel="noopener"&gt;Azure Monitor Logs reference - SynapseSqlPoolExecRequests:&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="53" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;ClassifierName&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="53" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Importance&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="53" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;ResourceAllocationPercent&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="53" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;ResultCacheHit&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="53" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;SessionId&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="53" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;SubmitTime [UTC]&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="53" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;SubscriptionId&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/azure/azure-monitor/reference/tables/synapsesqlpoolwaits" target="_blank" rel="noopener"&gt;Azure Monitor Logs reference - SynapseSqlPoolWaits&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="53" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Request_id&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="53" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Session_id&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="53" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Lock Type&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;There will be more fields added later and we’re working on enabling Basic Logs for Synapse logs as well so please stay tuned!&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 13 Jun 2023 18:09:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/missing-fields-added-to-dedicated-sql-pool-diagnostic-settings/ba-p/3844011</guid>
      <dc:creator>jacindaeng</dc:creator>
      <dc:date>2023-06-13T18:09:49Z</dc:date>
    </item>
    <item>
      <title>Using Azure DevOps with Synapse Workspaces to create hot fixes in production environments</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/using-azure-devops-with-synapse-workspaces-to-create-hot-fixes/ba-p/3809631</link>
      <description>&lt;P&gt;Have you ever deployed a release to production only to find out a bug has escaped your testing process and now users are being severely impacted? In this post, I’ll discuss how to deploy a fix from your development Synapse Workspace into a production Synapse Workspace without adversely affecting ongoing development projects.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This example uses Azure DevOps for CICD along with a Synapse extension for Azure DevOps: &lt;A href="https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy" target="_blank" rel="noopener"&gt;Synapse Workspace Deployment&lt;/A&gt;. In this example, I assume Synapse is already configured for source control with Azure DevOps Git, and that Build and Release pipelines are already defined in Azure DevOps. Instructions on how to set this up can be found in the Azure Synapse documentation for &lt;A href="https://learn.microsoft.com/en-us/azure/synapse-analytics/cicd/continuous-integration-delivery" target="_blank" rel="noopener"&gt;continuous integration and delivery&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let’s start by looking at an example of development activities in a Synapse workspace. During development, several things happen. The Main (aka. collaboration, trunk or master) branch in the Synapse Development environment gets branched off one or more times to support parallel feature development activities. Changes are committed within the feature branches. Periodically, the branches need to refresh their code with the latest changes that have been incorporated into Main. It is typically up to the developer working on a feature branch to merge their code from Main at whatever interval they choose. These activities can all be done through the Synapse user interface by executing a pull request in the proper direction, in this case from Main to Feature 1. It is up to the feature developers to resolve conflicts in their feature branch.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;At some point, code in the Main branch is ready for deployment to the Synapse Test environment. After testing, a decision is made to release the code into Production. This is done through an Azure DevOps release pipeline outside the scope of this document. For the sake of simplicity, the following diagrams omit the Synapse Test environment and only show the Synapse Development and Production environments since we are targeting a Production “hot fix” use case. Best practice would be to have at least one intermediate environment between Development and Production.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After a Production release, development continues on with Feature 3…N, and changes are incorporated into the Main branch as shown in the diagram below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now to the heart of the discussion... At some point, a critical bug is discovered in Production that needs to be fixed immediately. How do we do this?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Start by getting the Commit ID of the code that was released to production and create a new branch based on the Commit ID. These steps must be done through Azure DevOps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Azure DevOps, navigate to Pipelines/Releases, and select the appropriate deployment pipeline for the production release. Then select the appropriate release.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note: It is important that the release pipeline has access to both the ARM templates created by the build process and the actual source code. The ARM templates are used for deployment into later stages (Test/QA/Prod/etc.). The source code will be used for hot fixes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Copy the Commit ID (this looks like an 8 character hex string in the UX, but is actually only the first 8 characters of a longer SHA) for the released code as shown in the diagram below. Make sure you select the Commit ID for the source code and not the ARM template artifact.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Navigate to the branches section of your Azure DevOps Repo and create a new branch based on the Commit ID you copied in the last step.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Select the pulldown on “main” to get to the UX screen that allows you to search for a specific Commit:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Validate that the new branch is based on the proper Commit ID:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
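&lt;P&gt;If you would rather script the branch creation than click through the portal, the equivalent Git commands look roughly like this (the SHA below is a placeholder for the Commit ID you copied; the branch name matches the example used later in this post):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Create a hot fix branch from the Commit ID of the production release
git fetch origin
git branch Release3_Hotfixes abcd1234     # abcd1234 = placeholder for the copied Commit ID
git push origin Release3_Hotfixes
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;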
&lt;P&gt;Now that we have a special code branch for creating the fix, we can go back into the Development Synapse Workspace, select the hot fix branch, and make our corrections.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When satisfied with the changes, commit them to the branch. In order to get changes in the hot fix branch deployed to Production without impacting the Development and Test environments, we need to create a new Azure DevOps release pipeline incorporating the Synapse Workspace Extension task. In our example, the release of the hot fix branch into Production is triggered manually in Azure DevOps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The hot fix branch should remain in existence for the life of the Production release it is associated with, so that future hotfixes can be incorporated easily.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let’s create a new release pipeline specifically for hot fixes following the instructions for Synapse &lt;A href="https://learn.microsoft.com/en-us/azure/synapse-analytics/cicd/continuous-integration-delivery" target="_blank" rel="noopener"&gt;continuous integration and delivery&lt;/A&gt;. We will make a couple of modifications to the pipeline that is created in that document. First, the source artifact for this pipeline should have a default branch pointing to the hot fix branch for this release (i.e., Release3_Hotfixes in our example). Update the source alias to reflect the proper code branch. Update the stage name to “Production_Hotfix”.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now click on the “1 job, 0 task” link under the stage name. Add a new agent job. Search for Synapse and select the “Toggle” task. Update the fields as appropriate for your Production environment. For the subscription, use a service connection for the Prod environment. You can see how to create service connections in &lt;A href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&amp;amp;tabs=yaml&amp;amp;preserve-view=true" target="_blank" rel="noopener"&gt;this document&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Add another agent task. Search for Synapse and add the task called “Synapse Workspace Deployment”. Update the fields as shown in the screenshot below, replacing values as appropriate with names in your Production environment. Make sure the operation type is “Validate and Deploy”.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Add another agent task; search for Synapse and add the task called “Azure Synapse Toggle Triggers”. Update the fields as shown in the screenshot below, replacing values as appropriate with names in your Production environment. Rename the pipeline to something meaningful. Save your pipeline.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The pipeline we just created must be manually run to deploy the hot fix to Production. Go into Azure DevOps and create a new release using this pipeline when you are ready.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
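&lt;P&gt;If you prefer to script this step, the Azure DevOps CLI can queue the release as well. A minimal sketch, assuming the hot fix pipeline was saved with the placeholder name “Synapse Hotfix Deployment” and that you substitute your own organization and project:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# One-time install of the Azure DevOps CLI extension
az extension add --name azure-devops

# Queue a new release from the hot fix release pipeline (names below are placeholders)
az pipelines release create \
  --org https://dev.azure.com/YourOrg \
  --project YourProject \
  --definition-name "Synapse Hotfix Deployment"
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;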
&lt;P&gt;We have now completed the hot fix release flow shown below. Validate that your Synapse Development Main code branch and live mode UX are unaffected by the hot fix.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Don’t forget to incorporate the hot fix code changes back into the Main and Feature branches at a convenient time. As new releases are deployed, new hot fix branches based on the Commit ID of the release will need to be created. Hot fix branches associated with releases that are no longer current can be deleted.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;That’s all there is! Congratulations on fixing the burning issue without interrupting ongoing development and testing. &amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 10 May 2023 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/using-azure-devops-with-synapse-workspaces-to-create-hot-fixes/ba-p/3809631</guid>
      <dc:creator>vysuopys</dc:creator>
      <dc:date>2023-05-10T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Azure Synapse MVP Corner - March 2023</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-march-2023/ba-p/3812901</link>
      <description>&lt;H2 id="toc-hId-505479025"&gt;About this blog series&lt;/H2&gt;
&lt;P&gt;Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community.&amp;nbsp;&lt;SPAN&gt;They are always on the "bleeding edge" and have an unstoppable urge to get their hands on new, exciting technologies. They have very deep knowledge of Microsoft products and services, while also being able to bring together diverse platforms, products and solutions, to solve real world problems.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;To&amp;nbsp;learn about the Microsoft MVP Award and to find MVPs visit the official website:&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://mvp.microsoft.com/" target="_blank" rel="noopener noreferrer"&gt;https://mvp.microsoft.com/&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Azure Synapse Analytics team presents you with a blog series "&lt;EM&gt;Azure Synapse MVP Corner&lt;/EM&gt;"&amp;nbsp;to&amp;nbsp;highlight selected content created by MVPs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--1301975438"&gt;This month's MVP content&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="border-style: none; border-color: #FFFFFF;" border="0px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="120px" valign="top"&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A href="https://mvp.microsoft.com/en-us/PublicProfile/5004032" target="_blank" rel="noopener"&gt;Kevin Chant&lt;/A&gt;&lt;/STRONG&gt; (Twitter: &lt;A href="https://twitter.com/kevchant" target="_blank" rel="noopener"&gt;@kevchant&lt;/A&gt;)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Blog post:&lt;/STRONG&gt; &lt;A href="https://www.kevinrchant.com/2023/03/09/deploy-a-dacpac-to-a-serverless-sql-pool-using-github-actions/" target="_blank" rel="noopener"&gt;Deploy a dacpac to a serverless SQL pool using GitHub Actions&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Description:&lt;/STRONG&gt; Covers how you can deploy a dacpac to a serverless SQL pool using&amp;nbsp;GitHub Actions, which is now possible thanks to a SqlPackage update.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="120px" valign="top"&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A href="https://mvp.microsoft.com/en-us/PublicProfile/5004032" target="_blank" rel="noopener"&gt;Kevin Chant&lt;/A&gt;&lt;/STRONG&gt; (Twitter: &lt;A href="https://twitter.com/kevchant" target="_blank" rel="noopener"&gt;@kevchant&lt;/A&gt;)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Video:&lt;/STRONG&gt; &lt;A href="https://www.youtube.com/watch?v=wakDmLYxSD0" target="_blank" rel="noopener"&gt;Streamlining your CI/CD Pipeline with Azure Synapse Link for SQL Server 2022&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Description:&lt;/STRONG&gt; Covers a complete CI/CD experience for Azure Synapse Link for SQL Server 2022.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="120px" valign="top"&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A href="https://mvp.microsoft.com/en-us/PublicProfile/5004611" target="_blank" rel="noopener"&gt;Andy Cutler&lt;/A&gt;&lt;/STRONG&gt; (Twitter: &lt;A href="https://twitter.com/MrAndyCutler" target="_blank" rel="noopener"&gt;@MrAndyCutler&lt;/A&gt;)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Blog post:&lt;/STRONG&gt; &lt;A href="https://www.serverlesssql.com/what-is-resultset-caching-azure-synapse-analytics-and-power-bi-together/" target="_blank" rel="noopener"&gt;Understanding ResultSet Caching in Dedicated SQL Pools with Power BI&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Description:&lt;/STRONG&gt; In this first blog in a new series on Power BI + Azure Synapse Analytics, Andy looks into Dedicated SQL Pools ResultSet Caching and how this feature can accelerate load performance into Power BI datasets.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV id="tinyMceEditorpawelpotasinski_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV id="tinyMceEditorpawelpotasinski_4" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="120px" valign="top"&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A href="https://mvp.microsoft.com/en-us/PublicProfile/5004611" target="_blank" rel="noopener"&gt;Andy Cutler&lt;/A&gt;&lt;/STRONG&gt; (Twitter: &lt;A href="https://twitter.com/MrAndyCutler" target="_blank" rel="noopener"&gt;@MrAndyCutler&lt;/A&gt;)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Blog post:&lt;/STRONG&gt; &lt;A href="https://www.serverlesssql.com/azure-synapse-serverless-sql-pools-cheat-sheet/" target="_blank" rel="noopener"&gt;Azure Synapse Serverless SQL Pools Cheat Sheet&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Description:&lt;/STRONG&gt; Here is a Synapse Serverless SQL Pools Cheat Sheet to help you easily reference syntax to query CSV, Parquet, and Delta data in Azure Data Lake. It covers various scenarios when querying and includes creating Views, External Tables, and using system metadata to track Data Processed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-1185537395"&gt;Call to action&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Read or watch the content listed above.&lt;/LI&gt;
&lt;LI&gt;If you like the content, subscribe to the blogs or YouTube channels of the MVPs.&lt;/LI&gt;
&lt;LI&gt;Follow the MVPs mentioned above on Twitter.&lt;/LI&gt;
&lt;LI&gt;Stay tuned for more "MVP Corner" blog posts!&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 09 May 2023 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-march-2023/ba-p/3812901</guid>
      <dc:creator>pawelpotasinski</dc:creator>
      <dc:date>2023-05-09T16:00:00Z</dc:date>
    </item>
    <item>
      <title>CI &amp; CD With Azure Synapse Dedicated SQL Pool</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/ci-amp-cd-with-azure-synapse-dedicated-sql-pool/ba-p/3810686</link>
      <description>&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;Author(s): Pradeep Srikakolapu&amp;nbsp;is a Program Manager in Azure Synapse Customer Success Engineering (CSE) team.&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259,&amp;quot;469777462&amp;quot;:[250,4680],&amp;quot;469777927&amp;quot;:[0,0],&amp;quot;469777928&amp;quot;:[1,3]}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Automating development practices is hard, but this blog article makes it simpler by using version control, continuous integration &amp;amp; deployment, and best practices to manage the ALM lifecycle of an Azure Synapse data warehouse.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 aria-level="1"&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This article helps developers understand the following:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;How to version control Synapse Dedicated SQL Pool (Azure data warehouse) objects?&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;How to continuously develop and deploy data warehouse objects using SSDT &amp;amp; SQL Package?&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;How to selectively deploy objects from a dacpac file?&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The prerequisites to understand this article are:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Basic understanding of GIT, a repository in Azure DevOps or GitHub organizations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;How SSDT and SQL Package tooling works&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Azure DevOps Pipelines&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Note: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Synapse Serverless SQL Pool is not discussed in this blog. I will blog about version control, CI/CD for Synapse Serverless soon.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Version control in DW is achieved by SQL Server Data Tooling (SSDT) with a database project. You can build a database project using Visual Studio, VS Code, Azure Data Studio. I will be using Visual Studio 2022 in this blog.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="1"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Build &amp;amp; publish database code locally from a developer machine&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-contrast="auto"&gt;Create a database project – &lt;EM&gt;File&amp;gt; New &amp;gt; Project, choose SQL Server Database Project template&lt;/EM&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Once the database project is created, right click on the project in the solution explorer. You should choose the target platform as “Microsoft Azure SQL Data Warehouse”.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;img /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Organize the database project in the same way we organize the objects in SSMS.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In the above screenshot, see that I organized schema files in Security -&amp;gt; Schemas folder. I also organized tables and views for each scheme for better abstraction like SSMS with minor differences. Please make sure to include .sql files as part of your build by configuring the build action property to “Build” and copy to output directory property to “Do not copy”.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;You can build the DACPAC file by building the database project from context menu or the entire solution by F5. The context menu of database project has an option build to compile the database project into DACPAC file.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
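&lt;P&gt;The same build can be scripted. A minimal sketch using MSBuild from a Developer Command Prompt, assuming the project layout shown in this post:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Build the database project from the command line; the DACPAC is written to bin\Debug
msbuild DWSSDT\DWSSDT.sqlproj /t:Build /p:Configuration=Debug
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;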
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;SSDT + SQL Database projects use MS build to compile the source code and extract the DAC package file (DACPAC file). If you unzip/unpack DACPAC file, you will see DacMetadata.xml, Origin.xml, model.xml, and model.sql files. model.sql and model.xml files are the representation of your database model.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Once you build a DACPAC file by building database project, you can deploy it to the target data warehouse using SQL Package – Publish action either via database project or SQL Package.exe from cmd.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;You need to configure the target database configuration to deploy the DACPAC file.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;The other way is to use SQL Package file. An example:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;SqlPackage /Action:Publish /SourceFile:"C:\Users\pvenkat\source\repos\AzureSynapseDW\DWSSDT\bin\Debug\DWSSDT.dacpac"  /TargetConnectionString:"Server=tcp:pvenkat-test-ws.sql.azuresynapse.net,1433;Initial Catalog=testsqlpool;Persist Security Info=False;User ID=dbttestuser;Password={};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
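&lt;P&gt;Before publishing, you can also ask SqlPackage to generate the differential script without applying it, which is handy for reviewing exactly what would change. A sketch reusing the illustrative connection details from the example above:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Generate the differential deployment script only; no changes are applied to the target
SqlPackage /Action:Script /SourceFile:"C:\Users\pvenkat\source\repos\AzureSynapseDW\DWSSDT\bin\Debug\DWSSDT.dacpac" /TargetConnectionString:"Server=tcp:pvenkat-test-ws.sql.azuresynapse.net,1433;Initial Catalog=testsqlpool;User ID=dbttestuser;Password={};Encrypt=True;Connection Timeout=30;" /OutputPath:"diff_script.sql"
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;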
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN data-contrast="none"&gt;Continuous integration - Build &amp;amp; deploy database project with Azure pipelines&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;As a developer, you can build data warehouse code and publish it locally. What if you need to deploy the database code to a development or a test or a production environment? We wouldn’t want to manually deploy source code to any environment. The answer to this problem is to enable continuous integration and deployment on the source in Azure DevOps using Azure pipelines or GitHub using Actions. In this example, I will share the source code repository and the Azure pipelines to enable CI &amp;amp; CD aspects of database code development and deployment.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Add a YAML file to your source code to perform three tasks. I am using multistage Azure Pipelines to show these three tasks.&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Build the database project and publish artifacts from this stage. Please note that publishing pipeline artifacts is not the same as the Publish action in SqlPackage.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="·" data-font="Symbol" data-listid="22" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;·&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Compare/verify the DACPAC file from build task with target data warehouse. The target data warehouse can be a dev/test/production environment.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="·" data-font="Symbol" data-listid="22" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;·&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Deploy the DACPAC file to target environment data warehouse. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;Build Database Project &amp;amp; Publish Artifacts&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;task:&amp;nbsp;VSBuild@1 &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;uses MSBuild to generate the DACPAC file in {project folder}/bin/{Debug|Release}/{project name}.dacpac&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="·" data-font="Symbol" data-listid="25" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;·&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;task&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;CopyFiles@2&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; copies the DACPAC file and AgileSqlClub.DeploymentFilterContributor.dll file to artifact staging directory (&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;'$(build.artifactstagingdirectory)')&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="·" data-font="Symbol" data-listid="25" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;·&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="6" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;task&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;PublishPipelineArtifact@1 &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;copies all files and folders from '&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;$(build.artifactstagingdirectory)'&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; to a folder ‘drop’ and will publish the folder ‘drop’ for next steps.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;- stage: Build
    jobs:
    - job: Build_And_PublishDatabase_Project
      steps:
      - task: NuGetCommand@2
        displayName: 'NuGet restore'
        inputs:
          vstsFeed: 'ebdcf40d-0db5-427e-b3c5-32b9cb5dcb8d'

      - task: VSBuild@1 
        displayName: 'Build solution DWSSDT/DWSSDT.sqlproj'
        inputs:
          solution: DWSSDT/DWSSDT.sqlproj

      - task: CopyFiles@2
        displayName: 'Copy binaries to staging directory'
        inputs:
          SourceFolder: '$(System.DefaultWorkingDirectory)'
          Contents: '**\DWSSDT\**\bin\**'
          TargetFolder: '$(build.artifactstagingdirectory)'

      - task: CopyFiles@2
        displayName: 'Copy dacpac tools to staging directory'
        inputs:
          SourceFolder: dacpactools
          TargetFolder: '$(build.artifactstagingdirectory)\dacpactools\'

      - task: PublishPipelineArtifact@1
        displayName: 'Publish Pipeline Artifact'
        inputs:
          targetPath: '$(build.artifactstagingdirectory)'
          artifact: drop
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;Verify DACPAC &amp;amp; compare with target data warehouse&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Verify stage is verifying the contents of the dacpac that are about to be deployed.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="·" data-font="Symbol" data-listid="28" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;·&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="7" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;download&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;current &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;task&lt;/SPAN&gt; &lt;SPAN data-contrast="auto"&gt;downloads the ‘drop’ folder and its content generated from previous stage (Build) for dacpac verification process.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="·" data-font="Symbol" data-listid="28" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;·&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="8" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;task&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;SqlAzureDacpacDeployment@1, &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;DeploymentAction&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Script &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;task uses SQL Package.exe to perform schema compare on the objects from DACPAC file and target data warehouse. It generates a differential script based on arguments provided so that the content of the differential script can be verified.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;- stage: Verify
    jobs:
    - job: Verify_DW_Package
      steps:
      - download: current
        artifact: drop
      - task: SqlAzureDacpacDeployment@1
        displayName: 'Verify '
        inputs:
          azureSubscription: 'Pradeep-MSFT Personal Use (f4664abe-17d0-4128-8048-150cd01575b4)'
          ServerName: 'pvenkat-test-ws.sql.azuresynapse.net'
          DatabaseName: testsqlpool
          SqlUsername: dbttestuser
          SqlPassword: '$(PASSWORD)'
          DeploymentAction: Script
          DacpacFile: '$(Pipeline.Workspace)\drop\DWSSDT\bin\Debug\DWSSDT.dacpac'
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN data-contrast="none"&gt;Deploy DACPAC using SQL Package&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Deploy stage publishes the DACPAC file to target data warehouse. Publish action generates the differential script between DACPAC and target data warehouse and then deploys the differential script to the target data warehouse.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="·" data-font="Symbol" data-listid="28" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;·&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="9" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;download&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;current &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;task&lt;/SPAN&gt; &lt;SPAN data-contrast="auto"&gt;downloads the ‘drop’ folder and its content generated from previous stage (Build) for dacpac deployment/publish process.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="·" data-font="Symbol" data-listid="28" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;·&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="10" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;task&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;SqlAzureDacpacDeployment@1, &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;DeploymentAction&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Publish &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;task deploys differential script to target data warehouse.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;- stage: Deploy
    jobs:
    - job: Deploy_DW_Project
      steps:
      - download: current
        artifact: drop
      - task: SqlAzureDacpacDeployment@1
        displayName: 'Azure SQL DacpacTask'
        inputs:
          azureSubscription: 'Pradeep-MSFT Personal Use (f4664abe-17d0-4128-8048-150cd01575b4)'
          ServerName: 'pvenkat-test-ws.sql.azuresynapse.net'
          DatabaseName: testsqlpool
          SqlUsername: dbttestuser
          SqlPassword: '$(PASSWORD)'
          DacpacFile: '$(Pipeline.Workspace)\drop\DWSSDT\bin\Debug\DWSSDT.dacpac'
          AdditionalArguments: '/p:AdditionalDeploymentContributors="AgileSqlClub.DeploymentFilterContributor" /p:AdditionalDeploymentContributorPaths="$(Pipeline.Workspace)\drop\dacpactools" /p:AdditionalDeploymentContributorArguments="SqlPackageFilter=IgnoreSchema(sch_1)"'
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Selective Deployment – Publish Action&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;SQL Package, publish action support several additional parameters to ignore objects by type. Parameters such as DoNotDropObjectTypes, DropObjectsNotInSource, ExcludeObjectTypes and other properties in &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage-publish?view=sql-server-ver16#properties-specific-to-the-publish-action" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;SqlPackage Publish - SQL Server | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; lets you selectively deploy the object type(s) of your choice. However, SQL Package does not selective deployment of specific objects by name, regex expressions. Many customers with a large data warehouse (&amp;gt;2000 objects) are having trouble deploying DACPAC solutions without specifying object name(s). &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
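&lt;P&gt;For example, a minimal sketch of combining these built-in publish properties to skip security objects and avoid dropping objects that exist only in the target (the object-type list and file path are illustrative, and the connection string is elided):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Selective deployment by object type using built-in SqlPackage publish properties
SqlPackage /Action:Publish /SourceFile:"DWSSDT.dacpac" /TargetConnectionString:"..." /p:ExcludeObjectTypes="Logins;Users;RoleMembership" /p:DropObjectsNotInSource=False
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;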
&lt;P&gt;&lt;A href="https://github.com/GoEddie/DeploymentContributorFilterer" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;GitHub - GoEddie/DeploymentContributorFilterer&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; provides an alternative to filter objects by name/regex expressions as part of dacpac deployment. Please provide Additional arguments to Publish action - &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;AdditionalDeploymentContributors, AdditionalDeploymentContributorPaths, AdditionalDeploymentContributorArguments &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;to apply filters before DACPAC deployment.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;AdditionalArguments: '/p:AdditionalDeploymentContributors="AgileSqlClub.DeploymentFilterContributor" /p:AdditionalDeploymentContributorPaths="$(Pipeline.Workspace)\drop\dacpactools" /p:AdditionalDeploymentContributorArguments="SqlPackageFilter=IgnoreSchema(sch_1)"'&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="·" data-font="Symbol" data-listid="32" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;·&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="11" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;AdditionalDeploymentContributors &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;takes a namespace of the library. This library is applied as an additional deployment contributor as part of Publish action.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="·" data-font="Symbol" data-listid="32" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;·&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="12" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;AdditionalDeploymentContributorPaths &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;takes path of the library.dll as an input&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;AdditionalDeploymentContributorArguments &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;takes filters as an input to ignore/filter/keep the objects as part of DACPAC for publish action.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
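&lt;P&gt;As a sketch only, assuming the multi-filter naming convention described in the DeploymentContributorFilterer README (each filter argument gets a unique name starting with SqlPackageFilter; the &lt;EM&gt;staging&lt;/EM&gt; schema is a hypothetical example), more than one filter can be passed like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;AdditionalArguments: '/p:AdditionalDeploymentContributors="AgileSqlClub.DeploymentFilterContributor" /p:AdditionalDeploymentContributorPaths="$(Pipeline.Workspace)\drop\dacpactools" /p:AdditionalDeploymentContributorArguments="SqlPackageFilter0=IgnoreSchema(sch_1);SqlPackageFilter1=IgnoreSchema(staging)"'&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;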
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Our team publishes blog(s) regularly and you can find all these blogs at&lt;A href="https://aka.ms/synapsecseblog" target="_blank" rel="noopener noreferrer"&gt;&amp;nbsp;https://aka.ms/synapsecseblog.&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;For a deeper level of understanding of Synapse implementation best practices, please refer to our Success by Design (SBD) site:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/Synapse-Success-By-Design" target="_blank" rel="noopener noreferrer"&gt;https://aka.ms/Synapse-Success-By-Design&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 04 May 2023 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/ci-amp-cd-with-azure-synapse-dedicated-sql-pool/ba-p/3810686</guid>
      <dc:creator>pradeeps</dc:creator>
      <dc:date>2023-05-04T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Transforming Your Data Lake: Implementing Slow Change Dimension with Synapse</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/transforming-your-data-lake-implementing-slow-change-dimension/ba-p/3718996</link>
      <description>&lt;P&gt;&lt;SPAN&gt;As organizations continue to collect and store large volumes of data in their data lakes, managing this data effectively becomes increasingly important. One key aspect of this is implementing Slow Change Dimension type 2, which allows organizations to track historical data by creating multiple records for a given natural key in the dimensional tables with separate surrogate keys and/or different version numbers. In this blog post we will address the following scenario: a customer wants to implement Slow Change Dimension type 2 on top of their data lake. &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;For this example, we will use Serverless SQL Pool to demonstrate how this can be done. Additionally, in the next post, we will explore how the same approach can be used with Spark.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&lt;BR /&gt;&lt;STRONG&gt;What is Slow Change Dimension Type 2?&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Slow Change Dimension type 2&lt;/STRONG&gt;: This method tracks historical data by creating multiple records for a given&amp;nbsp;&lt;A href="https://en.wikipedia.org/wiki/Natural_key" target="_blank" rel="noopener"&gt;natural key&lt;/A&gt;&amp;nbsp;in the dimensional tables with separate&amp;nbsp;&lt;A href="https://en.wikipedia.org/wiki/Surrogate_key" target="_blank" rel="noopener"&gt;surrogate keys&lt;/A&gt;&amp;nbsp;and/or different version numbers. Check out this wiki to learn more, &lt;A href="https://en.wikipedia.org/wiki/Slowly_changing_dimension" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/Slowly_changing_dimension&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;When implementing Slow Change Dimension type 2, there are various approaches that can be used depending on how the data is handled at the source. For example, if the source includes row version information or columns with flags for deleted or updated records, a different approach may be used compared to scenarios where this information is not available. In this particular scenario, we will follow a specific approach to address the challenges that are present.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Considerations for this Scenario:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;This scenario manages UPSERT and file versioning. Please note, updates are not supported on Serverless SQL pools, so I am working around this with external tables.&lt;/LI&gt;
&lt;LI&gt;The files do not come from the source with a version, or with information about whether a row was updated or deleted.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Deleted rows are not handled in this solution, only inserted and updated ones.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Those files have a key value that is used to determine whether they already exist on the destination. If you do not have a key column to compare, you will need to compare all the columns that the business considers key values to determine whether the information is new.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;The source will send the information in the file regardless of whether it is a new row to be inserted or an update. In other words, the transformation and cleansing process needs to work out whether a value in the file refers to an updated row or a new row. This solution tries to detect that and version the row accordingly. If your source can send the rows in a way that makes it clear which rows are updates, you can change the steps accordingly.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Solution:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Consider, as in the example below, the folder &lt;EM&gt;sourcefiles_folder&lt;/EM&gt; containing all the files in Parquet format that were sent from the source to be compared to a destination. My destination is the external table SCD_DimDepartmentGroup.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;The file has the following columns:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="472"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="187"&gt;
&lt;P&gt;&lt;STRONG&gt;Column&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;&lt;STRONG&gt;Datatype&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="215"&gt;
&lt;P&gt;&lt;STRONG&gt;Nullability&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="187"&gt;
&lt;P&gt;DepartmentGroupKey&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;int&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="215"&gt;
&lt;P&gt;Not Null&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="187"&gt;
&lt;P&gt;ParentDepartmentGroupKey&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;int&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="215"&gt;
&lt;P&gt;NULL&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="187"&gt;
&lt;P&gt;DepartmentGroupName&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;nvarchar&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="215"&gt;
&lt;P&gt;Not Null&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here are the current values for this table, SCD_DimDepartmentGroup, as Fig. 1 shows:&lt;/P&gt;
&lt;P&gt;The last surrogate key has the value 8:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;H5&gt;Fig. 1&lt;/H5&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Code - First Load and Dimension Creation:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;CREATE EXTERNAL TABLE SCD_DimDepartmentGroup
  WITH (
    LOCATION = '/SCD_DimDepartmentGroup',
    DATA_SOURCE = SCD_serveless_dim,
    FILE_FORMAT = Parquet_file
      ) 
  AS
  SELECT ROW_NUMBER () OVER (ORDER BY DepartmentGroupKey) ID_Surr
         ,[DepartmentGroupKey]
        ,[ParentDepartmentGroupKey]
        ,[DepartmentGroupName]
        ,1 ID_valid
        ,0 ID_Deleted
        ,GETDATE() as From_date
        ,Null as End_date
    FROM
    OPENROWSET(
               BULK 'https://Storage.blob.core.windows.net/Container/SCD/sourcefiles_folder/',
               FORMAT = 'PARQUET'
               ) AS [SCD_DimDepartmentGroup]
   &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Slow Change Dimension in 5 steps:&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;The implementation of the SCD is quite simple: it is basically filtering, persisting the filtered data, and comparing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Current values on the table:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The ID_Valid column shows whether the row was versioned or not: 1 for a new (current) row, 0 for a versioned row.&lt;/LI&gt;
&lt;LI&gt;The ID_Deleted column shows whether the row is still valid or not: 0 for a valid row, 1 for an invalidated one.&lt;/LI&gt;
&lt;LI&gt;Curr_date is the date the row was inserted into the table; on the SCD this becomes the From_date column.&lt;/LI&gt;
&lt;LI&gt;ID_Surr is a surrogate key created for this table.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;End_date manages the versioning by date.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;1) Step 1 -&amp;nbsp;Create a CETAS for new values&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;EM&gt;TableA_SCD_DimDepartmentGroup_NEW&lt;/EM&gt; - In this example, this step will basically insert all the values that exist on the files and&lt;STRONG&gt;&amp;nbsp;do not exist on the destination&lt;/STRONG&gt; into an External Table.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The column DepartmentGroupKey defines if the row is new or not. You will need a column or columns to understand if this is considered new information or not in your dimension.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please note three columns were added to this example:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Curr_date is the date the row was inserted into the table, so we are using GETDATE().&lt;/LI&gt;
&lt;LI&gt;1 as ID_Valid, which means this is a new, valid row.&lt;/LI&gt;
&lt;LI&gt;0 as ID_Deleted, which means this row was not deleted or versioned.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Code - First, let's create the data source that will be used across all the external tables:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;  CREATE EXTERNAL DATA SOURCE SCD_serveless_dim
  WITH (
  LOCATION = 'https://Storage.blob.core.windows.net/Container/SCD/transformation_folder/',
  CREDENTIAL = [MSI]
  ) &lt;/LI-CODE&gt;
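&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The CETAS statements in this post also reference an external file format named Parquet_file. If it was not already created earlier in your database, a minimal definition could look like the sketch below (the Snappy compression choice is an assumption, not a requirement):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Sketch: external file format referenced by the CETAS statements above and below.
-- Create it once per database; Snappy compression is an assumption.
CREATE EXTERNAL FILE FORMAT Parquet_file
WITH (
    FORMAT_TYPE = PARQUET,
    DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
)&lt;/LI-CODE&gt;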
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now, let's create the external table for Step 1:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Please note that I am using the maximum value of the surrogate key column ID_Surr from the main table SCD_DimDepartmentGroup and adding 1 to it to generate new surrogate values for rows that require them. If some values repeat at this point, it is okay because I will reuse this information with the Department Key to create a new surrogate key in the last step (Step 5), thus ensuring a unique value for each row. I am using the max+1 approach to ensure that existing surrogate keys are not affected.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;DECLARE @ID_Surr AS INT

SELECT @ID_Surr = MAX (ID_Surr)+1 FROM SCD_DimDepartmentGroup

CREATE EXTERNAL TABLE TableA_SCD_DimDepartmentGroup_NEW
WITH (
    LOCATION = '/TableA_SCD_DimDepartmentGroup_NEW',
    DATA_SOURCE = SCD_serveless_dim,
    FILE_FORMAT = Parquet_file
      ) 
  AS
  SELECT @ID_Surr as ID_Surr
        ,[DepartmentGroupKey]
        ,[ParentDepartmentGroupKey]
        ,[DepartmentGroupName]
        ,1 ID_valid
        ,0 ID_Deleted
        ,Getdate() as Curr_date

FROM
    OPENROWSET(
        BULK 'https://Storage.blob.core.windows.net/Container/SCD/sourcefiles_folder/',
        FORMAT = 'PARQUET' ---DELTA can be used here 
    ) AS [SCD_DimDepartmentGroup_Silver]

WHERE NOT EXISTS ( 
                SELECT 1
                FROM SCD_DimDepartmentGroup
                WHERE   SCD_DimDepartmentGroup_Silver.[DepartmentGroupKey] =   SCD_DimDepartmentGroup.DepartmentGroupKey        
                   )&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Results in Fig. 2 - Step 1:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;H5&gt;Fig. 2 - Step 1.&lt;/H5&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;2)&lt;/STRONG&gt;&amp;nbsp;&lt;STRONG&gt;Step 2 -&lt;/STRONG&gt; &lt;STRONG&gt;Create a CETAS for values that will be updated/versioned.&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;TableB_SCD_DimDepartmentGroup_OLD&lt;/EM&gt; will be the next external table to be created, by inserting all values that exist on the file exported from the source and do exist on the destination. Those values also should not exist on the external table created in Step 1 (A).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;In this scenario, the data that will be versioned is not a new row (Step 1); it is rows that already exist in the destination, but something has changed. The DepartmentGroupKey is the key that will always remain the same, so if there is a new key, it means a new row, and the comparison will happen for any other column. Additionally, in this scenario, it is possible that the source may send the same information again without any changes. Therefore, comparing the data is necessary to ensure that the row sent is indeed a row to be versioned.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please note for the rows to be versioned:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Flag ID_valid was changed to 0.&lt;/LI&gt;
&lt;LI&gt;The row was invalidated by changing ID_Deleted to 1.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Curr_date here is the date the row was inserted into the table&amp;nbsp;SCD_DimDepartmentGroup, so the From_date column is re-used.&lt;/LI&gt;
&lt;LI&gt;NULL columns are handled properly (ISNULL) to find the ones that in fact changed.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Code:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Please note I re-used my surrogate key column ID_Surr of the main table&amp;nbsp;SCD_DimDepartmentGroup for the values that will be versioned.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt; CREATE EXTERNAL TABLE TableB_SCD_DimDepartmentGroup_OLD
  WITH (
    LOCATION = '/TableB_SCD_DimDepartmentGroup_OLD',
    DATA_SOURCE = SCD_serveless_dim,
    FILE_FORMAT = Parquet_file
      ) 
  AS
    CREATE EXTERNAL TABLE TableB_SCD_DimDepartmentGroup_OLD
  WITH (
    LOCATION = '/TableB_SCD_DimDepartmentGroup_OLD',
    DATA_SOURCE = SCD_serveless_dim,
    FILE_FORMAT = Parquet_file
      ) 
  AS
  SELECT ID_Surr
        ,[DepartmentGroupKey]
        ,[ParentDepartmentGroupKey]
        ,[DepartmentGroupName]
        ,0 ID_valid
        ,1 ID_Deleted
        ,From_date
FROM [SCD_DimDepartmentGroup]
WHERE NOT   EXISTS ( SELECT 1
                FROM
                    OPENROWSET(
                        BULK 'https://Storage.blob.core.windows.net/Container/SCD/sourcefiles_folder',
                        FORMAT = 'PARQUET'
                    ) AS [SCD_DimDepartmentGroup_Silver]
                 WHERE   SCD_DimDepartmentGroup_Silver.[DepartmentGroupKey] =   SCD_DimDepartmentGroup.DepartmentGroupKey
                        AND  (ISNULL(SCD_DimDepartmentGroup_Silver.[ParentDepartmentGroupKey], 1) =   ISNULL(SCD_DimDepartmentGroup.[ParentDepartmentGroupKey], 1)
				        AND  SCD_DimDepartmentGroup_Silver.[DepartmentGroupName] = SCD_DimDepartmentGroup.[DepartmentGroupName])

                    )
                  AND NOT EXISTS 
                      (SELECT 1 FROM TableA_SCD_DimDepartmentGroup_NEW
                      WHERE   TableA_SCD_DimDepartmentGroup_NEW.[DepartmentGroupKey] =   SCD_DimDepartmentGroup.DepartmentGroupKey)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Results are in Fig. 3 - Step 2:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;H5&gt;Fig. 3 - Step 2&lt;/H5&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;3)&lt;/STRONG&gt; &lt;STRONG&gt;Step 3 -&lt;/STRONG&gt; &lt;STRONG&gt;Only updated values are to be versioned&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;EM&gt;TableC_SCD_DimDepartmentGroup_GOLD_OLD_INS -&amp;nbsp;&lt;/EM&gt;The next external table will get the new rows from the source file. The filter will be the rows that we already know, from Step 2, have changed and need to be versioned.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Code:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Please note that I am using the maximum value of the surrogate key column ID_Surr from the main table SCD_DimDepartmentGroup and adding 1 to it to generate new surrogate values for rows that require them. If some values repeat at this point, it is okay because I will reuse this information with the Department Key to create a new surrogate key in the last step (Step 5), thus ensuring a unique value for each row. I am using the max+1 approach to ensure that existing surrogate keys are not affected.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;DECLARE @ID_Surr AS INT

SELECT @ID_Surr = MAX (ID_Surr)+1 FROM SCD_DimDepartmentGroup

CREATE EXTERNAL TABLE TableC_SCD_DimDepartmentGroup_GOLD_OLD_INS
WITH (
       LOCATION = '/TableC_SCD_DimDepartmentGroup_GOLD_OLD_INS',
    DATA_SOURCE = SCD_serveless_dim,
    FILE_FORMAT = Parquet_file
      ) 
  AS
  SELECT  @ID_Surr as ID_Surr
         ,[DepartmentGroupKey]
        ,[ParentDepartmentGroupKey]
        ,[DepartmentGroupName]
        ,1 ID_valid
        ,0 ID_Deleted
        ,getdate() Curr_date
FROM
    OPENROWSET(
        BULK 'https://Storage.blob.core.windows.net/Container/SCD/sourcefiles_folder/',
        FORMAT = 'PARQUET'
    )  AS [SCD_DimDepartmentGroup_Silver]
WHERE  EXISTS   (SELECT 1 FROM   TableB_SCD_DimDepartmentGroup_OLD  
                  WHERE   SCD_DimDepartmentGroup_Silver.[DepartmentGroupKey] =   TableB_SCD_DimDepartmentGroup_OLD.DepartmentGroupKey)
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The results are in Fig. 4 - Step 3:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;DIV&gt;
&lt;H5&gt;&lt;SPAN&gt;Fig. 4 - Step 3&lt;/SPAN&gt;&lt;/H5&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;4&lt;/STRONG&gt;) &lt;STRONG&gt;Step 4 - Create another external table by Union&lt;/STRONG&gt;&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;It unions all the external tables that were created in the previous steps.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;End_date should be null for new rows or newly versioned values. If the value was versioned, the respective row will record until when it was valid.&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;ID_valid will be 1 if the row is not versioned, 0 if this row is no longer valid&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;ID_Deleted&amp;nbsp;&lt;/SPAN&gt;will be 0 if the row is not versioned, 1 if this row is no longer valid&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt; CREATE EXTERNAL TABLE UNION_SCD_DimDepartmentGroup
  WITH (
    LOCATION = '/UNION_SCD_DimDepartmentGroup',
    DATA_SOURCE = SCD_serveless_dim,
    FILE_FORMAT = Parquet_file
      ) 
  AS
  SELECT ID_Surr
        ,[DepartmentGroupKey]
        ,[ParentDepartmentGroupKey]
        ,[DepartmentGroupName]
        ,ID_valid
        ,ID_Deleted
        ,getdate() as From_date
        ,Null as End_date
FROM TableA_SCD_DimDepartmentGroup_NEW
UNION ALL
     SELECT ID_Surr 
        ,[DepartmentGroupKey]
        ,[ParentDepartmentGroupKey]
        ,[DepartmentGroupName]
        ,ID_valid
        ,ID_Deleted
        ,From_date as From_date
        ,Getdate() as End_date
FROM TableB_SCD_DimDepartmentGroup_OLD 
UNION ALL
SELECT ID_Surr 
        ,[DepartmentGroupKey]
        ,[ParentDepartmentGroupKey]
        ,[DepartmentGroupName]
        ,ID_valid
        ,ID_Deleted
        ,getdate() as From_date
        ,Null as End_date
FROM TableC_SCD_DimDepartmentGroup_GOLD_OLD_INS &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Results are in Fig. 5 - Union:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H5&gt;Fig. 5 - Union&lt;/H5&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here we have the new rows and the newly versioned rows consolidated in the external table.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;5) Step 5 - Rebuild the main table&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Next, we need to transfer this to our main table, by recreating an external table with the new information and dropping the old one.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;1) I need the data from the main table excluding the rows that were versioned on this round&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;2) I need the data from the external tables Union.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;3) I need to keep the historical changes of the old/main table&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;4) On top of that I will recreate the surrogate key. &lt;EM&gt;But for this, we will order per&amp;nbsp;ID_Surr and DepartmentGroupKey. So, the new surrogate key values will start from the max surrogate+1 as we defined in the first Steps.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;SELECT ROW_NUMBER () OVER (ORDER BY ID_Surr, DepartmentGroupKey) ID_Surr
       ,[DepartmentGroupKey]
        ,[ParentDepartmentGroupKey]
        ,[DepartmentGroupName]
        ,ID_valid
        ,ID_Deleted
        ,From_date
        ,End_date
FROM (
      SELECT   ID_Surr
              ,[DepartmentGroupKey]
              ,[ParentDepartmentGroupKey]
              ,[DepartmentGroupName]
              ,ID_valid
              ,ID_Deleted
              ,From_date
              ,End_date
      FROM SCD_DimDepartmentGroup
      WHERE NOT EXISTS
          ( SELECT 1 FROM TableB_SCD_DimDepartmentGroup_OLD
            WHERE SCD_DimDepartmentGroup.DepartmentGroupKey = TableB_SCD_DimDepartmentGroup_OLD.DepartmentGroupKey
              )
      UNION 
      SELECT   ID_Surr
               ,[DepartmentGroupKey]
              ,[ParentDepartmentGroupKey]
              ,[DepartmentGroupName]
              , ID_valid
              ,ID_Deleted
              ,From_date
              ,End_date
      FROM UNION_SCD_DimDepartmentGroup
      UNION 
      SELECT   ID_Surr
              ,[DepartmentGroupKey]
              ,[ParentDepartmentGroupKey]
              ,[DepartmentGroupName]
              ,ID_valid
              ,ID_Deleted
              ,From_date
              ,End_date
      FROM SCD_DimDepartmentGroup
      WHERE  SCD_DimDepartmentGroup.ID_valid = 0
)NEW_SCD
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Results are in Fig. 6 - New Table. &lt;EM&gt;Please note the rows in green are the versioned rows, and the rows in blue are the new rows:&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;H5&gt;Fig. 6 - New Table&lt;/H5&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This new table should replace the current SCD_DimDepartmentGroup, by recreating the external table from this result and dropping the old one.&amp;nbsp;&lt;/P&gt;
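&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A minimal sketch of that final swap, assuming the same data source and file format used above (the staging location and table name NEW_SCD_DimDepartmentGroup are illustrative), could look like this:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Sketch: persist the consolidated result as a new external table, then swap it in.
-- The '/NEW_SCD_DimDepartmentGroup' folder and table name are illustrative assumptions.
CREATE EXTERNAL TABLE NEW_SCD_DimDepartmentGroup
WITH (
    LOCATION = '/NEW_SCD_DimDepartmentGroup',
    DATA_SOURCE = SCD_serveless_dim,
    FILE_FORMAT = Parquet_file
      )
AS
SELECT ROW_NUMBER() OVER (ORDER BY ID_Surr, DepartmentGroupKey) ID_Surr
      ,DepartmentGroupKey, ParentDepartmentGroupKey, DepartmentGroupName
      ,ID_valid, ID_Deleted, From_date, End_date
FROM (
      -- same consolidation query as above: untouched rows from the main table,
      -- the UNION table, and the rows that were already versioned in earlier runs
      SELECT ID_Surr, DepartmentGroupKey, ParentDepartmentGroupKey, DepartmentGroupName,
             ID_valid, ID_Deleted, From_date, End_date
      FROM SCD_DimDepartmentGroup
      WHERE NOT EXISTS (SELECT 1 FROM TableB_SCD_DimDepartmentGroup_OLD
                        WHERE SCD_DimDepartmentGroup.DepartmentGroupKey = TableB_SCD_DimDepartmentGroup_OLD.DepartmentGroupKey)
      UNION
      SELECT ID_Surr, DepartmentGroupKey, ParentDepartmentGroupKey, DepartmentGroupName,
             ID_valid, ID_Deleted, From_date, End_date
      FROM UNION_SCD_DimDepartmentGroup
      UNION
      SELECT ID_Surr, DepartmentGroupKey, ParentDepartmentGroupKey, DepartmentGroupName,
             ID_valid, ID_Deleted, From_date, End_date
      FROM SCD_DimDepartmentGroup
      WHERE SCD_DimDepartmentGroup.ID_valid = 0
     ) NEW_SCD;

-- DROP EXTERNAL TABLE removes only the metadata; the Parquet files stay in storage,
-- so clear or archive the old folders before the next run. SCD_DimDepartmentGroup can
-- then be recreated over the new folder (for example with CREATE EXTERNAL TABLE and an
-- explicit column list) so the 5 steps can be repeated on the next load.
DROP EXTERNAL TABLE SCD_DimDepartmentGroup;
DROP EXTERNAL TABLE TableA_SCD_DimDepartmentGroup_NEW;
DROP EXTERNAL TABLE TableB_SCD_DimDepartmentGroup_OLD;
DROP EXTERNAL TABLE TableC_SCD_DimDepartmentGroup_GOLD_OLD_INS;
DROP EXTERNAL TABLE UNION_SCD_DimDepartmentGroup;&lt;/LI-CODE&gt;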
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Summary&lt;/H2&gt;
&lt;P&gt;Please review the considerations for the scenario to understand what was relevant for this implementation. As I mentioned earlier, there are different approaches that can be used to create a Slow Change Dimension, depending on how the data is handled at the source. In this post, I have provided examples of how to read new data, version the old data that has changed, and recreate new external tables with the updated information for a Slow Change Dimension type 2.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the next post, I will demonstrate how to achieve the same using Spark. Stay tuned for more!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Liliam, UK.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 03 May 2023 16:00:04 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/transforming-your-data-lake-implementing-slow-change-dimension/ba-p/3718996</guid>
      <dc:creator>Liliam_C_Leme</dc:creator>
      <dc:date>2023-05-03T16:00:04Z</dc:date>
    </item>
    <item>
      <title>Azure Synapse Analytics April Update 2023</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2023/ba-p/3807118</link>
      <description>&lt;DIV id="generatedtoc"&gt;
&lt;H2&gt;Azure Synapse Analytics April Update 2023&lt;/H2&gt;
&lt;/DIV&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Welcome to the April 2023 Azure Synapse Analytics Update! This month, we have a new ARM template to deploy Azure Data Explorer DB with Cosmos DB connection, as well as additional updates in Apache Spark for Synapse, Synapse Data Explorer, and Data Integration.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Read on for more details and don’t forget to watch the video!&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/GGFGNv9m70M" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H1&gt;Table of contents&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="#community--1-TOCREF_0" target="_self" rel="nofollow noopener noreferrer"&gt; Apache Spark for Synapse &lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="#community--1-TOCREF_1" target="_self" rel="nofollow noopener noreferrer"&gt; Delta Lake – Low Shuffle Merge &lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-TOCREF_2" target="_self" rel="nofollow noopener noreferrer"&gt; Synapse Data Explorer &lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="#community--1-TOCREF_3" target="_self" rel="nofollow noopener noreferrer"&gt; Ingest data from Azure Events Hub to ADX free tier &lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-TOCREF_4" target="_self" rel="nofollow noopener noreferrer"&gt; New ARM template to deploy Azure Data Explorer DB with Cosmos DB connection &lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-TOCREF_5" target="_self" rel="nofollow noopener noreferrer"&gt; New look and feel for Query command bar in ADX Web &lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-TOCREF_6" target="_self" rel="nofollow noopener noreferrer"&gt; Data Integration &lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="#community--1-TOCREF_7" target="_self" rel="nofollow noopener noreferrer"&gt; Capture changed data from Cosmos DB analytical store (Public Preview)&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 id="TOCREF_0" aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Apache Spark for Synapse&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;H3 id="TOCREF_1" aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Delta Lake – Low Shuffle Merge&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Low Shuffle Merge optimization for Delta tables is now available in Apache Spark 3.2 and 3.3 pools.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;You can now update a Delta table with advanced conditions using the Delta Lake MERGE command. It can update data from a source table, view, or DataFrame into a target table. The current algorithm of the MERGE command is not optimized for handling unmodified rows. With Low Shuffle Merge optimization, unmodified rows are excluded from expensive shuffling execution and written separately.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To learn more about this new command, read &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/azure/synapse-analytics/spark/low-shuffle-merge-for-apache-spark" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Low Shuffle Merge optimization on Delta tables&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:257}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P aria-level="2"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="TOCREF_2" aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Synapse Data Explorer&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;H3 id="TOCREF_3" aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Ingest data from Azure Events Hub to ADX free tier&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Are you looking for a powerful and cost-effective (or possibly even free...) way to analyze large volumes of near real-time streaming data? Azure Data Explorer now supports integration with Events Hub in ADX free tier.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Events Hub is a big data streaming platform, which can process millions of events per second in near real-time.&amp;nbsp; Connecting your Event Hub data to Azure Data Explorer is easy and straightforward and can be done in just a few simple steps, using an intuitive "One-Click" ingestion wizard.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;To learn more about ingesting data from Azure Events Hub to ADX free tier, read &lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/free-event-hub-data-analysis-with-azure-data-explorer/ba-p/3775034" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Free Event Hub data analysis with Azure Data Explorer&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 id="TOCREF_4" aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;New ARM template to deploy Azure Data Explorer DB with Cosmos DB connection&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;A new ARM &lt;A href="https://learn.microsoft.com/en-us/samples/azure/azure-quickstart-templates/kusto-cosmos-db/" target="_blank" rel="noopener"&gt;template&lt;/A&gt; that deploys an Azure Data Explorer DB with a Cosmos DB connection is now available. This vastly simplifies the deployment of&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;an ADX cluster with:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;a System Assigned Identity&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;a database&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;an Azure Cosmos DB account (NoSql)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;an Azure Cosmos DB database&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;an Azure Cosmos DB container&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt; a data connection between the Cosmos DB container and the Kusto database (using the System Assigned identity)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;To learn more about this new ARM template, read &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/samples/azure/azure-quickstart-templates/kusto-cosmos-db/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Deploy Azure Data Explorer DB with Cosmos DB connection with ARM Template&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 id="TOCREF_5" aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;New look and feel for Query command bar in ADX Web&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The ADX Web Query command bar has undergone a major redesign to provide an improved user experience. The new design is not only visually appealing but also makes it easier and faster for users to access the commands they need.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;To learn more about the new command bar and other ADX Web updates, read &lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/adx-web-updates-march-2023/ba-p/3785987" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;ADX Web updates – March 2023&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="TOCREF_6" aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Data Integration&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;H3 id="TOCREF_7" aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Capture changed data from Cosmos DB analytical store (Public Preview)&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;LI-WRAPPER&gt;&lt;/LI-WRAPPER&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;When you perform data integration and ETL processes in the cloud, your jobs can perform better and be more effective when you only read changed data from your source. We are excited to share that Azure Cosmos DB analytical store now supports change data capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;SPAN data-contrast="none"&gt;Available in Public Preview, this will allow you to efficiently consume continuous and changed (inserted, updated, and deleted) data from the analytical store. Seamlessly integrated with Azure Synapse Analytics and Azure Data Factory, it is a scalable, no-code experience for high data volume and will &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;not consume provisioned RUs or &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;affect the performance of your transactional workloads while providing lower latency and a lower TCO.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;You can consume incremental analytical store data from a Cosmos DB container using either Azure Synapse Analytics or Azure Data Factory after you enable the &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/azure/cosmos-db/configure-synapse-link#enable-synapse-link" target="_blank" rel="noopener"&gt;Cosmos DB account for Synapse Link&lt;/A&gt;&lt;SPAN data-contrast="none"&gt; and you have enabled analytical store on a &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/azure/cosmos-db/configure-synapse-link#update-analytical-ttl" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;new container&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt; or an &lt;/SPAN&gt;&lt;A href="https://devblogs.microsoft.com/cosmosdb/azure-synapse-link-existing-containers-and-power-bi-integration/#azure-synapse-link-for-existing-azure-cosmos-containers" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;existing container&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;To learn more about capturing Change Data:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Watch &lt;/SPAN&gt;&lt;A href="https://www.youtube.com/watch?v=0e0b9gjtzv4" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Azure Cosmos DB analytical store Change Data Capture&lt;/SPAN&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Read &lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/capture-changed-data-from-your-cosmos-db-analytical-store/ba-p/3783530" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Capture Changed Data From your Cosmos DB analytical store (Preview)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; and &lt;/SPAN&gt;&lt;A href="https://devblogs.microsoft.com/cosmosdb/now-in-preview-change-data-capture-cdc-with-azure-cosmos-db-analytical-store/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Now in preview – Change Data Capture (CDC) with Azure Cosmos DB analytical store&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;Thanks for reading! That's all we have for you this month. We look forward to hearing your comments and questions. We'll see you here for the next monthly update!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 02 May 2023 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2023/ba-p/3807118</guid>
      <dc:creator>ryanmajidi</dc:creator>
      <dc:date>2023-05-02T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Extracting relational schema from streaming data containing complex JSON documents</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/extracting-relational-schema-from-streaming-data-containing/ba-p/3803612</link>
      <description>&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Author:&amp;nbsp;&lt;a href="javascript:void(0)" data-lia-user-mentions="" data-lia-user-uid="1314595" data-lia-user-login="devangshah" class="lia-mention lia-mention-user"&gt;devangshah&lt;/a&gt;&amp;nbsp;is a Principal Program Manager for Data Explorer in the Synapse Customer Success Engineering (CSE) team.&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the world of IoT devices, industrial historians, infrastructure and application logs and metrics, and other machine-generated or software-generated telemetry, there are often scenarios where the upstream data producer produces data in non-standard schemas, formats, and structures that make it difficult to analyze the data at scale. Azure Data Explorer provides some useful features to run meaningful, fast, and interactive analytics on such heterogeneous data structures and formats.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog, we're taking an example of a complex JSON file as&amp;nbsp;&lt;SPAN&gt;shown in the screenshot below. You can access the JSON file from this &lt;A title="JSON Sample File" href="https://github.com/Azure/kusto-adx-cse/blob/main/blogs/jsonextractor/json-sample.json" target="_blank" rel="noopener"&gt;GitHub page&lt;/A&gt; to try the steps below.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This JSON has keys and values in two different arrays.&amp;nbsp;To convert this JSON document into the relational schema as shown below, we will use the approach of extracting the 'structure' object into one table and the 'kpi_data' object into another table and then join the two tables using the GUID.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1&lt;/STRONG&gt;: Use the ADX Ingestion Hub (also called One Click Ingestion) to upload sample data and let ADX infer the schema of the JSON document. With multi-level JSON, you can extract multiple objects within the JSON document. However, for the example above this will not work, and hence we will write KQL in Step 2.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2&lt;/STRONG&gt;: Since there are 2 nested JSON arrays, we will use&amp;nbsp;&lt;A title="Kusto mv-expand operator" href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/mvexpandoperator" target="_blank" rel="noopener"&gt;mv-expand&lt;/A&gt;&amp;nbsp;operator to expand these dynamic arrays. We will first use the '&lt;A title="Kusto project operator" href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/projectoperator" target="_blank" rel="noopener"&gt;project&lt;/A&gt;' operator to select the columns of interest and then apply mv-expand on the column containing nested arrays.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;SampleTable1
| project Timestamp = from, Level1_id = structure.id, Level1_Name=structure.name, Level1_kpi_type=structure.kpi_type, kpi_structure=structure.kpi_structure
| mv-expand kpi_structure&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Upon executing these KQL statements, we see the following output:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3&lt;/STRONG&gt;: We will apply the &lt;A title="Kusto mv-expand operator" href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/mvexpandoperator" target="_blank" rel="noopener"&gt;mv-expand&lt;/A&gt; operator again to expand the next array. We're also using the '&lt;A title="Kusto extend operator" href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/extendoperator" target="_blank" rel="noopener"&gt;extend&lt;/A&gt;' operator to extract columns from the expanded JSON arrays.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;SampleTable1
| project Timestamp = from, Level1_id = structure.id, Level1_Name=structure.name, Level1_kpi_type=structure.kpi_type, kpi_structure=structure.kpi_structure
| mv-expand kpi_structure
| extend Level2_data_type = kpi_structure.data_type, Level2_Id = kpi_structure.id, Level2_kpi_type = kpi_structure.kpi_type, Level2_name = kpi_structure.name, new_kpi_structure=kpi_structure.kpi_structure
| mv-expand new_kpi_structure
| extend Level3_data_type = new_kpi_structure.data_type, Level3_Id = tostring(new_kpi_structure.id), Level3_name = new_kpi_structure.name, Level3_Unit = new_kpi_structure.unit
| project-away kpi_structure, new_kpi_structure&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The output of executing these statements is:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4&lt;/STRONG&gt;: Use the mv-expand operator again to expand the array inside the 'kpi_data' JSON object&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;SampleTable1
| mv-expand kpi_data
| project kpi_values = kpi_data.values&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The output of executing these statements is:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5&lt;/STRONG&gt;: In this JSON document, the value (in this case, 18806s) is referenced using a GUID key. Since the GUID key can be different for each value, we will use the &lt;A title="Kusto bag_keys() function" href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/bagkeysfunction" target="_blank" rel="noopener"&gt;bag_keys()&lt;/A&gt; function to transform this JSON structure into a column of keys and values.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;SampleTable1
| mv-expand kpi_data
| project kpi_values = kpi_data.values
| extend Level3_Id = tostring(bag_keys(kpi_values)[0])
| extend key_value = kpi_values[Level3_Id]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The output of executing these statements is:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 6&lt;/STRONG&gt;: We will use the&amp;nbsp;&lt;A title="Kusto mv-apply operator" href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/mv-applyoperator" target="_blank" rel="noopener"&gt;mv-apply&lt;/A&gt; operator to execute some of the statements in step 5 on each row that can be present in the 'kpi_data' JSON object.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;SampleTable1
| mv-expand kpi_data
| project kpi_values = kpi_data.values
| mv-apply kpi_values on (
    extend Level3_Id = tostring(bag_keys(kpi_values)[0])
    | project Level3_Id, key_value = kpi_values[Level3_Id]
)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the case of a single row, these statements generate the same output as Step 5. However, in the case of multiple rows, you will get the desired output for each row as shown below.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 7&lt;/STRONG&gt;: Join the two tables created in Step 3 (Keys) and Step 6 (Values) to retrieve a complete table containing key-value pairs that can be easily queried. After the join, we divided the value column into separate columns with distinct data types (Integer, String, Boolean, and Duration). Doing this will allow data analysts and scientists to run calculation, aggregation, and other queries more effectively without having to worry about data type conversion at each stage.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;let Keys = SampleTable1
| project Timestamp = from, Level1_id = structure.id, Level1_Name=structure.name, Level1_kpi_type=structure.kpi_type, kpi_structure=structure.kpi_structure
| mv-expand kpi_structure
| extend Level2_data_type = kpi_structure.data_type, Level2_Id = kpi_structure.id, Level2_kpi_type = kpi_structure.kpi_type, Level2_name = kpi_structure.name, new_kpi_structure=kpi_structure.kpi_structure
| mv-expand new_kpi_structure
| extend Level3_data_type = new_kpi_structure.data_type, Level3_Id = tostring(new_kpi_structure.id), Level3_name = new_kpi_structure.name, Level3_Unit = new_kpi_structure.unit;
let Values = SampleTable1
| mv-expand kpi_data
| project kpi_values = kpi_data.values
| mv-apply kpi_values on (
    extend Level3_Id = tostring(bag_keys(kpi_values)[0])
    | project Level3_Id, key_value = kpi_values[Level3_Id]
);
Keys
| join kind = leftouter Values on $left.Level3_Id==$right.Level3_Id
| extend int_val = iff(tostring(Level3_data_type) in ("INTEGER","FILL_LEVEL","COUNT"), key_value.integer_value,0), str_val=iff(tostring(Level3_data_type) == "STRING", key_value.string_value,""), bool_val=iff(tostring(Level3_data_type) == "BOOLEAN", iff(isempty(tobool(key_value.boolean_value)),false,tobool(key_value.boolean_value)),false), duration=iff(tostring(Level3_data_type) == "DURATION", tolong(trim("s",tostring(key_value.duration_value))),0)
| project Timestamp, Level1_Name, Level1_kpi_type, Level2_name, Level3_name, Level3_data_type, Level3_Unit, int_val, str_val,bool_val,duration&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Possible next steps:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;You can store these KQL statements as a &lt;A title="Kusto Stored Function" href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/functions/" target="_blank" rel="noopener"&gt;User-defined Function&lt;/A&gt; that can then be used by others in your organization&lt;/LI&gt;
&lt;LI&gt;Once saved as a User-defined Function, it can also be referenced in an &lt;A title="Kusto Update Policy" href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/updatepolicy" target="_blank" rel="noopener"&gt;Update Policy&lt;/A&gt; that will then execute this function on every incoming JSON document in your source table. Update Policy is a lightweight Extract-Load-Transform capability of Azure Data Explorer that can help you transform raw messages into curated data.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In summary, with Kusto Query Language, you can extract data from a complex JSON document containing nested arrays and objects using only a few lines of code.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Our team publishes blog(s) regularly and you can find all these blogs here:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/synapsecseblog" target="_blank" rel="noopener noreferrer"&gt;https://aka.ms/synapsecseblog&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;For a deeper level of understanding of Synapse implementation best practices, please refer to our Success by Design (SBD) site:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/Synapse-Success-By-Design" target="_blank" rel="noopener noreferrer"&gt;https://aka.ms/Synapse-Success-By-Design&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 27 Apr 2023 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/extracting-relational-schema-from-streaming-data-containing/ba-p/3803612</guid>
      <dc:creator>devangshah</dc:creator>
      <dc:date>2023-04-27T15:00:00Z</dc:date>
    </item>
    <item>
      <title>What’s new in SynapseML v0.11</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-new-in-synapseml-v0-11/ba-p/3804919</link>
      <description>&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are pleased to announce SynapseML v0.11, a new version of our open-source distributed machine learning library that simplifies and accelerates the development of scalable AI. In this release, we are excited to introduce many new features from the past year of development, as well as many bug fixes and improvements. Though this post will give a high-level overview of the most salient new additions, curious readers can check out the&amp;nbsp;&lt;A href="https://github.com/microsoft/SynapseML/releases/tag/v0.11.0" target="_blank" rel="noopener"&gt;full release notes&lt;/A&gt; for all of the new additions.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;OpenAI Language Models and Embeddings&lt;/H2&gt;
&lt;P&gt;A new release wouldn’t be complete without joining the large language model (LLM) hype train, and SynapseML v0.11 features a variety of new capabilities that make large-scale LLM usage simple and easy. In particular, SynapseML v0.11 introduces three new APIs for working with foundation models: `OpenAIPrompt`, `OpenAIEmbedding`, and `OpenAIChatCompletion`. The `OpenAIPrompt` API makes it easy to construct complex LLM prompts from columns of your dataframe. Here’s a quick example of translating a dataframe column called “Description” into emojis.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from synapse.ml.cognitive.openai import OpenAIPrompt

emoji_template = """
  Translate the following into emojis
  Word: {Description}
  Emoji: """

results = (OpenAIPrompt()
    .setPromptTemplate(emoji_template)
    .setErrorCol("error")
    .setOutputCol("Emoji")
    .transform(inputs))&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This code will automatically look for a dataframe column called “Description” and prompt your LLM (ChatGPT, GPT-3, GPT-4) with the created prompts. Our new OpenAI embedding classes make it easy to embed large tables of sentences quickly and easily from your Apache Spark clusters. &amp;nbsp;To learn more, see &lt;A href="https://microsoft.github.io/SynapseML/docs/next/features/cognitive_services/CognitiveServices%20-%20OpenAI" target="_blank" rel="noopener"&gt;our docs&lt;/A&gt; on using the OpenAI embeddings API and the SynapseML KNN model to create an LLM-based vector search engine directly on your Spark cluster. Finally, the new OpenAIChatCompletion transformer allows users to submit large quantities of chat-based prompts to ChatGPT, enabling parallel inference of thousands of conversations at a time. We hope you find the new OpenAI integrations useful for building your next intelligent application.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Simple Deep Learning&lt;/H2&gt;
&lt;P&gt;SynapseML v0.11 introduces a new &lt;STRONG&gt;Simple deep learning&lt;/STRONG&gt;&amp;nbsp;package that allows for the training of custom text and deep vision classifiers with only a few lines of code. This package integrates the power of distributed deep network training with PytorchLightning with the simple and easy APIs of SynapseML. The new API allows users to fine-tune visual foundation models from torchvision as well as a variety of state-of-the-art text backbones from HuggingFace.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here’s a quick example showing how to fine-tune custom vision networks:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from synapse.ml.dl import DeepVisionClassifier

train_df = spark.createDataFrame([
    ("PATH_TO_IMAGE_1.jpg", 1),
    ("PATH_TO_IMAGE_2.jpg", 2)
], ["image", "label"])

deep_vision_classifier = DeepVisionClassifier(
    backbone="resnet50",
    num_classes=2,
    batch_size=16,
    epochs=2,
)

deep_vision_model = deep_vision_classifier.fit(train_df)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Keep an eye out with upcoming new releases of SynapseML featuring additional simple deep-learning algorithms that will make it easier than ever to train and deploy models at scale.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;LightGBM v2&lt;/H2&gt;
&lt;P&gt;LightGBM is one of the most used features of SynapseML, and we heard your feedback on better performance! SynapseML v0.11 introduces a completely refactored integration between LightGBM and Spark, called LightGBM v2. This integration aims for high performance by introducing a variety of new streaming APIs in the core LightGBM library to enable fast and memory-efficient data sharing between Spark and LightGBM. In particular, the new “Streaming execution mode” has a &amp;gt;10x lower memory footprint than earlier versions of SynapseML, yielding fewer memory issues and faster training. Best of all, you can use the new mode by just &lt;A href="https://microsoft.github.io/SynapseML/docs/features/lightgbm/about/#execution-mode" target="_blank" rel="noopener"&gt;passing a single extra flag&lt;/A&gt; to your existing LightGBM models in SynapseML.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;ONNX Model Hub&lt;/H2&gt;
&lt;P&gt;SynapseML supports a variety of new deep learning integrations with the ONNX runtime for fast, hardware-accelerated inference in all of the SynapseML languages (Scala, Java, Python, R, and .NET). &amp;nbsp;In version 0.11 we add support for the new ONNX model hub, which is an open collection of state-of-the-art pre-trained ONNX models that can be quickly downloaded and embedded into spark pipelines. This allowed us to completely deprecate and remove our old dependence on the CNTK deep learning library. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To learn more about how you can embed deep networks into Spark pipelines, check out our ONNX episode in the new &lt;A href="https://youtube.com/playlist?list=PLzUAjXZBFU9Md95vj64blD3r74GhmKjYK" target="_blank" rel="noopener"&gt;SynapseML video series&lt;/A&gt;:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;div data-video-id="https://www.youtube.com/watch?v=bTBLaIqeELE&amp;amp;list=PLzUAjXZBFU9Md95vj64blD3r74GhmKjYK&amp;amp;index=5" data-video-remote-vid="https://www.youtube.com/watch?v=bTBLaIqeELE&amp;amp;list=PLzUAjXZBFU9Md95vj64blD3r74GhmKjYK&amp;amp;index=5" class="lia-video-container lia-media-is-center lia-media-size-small"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FbTBLaIqeELE%3Flist%3DPLzUAjXZBFU9Md95vj64blD3r74GhmKjYK&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DbTBLaIqeELE&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FbTBLaIqeELE%2Fhqdefault.jpg&amp;amp;key=b0d40caa4f094c68be7c29880b16f56e&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Causal Learning&lt;/H2&gt;
&lt;P&gt;SynapseML v0.11 introduces a new package for causal learning that can help businesses and policymakers make more informed decisions. When trying to understand the impact of a “treatment” or intervention on an outcome, traditional approaches like correlation analysis or prediction models fall short as they do not necessarily establish causation. Causal inference aims to overcome these shortcomings by bridging the gap between prediction and decision-making. SynapseML's causal learning package implements a technique called "Double machine learning", which allows us to estimate treatment effects without data from controlled experiments. Unlike regression-based approaches, this approach can model non-linear relationships between confounders, treatment, and outcome. Users can run the DoubleMLEstimator using a simple code snippet like the one below:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from pyspark.ml.classification import LogisticRegression
from synapse.ml.causal import DoubleMLEstimator

dml = (DoubleMLEstimator()
      .setTreatmentCol("Treatment")
      .setTreatmentModel(LogisticRegression())
      .setOutcomeCol("Outcome")
      .setOutcomeModel(LogisticRegression())
      .setMaxIter(20))

dmlModel = dml.fit(dataset)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more information, be sure to check out Dylan Wang's guided tour of the DoubleMLEstimator on the SynapseML video series:&lt;/P&gt;
&lt;P&gt;&lt;div data-video-id="https://youtu.be/_QXKGFFtfxg" data-video-remote-vid="https://youtu.be/_QXKGFFtfxg" class="lia-video-container lia-media-is-center lia-media-size-small"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F_QXKGFFtfxg%3Ffeature%3Doembed&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_QXKGFFtfxg&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F_QXKGFFtfxg%2Fhqdefault.jpg&amp;amp;key=b0d40caa4f094c68be7c29880b16f56e&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Vowpal Wabbit v2&lt;/H2&gt;
&lt;P&gt;Finally, SynapseML v0.11 introduces Vowpal Wabbit v2, the second-generation integration between the Vowpal Wabbit (VW) online optimization library and Apache Spark. With this update, users can work with Vowpal Wabbit data directly using the new “VowpalWabbitGeneric” model. This makes working with Spark easier for existing VW users. This more direct integration also adds support for new cost functions and use cases including &lt;A href="https://microsoft.github.io/SynapseML/docs/features/vw/Vowpal%20Wabbit%20-%20Multi-class%20classification/" target="_blank" rel="noopener"&gt;“multi-class” and “cost-sensitive one against all” problems&lt;/A&gt;. The update also introduces a new progressive validation strategy and a new &lt;A href="https://github.com/microsoft/SynapseML/blob/master/notebooks/features/vw/Vowpal%20Wabbit%20-%20Contextual%20Bandits.ipynb" target="_blank" rel="noopener"&gt;Contextual Bandit Offline policy evaluation notebook&lt;/A&gt; to demonstrate how to evaluate VW models on large datasets.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;In conclusion, we are thrilled to share the new SynapseML library with you and hope you will find that it simplifies your distributed machine learning pipelines. &amp;nbsp;This blog only covered the highlights, so be sure to check out the &lt;A href="https://github.com/microsoft/SynapseML/releases/tag/v0.11.0" target="_blank" rel="noopener"&gt;full release notes&lt;/A&gt; for all the updates and new features. Whether you are working with large language models, training custom classifiers, or performing causal inference, SynapseML makes it easier and faster to develop and deploy machine learning models at scale.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Learn more&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/microsoft/SynapseML/releases/tag/v0.11.0" target="_self"&gt;SynapseML v0.11 Release Notes&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/microsoft/SynapseML" target="_self"&gt;SynapseML Github&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/SynapseML/" target="_self"&gt;SynapseML Website&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.youtube.com/playlist?list=PLzUAjXZBFU9Md95vj64blD3r74GhmKjYK" target="_self"&gt;SynapseML Youtube Series&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 25 Apr 2023 17:23:47 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-new-in-synapseml-v0-11/ba-p/3804919</guid>
      <dc:creator>mhamilton723</dc:creator>
      <dc:date>2023-04-25T17:23:47Z</dc:date>
    </item>
    <item>
      <title>Synapse Database Templates for airlines &amp; travel services plus seven industries are now GA</title>
      <link>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-database-templates-for-airlines-amp-travel-services-plus/ba-p/3801805</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;In response to continued enthusiastic adoption of the twenty previously published Synapse Database Templates (SDTs), we’re pleased to announce today that we are releasing two Industry Data Models (IDMs), for Airlines and for Travel Services, that have not previously been published as SDTs, along with enhanced versions of seven previously published SDTs.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The IDM for &lt;STRONG&gt;Airlines&lt;/STRONG&gt; is a comprehensive data model that addresses the &lt;/SPAN&gt;&lt;SPAN&gt;typical data requirements of organizations operating one or more airlines for passengers and/or cargo. &lt;/SPAN&gt;&lt;SPAN&gt;The IDM for &lt;STRONG&gt;Travel Services&lt;/STRONG&gt; is a comprehensive data model that addresses the &lt;/SPAN&gt;&lt;SPAN&gt;typical data requirements of organizations providing booking services and/or hospitality services for airlines, hotels, car rentals, cruises, and vacation packages.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have released two new SDTs for Travel Services and Airlines, in addition to updated versions of the previously released SDTs for &lt;STRONG&gt;Automotive Industries, Consumer Packaged Goods, Healthcare Insurance, Healthcare Service Providers, Manufacturing, Retail, and Utilities&lt;/STRONG&gt;. All twenty-two SDTs can now be accessed in Azure Synapse, either through the &lt;STRONG&gt;Gallery&lt;/STRONG&gt; or by creating a new lake database from the Data tab and selecting '&lt;STRONG&gt;+ Table&lt;/STRONG&gt;', and then '&lt;STRONG&gt;From template&lt;/STRONG&gt;'.&lt;/P&gt;
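&lt;P&gt;For orientation, the sketch below shows how tables in a lake database published from one of these templates can be queried from a Synapse Spark notebook; the database and table names are placeholders chosen for illustration rather than names shipped with the templates.&lt;/P&gt;
&lt;PRE&gt;
# Once a lake database has been published from a template, its tables are
# visible to Spark pools in the same Synapse workspace as database.table.
# "AirlinesDB" and "Flight" are illustrative placeholders only.
flights = spark.sql("SELECT * FROM AirlinesDB.Flight LIMIT 10")
flights.show()
&lt;/PRE&gt;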
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have also continued to expand the scope and content of the previously published seven SDTs by releasing new versions. Additionally, we are working to ensure that our customers and partners who use selected Microsoft solution offerings can fully integrate their application data into relevant subsets of a data lake created using the new versions of SDTs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The new versions for &lt;STRONG&gt;Healthcare Insurance&lt;/STRONG&gt; and &lt;STRONG&gt;Healthcare Providers&lt;/STRONG&gt; have been fully mapped from Microsoft’s Industry Cloud for Healthcare. This makes it easier for customers to land all their data from our Healthcare Cloud, along with data from their organization’s many other applications and data sources, into a comprehensive, integrated, and harmonized lake database deployed using the applicable SDT. We have also expanded the healthcare IDMs so that interested customers can provision the specific clinical data content required to support OMOP consumption in the gold layer of their Azure data lake.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;We’ve expanded the IDM for &lt;STRONG&gt;Retail&lt;/STRONG&gt; to accommodate smart store and shopper journey data generated by solutions from key Microsoft smart store partners such as AiFi, in support of current and anticipated smart store analytics use cases. Similarly, the expanded IDM for &lt;STRONG&gt;Utilities&lt;/STRONG&gt; provides full support for data sourced from Microsoft’s 24x7 Sustainability offering. We’ve also expanded the IDMs for &lt;STRONG&gt;Automotive&lt;/STRONG&gt;, &lt;STRONG&gt;Consumer Packaged Goods&lt;/STRONG&gt;, and &lt;STRONG&gt;Manufacturing&lt;/STRONG&gt; to provide full alignment with the recently announced Microsoft Supply Chain Center solution offering. Each of the SDTs in this latest release also benefits from enhanced and expanded support for readings and data streams acquired from leading-edge IoT devices and sensors used in those industries.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;SDTs cover many different “business areas”, some very industry-specific and others cross-industry, that together comprise each of these very large IDMs. For example, in addition to comprehensive industry-specific business areas (such as Reservations, Ticketing, and Cargo and Departure Control Services for the Airline industry), most SDTs also include cross-industry business areas such as Accounting &amp;amp; Financial Reporting, Human Resources, Inventory, and an Emissions business area that supports the data used to report greenhouse gas emissions (including scope 1, scope 2, and scope 3 emissions). Together these business areas provide broad coverage of the data typically found in the integrated data estates of large organizations in each industry, giving customers best-practice-based accelerators as they continue to move their enterprise data estates to the cloud.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To learn more, check out the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/synapse-analytics/database-designer/overview-database-templates" target="_blank" rel="noopener"&gt;Overview of database templates.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/synapse-analytics/database-designer/create-lake-database-from-lake-database-templates" target="_blank" rel="noopener"&gt;How to get started with database templates and lake databases.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/synapse-analytics/database-designer/concepts-lake-database" target="_blank" rel="noopener"&gt;Learn more about lake databases.&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Tags:&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/tag/Database%20templates/tg-p/board-id/AzureSynapseAnalyticsBlog" target="_blank" rel="noopener"&gt;Database templates&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/tag/Industry/tg-p/board-id/AzureSynapseAnalyticsBlog" target="_blank" rel="noopener"&gt;Industry&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/tag/Lake%20databases/tg-p/board-id/AzureSynapseAnalyticsBlog" target="_blank" rel="noopener"&gt;Lake databases&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 21 Apr 2023 21:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-database-templates-for-airlines-amp-travel-services-plus/ba-p/3801805</guid>
      <dc:creator>DataModels</dc:creator>
      <dc:date>2023-04-21T21:00:00Z</dc:date>
    </item>
  </channel>
</rss>

