<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Data Architecture Blog articles</title>
    <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/bg-p/DataArchitectureBlog</link>
    <description>Data Architecture Blog articles</description>
    <pubDate>Tue, 17 Mar 2026 13:16:56 GMT</pubDate>
    <dc:creator>DataArchitectureBlog</dc:creator>
    <dc:date>2026-03-17T13:16:56Z</dc:date>
    <item>
      <title>Automated Continuous integration and delivery – CICD in Azure Data Factory</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/automated-continuous-integration-and-delivery-cicd-in-azure-data/ba-p/4294719</link>
      <description>&lt;P&gt;In Azure Data Factory, continuous integration and delivery (CI/CD) involves transferring Data Factory pipelines across different environments such as development, test, UAT and production. This process leverages Azure Resource Manager templates to store the configurations of various ADF entities, including pipelines, datasets, and data flows. This article provides a detailed, step-by-step guide on how to automate deployments using the integration between Data Factory and Azure Pipelines.&lt;/P&gt;
&lt;H4&gt;Prerequisite&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Azure Data Factory: multiple ADF environments set up for the different stages of development and deployment.&lt;/LI&gt;
&lt;LI&gt;Azure DevOps: the platform for managing code repositories, pipelines, and releases.&lt;/LI&gt;
&lt;LI&gt;Git integration: ADF connected to a Git repository (Azure Repos or GitHub).&lt;/LI&gt;
&lt;LI&gt;ADF Contributor and Azure DevOps Build Administrator permissions are required.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H5&gt;Step 1&lt;/H5&gt;
&lt;P&gt;Establish a dedicated Azure DevOps Git repository specifically for Azure Data Factory within the designated Azure DevOps project.&lt;/P&gt;
&lt;H5&gt;Step 2&lt;/H5&gt;
&lt;P&gt;Integrate Azure Data Factory (ADF) with the Azure DevOps Git repositories that were created in the first step.&lt;/P&gt;
&lt;img /&gt;
&lt;H5&gt;Step 3&lt;/H5&gt;
&lt;P&gt;Create a developer feature branch in the Azure DevOps Git repository that was created in the first step. Select the feature branch in ADF to start development.&lt;/P&gt;
&lt;img /&gt;
&lt;H5&gt;Step 4&lt;/H5&gt;
&lt;P&gt;Begin the development process. For this example, create a test pipeline “pl_adf_cicd_deployment_testing” and click Save all.&lt;/P&gt;
&lt;img /&gt;
&lt;H5&gt;Step 5&lt;/H5&gt;
&lt;P&gt;Submit a pull request from the developer feature branch to main.&lt;/P&gt;
&lt;img /&gt;
&lt;H5&gt;Step 6&lt;/H5&gt;
&lt;P&gt;Once the pull request is merged from the developer's feature branch into the main branch, publish the changes from the &lt;STRONG&gt;main branch to the ADF publish branch&lt;/STRONG&gt;. The ARM templates (JSON files) are regenerated and become available in the adf_publish branch within the Azure DevOps ADF repository.&lt;/P&gt;
&lt;img /&gt;
&lt;H5&gt;Step 7&lt;/H5&gt;
&lt;P&gt;ARM templates can be customized to accommodate different configurations for the Development, Testing, and Production environments. This customization is typically done through the ARMTemplateParametersForFactory.json file, where you specify environment-specific values such as linked services, environment variables, and managed private endpoints.&lt;/P&gt;
&lt;P&gt;For example, in a Testing environment, the storage account might be named teststorageaccount, whereas in a Production environment, it could be prodstorageaccount.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;To create an environment-specific parameters file, navigate to the Azure DevOps ADF Git repo &amp;gt; main branch &amp;gt; linkedTemplates folder, and copy “ARMTemplateParametersForFactory.json”.&lt;/LI&gt;
&lt;LI&gt;Create a parameters_files folder under the root path.&lt;/LI&gt;
&lt;LI&gt;Paste ARMTemplateParametersForFactory.json inside the parameters_files folder and rename it to identify the environment, for example, prod-adf-parameters.json.&lt;/LI&gt;
&lt;LI&gt;Update the parameter values for each environment.&lt;/LI&gt;
&lt;/OL&gt;
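&lt;P&gt;As a sketch, an environment-specific parameters file such as prod-adf-parameters.json overrides only the values that differ per environment. The factory, storage, and key vault parameter names below are hypothetical placeholders; use the parameter names generated in your own ARMTemplateParametersForFactory.json:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "factoryName": {
      "value": "prod-adf-001"
    },
    "ls_blob_serviceEndpoint": {
      "value": "https://prodstorageaccount.blob.core.windows.net"
    },
    "ls_keyvault_baseUrl": {
      "value": "https://prod-kv-001.vault.azure.net/"
    }
  }
}&lt;/LI-CODE&gt;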
&lt;img /&gt;
&lt;H5&gt;Step 8&lt;/H5&gt;
&lt;P&gt;To create an Azure DevOps CICD pipeline, use the following code and ensure you update the variables to match your environment before running it. This will allow you to deploy from one ADF environment to another, such as from Test to Production.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;name: Release-$(rev:r)

trigger:
  branches:
    include:
      - adf_publish
variables:
  azureSubscription: &amp;lt;Your subscription&amp;gt;
  SourceDataFactoryName: &amp;lt;Test ADF&amp;gt;
  DeployDataFactoryName: &amp;lt;PROD ADF&amp;gt;
  DeploymentResourceGroupName: &amp;lt;PROD ADF RG&amp;gt;

stages:
- stage: Release
  displayName: Release Stage
  jobs:
    - job: Release
      displayName: Release Job
      pool:
        vmImage: 'windows-2019'
      steps:
        - checkout: self

        # Stop ADF Triggers
        - task: AzurePowerShell@5
          displayName: Stop Triggers
          inputs:
            azureSubscription: '$(azureSubscription)'
            ScriptType: 'InlineScript'
            Inline: |
              $triggersADF = Get-AzDataFactoryV2Trigger -DataFactoryName "$(DeployDataFactoryName)" -ResourceGroupName "$(DeploymentResourceGroupName)"
              if ($triggersADF.Count -gt 0) {
                $triggersADF | ForEach-Object { Stop-AzDataFactoryV2Trigger -ResourceGroupName "$(DeploymentResourceGroupName)" -DataFactoryName "$(DeployDataFactoryName)" -Name $_.name -Force }
              }
            azurePowerShellVersion: 'LatestVersion'



        # Deploy ADF using ARM template and environment-specific JSON parameters
        - task: AzurePowerShell@5
          displayName: Deploy ADF
          inputs:
            azureSubscription: '$(azureSubscription)'
            ScriptType: 'InlineScript'
            Inline: |
              New-AzResourceGroupDeployment `
                -ResourceGroupName "$(DeploymentResourceGroupName)" -TemplateFile "$(System.DefaultWorkingDirectory)/$(SourceDataFactoryName)/ARMTemplateForFactory.json" -TemplateParameterFile "$(System.DefaultWorkingDirectory)/parameters_files/prod-adf-parameters.json" -Mode "Incremental"
            azurePowerShellVersion: 'LatestVersion'


        # Restart ADF Triggers
        - task: AzurePowerShell@5
          displayName: Restart Triggers
          inputs:
            azureSubscription: '$(azureSubscription)'
            ScriptType: 'InlineScript'
            Inline: |
              $triggersADF = Get-AzDataFactoryV2Trigger -DataFactoryName "$(DeployDataFactoryName)" -ResourceGroupName "$(DeploymentResourceGroupName)"
              if ($triggersADF.Count -gt 0) {
                $triggersADF | ForEach-Object { Start-AzDataFactoryV2Trigger -ResourceGroupName "$(DeploymentResourceGroupName)" -DataFactoryName "$(DeployDataFactoryName)" -Name $_.name -Force }
              }
            azurePowerShellVersion: 'LatestVersion'
&lt;/LI-CODE&gt;
&lt;H5&gt;Triggering the Pipeline&lt;/H5&gt;
&lt;P&gt;The Azure DevOps CI/CD pipeline is configured to trigger automatically whenever changes are published to the adf_publish branch. It can also be initiated manually or set to run on a schedule for periodic deployments, providing flexibility and ensuring that updates are deployed consistently.&lt;/P&gt;
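&lt;P&gt;For example, a scheduled run can be added to the same YAML pipeline with a schedules block; the cron expression below is an illustrative nightly schedule, not part of the original pipeline:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;schedules:
- cron: "0 2 * * *"
  displayName: Nightly ADF deployment
  branches:
    include:
      - adf_publish
  always: true&lt;/LI-CODE&gt;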
&lt;H5&gt;Monitoring and Rollback&lt;/H5&gt;
&lt;P&gt;To monitor the pipeline execution, utilize the Azure DevOps pipeline dashboards. In case a rollback is necessary, you can revert to previous versions of the ARM templates or pipelines using Azure DevOps and redeploy the changes.&lt;/P&gt;</description>
      <pubDate>Wed, 13 Nov 2024 13:45:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/automated-continuous-integration-and-delivery-cicd-in-azure-data/ba-p/4294719</guid>
      <dc:creator>MUA</dc:creator>
      <dc:date>2024-11-13T13:45:44Z</dc:date>
    </item>
    <item>
      <title>Create and Deploy Azure SQL Managed Instance Database Project integrated with Azure DevOps CICD</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/create-and-deploy-azure-sql-managed-instance-database-project/ba-p/4287713</link>
      <description>&lt;P&gt;Integrating database development into continuous integration and continuous deployment (CI/CD) workflows is the best practice for Azure SQL managed instance database projects. Automating the process through a deployment pipeline is always recommended. This automation ensures that ongoing deployments seamlessly align with your continuous local development efforts, eliminating the need for additional manual intervention.&lt;/P&gt;
&lt;P&gt;This article guides you through the step-by-step process of creating a new Azure SQL Managed Instance database project, adding objects to it, and setting up a CI/CD deployment pipeline using Azure DevOps.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Prerequisites&lt;/STRONG&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://visualstudio.microsoft.com/downloads/" target="_blank" rel="noopener"&gt;Visual Studio 2022 Community, Professional, or Enterprise&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Azure DevOps environment&lt;/LI&gt;
&lt;LI&gt;Contributor permission within the Azure DevOps project&lt;/LI&gt;
&lt;LI&gt;Sysadmin server role within the Azure SQL Managed Instance&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;&lt;STRONG&gt;Step 1&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Open Visual Studio and click Create a new project.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Search for SQL Server and select SQL Server Database Project.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Provide the project name and the folder path where the project will be stored, then click Create.&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&lt;STRONG&gt;Step 2&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Import the database schema from an existing database. Right-click the project and select 'Import'. You will see three options: Data-Tier Application (.dacpac), Database, and Script (.sql). In this case, use the Database option to import from an Azure SQL Managed Instance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To proceed, you will encounter a screen that allows you to provide a connection string. You can choose to select a database from local, network, or Azure sources, depending on your needs. Alternatively, you can directly enter the server name, authentication type, and credentials to connect to the database server. Once connected, select the desired database to import and include in your project.&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;Step 3&lt;/H4&gt;
&lt;P&gt;Configure the import settings. There are several options available, each designed to optimize the process and ensure seamless integration.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Import application-scoped objects: imports tables, views, stored procedures, and similar objects.&lt;/LI&gt;
&lt;LI&gt;Import reference logins: imports login-related objects.&lt;/LI&gt;
&lt;LI&gt;Import permissions: imports related permissions.&lt;/LI&gt;
&lt;LI&gt;Import database settings: imports database settings.&lt;/LI&gt;
&lt;LI&gt;Folder structure: choose the folder structure for database objects in your project.&lt;/LI&gt;
&lt;LI&gt;Maximum files per folder: limits the number of files per folder.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click Start, which will show the progress window. Click “Finish” to complete the step.&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;Step 4&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;To ensure a smooth deployment process, start by incorporating any necessary post-deployment scripts into your project. These scripts are crucial for tasks that must run after the database has been deployed, such as data migrations or additional configuration.&lt;/LI&gt;
&lt;LI&gt;To compile your database project in Visual Studio, right-click the project and select 'Build'. This compiles the project and generates a .dacpac file, the deployable artifact for your database project.&lt;/LI&gt;
&lt;LI&gt;During the build, you might encounter warnings and errors that need debugging and resolution before the .dacpac file can be created. Common issues include missing references, syntax errors, and configuration mismatches.&lt;/LI&gt;
&lt;LI&gt;After addressing all warnings and errors, rebuild the project to create the .dacpac file. This file contains the compiled database schema and is essential for deployment.&lt;/LI&gt;
&lt;LI&gt;Ensure that any post-deployment scripts are included in the project; they will run after the schema deployment and perform any additional necessary tasks.&lt;/LI&gt;
&lt;LI&gt;To ensure all changes are tracked and can be deployed through your CI/CD pipeline, commit the entire codebase, including the .sqlproj file and any post-deployment scripts, to your branch in Azure DevOps.&lt;/LI&gt;
&lt;/UL&gt;
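&lt;P&gt;A post-deployment script is a plain SQL file whose Build Action is set to PostDeploy in the project. As a minimal sketch, with a hypothetical configuration table and seed values:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;-- Script.PostDeployment.sql
-- Runs after every schema deployment; keep it idempotent so repeated deployments are safe.
IF NOT EXISTS (SELECT 1 FROM dbo.AppConfig WHERE ConfigKey = 'Environment')
BEGIN
    INSERT INTO dbo.AppConfig (ConfigKey, ConfigValue)
    VALUES ('Environment', 'Production');
END&lt;/LI-CODE&gt;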
&lt;H4&gt;Step 5&lt;/H4&gt;
&lt;P&gt;Create an Azure DevOps pipeline to deploy the database project.&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;H4&gt;Step 6&lt;/H4&gt;
&lt;P&gt;To ensure the YAML file effectively builds the SQL project and publishes the DACPAC file to the artifact folder of the pipeline, include the following stages.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;stages:
- stage: Build
  jobs:
  - job: BuildJob
    displayName: 'Build Stage'
    steps:
    - task: VSBuild@1
      displayName: 'Build SQL Server Database Project'
      inputs:
        solution: $(solution)
        platform: $(buildPlatform)
        configuration: $(buildConfiguration)

    - task: CopyFiles@2
      inputs:
        SourceFolder: '$(Build.SourcesDirectory)'
        Contents: '**\*.dacpac'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
        flattenFolders: true

    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: '$(Build.ArtifactStagingDirectory)'
        artifact: 'dacpac'
        publishLocation: 'pipeline'

- stage: Deploy
  jobs:
  - job: Deploy
    displayName: 'Deploy Stage'
    pool:
      name: 'Pool'
    steps:
    - task: DownloadPipelineArtifact@2
      inputs:
        buildType: current
        artifact: 'dacpac'
        path: '$(Build.ArtifactStagingDirectory)'

    - task: PowerShell@2
      displayName: 'upgrade sqlpackage'
      inputs:
        targetType: 'inline'
        script: |
          # use evergreen or specific dacfx msi link below
          wget -O DacFramework.msi "https://aka.ms/dacfx-msi"
          msiexec.exe /i "DacFramework.msi" /qn

    - task: SqlAzureDacpacDeployment@1
      inputs:
        azureSubscription: '$(ServiceConnection)'
        AuthenticationType: 'servicePrincipal'
        ServerName: '$(ServerName)'
        DatabaseName: '$(DatabaseName)'
        deployType: 'DacpacTask'
        DeploymentAction: 'Publish'
        DacpacFile: '$(Build.ArtifactStagingDirectory)/*.dacpac'
        IpDetectionMethod: 'AutoDetect'
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 7&lt;/H4&gt;
&lt;P&gt;To execute any Pre and Post SQL script during deployment, you need to update the SQL package, obtain an access token, and then run the scripts.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;# install all necessary dependencies onto the build agent
    - task: PowerShell@2
      name: install_dependencies
      inputs:
        targetType: inline
        script: |
          # Download and Install Azure CLI
          write-host "Installing AZ CLI..."
          Invoke-WebRequest -Uri https://aka.ms/installazurecliwindows -OutFile .\AzureCLI.msi
          Start-Process msiexec.exe -Wait -ArgumentList "/I AzureCLI.msi /quiet"
          Remove-Item .\AzureCLI.msi
          write-host "Done."

          # prepend the az cli path for future tasks in the pipeline
          write-host "Adding AZ CLI to PATH..."
          write-host "##vso[task.prependpath]C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin"
          $currentPath = (Get-Item -path "HKCU:\Environment" ).GetValue('Path', '', 'DoNotExpandEnvironmentNames')
          if (-not $currentPath.Contains("C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin")) { setx PATH ($currentPath + ";C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin") }
          if (-not $env:path.Contains("C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin")) { $env:path += ";C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin" }
          write-host "Done."

          # install necessary PowerShell modules

          write-host "Installing necessary PowerShell modules..."
          
          Get-PackageProvider -Name nuget -force

          if ( -not (Get-Module -ListAvailable -Name Az.Resources) ) {
            install-module Az.Resources -force
          } 

          if ( -not (Get-Module -ListAvailable -Name Az.Accounts) ) {
            install-module Az.Accounts -force
          } 

          if ( -not (Get-Module -ListAvailable -Name SqlServer) ) {
            install-module SqlServer -force
          } 

          write-host "Done."
        
    - task: AzureCLI@2
      name: run_sql_scripts
      inputs:
        azureSubscription: '$(ServiceConnection)'
        scriptType: ps
        scriptLocation: inlineScript
        inlineScript: |
          # get access token for SQL
          $token = az account get-access-token --resource https://database.windows.net --query accessToken --output tsv

          # configure OELCore database
          Invoke-Sqlcmd -AccessToken $token -ServerInstance '$(ServerName)' -Database '$(DatabaseName)' -inputfile '.\pipelines\config-db.dev.sql'
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 13 Nov 2024 13:48:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/create-and-deploy-azure-sql-managed-instance-database-project/ba-p/4287713</guid>
      <dc:creator>MUA</dc:creator>
      <dc:date>2024-11-13T13:48:56Z</dc:date>
    </item>
    <item>
      <title>Cross Subscription Database Restore for SQL Managed Instance Database with TDE enabled using ADF</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/cross-subscription-database-restore-for-sql-managed-instance/ba-p/4273575</link>
      <description>&lt;P&gt;Our customers require daily refreshes of their production database to the non-production environment. The database, approximately 600GB in size, has Transparent Data Encryption (TDE) enabled in production. Disabling TDE before performing a copy-only backup is not an option, as it would take hours to disable and re-enable. To meet customer needs, we use a customer-managed key stored in Key Vault. Azure Data Factory is then utilized to schedule and execute the end-to-end database restore process.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Permissions required&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;To perform backup and restore operations, the SQL Managed Instance managed identity needs the Contributor and Storage Blob Data Contributor roles on the blob storage account.&lt;/LI&gt;
&lt;LI&gt;To transfer backup files between two storage locations, the ADF managed identity needs the Storage Blob Data Contributor role on the blob storage accounts.&lt;/LI&gt;
&lt;LI&gt;To carry out backup and restore operations, the ADF managed identity needs 'sysadmin' permissions on the SQL Managed Instance.&lt;/LI&gt;
&lt;LI&gt;To enable Transparent Data Encryption (TDE) with a customer-managed key, ensure you have Contributor access to the Resource Group (RG) where the SQL managed instance resides.&lt;/LI&gt;
&lt;LI&gt;Establish an Azure Key vault access policy for the SQL managed instance's managed identity and the user configuring Transparent Data Encryption (TDE), granting them full key permissions.&lt;/LI&gt;
&lt;/UL&gt;
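&lt;P&gt;The storage role assignments above can be granted with the Azure CLI; a sketch, assuming you substitute your own identity object ID and storage account resource ID for the placeholders:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;# Grant the managed identity access to the storage account
# &amp;lt;principal-id&amp;gt; and &amp;lt;storage-account-resource-id&amp;gt; are placeholders
az role assignment create \
  --assignee "&amp;lt;principal-id&amp;gt;" \
  --role "Storage Blob Data Contributor" \
  --scope "&amp;lt;storage-account-resource-id&amp;gt;"&lt;/LI-CODE&gt;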
&lt;P&gt;&lt;STRONG&gt;Step 1&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create a TDE key in the non-production Azure Key Vault &lt;STRONG&gt;&lt;EM&gt;dev-kv-001 &lt;/EM&gt;&lt;/STRONG&gt;within the same subscription as the non-production SQL Managed Instance.&lt;/P&gt;
&lt;P&gt;Name the key, select RSA with a 2048-bit key size, and leave the active and expiration dates unset for this demonstration. Ensure the key is enabled, and do not set a rotation policy. Finally, click Create.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Establish an Azure Key vault access policy for the SQL managed instance's managed identity and the user configuring Transparent Data Encryption (TDE), granting them full key permissions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Back up the TDE key we just created within the non-prod Key Vault.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create a new Key Vault &lt;STRONG&gt;&lt;EM&gt;dev-kv-002 &lt;/EM&gt;&lt;/STRONG&gt;and proceed to restore the key within this newly created vault. Ensure the name matches the backed-up key name and that the status is set to enabled.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Move the new &lt;STRONG&gt;&lt;EM&gt;dev-kv-002&lt;/EM&gt;&lt;/STRONG&gt; Azure Key Vault from the development (non-prod) subscription to the production subscription. This process may take a few minutes as it validates the ability to move the Key vault.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 6&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Having successfully moved our Key Vault &lt;STRONG&gt;&lt;EM&gt;dev-kv-002 &lt;/EM&gt;&lt;/STRONG&gt;to the production subscription, we will now back up the key (following Step 3) for restoration in the actual production Key Vault &lt;STRONG&gt;&lt;EM&gt;prod-kv-001&lt;/EM&gt;&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 7&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We are now prepared to link the keys we created with their respective Azure SQL Managed Instances in both the development and production environments. The objective is to maintain the same key in the backup, enabling us to refresh our production environment into development seamlessly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We will simultaneously execute these actions in both our Production and Development SQL Managed Instances. Begin by accessing your portal blades for the SQL Managed Instances. Once there, navigate to the SQL Managed Instance blade and select Transparent Data Encryption under the Security section.&lt;/P&gt;
&lt;P&gt;To enable a successful production refresh of our development environment, we need to switch from Service-managed keys to Customer-managed keys.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 8&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create a server-level credential. A credential is a record that contains the authentication information required to connect to a resource outside SQL Server.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;USE master CREATE CREDENTIAL [https://&amp;lt;mystorageaccountname&amp;gt;.blob.core.windows.net/&amp;lt;mystorageaccountcontainername&amp;gt;] WITH IDENTITY='Managed Identity' GO&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 9&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create ADF linked services for both SQL Managed Instances and storage accounts.&lt;/LI&gt;
&lt;LI&gt;Create ADF datasets using the SQL Managed Instance and storage account linked services.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 10&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If you're utilizing a private endpoint, make sure to set up an ADF integration runtime and a managed private endpoint; follow &lt;A href="https://techcommunity.microsoft.com/t5/data-architecture-blog/create-azure-data-factory-managed-private-links/ba-p/4270629" target="_blank" rel="noopener"&gt;Create Azure Data Factory Managed Private Links&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 11&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create ADF pipeline to take database backup from source.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Split the backup into multiple files for a faster backup&lt;/LI&gt;
&lt;LI&gt;Use the script below to take a copy-only database backup&lt;/LI&gt;
&lt;LI&gt;Use a Script activity to execute the backup script using the source SQL MI linked service&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;BACKUP DATABASE [@{pipeline().parameters.source_database_name}] TO URL = N'https://&amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_01.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_02.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_03.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_04.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_05.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_06.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_07.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_08.bak' WITH COPY_ONLY, MAXTRANSFERSIZE = 4194304, COMPRESSION, STATS = 10&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Allow a minute for the backup to transfer to blob storage, adjusting the duration to meet your specific needs.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 12&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create ADF pipeline to copy database backup files from source storage account to target storage account.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use copy activity to copy backup files from source storage account to target storage account.&lt;/LI&gt;
&lt;LI&gt;Allow a minute for the backup to transfer to blob storage, adjusting the duration to meet your specific needs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 13&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create Azure Data Factory pipeline to restore database to a target SQL Managed Instance backup from the designated storage account.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use the script below to restore the database from the designated storage account&lt;/LI&gt;
&lt;LI&gt;Use a Script activity to execute the restore script using the target SQL MI linked service&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;USE [master] RESTORE DATABASE [@{pipeline().parameters.target_database_name}] FROM URL = N'https://&amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_01.bak', URL = N'https://&amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_02.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_03.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_04.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_05.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_06.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_07.bak', URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_08.bak'&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 14&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Set up an additional pipeline to remove orphaned database users, grant user access, or carry out any other post-restore tasks, using the appropriate activity.&lt;/P&gt;
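&lt;P&gt;As a sketch, orphaned users (database users whose SID no longer matches a server login after the restore) can be identified with a query like the following and then dropped or remapped as appropriate:&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;-- List database users with no matching server login
SELECT dp.name AS orphaned_user
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
    ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type IN ('S', 'U');

-- Drop an orphaned user (replace &amp;lt;user_name&amp;gt; as appropriate)
-- DROP USER [&amp;lt;user_name&amp;gt;];&lt;/LI-CODE&gt;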
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 15&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create an ADF orchestration pipeline to execute the backup, copy, restore, and post-restore pipelines created above in sequence.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Set up parameters for both the source_database_name and target_database_name to enable dynamic operation of the pipeline across different databases.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 13 Nov 2024 14:12:13 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/cross-subscription-database-restore-for-sql-managed-instance/ba-p/4273575</guid>
      <dc:creator>MUA</dc:creator>
      <dc:date>2024-11-13T14:12:13Z</dc:date>
    </item>
    <item>
      <title>Failed to Restore SQL Managed Instance Database Backup from Azure Blob Storage</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/failed-to-restore-sql-managed-instance-database-backup-from/ba-p/4273518</link>
      <description>&lt;P&gt;If you encounter issues while restoring an Azure SQL Managed Instance database from Azure blob storage, ensure that your database backup files are correctly stored in Azure blob storage.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #df0000;"&gt;Error: 1&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #df0000;"&gt;Cannot open backup device '&lt;A href="https://mystorageaccount.blob.core.windows.net/test_container/db_backup/mytest_db_01.bak" target="_blank" rel="noopener"&gt;https://mystorageaccount.blob.core.windows.net/test_container/db_backup/mytest_db_01.bak&lt;/A&gt;'. Operating system error 86(The specified network password is not correct.).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #df0000;"&gt;RESTORE DATABASE is terminating abnormally.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Cause of failure&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The blob storage credential is either incorrect or missing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;To validate credential&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;I am utilizing managed identity, but the same approach applies to all identity types. The SQL Managed Instance managed identity requires the &lt;STRONG&gt;&lt;EM&gt;Storage Contributor and Storage Blob Data Contributor&lt;/EM&gt;&lt;/STRONG&gt; roles.&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;select * from sys.credentials&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;P&gt;To create credentials&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;USE master
GO
CREATE CREDENTIAL [https:// mystorageaccount.blob.core.windows.net/db_backup/]
WITH IDENTITY='Managed Identity'
GO&lt;/LI-CODE&gt;
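Because the credential name must match the backup URL prefix character for character, it can help to build the statement programmatically. The following Python helper is only an illustrative sketch (the function name and sample values are hypothetical); note that a stray space anywhere in the name will prevent SQL Server from matching the credential to the backup URL:

```python
def build_credential_sql(storage_account: str, container: str) -> str:
    """Build a CREATE CREDENTIAL statement whose name matches the
    storage URL prefix exactly, using managed-identity authentication."""
    url = f"https://{storage_account}.blob.core.windows.net/{container}"
    # A space in the credential name breaks the match against the backup URL
    assert " " not in url, "credential name must not contain spaces"
    return (
        "USE master;\n"
        f"CREATE CREDENTIAL [{url}]\n"
        "WITH IDENTITY = 'Managed Identity';"
    )

print(build_credential_sql("mystorageaccount", "test_container"))
```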
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #df0000;"&gt;Error: 2&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #df0000;"&gt;Cannot open backup device '&lt;A href="https://mystorageaccount.blob.core.windows.net/test_container/db_backup/mytest_db_01.bak" target="_blank" rel="noopener"&gt;https://mystorageaccount.blob.core.windows.net/test_container/db_backup/mytest_db_01.bak&lt;/A&gt;'. Operating system error 12007(failed to retrieve text for this error. Reason: 317).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #df0000;"&gt;RESTORE DATABASE is terminating abnormally.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;STRONG&gt;Cause of failure&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;The storage account's public network access is disabled, relying solely on a private endpoint. However, the private endpoint was either configured incorrectly or associated with the wrong DNS zone, leading to connectivity issues. The private endpoint for blob storage must be created with &lt;STRONG&gt;&lt;EM&gt;privatelink.blob.core.windows.net&lt;/EM&gt;&lt;/STRONG&gt; Private DNS zone.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;To validate private endpoint DNS configuration&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Navigate to Storage account &amp;gt; Networking &amp;gt; Private endpoint connections &amp;gt; click the Private Endpoint &amp;gt; Settings &amp;gt; DNS configuration.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;FQDN: &lt;EM&gt;&lt;STRONG&gt;mystorage.privatelink.blob.core.windows.net&lt;/STRONG&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;Private DNS Zone: &lt;STRONG&gt;&lt;EM&gt;privatelink.blob.core.windows.net&lt;/EM&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
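As a quick offline check of the values above, a small helper can confirm that a resolved CNAME target actually lands in the expected Private DNS zone. This Python sketch is illustrative (the function name is hypothetical); in practice you would pass it the CNAME target returned by an nslookup against the storage account FQDN:

```python
def in_privatelink_zone(cname_target: str,
                        zone: str = "privatelink.blob.core.windows.net") -> bool:
    """Check whether a resolved CNAME target belongs to the expected
    Private DNS zone for blob storage."""
    return cname_target.lower().rstrip(".").endswith("." + zone)

# A correctly configured private endpoint resolves into the privatelink zone:
print(in_privatelink_zone("mystorage.privatelink.blob.core.windows.net"))  # True
# A public resolution does not:
print(in_privatelink_zone("mystorage.blob.core.windows.net"))              # False
```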
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
</description>
      <pubDate>Thu, 07 Nov 2024 05:58:29 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/failed-to-restore-sql-managed-instance-database-backup-from/ba-p/4273518</guid>
      <dc:creator>MUA</dc:creator>
      <dc:date>2024-11-07T05:58:29Z</dc:date>
    </item>
    <item>
      <title>Azure SQL Managed Instance Cross Subscription Database Restore using Azure Data Factory</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/azure-sql-managed-instance-cross-subscription-database-restore/ba-p/4271078</link>
      <description>&lt;P&gt;Azure Data Factory set up automated, continuous, or on-demand restore of Azure SQL Managed Instance databases between two separate Azure subscriptions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Before you start the database restore process, make sure to turn off TDE.&lt;/EM&gt;&lt;/STRONG&gt;&amp;nbsp;&lt;EM&gt;&lt;STRONG&gt;If TDE must remain enabled, see&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;&lt;A style="background-color: #ffffff;" href="https://techcommunity.microsoft.com/t5/data-architecture-blog/cross-subscription-database-restore-for-sql-managed-instance/ba-p/4273575" target="_self"&gt;Cross Subscription Database Restore for SQL Managed Instance Database with TDE enabled using ADF&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Prerequisite&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL Managed Instances are located across two distinct subscriptions.&lt;/LI&gt;
&lt;LI&gt;Azure Blob storage in the same subscriptions where the SQL Managed Instances are located&lt;/LI&gt;
&lt;LI&gt;Azure Data Factory instance&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Permissions required&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;To perform backup and restore operations, the SQL Managed Instance managed identity needs the "Contributor" and "Storage Blob Data Contributor" roles on the blob storage.&lt;/LI&gt;
&lt;LI&gt;To transfer backup files between two storage locations, ADF managed identity needs the "Storage Blob Data Contributor" permission for the blob storage.&lt;/LI&gt;
&lt;LI&gt;To carry out backup and restore operations, ADF managed identity needs 'sysadmin' permissions on SQL Managed Instance.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; We utilized Managed Identity for granting permissions. Should you employ a different identity, ensure it has the same permissions assigned.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 1&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create a server-level credential. A credential is a record that contains the authentication information that is required to connect to a resource outside SQL Server.&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;USE master
GO
CREATE CREDENTIAL [https://&amp;lt;mystorageaccountname&amp;gt;.blob.core.windows.net/&amp;lt;mystorageaccountcontainername&amp;gt;]
WITH IDENTITY='Managed Identity'
GO&lt;/LI-CODE&gt;
&lt;P&gt;Validate that the credential was created successfully&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 2&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create ADF linked services for both the SQL Managed Instances and the storage accounts.&lt;/LI&gt;
&lt;LI&gt;Create ADF datasets using both the SQL Managed Instance and storage account linked services.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 3&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If you're utilizing a private endpoint, make sure to set up an ADF integration runtime and a managed private link; follow &lt;A href="https://techcommunity.microsoft.com/t5/data-architecture-blog/create-azure-data-factory-managed-private-links/ba-p/4270629" target="_blank" rel="noopener"&gt;Create Azure Data Factory Managed Private Links&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 4&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create an ADF pipeline to take a database backup from the source.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Split the backup into multiple files for a faster backup.&lt;/LI&gt;
&lt;LI&gt;Use the scripts below to take a copy-only database backup.&lt;/LI&gt;
&lt;LI&gt;Use a Script activity to execute the backup scripts using the source SQL MI linked service.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="applescript"&gt;Uas Master
GO
BACKUP DATABASE [@{pipeline().parameters.source_database_name}]
  TO URL = N'https://&amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_01.bak',
  URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_02.bak',
  URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_03.bak',
  URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_04.bak',
  URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_05.bak',
  URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_06.bak',
  URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_07.bak',
  URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_08.bak'
WITH COPY_ONLY, MAXTRANSFERSIZE = 4194304, COMPRESSION, STATS = 10&lt;/LI-CODE&gt;
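Since the restore script in Step 6 must reference exactly the same stripe files, generating the URL list in one place reduces the chance of typos (such as a stray space inside a URL). The Python sketch below is illustrative; the function name and account values are hypothetical:

```python
def striped_backup_urls(storage_account: str, container: str,
                        database: str, stripes: int = 8) -> list[str]:
    """Build the striped .bak URLs shared by the BACKUP and RESTORE scripts."""
    base = f"https://{storage_account}.blob.core.windows.net/{container}"
    return [f"{base}/{database}_{i:02d}.bak" for i in range(1, stripes + 1)]

urls = striped_backup_urls("mystorageaccount", "databaserefresh", "mytest_db")
backup_sql = (
    "BACKUP DATABASE [mytest_db]\n  TO "
    + ",\n  ".join(f"URL = N'{u}'" for u in urls)
    + "\nWITH COPY_ONLY, MAXTRANSFERSIZE = 4194304, COMPRESSION, STATS = 10"
)
print(backup_sql)
```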
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Allow a minute for the backup to transfer to blob storage, adjusting the duration to meet your specific needs.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 5&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create an ADF pipeline to copy the database backup files from the source storage account to the target storage account.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use copy activity to copy backup files from source storage account to target storage account.&lt;/LI&gt;
&lt;LI&gt;Allow a minute for the backup to transfer to blob storage, adjusting the duration to meet your specific needs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 6&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create an Azure Data Factory pipeline to restore the database backup from the designated storage account to the target SQL Managed Instance.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use the scripts below to restore the database from the designated storage account.&lt;/LI&gt;
&lt;LI&gt;Use a Script activity to execute the restore scripts using the target SQL MI linked service.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="applescript"&gt;USE [master]

RESTORE DATABASE [@{pipeline().parameters.target_database_name}] FROM  
URL = N'https://&amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_01.bak',
URL = N'https://&amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_02.bak',
URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_03.bak',
URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_04.bak',
URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_05.bak',
URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_06.bak',
URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_07.bak',
URL = N'https:// &amp;lt;storageaccountname&amp;gt;.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_08.bak'&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 7&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Set up an additional pipeline to remove orphaned database users, grant user access, or carry out any other tasks needed after a restore, using the suitable activity.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step: 8&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create an ADF orchestration pipeline to execute Step 4 &amp;gt; Step 5 &amp;gt; Step 6 &amp;gt; Step 7 in sequence.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Set up parameters for both the source_database_name and target_database_name to enable dynamic operation of the pipeline across different databases.&lt;/LI&gt;
&lt;/UL&gt;
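The orchestration described above can be sketched as ADF pipeline JSON using Execute Pipeline activities chained with dependsOn conditions. The pipeline and activity names below are hypothetical placeholders, not the exact definitions from this walkthrough:

```json
{
  "name": "pl_database_refresh",
  "properties": {
    "parameters": {
      "source_database_name": { "type": "string" },
      "target_database_name": { "type": "string" }
    },
    "activities": [
      {
        "name": "Backup source database",
        "type": "ExecutePipeline",
        "typeProperties": {
          "pipeline": { "referenceName": "pl_backup_source", "type": "PipelineReference" },
          "waitOnCompletion": true,
          "parameters": { "source_database_name": "@pipeline().parameters.source_database_name" }
        }
      },
      {
        "name": "Copy backup files",
        "type": "ExecutePipeline",
        "dependsOn": [ { "activity": "Backup source database", "dependencyConditions": [ "Succeeded" ] } ],
        "typeProperties": {
          "pipeline": { "referenceName": "pl_copy_backup_files", "type": "PipelineReference" },
          "waitOnCompletion": true
        }
      },
      {
        "name": "Restore target database",
        "type": "ExecutePipeline",
        "dependsOn": [ { "activity": "Copy backup files", "dependencyConditions": [ "Succeeded" ] } ],
        "typeProperties": {
          "pipeline": { "referenceName": "pl_restore_target", "type": "PipelineReference" },
          "waitOnCompletion": true,
          "parameters": {
            "source_database_name": "@pipeline().parameters.source_database_name",
            "target_database_name": "@pipeline().parameters.target_database_name"
          }
        }
      },
      {
        "name": "Post-restore tasks",
        "type": "ExecutePipeline",
        "dependsOn": [ { "activity": "Restore target database", "dependencyConditions": [ "Succeeded" ] } ],
        "typeProperties": {
          "pipeline": { "referenceName": "pl_post_restore", "type": "PipelineReference" },
          "waitOnCompletion": true
        }
      }
    ]
  }
}
```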
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 07 Nov 2024 15:27:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/azure-sql-managed-instance-cross-subscription-database-restore/ba-p/4271078</guid>
      <dc:creator>MUA</dc:creator>
      <dc:date>2024-11-07T15:27:22Z</dc:date>
    </item>
    <item>
      <title>Create Azure Data Factory Managed Private Links</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/create-azure-data-factory-managed-private-links/ba-p/4270629</link>
      <description>&lt;P&gt;Create a managed private link for an Azure SQL managed instance, which is a similar procedure to establishing a managed link for any Azure resources from the Azure data factory.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Create Azure integration runtime&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Create a managed private link for Azure SQL Managed Instance&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click new&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Search for managed instance and select Azure SQL Database Managed Instance&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Provide name &amp;gt; select target subscription &amp;gt; select target SQL MI &amp;gt; Create&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;In the Azure portal, go to the target SQL managed instance &amp;gt; Security &amp;gt; Private endpoint connections&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Select the pending ADF private endpoint &amp;gt; Approve&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Validate the ADF&amp;nbsp;managed private link to ensure its approval state is "Approved"&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
</description>
      <pubDate>Thu, 07 Nov 2024 06:05:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/create-azure-data-factory-managed-private-links/ba-p/4270629</guid>
      <dc:creator>MUA</dc:creator>
      <dc:date>2024-11-07T06:05:22Z</dc:date>
    </item>
    <item>
      <title>Creating a Kubernetes Application for Azure SQL Database</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/creating-a-kubernetes-application-for-azure-sql-database/ba-p/3945641</link>
      <description>&lt;H1 dir="auto" tabindex="-1"&gt;&lt;img /&gt;&lt;/H1&gt;
&lt;H1 id="user-content-creating-a-kubernetes-application-for-azure-sql-database" dir="auto" tabindex="-1"&gt;Creating a Kubernetes Application for Azure SQL Database&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;EM&gt;Buck Woody, Principal Applied Data Scientist, Microsoft&lt;/EM&gt;&lt;/H4&gt;
&lt;H4 id="user-content-buck-woody-principal-applied-data-scientist-microsoft" dir="auto" tabindex="-1"&gt;&amp;nbsp;&lt;/H4&gt;
&lt;P&gt;Modern application development has multiple challenges. From selecting a "stack" of front-end through data storage and processing technologies among several competing standards, to ensuring the highest levels of security and performance, developers must ensure the application scales, performs well, and is supportable on multiple platforms. For that last requirement, bundling the application into Container technologies such as Docker and deploying multiple Containers onto the Kubernetes platform is now de rigueur in application development.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this example, we'll explore using Python, Docker Containers, and Kubernetes - all running on the Microsoft Azure platform. Using Kubernetes means that you also have the flexibility of using local environments or even other clouds for a seamless and consistent deployment of your application and allows for multi-cloud deployments for even higher resiliency. We'll also use Microsoft Azure SQL Database for a service-based, scalable, highly resilient, and secure environment for the data storage and processing. In fact, in many cases, other applications are often using Microsoft Azure SQL Database already, and this sample application can be used to further leverage and enrich that data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This example is comprehensive in scope, but uses the simplest application, database, and deployment to illustrate the process. You can adapt this sample to be far more robust, even leveraging the latest technologies for the returned data. It's a useful learning tool to create a pattern for other applications.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="user-content-using-the-adventureworkslt-sample-database-in-a-practical-example" dir="auto" tabindex="-1"&gt;Using the AdventureWorksLT Sample Database in a Practical Example&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The AdventureWorks (fictitious) company uses a database that stores data about Sales and Marketing, Products, Customers and Manufacturing. It also contains views and stored procedures that join information about the products, such as the product name, category, price, and a brief description.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The AdventureWorks Development team wants to create a Proof-of-Concept (PoC) that returns data from a View in the AdventureWorksLT database and make them available as a REST API. Using this PoC, the Development team will create a more scalable and multi-cloud ready application for the Sales team. They have selected the Microsoft Azure platform for all aspects of deployment. The PoC is using the following elements:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;A Python application using the Flask package for headless web deployment.&lt;/LI&gt;
&lt;LI&gt;Docker Containers for code and environment isolation, stored in a private registry so that the entire company can re-use the application Containers in future projects, saving time and money.&lt;/LI&gt;
&lt;LI&gt;Kubernetes for ease of deployment and scale, and to avoid platform lock-in.&lt;/LI&gt;
&lt;LI&gt;Microsoft Azure SQL Database for selection of size, performance, scale, auto-management and backup, in addition to Relational data storage and processing at the highest security level.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In this article I'll explain the process for creating the entire Proof-of-Concept project. The general steps for creating the application are:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL dir="auto"&gt;
&lt;LI&gt;Set up pre-requisites&lt;/LI&gt;
&lt;LI&gt;Write the application&lt;/LI&gt;
&lt;LI&gt;Create a Docker Container to deploy the application and test&lt;/LI&gt;
&lt;LI&gt;Create an Azure Container Registry (ACR) and load the Container to the Registry&lt;/LI&gt;
&lt;LI&gt;Create the Azure Kubernetes Service (AKS) environment&lt;/LI&gt;
&lt;LI&gt;Deploy the application Container from the Container Registry to AKS and test the application&lt;/LI&gt;
&lt;LI&gt;Clean up&lt;/LI&gt;
&lt;/OL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Throughout this article, there are several values you should replace, as listed below. Ensure that you consistently replace these values in each step. You might want to open a text editor and drop these values in as you work through the Proof-of-Concept project:&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;&lt;EM&gt;ReplaceWith_AzureSubscriptionName&lt;/EM&gt;: Replace this value with the name of the Azure subscription name you have.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;ReplaceWith_PoCResourceGroupName&lt;/EM&gt;: Replace this value with the name of the resource group you would like to create.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;ReplaceWith_AzureSQLDBServerName&lt;/EM&gt;: Replace this value with the name of the Azure SQL Database Server you create using the Azure Portal.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;ReplaceWith_AzureSQLDBSQLServerLoginName&lt;/EM&gt;: Replace this value with the value of the SQL Server User Name you create in the Azure Portal.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;ReplaceWith_AzureSQLDBSQLServerLoginPassword&lt;/EM&gt;: Replace this value with the value of the SQL Server User Password you create in the Azure Portal.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;ReplaceWith_AzureSQLDBDatabaseName&lt;/EM&gt;: Replace this value with the name of the Azure SQL Database you create using the Azure Portal.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;ReplaceWith_AzureContainerRegistryName&lt;/EM&gt;: Replace this value with the name of the Azure Container Registry you would like to create.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;ReplaceWith_AzureKubernetesServiceName&lt;/EM&gt;: Replace this value with the name of the Azure Kubernetes Service you would like to create.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="user-content-pre-requisites" dir="auto" tabindex="-1"&gt;Pre-Requisites&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The developers at AdventureWorks use a mix of Windows, Linux, and Apple systems for development, so they are using Visual Studio Code as their environment and git for the source control, both of which run cross-platform.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the PoC, the team requires the following pre-requisites:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;&lt;STRONG&gt;Python, pip, and packages&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- The development team has chosen the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/training/paths/beginner-python/" target="_blank" rel="nofollow noopener"&gt;Python programming language&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;as the standard for this web-based application. Currently they are using version 3.9, but any version supporting the PoC required packages is acceptable.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.python.org/downloads/release/python-390/" target="_blank" rel="nofollow noopener"&gt;You can download the Python language version 3.9 here.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;The team is using the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;pyodbc package&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;for database access.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://pypi.org/project/pyodbc/" target="_blank" rel="nofollow noopener"&gt;You can find the pyodbc package here with the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;pip&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;commands to install it.&lt;/A&gt;. You may also need the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver16" target="_blank" rel="nofollow noopener"&gt;Microsoft ODBC Driver software&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;if you do not have it installed.&lt;/LI&gt;
&lt;LI&gt;The team is using the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;ConfigParser package&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;for controlling and setting configuration variables.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://pypi.org/project/configparser/" target="_blank" rel="nofollow noopener"&gt;You can find the configparser package here with the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;pip&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;commands to install it.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;The team is using the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;Flask package&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;for a web interface for the application.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://flask.palletsprojects.com/en/2.3.x/installation/" target="_blank" rel="nofollow noopener"&gt;You can find the Flask library here.&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The Microsoft Azure az CLI tool&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Next, the team installed the Azure&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;AZ CLI&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;tool. This cross-platform tool allows a command-line and scripted approach to the PoC, so that they can repeat the steps as they make changes and improvements.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli" target="_blank" rel="nofollow noopener"&gt;You can find the installation for the AZ CLI tool here.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With that tool set up, the team used it to log in to their Azure subscription and set the subscription name they used for the PoC. They then ensured the Azure SQL Database server and database are accessible to the subscription:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;az login
az account set --name "ReplaceWith_AzureSubscriptionName"
az sql server list
az sql db list ReplaceWith_AzureSQLDBDatabaseName 
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&lt;STRONG&gt;Create a Microsoft Azure Resource Group (RG) to hold the entire PoC&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;A&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal#what-is-a-resource-group" target="_blank" rel="nofollow noopener"&gt;Microsoft Azure Resource Group&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;is a logical container that holds related resources for an Azure solution. Generally, resources that share the same lifecycle are added to the same resource group so you can easily deploy, update, and delete them as a group. The resource group stores metadata about the resources, and you can specify a location for the resource group.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Resource groups can be created and managed using the Azure portal or the AZ CLI. They can also be used to group related resources for an application and divide them into groups for production and nonproduction, or any other organizational structure you prefer.&lt;/P&gt;
&lt;DIV id="tinyMceEditorBuckWoody_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the snippet below, you can see the AZ command used to create a resource group in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;eastus&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;region of Azure:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;az group create --name ReplaceWith_PoCResourceGroupName --location eastus
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&lt;STRONG&gt;Microsoft Azure SQL Database with the AdventureWorksLT sample database installed, using SQL Server Logins&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;AdventureWorks has standardized on the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.microsoft.com/en-us/sql-server/" target="_blank" rel="nofollow noopener"&gt;Microsoft SQL Server Relational Database Management System platform&lt;/A&gt;, and the Development team wants to use a managed service for the database rather than installing locally. Using Azure SQL Database allows this managed service to be completely code-compatible wherever they run the SQL Server engine - on-premises, in a Container, in Linux or Windows, or even in an Internet of Things (IoT) environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The team used the sample&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;AdventureWorksLT&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;database for the PoC using the same PoC Resource Group,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/single-database-create-quickstart?view=azuresql&amp;amp;tabs=azure-portal" target="_blank" rel="nofollow noopener"&gt;which you can learn to deploy here.&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;They set a SQL Server account for login for testing but will revisit this decision in a security review.&lt;/P&gt;
&lt;DIV id="tinyMceEditorBuckWoody_1" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;During creation, they used the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/firewall-create-server-level-portal-quickstart?view=azuresql" target="_blank" rel="nofollow noopener"&gt;Azure Management Portal to set the Firewall for the application&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to the local development machine, and changed the default you see here to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;allow all Azure Services&lt;/STRONG&gt;, and also&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/azure-sql-python-quickstart?view=azuresql&amp;amp;tabs=windows%2Csql-auth#configure-the-local-connection-string" target="_blank" rel="nofollow noopener"&gt;retrieved the connection credentials.&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Note that with this approach, the database could be in another region or even a different subscription.&lt;/P&gt;
&lt;DIV id="tinyMceEditorBuckWoody_2" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Create the Application&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Next, the Development team created a simple Python application that opens a connection to Azure SQL Database and returns a list of products. This code will be replaced with much more complex functions and may also include more than one application deployed into the Kubernetes Pods in production for a robust, manifest-driven approach to application solutions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Team created a simple text file called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;.env&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to hold variables for the server connections and other information. Using the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;python-dotenv&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;library they can then separate out the variables from the Python Code. This is a common approach to keeping secrets and other information out of the code itself.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;SQL_SERVER_ENDPOINT = ReplaceWith_AzureSQLDBServerName
SQL_SERVER_USERNAME = ReplaceWith_AzureSQLDBSQLServerLoginName
SQL_SERVER_PASSWORD = ReplaceWith_AzureSQLDBSQLServerLoginPassword
SQL_SERVER_DATABASE = ReplaceWith_AzureSQLDBDatabaseName
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Important Security Considerations:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;For clarity and simplicity, this application uses a configuration file that is read from Python. Since the code will deploy with the container, the connection information may be derivable from its contents. You should carefully consider the various methods of working with security, connections, and secrets, and determine the best level and mechanism for your application. You have multiple options for working with secret information such as connection strings, and the list below shows a few of them. Always pick the highest level of security, and even multiple levels, to ensure your application is secure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/security/fundamentals/database-security-checklist" target="_blank" rel="nofollow noopener"&gt;You can learn more about Azure SQL Database security here.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://pypi.org/project/python-secrets/" target="_blank" rel="nofollow noopener"&gt;Another method to work with secrets in Python is to use the python-secrets library, which you can read about here.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.docker.com/engine/swarm/secrets/" target="_blank" rel="nofollow noopener"&gt;Docker security and secrets are discussed here.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://kubernetes.io/docs/concepts/configuration/secret/" target="_blank" rel="nofollow noopener"&gt;Kubernetes secrets are discussed here.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/active-directory-b2c/configure-authentication-sample-python-web-app" target="_blank" rel="nofollow noopener"&gt;You can also learn more about Microsoft Entra, formerly Azure Active Directory authentication here.&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The team next wrote the PoC application and called it&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;app.py&lt;/EM&gt;. You can see the self-documented code here:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;# Set up the libraries for the configuration and base web interfaces
from dotenv import load_dotenv
from flask import Flask
from flask_restful import Resource, Api
import pyodbc

# Load the variables from the .env file
load_dotenv()

# Create the Flask-RESTful Application
app = Flask(__name__)
api = Api(app)

# Create connection to Azure SQL Database using the config.ini file values
server_name = os.getenv('SQL_SERVER_ENDPOINT')
database_name = os.getenv('SQL_SERVER_DATABASE')
user_name = os.getenv('SQL_SERVER_USERNAME')
password = os.getenv('SQL_SERVER_PASSWORD')

# Create connection to Azure SQL Database using the config.ini file values
ServerName = config.get('Connection', 'SQL_SERVER_ENDPOINT')
DatabaseName = config.get('Connection', 'SQL_SERVER_DATABASE')
UserName = config.get('Connection', 'SQL_SERVER_USERNAME')
PasswordValue = config.get('Connection', 'SQL_SERVER_PASSWORD')

# Connect to Azure SQL Database using the pyodbc package
# Note: You may need to install the ODBC driver if it is not already there. You can find that at:
# https://learn.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver16#version-17
connection = pyodbc.connect(f'Driver=ODBC Driver 17 for SQL Server;Server={ServerName};Database={DatabaseName};uid={UserName};pwd={PasswordValue}')

# Create the SQL query to run against the database
def query_db():
    cursor = connection.cursor()
    cursor.execute("SELECT TOP (10) [ProductID], [Name], [Description] FROM [SalesLT].[vProductAndDescription] WHERE Culture = 'EN' FOR JSON AUTO;")
    result = cursor.fetchone()
    cursor.close()
    return result

# Create the class that will be used to return the data from the API
class Products(Resource):
    def get(self):
        result = query_db()
        json_result = {} if (result is None) else json.loads(result[0])
        return json_result, 200

# Set the API endpoint to the Products class
api.add_resource(Products, '/products')

# Start App on default Flask port 5000
if __name__ == "__main__":
    app.run(debug=True)
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
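&lt;P&gt;As a side note, the connection string that &lt;EM&gt;app.py&lt;/EM&gt; hands to &lt;EM&gt;pyodbc&lt;/EM&gt; is ordinary string assembly, so it can be factored into a small helper and checked without touching a database. A minimal sketch (the function name and sample values are illustrative, not part of the team's code):&lt;/P&gt;

```python
# Assemble a pyodbc-style connection string from individual settings.
# Mirrors the format used in app.py; all values below are placeholders.
def build_connection_string(server, database, user, password,
                            driver="ODBC Driver 17 for SQL Server"):
    return (f"Driver={driver};Server={server};"
            f"Database={database};uid={user};pwd={password}")

print(build_connection_string("myserver.database.windows.net",
                              "mydb", "appuser", "P@ssw0rd"))
# Driver=ODBC Driver 17 for SQL Server;Server=myserver.database.windows.net;Database=mydb;uid=appuser;pwd=P@ssw0rd
```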
&lt;P&gt;They checked that this application runs locally and returns a page at&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://localhost:5000/products" target="_blank" rel="nofollow noopener"&gt;http://localhost:5000/products:&lt;/A&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorBuckWoody_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Important Considerations:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;When building production applications, you should avoid using the administrator account to access the database.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://devblogs.microsoft.com/azure-sql/create-and-connect-to-an-azure-sql-db/" target="_blank" rel="nofollow noopener"&gt;Read more here for details on how to set up an account for your application.&lt;/A&gt; The code in this article is simplified so that you can quickly get started with applications using Python and Kubernetes in Microsoft Azure.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/azure-sql-db-python-rest-api/" target="_blank" rel="noopener"&gt;Check this resource for a complete example of how to create an API with Python and Azure SQL Database.&lt;/A&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="user-content-deploy-the-application-to-a-docker-container" dir="auto" tabindex="-1"&gt;Deploy the Application to a Docker Container&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A Container is a reserved, protected space in a computing system that provides isolation and encapsulation. To create one, you use a Manifest file, which is simply a text file describing the binaries and code you wish to contain. Using a Container Runtime (such as Docker), you can then create a binary Image that has all the files you want to run and reference. From there, you can "run" the binary image, and that is called a Container, which you can reference as if it were a full computing system. It's a smaller, simpler way to abstract your application runtimes and environment than using a full Virtual Machine.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/dotnet/architecture/microservices/container-docker-introduction/docker-defined" target="_blank" rel="nofollow noopener"&gt;You can learn more about Containers and Docker here.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;The team started with a Dockerfile (the Manifest) that layers the elements they want to use. They start with a base Python image that already has the &lt;EM&gt;pyodbc&lt;/EM&gt; libraries installed, and then run all the commands necessary to contain the program and config file from the previous step.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can see the self-annotated Dockerfile here:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;# syntax=docker/dockerfile:1

# Start with a Container binary that already has Python and pyodbc installed
FROM laudio/pyodbc

# Create a Working directory for the application
WORKDIR /flask2sql

# Copy all of the code from the current directory into the WORKDIR
COPY . .

# Install the libraries that are required
RUN pip install -r ./requirements.txt

# Once the container starts, run the application and listen on all network interfaces
CMD ["python3", "-m" , "flask", "run", "--host=0.0.0.0"]
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;With that file in place, the team dropped to a command-prompt in the coding directory and ran the following code to create the binary Image from the Manifest, and then another command to start the Container:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;docker build -t flask2sql .
docker run -d -p 5000:5000 -t flask2sql
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;Once again, the team tests the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://localhost:5000/products" target="_blank" rel="nofollow noopener"&gt;http://localhost:5000/products&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;link to ensure the Container can access the database, and they see the following return:&lt;/P&gt;
&lt;DIV id="tinyMceEditorBuckWoody_4" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="user-content-deploy-the-image-to-a-docker-registry" dir="auto" tabindex="-1"&gt;Deploy the Image to a Docker Registry&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Container is now working but is only available on the developer's machine. The Development team would like to make this application Image available to the rest of the company, and then on to Kubernetes for production deployment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The storage area for Container Images is called a&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;repository&lt;/EM&gt;, and there can be both public and private repositories for Container Images. In fact, AdventureWorks used a public Image for the Python environment in their Dockerfile.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The team would like to control access to the Image, and rather than putting it on the web they decide they would like to host it themselves, but in Microsoft Azure where they have full control over security and access.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-acr?tabs=azure-cli" target="_blank" rel="nofollow noopener"&gt;You can read more about Microsoft Azure Container Registry here.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Returning to the command-line, the Development team uses the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;az CLI&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;utility to add a Container Registry service, enable an administration account, allow anonymous "pulls" during the testing phase, and set a login context to the registry:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;az acr create --resource-group ReplaceWith_PoCResourceGroupName --name ReplaceWith_AzureContainerRegistryName --sku Standard
az acr update -n ReplaceWith_AzureContainerRegistryName --admin-enabled true
az acr update --name ReplaceWith_AzureContainerRegistryName --anonymous-pull-enabled
az acr login --name ReplaceWith_AzureContainerRegistryName
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;This context will be used in subsequent steps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="user-content-tag-the-local-docker-image-to-prepare-it-for-uploading" dir="auto" tabindex="-1"&gt;Tag the local Docker Image to prepare it for uploading&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The next step is to send the local application Container Image to the Azure Container Registry (ACR) service so that it is available in the cloud. Returning to the command-line, the team uses the Docker commands to list the Images on the machine, then the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;az CLI&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;utility to list the Images in the ACR service. They then use the Docker command to "tag" the Image with the destination name of the ACR they created in the previous step, and to set a version number for proper DevOps. They then list the local Image information again to ensure the tag applied correctly:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;docker images
az acr list --resource-group ReplaceWith_PoCResourceGroupName --query "[].{acrLoginServer:loginServer}" --output table
docker tag flask2sql ReplaceWith_AzureContainerRegistryName.azurecr.io/azure-flask2sql:v1
docker images
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
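&lt;P&gt;The fully qualified name used in the "tag" command above follows a fixed pattern: the ACR login server, then the repository name, then a colon and the version. A quick sketch of that convention (the function and registry name are illustrative):&lt;/P&gt;

```python
# Build the fully qualified image name that an Azure Container Registry expects:
# loginServer/repository:version
def acr_image_name(registry, repository, version):
    return f"{registry}.azurecr.io/{repository}:{version}"

print(acr_image_name("myregistry", "azure-flask2sql", "v1"))
# myregistry.azurecr.io/azure-flask2sql:v1
```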
&lt;P&gt;With the code written and tested, the Dockerfile, Image and Container run and tested, the ACR service set up, and all tags applied, the team can upload the Image to the ACR service. They use the Docker "push" command to send the file, and then the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;az CLI&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;utility to ensure the Image was loaded:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;docker push ReplaceWith_AzureContainerRegistryName.azurecr.io/azure-flask2sql:v1
az acr repository list --name ReplaceWith_AzureContainerRegistryName --output table
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Deploy to Kubernetes&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The team could simply run Containers and deploy the application to on-premises and in-cloud environments. However, they would like to add multiple copies of the application for scale and availability, add other Containers performing different tasks, and add monitoring and instrumentation to the entire solution.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To group Containers together into a complete solution, the team decided to use Kubernetes. Kubernetes runs on-premises, and in all major cloud platforms. Microsoft Azure has a complete managed environment for Kubernetes, called the Azure Kubernetes Service (AKS).&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/training/paths/intro-to-kubernetes-on-azure/" target="_blank" rel="nofollow noopener"&gt;You can learn more about AKS here.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;az CLI&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;utility, the team adds AKS to the Resource Group they created earlier. They add two "nodes" or computing environments for resiliency in the testing phase, they automatically generate SSH Keys for access to the environment, and then they attach the ACR service they created in the previous steps so that the AKS cluster can locate the images they want to use for the deployment:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;az aks create --resource-group ReplaceWith_PoCResourceGroupName --name ReplaceWith_AzureKubernetesServiceName --node-count 2 --generate-ssh-keys --attach-acr ReplaceWith_AzureContainerRegistryName
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;Kubernetes uses a command-line tool to access and control a cluster, called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;kubectl&lt;/EM&gt;. The team uses the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;az CLI&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;utility to download the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;kubectl&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;tool and install it:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;az aks install-cli
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;While they have a connection to AKS, they can download the cluster access credentials that will be used when they execute the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;kubectl&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;utility:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;az aks get-credentials --resource-group ReplaceWith_PoCResourceGroupName --name ReplaceWith_AzureKubernetesServiceName
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;These credentials are stored&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" target="_blank" rel="nofollow noopener"&gt;in a file called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;config&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;.kube&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;folder of the user's home directory&lt;/A&gt;. With that security context set, the team uses the "get nodes" command using the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;kubectl&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;utility to show the nodes in the cluster:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;kubectl get nodes
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;Now the team uses the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;az CLI&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;tool to list the Images in the ACR service:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;az acr list --resource-group ReplaceWith_PoCResourceGroupName --query "[].{acrLoginServer:loginServer}" --output table
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;Now they can build the manifest that Kubernetes uses to control the deployment. This is a text file stored in a&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;yaml&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;format. Here is the annotated text in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;flask2sql.yaml&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;file:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;apiVersion: apps/v1
# The type of commands that will be sent, along with the name of the deployment
kind: Deployment
metadata:
  name: flask2sql
# This section sets the general specifications for the application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask2sql
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5 
  template:
    metadata:
      labels:
        app: flask2sql
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
# This section sets the location of the Image(s) in the deployment, and where to find them 
      containers:
      - name: flask2sql
        image:  ReplaceWith_AzureContainerRegistryName.azurecr.io/azure-flask2sql:v1
# Recall that the Flask application uses (by default) TCP/IP port 5000 for access. This line tells Kubernetes that this "pod" uses that port.
        ports:
        - containerPort: 5000
---
apiVersion: v1
# This is the front-end of the application access, called a "Load Balancer"
kind: Service
metadata:
  name: flask2sql
spec:
  type: LoadBalancer
# this final step then sets the outside exposed port of the service to TCP/IP port 80, but maps it internally to the app's port of 5000
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
  selector:
    app: flask2sql
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;With that file defined, the team can deploy the application to the running AKS cluster. That's done with the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;apply&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;command in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;kubectl&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;utility, which as you recall still has a security context to the cluster. Then the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;get service&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;command is sent to watch the cluster as it is being built:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;kubectl apply -f flask2sql.yaml
kubectl get service flask2sql --watch
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;After a few moments, the "&lt;EM&gt;watch&lt;/EM&gt;" command will return an external IP address. At that point the team presses CTRL-C to break the watch command and records the external IP address of the load balancer.&lt;/P&gt;
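&lt;P&gt;Rather than watching interactively, the external IP address can also be read from the service's JSON description (for example, the output of &lt;EM&gt;kubectl get service flask2sql -o json&lt;/EM&gt;). A sketch of parsing that shape, using sample data in place of a live cluster:&lt;/P&gt;

```python
import json

# Stand-in for the relevant portion of `kubectl get service -o json` output;
# the IP value here is made up for illustration.
service_json = '''
{
  "status": {
    "loadBalancer": {
      "ingress": [ { "ip": "20.10.0.42" } ]
    }
  }
}
'''

def external_ip(service):
    # The ingress list stays empty until Azure assigns the load balancer an address
    ingress = service.get("status", {}).get("loadBalancer", {}).get("ingress") or []
    return ingress[0].get("ip") if ingress else None

print(external_ip(json.loads(service_json)))
# 20.10.0.42
```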
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="user-content-test-the-application" dir="auto" tabindex="-1"&gt;Test the Application&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using the IP Address (Endpoint) they obtained in the last step, the team checks to ensure the same output as the local application and the Docker Container:&lt;/P&gt;
&lt;DIV id="tinyMceEditorBuckWoody_5" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A class="heading-link" href="https://github.com/BuckWoody/PresentationsAndBlogs/tree/master/K8s2AzureSQL#coding-assets" target="_blank" rel="noopener"&gt;Coding Assets -&amp;nbsp;&lt;/A&gt;&lt;A href="https://github.com/BuckWoody/PresentationsAndBlogs/tree/master/K8s2AzureSQL/code" target="_self"&gt;You can find all of the code assets for this sample at this location.&lt;/A&gt; Here's what they do:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;&lt;STRONG&gt;app.py&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- The Python application that performs a simple SELECT from an Azure SQL Database&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;.env&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- A text file with connection information to the Azure SQL Database&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Dockerfile&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- The manifest for the Docker Image creation&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;flask2sql.yaml&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- The manifest for the Kubernetes deployment&lt;/LI&gt;
&lt;/UL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;In addition, the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;az CLI&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;utility commands may create additional files, such as keys and other information for the application, in your application directory.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2 id="user-content-clean-up" dir="auto" tabindex="-1"&gt;Clean Up&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the application created, edited, documented and tested, the team can now "tear down" the application. By keeping everything in a single resource group in Microsoft Azure, it's a simple matter of deleting the resource group using the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;az CLI&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;utility:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;az group delete -n ReplaceWith_PoCResourceGroupName -y
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Note: If you created your Azure SQL Database in another resource group and you no longer need it, you can use the Microsoft Azure Portal to delete it.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The team member leading the PoC project uses Microsoft Windows as her workstation and wants to retain the Kubernetes credentials file, but remove it from its active location. She simply copies the file to a &lt;EM&gt;config.old&lt;/EM&gt; text file and then deletes the original:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;PRE class="notranslate"&gt;&lt;CODE&gt;copy c:\users\ReplaceWith_YourUserName\.kube\config c:\users\ReplaceWith_YourUserName\.kube\config.old
del c:\users\ReplaceWith_YourUserName\.kube\config
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H1 id="user-content-learn-more" dir="auto" tabindex="-1"&gt;Learn More&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/develop-overview?view=azuresql" target="_blank" rel="nofollow noopener"&gt;Application development overview - SQL Database &amp;amp; SQL Managed Instance&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/azure-sql-python-quickstart?view=azuresql&amp;amp;tabs=windows%2Csql-inter" target="_blank" rel="nofollow noopener"&gt;Connect to and query Azure SQL Database using Python and the pyodbc driver&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/local-dev-experience-publish-emulator?view=azuresql" target="_blank" rel="nofollow noopener"&gt;Publish a Database Project for Azure SQL Database to the local emulator&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/saas-tenancy-welcome-wingtip-tickets-app?view=azuresql" target="_blank" rel="nofollow noopener"&gt;The Wingtip Tickets SaaS application&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/samples/browse/?products=azure-sql-database%2Cazure-sql-managed-instance%2Cazure-sqlserver-vm&amp;amp;expanded=azure" target="_blank" rel="nofollow noopener"&gt;Browse code samples for Azure SQL Database&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 27 Aug 2024 14:12:57 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/creating-a-kubernetes-application-for-azure-sql-database/ba-p/3945641</guid>
      <dc:creator>BuckWoodyMSFT</dc:creator>
      <dc:date>2024-08-27T14:12:57Z</dc:date>
    </item>
    <item>
      <title>The Well-Architected Framework for Oracle is NOW LIVE!</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/the-well-architected-framework-for-oracle-is-now-live/ba-p/3721552</link>
      <description>&lt;P&gt;The Well-Architected Framework for Oracle has been published!&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/architecture/framework/oracle-iaas/overview" target="_blank"&gt;Oracle workloads - Microsoft Azure Well-Architected Framework | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This invaluable framework provides clear guidance on the recommended practices to assess, architect, and migrate Oracle workloads to the Azure cloud.&amp;nbsp; This should be the first stop for answers on how to succeed with Oracle on Azure!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A special thanks to my teammate, Jessica Haessler for working so hard to help me get this to the finish line, as I would have never been able to get this done on my own!&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 20 Jan 2023 23:20:14 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/the-well-architected-framework-for-oracle-is-now-live/ba-p/3721552</guid>
      <dc:creator>DBAKevlar</dc:creator>
      <dc:date>2023-01-20T23:20:14Z</dc:date>
    </item>
    <item>
      <title>IOPs is Overrated</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/iops-is-overrated/ba-p/3710783</link>
      <description>&lt;P&gt;IOPs is Overrated, yeah, I said it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;How many compute, storage area network, and hard drive vendors and storage services have posted their IOPs capabilities in marketing without including the throughput (MBps)?&amp;nbsp; Why, when someone sends me IOPs numbers for an Oracle database, do I thank them kindly and ask for throughput?&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for asking…&lt;/P&gt;
&lt;P&gt;IO requests for Oracle can be exceptionally efficient depending on the type of workload.&amp;nbsp; In this blog post, I’m going to take three real examples of Oracle workloads and show how different the ratio between IOPs and MBps can be, using the AWR report.&amp;nbsp; Now, there is a significant difference between what we produce for sizing and a raw AWR report, but I’m going to use the AWR data, as it is something anyone working with Oracle will recognize.&amp;nbsp; The examples here are from different versions of Oracle, single instance vs. Exadata, but they will hopefully explain why I am not a fan of IOPs for proving out a workload size.&lt;/P&gt;
&lt;H2&gt;Example #1&lt;/H2&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;IOPS: 7736 per second&lt;/P&gt;
&lt;P&gt;MBps: 153 per second&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Example #2&lt;/H2&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;IOPS: 8327 per second&lt;/P&gt;
&lt;P&gt;MBps: 344 per second&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Example #3&lt;/H2&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;IOPS: 26215 per second&lt;/P&gt;
&lt;P&gt;MBps: 13008 per second&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Interesting ratio of IO requests vs. throughput:&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="163"&gt;
&lt;P&gt;Source&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="167"&gt;
&lt;P&gt;IO Requests (Reads/sec)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;MBps (Reads)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="149"&gt;
&lt;P&gt;Ratio&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="163"&gt;
&lt;P&gt;Example #1&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="167"&gt;
&lt;P&gt;7736&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;153&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="149"&gt;
&lt;P&gt;51:1&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="163"&gt;
&lt;P&gt;Example #2&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="167"&gt;
&lt;P&gt;8327&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;344&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="149"&gt;
&lt;P&gt;24:1&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="163"&gt;
&lt;P&gt;Example #3&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="167"&gt;
&lt;P&gt;26215&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;12872&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="149"&gt;
&lt;P&gt;2:1&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you sized the storage solution for these Oracle workloads off the IOPS alone, you could make drastic mistakes in both compute and storage.&lt;/P&gt;
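&lt;P&gt;The reason the ratio matters: throughput divided by request rate gives the average IO size, and that is what actually differentiates these workloads. A quick check using the read figures from the table above:&lt;/P&gt;

```python
# Average IO size (KB) = throughput (MB/s) * 1024 / IO requests per second,
# using the read figures from the table above
workloads = {
    "Example #1": (7736, 153),
    "Example #2": (8327, 344),
    "Example #3": (26215, 12872),
}

for name, (iops, mbps) in workloads.items():
    print(f"{name}: ~{mbps * 1024 / iops:.0f} KB per IO request")
# Example #1: ~20 KB per IO request
# Example #2: ~42 KB per IO request
# Example #3: ~503 KB per IO request
```

&lt;P&gt;A workload averaging around 20 KB per IO request is dominated by small random reads, while one averaging hundreds of KB per request is doing large sequential reads; sizing either one on IOPS alone misses that distinction.&lt;/P&gt;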
&lt;P&gt;For Example #1, compare the IOPS/MBps ratios in the maximum IO limits set on Azure compute SKUs:&lt;/P&gt;
&lt;TABLE width="752px"&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;&lt;STRONG&gt;SKU&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;&lt;STRONG&gt;vCPU&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;&lt;STRONG&gt;Memory: GiB&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="67px"&gt;
&lt;P&gt;&lt;STRONG&gt;Temp storage (SSD) GiB&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="48px"&gt;
&lt;P&gt;&lt;STRONG&gt;Max data disks&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="102px"&gt;
&lt;P&gt;&lt;STRONG&gt;Max uncached disk throughput: IOPS/MBps&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="103px"&gt;
&lt;P&gt;&lt;STRONG&gt;Max burst uncached disk throughput: IOPS/MBps&lt;/STRONG&gt;&lt;STRONG&gt;&lt;SUP&gt;1&lt;/SUP&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="46px"&gt;
&lt;P&gt;&lt;STRONG&gt;Max NICs&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92px"&gt;
&lt;P&gt;&lt;STRONG&gt;Expected network bandwidth (Mbps)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;Standard_E2s_v4&lt;SUP&gt;4&lt;/SUP&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;2&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;16&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="67px"&gt;
&lt;P&gt;Remote Storage Only&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="48px"&gt;
&lt;P&gt;4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="102px"&gt;
&lt;P&gt;3200/48&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="103px"&gt;
&lt;P&gt;4000/200&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="46px"&gt;
&lt;P&gt;2&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92px"&gt;
&lt;P&gt;5000&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;Standard_E4s_v4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;32&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="67px"&gt;
&lt;P&gt;Remote Storage Only&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="48px"&gt;
&lt;P&gt;8&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="102px"&gt;
&lt;P&gt;6400/96&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="103px"&gt;
&lt;P&gt;8000/200&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="46px"&gt;
&lt;P&gt;2&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92px"&gt;
&lt;P&gt;10000&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;Standard_E8s_v4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;8&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;64&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="67px"&gt;
&lt;P&gt;Remote Storage Only&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="48px"&gt;
&lt;P&gt;16&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="102px"&gt;
&lt;P&gt;12800/192&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="103px"&gt;
&lt;P&gt;16000/400&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="46px"&gt;
&lt;P&gt;4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92px"&gt;
&lt;P&gt;12500&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;Standard_E16s_v4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;16&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;128&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="67px"&gt;
&lt;P&gt;Remote Storage Only&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="48px"&gt;
&lt;P&gt;32&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="102px"&gt;
&lt;P&gt;25600/384&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="103px"&gt;
&lt;P&gt;32000/800&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="46px"&gt;
&lt;P&gt;8&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92px"&gt;
&lt;P&gt;12500&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;Standard_E20s_v4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;20&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;160&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="67px"&gt;
&lt;P&gt;Remote Storage Only&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="48px"&gt;
&lt;P&gt;32&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="102px"&gt;
&lt;P&gt;32000/480&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="103px"&gt;
&lt;P&gt;40000/1000&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="46px"&gt;
&lt;P&gt;8&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92px"&gt;
&lt;P&gt;10000&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;Standard_E32s_v4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;32&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;256&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="67px"&gt;
&lt;P&gt;Remote Storage Only&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="48px"&gt;
&lt;P&gt;32&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="102px"&gt;
&lt;P&gt;51200/768&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="103px"&gt;
&lt;P&gt;64000/1600&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="46px"&gt;
&lt;P&gt;8&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92px"&gt;
&lt;P&gt;16000&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;Standard_E48s_v4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;48&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;384&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="67px"&gt;
&lt;P&gt;Remote Storage Only&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="48px"&gt;
&lt;P&gt;32&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="102px"&gt;
&lt;P&gt;76800/1152&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="103px"&gt;
&lt;P&gt;80000/2000&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="46px"&gt;
&lt;P&gt;8&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92px"&gt;
&lt;P&gt;24000&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is a smaller workload, and a Standard E8s_v4 would easily cover the average workload from the peak AWR report that was submitted for sizing.&amp;nbsp; If the vCPU and memory meet the requirements, then the IO peaks are covered as well.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using Example #2, where the ratio has more than halved: although the IO requests on reads haven’t changed much, the MBps (throughput) has more than doubled.&amp;nbsp; We could meet the workload when bursting is available, but we really don’t want to count on or pay for bursting, so we would need to size up.&amp;nbsp; This is a clear case of why we lean on throughput vs. IOPS.&lt;/P&gt;
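&lt;P&gt;To make this concrete, here is a minimal Python sketch (my own illustration, not from the article) that sizes a workload against the Esv4 max uncached disk limits from the table above, first by IOPS alone and then by IOPS and throughput together:&lt;/P&gt;

```python
# Sizing sketch: a SKU fits only when BOTH the workload's IOPS and MBps
# stay within its max uncached disk limits (values from the table above).

ESV4 = [  # (SKU, max uncached IOPS, max uncached MBps)
    ("Standard_E2s_v4", 3200, 48),
    ("Standard_E4s_v4", 6400, 96),
    ("Standard_E8s_v4", 12800, 192),
    ("Standard_E16s_v4", 25600, 384),
    ("Standard_E20s_v4", 32000, 480),
    ("Standard_E32s_v4", 51200, 768),
    ("Standard_E48s_v4", 76800, 1152),
]

def at_most(a, b):
    # true when a is no larger than b
    return min(a, b) == a

def pick_sku(iops, mbps, by_iops_only=False):
    for sku, max_iops, max_mbps in ESV4:
        if at_most(iops, max_iops) and (by_iops_only or at_most(mbps, max_mbps)):
            return sku
    return None  # the workload exceeds this VM series

# Example #2 from the article: 8,327 IO requests/s at 344 MBps (a 24:1 ratio)
print(pick_sku(8327, 344, by_iops_only=True))  # Standard_E8s_v4 (too small!)
print(pick_sku(8327, 344))                     # Standard_E16s_v4
```

&lt;P&gt;By IOPS alone, Example #2 appears to fit a Standard_E8s_v4; once the 344 MBps of throughput is considered, it needs a Standard_E16s_v4, and Example #3 exceeds the whole series.&lt;/P&gt;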
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Example #3 is a very large Oracle workload coming from Exadata.&amp;nbsp; There is considerable offloading (smart scans), along with flash cache scanning.&amp;nbsp; With this, the IO requests are incredibly efficient, to the point that the ratio of requests to throughput is 2:1.&amp;nbsp; This is a workload whose needs can only be met by network-attached storage, and it would require some optimizing.&amp;nbsp; There’s a reason the documentation shows both IOPS and throughput (MBps).&amp;nbsp; When you’re assessing workloads, especially Oracle, make sure to include the throughput.&amp;nbsp; It may surprise you.&lt;/P&gt;</description>
      <pubDate>Mon, 09 Jan 2023 16:41:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/iops-is-overrated/ba-p/3710783</guid>
      <dc:creator>DBAKevlar</dc:creator>
      <dc:date>2023-01-09T16:41:11Z</dc:date>
    </item>
    <item>
      <title>Migrating your workloads to Azure Database for MySQL</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/migrating-your-workloads-to-azure-database-for-mysql/ba-p/3681888</link>
      <description>&lt;H2&gt;Migrating your workloads to Azure Database for MySQL&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://www.microsoft.com/" target="_blank" rel="noreferrer noopener"&gt;Microsoft&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/A&gt;has a long history of trying to make software development and administration of IT infrastructure easier overtime. Additionally,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://cloudblogs.microsoft.com/opensource/" target="_blank" rel="noreferrer noopener"&gt;Microsoft in the last 10 years has embraced open source broadly.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/?product=popular" target="_blank" rel="noreferrer noopener"&gt;Microsoft Azure&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;supports&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/overview" target="_blank" rel="noreferrer noopener"&gt;virtual machines&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/architecture/guide/design-principles/managed-services" target="_blank" rel="noreferrer noopener"&gt;platform services&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/intro-kubernetes" target="_blank" rel="noreferrer noopener"&gt;containers&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-compare-logic-apps-ms-flow-webjobs" target="_blank" rel="noreferrer noopener"&gt;serverless&lt;/A&gt;, and various&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/training/topics/azure-infrastructure" target="_blank" rel="noreferrer noopener"&gt;other infrastructure&lt;/A&gt;. In this article, I will share some options you can make to deploy your software on MySQL in Azure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It may help you to weigh the relative cost, in terms of administration effort, of running your MySQL infrastructure on physical hardware, on-premises virtual machines, Azure VMs, or PaaS. The same comparison applies to many other products, such as SQL Server, web servers, and other infrastructure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;MySQL on Azure VMs&lt;/STRONG&gt;. This option falls into the industry category of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-iaas/" target="_blank" rel="noreferrer noopener"&gt;IaaS&lt;/A&gt;. With this service, you can run MySQL Server inside a managed virtual machine on the Azure cloud platform. All recent versions and editions of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.mysql.com/" target="_blank" rel="noreferrer noopener"&gt;MySQL&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/A&gt;can be installed in the virtual machine.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/mysql/single-server/overview" target="_blank" rel="noreferrer noopener"&gt;Azure database for MySQL&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/A&gt;is a&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-paas/" target="_blank" rel="noreferrer noopener"&gt;Platform Service (PaaS)&lt;/A&gt;, fully managed MySQL database engine based on the stable version of MySQL community edition. With a managed instance of MySQL on Azure, you can use built-in features viz automated patching, high availability, automated backups, elastic scaling, enterprise grade security, compliance and governance, monitoring and alerting that require extensive configuration when MySQL Server is either on-premises or in an Azure VM.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/mysql/single-server/overview" target="_blank" rel="noopener"&gt;Azure Database for MySQL&lt;/A&gt;, powered by the MySQL community edition, available in two deployment modes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="has-small-font-size"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/mysql/flexible-server/overview" target="_blank" rel="noopener"&gt;Flexible Server&lt;/A&gt;&amp;nbsp;– A fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. The service supports the community version of MySQL 5.7 and 8.0. The service is generally available today in various&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/mysql/flexible-server/overview#azure-regions" target="_blank" rel="noopener"&gt;Azure regions&lt;/A&gt;. Flexible servers are best suited for all new developments and migration of production workloads to Azure Database for MySQL service.&lt;/LI&gt;
&lt;LI class="has-small-font-size"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/mysql/single-server/single-server-overview" target="_blank" rel="noopener"&gt;Single Server&lt;/A&gt;&amp;nbsp;is a fully managed database service designed for minimal customization. The single server platform is designed to handle most database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports the community version of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in various&amp;nbsp;&lt;A href="https://azure.microsoft.com/global-infrastructure/services/" target="_blank" rel="noopener"&gt;Azure regions&lt;/A&gt;. Single servers are best-suited&amp;nbsp;&lt;STRONG&gt;only for existing applications already leveraging single server&lt;/STRONG&gt;. A Flexible Server would be the recommended deployment option for all new developments or migrations.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;I have two customers, each with in-house-developed applications using MySQL as the database engine, who chose Azure Database for MySQL as they moved from on-premises to Azure. Each customer has over 200 databases hosted on many MySQL servers.&lt;/P&gt;
&lt;H2&gt;Migration choices for Azure Database for MySQL&lt;/H2&gt;
&lt;P&gt;For detailed information and use cases about migrating databases to Azure Database for MySQL, refer to the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/mysql/migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro" target="_blank" rel="noreferrer noopener"&gt;Database Migration Guide&lt;/A&gt;. This document provides pointers to help you successfully plan and execute a MySQL migration to Azure.&lt;/P&gt;
&lt;P&gt;Microsoft has a great set of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/" target="_blank" rel="noreferrer noopener"&gt;learning on our website&lt;/A&gt;, with tutorials to get familiar with tools and processes.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/mysql/migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro" target="_blank" rel="noreferrer noopener"&gt;Here is the learning path for migrating MySQL from on-premises to Azure.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;I hope this article helps clear up the choices of MySQL on Azure, as well as giving you information for how to migrate and learn more.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 22 Nov 2022 18:55:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/migrating-your-workloads-to-azure-database-for-mysql/ba-p/3681888</guid>
      <dc:creator>geo_walters</dc:creator>
      <dc:date>2022-11-22T18:55:30Z</dc:date>
    </item>
    <item>
      <title>Addressing Oracle Redo Latency with Ultra Disk</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/addressing-oracle-redo-latency-with-ultra-disk/ba-p/3614263</link>
      <description>&lt;P&gt;An enterprise cloud, like Azure, handles an incredible variety of workloads and to be successful running Oracle in Azure means you need to know what you’re doing and where the sweet spot is for a relational workload.&lt;/P&gt;
&lt;P&gt;I don’t want to get too deep here; Azure Oracle SMEs cover both data and infrastructure, a hybrid area that has us splitting our focus between the two.&lt;/P&gt;
&lt;H1&gt;The Oracle Must-Knows&lt;/H1&gt;
&lt;P&gt;An Oracle database is a complex relational database, but there are terms and physical architecture components that are important to know before we proceed with this post:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Database instance, (background processes)&lt;/LI&gt;
&lt;LI&gt;Memory, (SGA/PGA)&lt;/LI&gt;
&lt;LI&gt;Control files&lt;/LI&gt;
&lt;LI&gt;Datafiles&lt;/LI&gt;
&lt;LI&gt;Redo log files&lt;/LI&gt;
&lt;LI&gt;Archive logs&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We’re going to discuss a common performance challenge when coming to the cloud, and that’s around REDO LOGS.&amp;nbsp; Oracle redo logs are the equivalent of the transaction log in Microsoft SQL Server.&amp;nbsp; Unlike SQL Server, Oracle has multiple logs that are part of the database, which it switches between active, non-active, and archiving states.&amp;nbsp; All operations that modify data are written to the redo log buffer, and the log writer (LGWR) then writes them to the active redo log.&amp;nbsp; Because of this demand, it’s not uncommon for the redo logs to require the fastest storage IO, ahead of the datafiles.&lt;/P&gt;
&lt;P&gt;Redo logs are commonly mirrored to protect any changes not written to the datafiles and/or from a physical removal of one redo log “member” from a redo log member group.&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Group&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;1&lt;SUP&gt;st&lt;/SUP&gt; Member&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;2&lt;SUP&gt;nd&lt;/SUP&gt; Member&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Group 1&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Redo01a.log&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Redo01b.log&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Group 2&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Redo02a.log&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Redo02b.log&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Group 3&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Redo03a.log&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Redo03b.log&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Group 4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Redo04a.rdo&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208px" height="30px"&gt;
&lt;P&gt;Redo04b.rdo&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&lt;FONT size="2"&gt;&lt;EM&gt;Table 1&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Notice that the extension can be anything you want; .log or .rdo are the most common.&lt;/P&gt;
&lt;H1&gt;Oracle Wait Events&lt;/H1&gt;
&lt;P&gt;For Oracle, wait events are metrics that provide information about what is causing latency in performance of an Oracle workload.&amp;nbsp; This information can be provided in multiple ways, but most often it is provided by the following:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;An Automatic Workload Repository (AWR) Active Session History (ASH) or Statspack report&lt;/LI&gt;
&lt;LI&gt;A query of top waits in Oracle&lt;/LI&gt;
&lt;LI&gt;A trace file or trace file TKPROF report&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;There are two main wait events that we see for Oracle databases:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Log File Sync&lt;/LI&gt;
&lt;LI&gt;Log File Parallel Write&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;A report showing this type of latency would look like this:&lt;/P&gt;
&lt;PRE&gt;Top 5 Timed Events                                      Avg  %Total
~~~~~~~~~~~~~~~~~~                                     wait    Call
Event                            Waits     Time (s)   (ms)  DBTime
----------------------------- ----------- ---------- ----- -------
CPU time                                       5,099          41.0
&lt;FONT color="#DF0000"&gt;log file sync                     621,615      3,520     6    28.3
log file parallel write           575,295      2,573     4    20.7&lt;/FONT&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the example above, we expect CPU time to be the top consumer: CPU is either on or off, and for a database we expect usage to be high; Oracle also includes the “wait for CPU” time in the total, so it’s not surprising that it’s #1.&amp;nbsp; I commonly look for wait events that consume over 10% of DB time.&amp;nbsp; In the example, log file sync is 28% and log file parallel write is almost 21%, so together the two waits account for 49% of database time.&amp;nbsp; Fix these two waits and database performance will significantly improve.&lt;/P&gt;
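&lt;P&gt;The 10%-of-DB-time rule of thumb is easy to script when reviewing reports in bulk; here is a minimal Python sketch of my own, not an Oracle tool:&lt;/P&gt;

```python
# Flag any wait event whose time waited exceeds a threshold share of
# total DB time (10% by default, per the rule of thumb above).

def significant_waits(wait_seconds, total_db_time_s, threshold=0.10):
    flagged = {}
    for event, secs in wait_seconds.items():
        pct = secs / total_db_time_s
        if min(threshold, pct) == threshold:  # pct is at least the threshold
            flagged[event] = round(pct * 100, 1)
    return flagged

# Figures from the report above; CPU time of 5,099 s at 41.0% implies
# roughly 12,437 s of total DB time.
waits = {"log file sync": 3520, "log file parallel write": 2573}
print(significant_waits(waits, 12437))  # both flagged, ~28.3% and ~20.7%
```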
&lt;H2&gt;What is Log File Sync&lt;/H2&gt;
&lt;P&gt;As changes happen in the Oracle database, they are written to the redo log buffer.&amp;nbsp; Once changes are committed or rolled back, the LGWR writes what is in the redo buffer to the active redo log.&amp;nbsp; Before the process is considered complete, a confirmation has to be sent back; this latency is what is referred to as log file sync, the time required to flush the log buffer to disk and have the writes confirmed.&amp;nbsp; To address this type of wait event, the code can be optimized, or, to address it physically, faster storage for the redo logs can be recommended.&lt;/P&gt;
&lt;H2&gt;What is Log File Parallel Write&lt;/H2&gt;
&lt;P&gt;An Oracle database will have at least three redo logs, and if the DBA wants to mirror them, each redo log gets an additional member placed in its group.&amp;nbsp; This means every write of changes from the database has to be done twice.&amp;nbsp; As shown in Table 1, each group could have two or more members (in older days we often saw more than two redo log members per group) that are written to each time a write is performed.&amp;nbsp; This is another area where faster disk (write faster) or optimized code (write less, and less often) can remove latency.&lt;/P&gt;
&lt;H1&gt;Ultra Disk to the rescue&lt;/H1&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal" target="_blank" rel="noopener"&gt;Ultra&lt;/A&gt; disk is a managed disk with configurable IO that the customer pays for depending on the IO demand.&amp;nbsp; For a large Oracle database, it rarely makes sense for datafiles due to the cost, but for redo logs, the size is small and the ability to configure the IO specifically for the high write needs can make a huge difference for the wait events discussed in this blog post.&lt;/P&gt;
&lt;P&gt;Calculate the needs of the log files using the IO Stat Function Summary in the AWR report, which will tell you the MBps and IO requests needed to meet demand.&amp;nbsp; Pad a little over this to account for the averages in the report, and this should give you the values to target.&lt;/P&gt;
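&lt;P&gt;The padding step can be sketched as follows; the 20% pad is my own assumption, not an Oracle or Azure recommendation, so adjust it to your workload:&lt;/P&gt;

```python
import math

# Turn the AWR log-file averages into Ultra Disk provisioning targets by
# padding them and rounding up to whole provisioned units.

def ultra_disk_targets(awr_iops, awr_mbps, pad=0.20):
    return (math.ceil(awr_iops * (1 + pad)),
            math.ceil(awr_mbps * (1 + pad)))

# e.g. redo logs averaging 1,500 IO requests/s and 45 MBps in the report
print(ultra_disk_targets(1500, 45))  # (1800, 54)
```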
&lt;P&gt;Using this information, along with the size of your redo logs, you can then price out Ultra Disk efficiently and pay only for what you need.&lt;/P&gt;
&lt;P&gt;If you only have a log file sync issue, then a single Ultra Disk, with the redo logs moved to it, can address the problem.&amp;nbsp; If you have both log file sync and log file parallel write, then allocate TWO Ultra Disks, moving the 1&lt;SUP&gt;st&lt;/SUP&gt; members of each redo log group to one disk and the 2&lt;SUP&gt;nd&lt;/SUP&gt; members to the second.&lt;/P&gt;
&lt;P&gt;Redo logs can be moved online, so apart from enabling Ultra Disk on the VM, no outage is required on the Oracle side.&amp;nbsp; The move simply relies on creating new redo log groups on the new Ultra Disk storage, checkpointing, switching logs, and dropping the old groups on the old storage.&lt;/P&gt;
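&lt;P&gt;The move described above can be sketched as a small Python helper that emits the SQL a DBA would run; the group numbers, redo log size, and mount paths below are hypothetical examples, not values from this article:&lt;/P&gt;

```python
# Emit the SQL for an online redo log move: add new groups on the Ultra
# Disk mounts, switch and checkpoint so the old groups go INACTIVE, then
# drop the old groups. (DROP requires a group to be inactive and archived.)

def redo_move_statements(new_groups, old_groups, size="2G"):
    stmts = []
    for group, paths in new_groups.items():
        files = ", ".join("'%s'" % p for p in paths)
        stmts.append("ALTER DATABASE ADD LOGFILE GROUP %d (%s) SIZE %s"
                     % (group, files, size))
    stmts.append("ALTER SYSTEM SWITCH LOGFILE")
    stmts.append("ALTER SYSTEM CHECKPOINT")
    for group in old_groups:
        stmts.append("ALTER DATABASE DROP LOGFILE GROUP %d" % group)
    return stmts

# two new mirrored groups across two hypothetical Ultra Disk mounts
new = {5: ["/u05/redo/redo05a.log", "/u06/redo/redo05b.log"],
       6: ["/u05/redo/redo06a.log", "/u06/redo/redo06b.log"]}
for stmt in redo_move_statements(new, old_groups=[1, 2, 3]):
    print(stmt + ";")
```

&lt;P&gt;In a real system you would repeat the log switch until each old group shows INACTIVE in V$LOG before dropping it.&lt;/P&gt;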
&lt;P&gt;Once this has been performed, a full workload should be run and the waits should be eliminated on the log files.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;Top 5 Timed Events                                      Avg  %Total
~~~~~~~~~~~~~~~~~~                                     wait    Call
Event                            Waits     Time (s)   (ms)  DBTime
----------------------------- ----------- ---------- ----- -------
CPU time                                       4,057           9.1
&lt;FONT color="#DF0000"&gt;log file sync                     988,793      1,382     1     3.1
log file parallel write         1,060,731      1,130     1     2.5&lt;/FONT&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You should see the time waited and the percentage of DB Time consumed drop considerably using this method of optimizing with faster storage.&amp;nbsp; Anything under 10% is often a good indication that you can focus on other priorities.&lt;/P&gt;
&lt;H1&gt;Caveats of using Ultra Disk with Oracle on Azure IaaS&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;Ultra Disk is still not an option for volume snapshot backups, so RMAN will be required for backups.&lt;/LI&gt;
&lt;LI&gt;Ultra disk is only available on some VM series, so make sure it is available for the system you are hoping to use it on.&lt;/LI&gt;
&lt;LI&gt;Azure Backup for Oracle is impacted when Ultra disk is introduced, but there is a disk exclusion policy that will address this in a future release.&lt;/LI&gt;
&lt;LI&gt;Ultra disk can be expensive, so identify how much IO is required and optimize it to save, using the wait events and acceptable performance from the user experience.&lt;/LI&gt;
&lt;LI&gt;Ultra disk, just like managed disk, is subject to IO throttling, based on the IO limit posed for the VM it resides on.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 31 Aug 2022 23:17:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/addressing-oracle-redo-latency-with-ultra-disk/ba-p/3614263</guid>
      <dc:creator>DBAKevlar</dc:creator>
      <dc:date>2022-08-31T23:17:20Z</dc:date>
    </item>
    <item>
      <title>Recommendations for Oracle 19c Patches in Azure</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/recommendations-for-oracle-19c-patches-in-azure/ba-p/3563907</link>
<description>&lt;P&gt;Oracle 19c is the terminal release of the Oracle 12c family.&amp;nbsp; If you aren’t familiar with that term, a terminal release is the last point release of the product.&amp;nbsp; There were terminal releases for previous Oracle versions (10.2.0.4, 11.2.0.4), and after 19c, the next terminal release will be 23c.&amp;nbsp; Therefore, you don’t see many 18c, 20c, or 21c databases.&amp;nbsp; We’ve gone to yearly release numbers, but the fact remains that 19c is going to receive all major updates and continue to be supported, unlike the non-terminal releases.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Oracle will tell you that for support, you should be upgrading to 19c.&amp;nbsp; Premier Oracle Support ended on December 1&lt;SUP&gt;st&lt;/SUP&gt;, 2020, and as we discussed, not many are going to choose or stay on non-terminal releases, so 19c is it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With that said, we must offer guidance on recommended practices for Oracle versioning and patching in Azure.&amp;nbsp; Although I will list any bugs and document IDs that back up the recommendations I’m making, be aware that many of these will be behind Oracle Support’s paywall, so you’ll only be able to access them with an Oracle Support CSI.&amp;nbsp; Let's talk about the things not to do first-&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="5"&gt;Don’t Upgrade DURING Your Migration&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;I know it sounds like an awesome idea to upgrade to the latest database version while you are migrating to the cloud, but please don’t do these two things at the same time: migrating to the cloud and upgrading the database/app.&amp;nbsp; It’s a common scenario: I’m brought in after the Azure specialists are left scratching their heads or scrambling to explain what has changed, and I have to tell them to stand down, because it’s the DATABASE THAT’S CHANGED.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="5"&gt;Do Patch to the LATEST Patchset for Oracle&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;I am part of the crowd that often did the latest patchset -1 approach.&amp;nbsp; We would always be one patchset behind and let others figure out how many bugs might be introduced by the patch that had sneaked through testing.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Not anymore… I have a few customers on 19.14, which should be safe considering the previous practice I mentioned, but the sheer number of bugs, including serious ones, that they experienced has changed my thinking; I now recommend going to the latest patchset.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It’s easy to think, “Oh, it's just a small bug,” and I agree that if it has a small impact and an easy workaround, that’s one thing.&amp;nbsp; But the bugs I’m referring to are quite impactful, and here’s how:&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;High CPU Usage&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;In the 19.14 release, there were 11 bugs that caused high CPU usage for Oracle.&lt;/LI&gt;
&lt;LI&gt;High CPU usage to the point of doubling the core count for the VM the database ran on in Azure.&lt;/LI&gt;
&lt;LI&gt;Doubling the need for Oracle licenses for the database, even though it was a bug that was causing all the additional CPU usage.&lt;/LI&gt;
&lt;LI&gt;At $47500 list price per processor license, this isn’t something I’d recommend letting go on.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For one customer engagement I was deeply involved in, the workload required 20 vCPU. I sized the VM up to 32 vCPU for peak workloads, and yet they ended up at 97.6% CPU busy on a 64-core machine. The workload hadn’t changed, and the traced CPU usage was out of control!&lt;/P&gt;
&lt;P&gt;I would start here:&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt; After&lt;/STRONG&gt;&lt;STRONG&gt; Upgrade to 19c, One or More of the Following Issues Occur on Non-Linux Platforms: High Paging/Swapping, High CPU, Poor Performance, ORA-27nnn Errors, ORA-00379 Errors, ORA-04036 Errors (Doc ID 2762216.1)&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Bug examples for high CPU usage in 19.14:&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;NB&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Prob&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Bug&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Fixed&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;II&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=31050103.8" target="_blank" rel="noopener"&gt;31050103&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;19.15, 23.1.0.0.0&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;fbda: slow sql performance when running in pluggable database&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;-&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=32869560.8" target="_blank" rel="noopener"&gt;32869560&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;19.15, 21.6&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;HIGH CPU ON KXSGETRUNTIMELOCK AND SSKGSLCAS&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;I&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=29446010.8" target="_blank" rel="noopener"&gt;29446010&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;20.1&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Query Using LIKE Predicate Spins Using NLS_SORT='japanese_m' NLS_COMP='linguistic'&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;-&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=32431067.8" target="_blank" rel="noopener"&gt;32431067&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;23.1.0.0.0&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Data Pump Export is Slow When Exporting Scheduler Jobs Due to Query Against SYS.KU$_PROCOBJ_VIEW&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;-&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=33380871.8" target="_blank" rel="noopener"&gt;33380871&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;19.15, 21.6&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;High CPU on KSLWT_UPDATE_STATS_ELEM&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;-&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=33921441.8" target="_blank" rel="noopener"&gt;33921441&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;19.15&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Slow performance in AQ dequeue processing&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;*&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;II&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=32075777.8" target="_blank" rel="noopener"&gt;32075777&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Performance degradation by Wnnn processes after applying july 2020 DBRU&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;III&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=32164034.8" target="_blank" rel="noopener"&gt;32164034&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Database Hang Updating USER$ When LSLT (LAST SUCCESSFUL LOGIN TIME) Is Enabled&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;III&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=30664385.8" target="_blank" rel="noopener"&gt;30664385&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;High count of repetitive executions for sql_id 35c8afbgfm40c during incremental statistics gathering&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;II&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=29559415.8" target="_blank" rel="noopener"&gt;29559415&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;DMLs on FDA enabled tables are slow, or potential deadlocks on recursive DML on SYS_FBA_* tables&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;EM&gt;II&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;A href="https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&amp;amp;sourceId=1965757.1&amp;amp;id=29448426.8" target="_blank" rel="noopener"&gt;29448426&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;20.1&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Killing Sessions in PDB Eventually Results in Poor Buffer Cache Performance Due To Miscalculating Free Buffer Count&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Time Slip&lt;/H2&gt;
&lt;P&gt;This issue will often display an ORA-00800 error, and you will need to check the extended trace file for details. The error arguments will include VKTM, Oracle’s virtual keeper of time background process.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;&lt;STRONG&gt;…/trace/xxxxx_vktm_xxxx.trc&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;PRE&gt;ORA-00800: soft external error, arguments: [Set Priority Failed], [VKTM], [Check traces and OS configuration], [Check Oracle document and MOS notes]&lt;/PRE&gt;
&lt;P&gt;The trace file will include additional information about the error, including:&lt;/P&gt;
&lt;PRE&gt;Kstmmainvktm: failed in setting elevated priority&lt;/PRE&gt;
&lt;PRE&gt;Verify: SETUID is set on ORADISM and restart the instance highres_enabled&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This refers to a known bug, and there are two documents around the time drift and how to address it:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#0000FF"&gt;ORA-00800: soft external error, arguments: [Set Priority Failed], [VKTM] (Doc ID 2718971.1)&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;I’d also refer to this doc, even if you aren’t running AIX:&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;Bug 28831618 : FAILED TO ELEVATE VKTM'S PRIORITY IN AIX WITH EVENT 10795 SET&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
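&lt;P&gt;The usual fix from the MOS note above is to make sure the oradism binary is owned by root with the setuid bit set, so VKTM can elevate its priority. A sketch of the check and fix (run as root; the exact path depends on your Oracle home):&lt;/P&gt;
&lt;PRE&gt;ls -l $ORACLE_HOME/bin/oradism
# expect the setuid bit in the listing: -rwsr-x--- ... root ... oradism
chown root $ORACLE_HOME/bin/oradism
chmod 4750 $ORACLE_HOME/bin/oradism
# then restart the instance&lt;/PRE&gt;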
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Network Connection Timeouts&lt;/H2&gt;
&lt;P&gt;The incident will be raised in the alert log, and you will need to view the corresponding trace file for the incident details.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;ORA-03137: malformed TTC packet from client rejected.&lt;/PRE&gt;
&lt;P&gt;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;ORA-03137: Malformed TTC Packet From Client Rejected: [12569] (Doc ID 2498924.1)&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Potential tracing to gather more data:&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;Getting ORA-12569: TNS:Packet Checksum Failure While Trying To Connect Through Client (Doc ID 257793.1)&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Block Corruption&lt;/H2&gt;
&lt;P&gt;Thanks to Jeff Steiner from the NetApp team who advised on this one.&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;Bug 32931941 - Fractured block Corruption Found while Using DirectNFS (Doc ID 32931941.8)&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;This can result in hundreds to thousands of corrupted blocks in an Oracle database.&lt;/LI&gt;
&lt;LI&gt;All customers using dNFS with 19c should run 19.14 or higher to avoid being vulnerable to this bug.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Also follow the &lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;Recommended Patches for Direct NFS Client (Doc ID 1495104.1)&lt;/STRONG&gt;&lt;/FONT&gt;.&lt;/P&gt;
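&lt;P&gt;If you suspect you may already be affected, you can check for corrupt blocks before patching. A sketch, run from RMAN and then from a SQL prompt:&lt;/P&gt;
&lt;PRE&gt;RMAN&amp;gt; VALIDATE CHECK LOGICAL DATABASE;
SQL&amp;gt; SELECT * FROM v$database_block_corruption;&lt;/PRE&gt;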
&lt;H2&gt;Summary&lt;/H2&gt;
&lt;P&gt;If you’re considering an upgrade to Oracle 19c, please review the following Oracle Doc:&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;Things to Consider to Avoid Database Performance Problems on 19c (Doc ID 2773012.1)&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;It really is worth the read and can save you a lot of time and headache.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 05 Jul 2022 18:00:47 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/recommendations-for-oracle-19c-patches-in-azure/ba-p/3563907</guid>
      <dc:creator>DBAKevlar</dc:creator>
      <dc:date>2022-07-05T18:00:47Z</dc:date>
    </item>
    <item>
      <title>Cross Subscription Prod Refresh on SQL Managed Instance using TDE and Azure Key Vault</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/cross-subscription-prod-refresh-on-sql-managed-instance-using/ba-p/3516266</link>
      <description>&lt;P&gt;Hello Dear Reader!&amp;nbsp; I was working with some friends lately and we needed to set up a process to refresh their Development Environment databases from Production.&amp;nbsp; Additionally, the databases are encrypted using Transparent Data Encryption, The SQL MI instances are in different regions, and the SQL MI Instances are in different subscriptions.&amp;nbsp; To duplicate the environment, in order to match our friends, we did the following setup.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;We created a virtual machine and a database named TDE, encrypted with a self-signed certificate&lt;/LI&gt;
&lt;LI&gt;We exported the Certificate &amp;amp; Private Key and used pvk2pfx to create a pfx file and uploaded this to our SQL MI Instances, see &lt;A href="https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/tde-certificate-migrate?view=azuresql&amp;amp;tabs=azure-powershell" target="_blank" rel="noopener"&gt;the Microsoft Docs article Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Backup and Restore the TDE database from our VM to the SQL MI Instances&lt;/LI&gt;
&lt;LI&gt;Create an Azure Key Vault in our development subscription and our production subscription&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This gives us the following environment.&amp;nbsp; We have our SQL Managed Instances in two different regions with our TDE databases restored using the self-signed certificate.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Remember our goal is to be able to perform cross subscription restores and still keep our databases encrypted. To do that we will perform the next 10 steps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Create a TDE Key in the Development Azure Key Vault&lt;/LI&gt;
&lt;LI&gt;Backup the Key&lt;/LI&gt;
&lt;LI&gt;Create a new “2-b-moved” Azure Key Vault&lt;/LI&gt;
&lt;LI&gt;Restore the Key to the new “2-b-moved” Azure Key Vault&lt;/LI&gt;
&lt;LI&gt;Migrate the new “2-b-moved” Azure Key Vault from the development subscription to the production subscription&lt;/LI&gt;
&lt;LI&gt;Backup the key from the new “2-b-moved” Azure Key Vault now in the production subscription&lt;/LI&gt;
&lt;LI&gt;Restore the Key to the production Azure Key Vault&lt;/LI&gt;
&lt;LI&gt;Associate both SQL Managed Instances, Production &amp;amp; Development, with the Keys in their respective Azure Key Vaults&lt;/LI&gt;
&lt;LI&gt;Create a T-SQL Credential on each Azure SQL Managed Instances to an Azure Blob Storage account&lt;/LI&gt;
&lt;LI&gt;Backup the TDE database from the production SQL Managed Instance and restore it to the development SQL Managed Instance&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is a flow chart of these activities.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Azure Key Vault Steps 1 – 7&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;One of the things we have to do is make sure that Production &amp;amp; Development can access the same Keys for TDE. This is because TDE encrypts the database and places a thumbprint of the Key in the header of the database file. Without access to the key, the database will not come online. If we are looking to do a “prod refresh”, refreshing the development environment with a current copy of the production database, then we will need the keys to be accessible to both instances.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;“But Brad”, you say, “Why can’t I just backup the keys from the Development Key Vault and restore them to the Production Key Vault?”.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Alas Dear Reader, there is a dependency on the subscription for backing up and restoring a Key Vault Key.&amp;nbsp; If you attempt to restore a Key from one subscription to another you will get this error.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So we start at step 1. Create the TDE Key&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We start by opening our Azure Portal and opening our Dev Azure Key Vault.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Under Settings click Keys.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click + Generate/Import. We are not going to go in-depth on all the options for creating a Key, but there is one essential piece of advice I will give you. Each Key should be unique, and if you plan on switching out Keys every year BUT you have a regulatory requirement to keep X years of backups on hand, then you need to ensure the life of the Key is at least X years.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example: Company A wants to switch Keys every year. Company A has a requirement to restore backups that are up to 10 years old if needed. Therefore every Key that is created should have a minimum of 10 years before it expires.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Because if, in 5 years, Company A must restore a backup, they will need the still-active, non-expired Key from 5 years ago associated with the Azure SQL Managed Instance so they can restore that backup.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;*Remember what I said above: the thumbprint of the Key is in the header of the database file and the backup, and you cannot restore or bring online a database if you do not have the Key with that thumbprint associated with the SQL Managed Instance. ….also Company A needs &lt;A href="https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/long-term-backup-retention-configure?view=azuresql&amp;amp;tabs=portal" target="_blank" rel="noopener"&gt;Long Term Backup Retention&lt;/A&gt;.&lt;/EM&gt;&lt;/P&gt;
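&lt;P&gt;As a sketch of that expiry policy in the Azure CLI (the vault name here is hypothetical; the key name is the one used later in this walkthrough), you can set the expiration date at creation time:&lt;/P&gt;
&lt;PRE&gt;az keyvault key create --vault-name bball-keyvault-dev --name bball-demo-tde-key --kty RSA --size 2048 --expires 2032-06-16T00:00:00Z&lt;/PRE&gt;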
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By default, Keys do not have expiration dates or activation dates. You need to set them.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Enter a name for the Key, go with the default of RSA, RSA Key 2048, no active or expiration date for this demo, Enabled Yes, no rotation policy set, click Create.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now click on the Key we have created.&amp;nbsp; Now onto Step 2.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Backup the Key&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click on the Download Backup button.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A copy of the key will automatically be downloaded.&amp;nbsp; Now onto Step 3.&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Create New Key Vault&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are multiple ways to create a new Key vault, we will do this by clicking on the Azure Portal Menu and selecting + Create a resource.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Type Key vaults in the text box and click enter.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click Key Vault.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click Create.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the Resource Group name we’ll make a new one named bball-keyvault-test-move.&amp;nbsp; The name will be bball-Keyvault-2-b-moved, the region South Central US, and click Next.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are going to add an Access Policy for the managed identity of our Azure SQL Managed instance, bball-tde-test.&amp;nbsp; To do this we will click on +Add Access Policy.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Select Key, Secret, &amp;amp; Certificate Management in the drop down menu.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click None selected and type in the name of the service, select the managed identity, and click add, then Add again.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There may be some additional Networking or Tag addition that you need to make.&amp;nbsp; For the purposes of this walk through we can move straight to review + create.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Then click Create.&amp;nbsp; After the Key vault is created click on Go to resource.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click on Keys.&amp;nbsp; &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are ready for step 4.&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Restore Key Backup&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click Restore Backup.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Browse to where you saved your Key backup from step 2 and select the Key backup. Click Open.&amp;nbsp; Now you should see the Key restored.&amp;nbsp; If you click on the key and look at the CURRENT VERSION they should match.&lt;/P&gt;
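&lt;P&gt;Steps 2 and 4 can also be scripted with the Azure CLI instead of the portal. A sketch (the dev vault name is hypothetical; the 2-b-moved vault is the one created in step 3):&lt;/P&gt;
&lt;PRE&gt;az keyvault key backup --vault-name bball-keyvault-dev --name bball-demo-tde-key --file bball-demo-tde-key.keybackup
az keyvault key restore --vault-name bball-Keyvault-2-b-moved --file bball-demo-tde-key.keybackup&lt;/PRE&gt;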
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Name should match the backed up Key name, and the status should be enabled.&amp;nbsp; Now click on Overview because it is time for step 5.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;Move Key Vault bball-Keyvault-2-b-moved to Prod Subscription&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click the -&amp;gt; Move drop down and select Move to another subscription.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Select the subscription and the Resource group to which you would like to move the Key Vault, then click Next.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;At this point you will see a screen that says, “Checking whether these resources can be moved. This might take a few minutes.” And it will take a few minutes while it validates that it can move the Key vault.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once this finishes you will get a green check and the Next button will become blue.&amp;nbsp; Click Next.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Check the check box next to the statement, “I understand that tools and scripts associated with this moved resources will not work until I update them to use new resource IDs”.&amp;nbsp;&amp;nbsp; Click Move.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This will take you back to the overview page.&amp;nbsp; Wait until the notifications alerts you that the Key vault has successfully moved subscriptions.&amp;nbsp; This takes us to Step 6.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="6"&gt;
&lt;LI&gt;Backup Key&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that we have moved our Key vault to our production subscription, we will back up the Key so we can restore it to our production Key vault. Go to your production subscription and to the Key vault we’ve just moved. Just like step 2, go to Keys and click on bball-demo-tde-key.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click Download Backup.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This takes us to step 7.&lt;/P&gt;
&lt;OL start="7"&gt;
&lt;LI&gt;Restore Key Backup&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Like step 4 we are restoring the Key Backup, but this time we are doing it to our Production Key vault bball-keyvault-useast.&amp;nbsp; Navigate to our Production Key vault. Click on Keys and Click Restore Backup.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Browse to the backup of the key and click Open.&amp;nbsp; You should see your key restored inside of our Production Key vault.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Azure SQL Managed Instance Steps 8-10&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Now we are ready for Step 8, where we will associate both keys that we have created with their respective Azure SQL Managed Instances in Dev and Prod.&amp;nbsp; Remember the goal is to have the same key in the backup, so we can perform our prod refresh into our development environment.&lt;/P&gt;
&lt;OL start="8"&gt;
&lt;LI&gt;Associate Key with SQL MI Prod &amp;amp; SQL MI Dev&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We will perform these actions in both our Production and Development SQL Managed Instances at the same time.&amp;nbsp; Open your portal blades to the SQL Managed Instances.&amp;nbsp; On the SQL Managed Instance blade click on Transparent Data Encryption under Security.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We need to change the setting from Service-managed key to Customer-managed key.&amp;nbsp; A quick note on these settings.&amp;nbsp; If you use Service-managed keys and your database is encrypted, you cannot create a Copy Only Backup.&amp;nbsp; It will fail.&amp;nbsp; So we need to change this so we can do our prod refresh of our dev environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click on the link that says Change key.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Select Key vault, the Key vault for the environment, the Key we created bball-demo-tde-key, and click Select.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Back on the Transparent Data Encryption screen click Save for both instances.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the operation completes we are ready to move on to Step 9.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="9"&gt;
&lt;LI&gt;Create Credential with SAS token to Azure blob storage&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;In each SQL Managed Instance we will execute the following script.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE CREDENTIAL [https://bballstorage.blob.core.windows.net/sqlmitcpbackups]

 WITH IDENTITY='SHARED ACCESS SIGNATURE'

,SECRET='SAS TOKEN without the ?';

&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In order to get the SAS Token, go to the storage account that you would like to use and, under Security + networking, select Shared access signature.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I’m not going into setting the expiration date of the SAS token. You will want to discuss this with your team and follow any policies your organization has in order to determine what the expiration date should be. Mine will literally be good for a few minutes, because that’s all I need for this blog.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Copy the SAS token.&amp;nbsp; Remember to remove the question mark after pasting it into the script from above.&amp;nbsp; Now run this script on both SQL Managed Instances in order to create a connection to the storage account.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This takes us to our final step, Step 10!&lt;/P&gt;
&lt;OL start="10"&gt;
&lt;LI&gt;Backup and Restore TDE.bak to bball-tde-test&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is the moment we’ve been waiting for, time to complete our prod refresh of dev.&amp;nbsp; We will start by running a backup script on bball-sqlmi-useast to our Azure blob storage account.&lt;/P&gt;
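&lt;P&gt;As a sketch of what those two scripts typically look like (the container URL matches the credential created in step 9; remember that user-initiated backups on SQL Managed Instance must be COPY_ONLY):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- On production (bball-sqlmi-useast)
BACKUP DATABASE [TDE]
TO URL = 'https://bballstorage.blob.core.windows.net/sqlmitcpbackups/TDE.bak'
WITH COPY_ONLY;

-- On development (bball-tde-test)
RESTORE DATABASE [TDE]
FROM URL = 'https://bballstorage.blob.core.windows.net/sqlmitcpbackups/TDE.bak';&lt;/LI-CODE&gt;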
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using &lt;A href="https://azure.microsoft.com/en-us/free/storage/search/?OCID=AID2200277_SEM_c1afdd79a0a014a16c8ebfd29e190c04:G:s&amp;amp;ef_id=c1afdd79a0a014a16c8ebfd29e190c04:G:s&amp;amp;msclkid=c1afdd79a0a014a16c8ebfd29e190c04" target="_blank" rel="noopener"&gt;Azure Storage Explorer&lt;/A&gt; I can validate the backup is in the storage account.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now we perform the Restore to dev.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;And we have completed our prod refresh!&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Clean Up&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Earlier we spoke about the need to rotate Keys. At this point we can delete the Azure Key Vault 'bball-keyvault-2-b-moved'. In a year, a quarter, or whatever your timeframe is to rotate keys, you can repeat all the steps to switch over to new keys and make sure you can still perform prod refreshes to your dev environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Alright Dear Reader, I hope this is helpful.&amp;nbsp; As always Thank You for stopping by!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Brad&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 16 Jun 2022 15:05:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/cross-subscription-prod-refresh-on-sql-managed-instance-using/ba-p/3516266</guid>
      <dc:creator>Bradley_Ball</dc:creator>
      <dc:date>2022-06-16T15:05:21Z</dc:date>
    </item>
    <item>
      <title>The Importance of Proximity Placement Groups for Oracle multi-tier systems in Azure</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/the-importance-of-proximity-placement-groups-for-oracle-multi/ba-p/3513587</link>
      <description>&lt;P&gt;Oracle is a unique, high IO, workload beast, but it's important to recognize it's more often an ecosystem made up of an Oracle database, applications and often other databases that all must connect, feed and push data to users and each other.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;When migrating to the cloud, the architecture discussions about which apps will be placed on which VMs, in which region, availability zones, and even availability sets occur, but many forget that Azure is an enterprise cloud and, as such, is massive in scale, let alone that we have over 200 datacenters. With this scale comes a unique learning curve for technical specialists as they pivot from their on-premises datacenter view. We talk about &lt;A href="https://docs.microsoft.com/en-us/azure/availability-zones/az-overview" target="_blank" rel="noopener"&gt;regions and availability zones &lt;/A&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;AZ), which may not provide the transparency we really need to understand the complexity that goes into HA, redundancy, scaling, and distance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Same Bat Problem, Same Bat Channel&lt;/H2&gt;
&lt;P&gt;I was brought into two scenarios this week with Severity 1 tickets, and I was asked repeatedly how I was able to quickly identify, from the diagnostic data provided by Oracle and Azure, an issue that appeared hidden to so many.&amp;nbsp; I have to admit that the wait events Oracle captures in the Automatic Workload Repository (AWR), or its predecessor Statspack, focus on what is the responsibility of the database, and as Oracle considers the network outside of the database’s responsibility, the waits may be missed if you don’t know what you’re looking for.&amp;nbsp; Oracle lists some valuable info around the waits, but sets a clear boundary that the network is beyond the database.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For both customers in question, as irony would have it, the complaints were about IO performance.&amp;nbsp; The first on the scene tried to scale the VM and storage and, when nothing changed, became frantic and located me through the internal Microsoft network.&amp;nbsp; I quickly asked for both Oracle diagnostic data and Azure internal diagnostic telemetry from Microsoft Support.&amp;nbsp; These tools provide an incredible amount of valuable diagnostic data, marry well together, and paint a data-driven picture of what is really happening that we can then address at a time-consumed level vs. just throwing products or resources at the problem and hoping it &lt;EM&gt;might&lt;/EM&gt; offer a resolution.&lt;/P&gt;
&lt;P&gt;For the first customer, there was no AWR data, nor data from Oracle’s free alternative, Statspack, to show us how the system performed before the changes were made.&amp;nbsp; Yes, I wasn't going to have an easy time of this one.&amp;nbsp; I quickly guided them to install Oracle Statspack on the database and then proceeded to identify where time was being consumed and optimize, as understanding what happened before this was going to be difficult at best.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As the database is always guilty until proven innocent, Oracle was quick to point to latency around the redo logs, which resided on the same storage as the datafiles.&amp;nbsp; After separating these from the data, there was only a 35% increase in performance rather than the expected 100%, which told me that the VM and the database weren't the real issue.&amp;nbsp; I’d just fixed a problem that had been present the whole time and needed to dig deeper into the data.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As discussed earlier, Oracle doesn’t consider the network “its problem”.&amp;nbsp; The diagnostic tools still track high-level information on the network, which is displayed in data around SQL*Net message to/from client:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As this is Statspack, the Total Wait Time (s) column isn't formatted to present such large numbers, so all you get is ############.&amp;nbsp; This is a large red flag, even if SQL*Net isn’t shown as a top 5 wait event.&amp;nbsp; SQL*Net was quite far down the list, but notice the WAITS and WAITS/TXN values.&amp;nbsp; They demonstrated there were waits happening between the database VM and the VMs interacting with it.&amp;nbsp; Upon investigation, I could see applications, SQL Server and an Oracle app server all among the top connections to this database.&amp;nbsp;&lt;/P&gt;
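&lt;P&gt;Those overflowed columns can even be flagged programmatically when scanning reports.&amp;nbsp; A minimal sketch only; the sample lines and the simplified one-token event layout are hypothetical, not a real Statspack parser:&lt;/P&gt;

```python
# Toy scan of Statspack-style wait-event lines for overflowed numeric
# columns: a run of "#" means the real value exceeded the column width.
# The sample lines below are hypothetical, not from a real report.
def overflowed_events(report_lines):
    flagged = []
    for line in report_lines:
        if "####" in line:
            # in this simplified layout the first token is the event name
            flagged.append(line.split()[0])
    return flagged

sample = [
    "SQL*Net-message-from-client  93,868,885  ############",
    "log-file-sync                 2,418,416       126,918",
]
print(overflowed_events(sample))  # ['SQL*Net-message-from-client']
```

&lt;P&gt;Any event that trips this check deserves a closer look, even when it isn’t in the top 5 wait events.&lt;/P&gt;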
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I was also able to view the total waits in MB and the amount of time spent on those waits, which quickly showed they were the largest of all:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Our Last Bottleneck&lt;/H2&gt;
&lt;P&gt;I think we can easily forget that the network is our last bottleneck, especially in the cloud and for multi-tier systems, where all targets (VMs, servers, containers, etc.) should be treated as one unit.&amp;nbsp; What needed to be understood was that when maintenance or changes requiring a VM cycle occur, a restart runs the risk of one or more VMs redeploying in a different location in a data center or region, depending on availability and configuration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As I didn’t have diagnostic data from before the performance change, I could only go off what Oracle Statspack was telling me and then confirm with Azure diagnostic data that the VMs involved in this “ecosystem” weren’t connected as part of a &lt;A href="https://docs.microsoft.com/en-us/azure/virtual-machines/co-location" target="_blank" rel="noopener"&gt;Proximity Placement Group&lt;/A&gt;.&amp;nbsp; To understand where PPGs lie in the hierarchy of Azure, think of it this way:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Notice that the PPG “groups” the application VMs and database VMs inside the AZ, letting Azure know that these resources are connected, even if they are in different resource groups or using different virtual networks.&amp;nbsp; If you’re looking for resiliency and using Oracle, consider Data Guard and replicate to a standby in a second AZ.&amp;nbsp; If using SQL Server in IaaS, you’d do the same with an Always On availability group, replicating the application tier to the second AZ with Azure Site Recovery (ASR) or similar.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the first customer, as I’d also addressed the redo log IO latency issue, performance improved 200% after the PPG was added for the involved VMs, which blew the customer away.&amp;nbsp; For the second customer, I expect similar performance increases based off the AWR data, but will have to wait for them to send me a new AWR report once the changes, still waiting for a deployment window, are in place.&amp;nbsp; If you look at the high-level AWR diagnostic data from the second customer, you’ll notice a trend in waits similar to the first customer:&lt;/P&gt;
&lt;H2&gt;Foreground Wait Events&lt;/H2&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Event&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Waits&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;%Time -outs&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Total Wait Time (s)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Avg wait&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="58"&gt;
&lt;P&gt;&lt;STRONG&gt;Waits /txn&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="61"&gt;
&lt;P&gt;&lt;STRONG&gt;% DB time&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;log file sync&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;2,418,416&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;126,918&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;52.48ms&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="58"&gt;
&lt;P&gt;1.05&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="61"&gt;
&lt;P&gt;55.33&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;enq: TX - contention&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;130,455&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;35,409&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;271.43ms&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="58"&gt;
&lt;P&gt;0.06&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="61"&gt;
&lt;P&gt;15.44&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;SQL*Net message from client&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;93,868,885&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;4,846,357&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;51.63ms&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="58"&gt;
&lt;P&gt;40.72&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="61"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;SQL*Net message to client&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;93,722,245&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;125&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;1.34us&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="58"&gt;
&lt;P&gt;40.66&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="61"&gt;
&lt;P&gt;0.05&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;Although still not considered Oracle’s problem, the AWR at least identifies that the SQL*Net waits should be looked into.&lt;/P&gt;
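&lt;P&gt;The AWR table above is internally consistent, which you can verify by recomputing the Avg wait column from the raw Waits and Total Wait Time (s) columns; a quick sketch:&lt;/P&gt;

```python
# Recompute the "Avg wait" column of the Foreground Wait Events table above:
# avg wait (ms) = total wait time (s) / waits * 1000.
events = {
    "log file sync":               (2_418_416,   126_918),
    "enq: TX - contention":        (130_455,      35_409),
    "SQL*Net message from client": (93_868_885, 4_846_357),
}
avg_ms = {name: total_s / waits * 1000
          for name, (waits, total_s) in events.items()}
for name, ms in avg_ms.items():
    print(f"{name}: {ms:.2f} ms average wait")
```

&lt;P&gt;At roughly 4.8 million seconds, “SQL*Net message from client” dwarfs every other total wait time in the table, which mirrors the first customer’s pattern.&lt;/P&gt;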
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the PPG in place, both customers can comfortably deploy changes knowing the VMs involved will be closely located in a single data center to decrease network latency and new VMs that need to be connected can be added to the PPG afterwards.&amp;nbsp;&lt;/P&gt;
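&lt;P&gt;To see why co-location matters at this scale, here is a back-of-the-envelope estimate using the round-trip count from the table above.&amp;nbsp; The 0.5 ms co-located latency is purely an assumed figure for illustration, not a measured or guaranteed PPG number:&lt;/P&gt;

```python
# Back-of-the-envelope: cumulative time spent in SQL*Net round trips at the
# observed average wait vs. an assumed sub-millisecond latency once the VMs
# share a proximity placement group (0.5 ms is a hypothetical figure).
round_trips = 93_868_885        # "SQL*Net message from client" waits above
observed_ms = 51.63             # average wait from the AWR table
assumed_ppg_ms = 0.5            # assumption for illustration only

hours_before = round_trips * observed_ms / 1000 / 3600
hours_after = round_trips * assumed_ppg_ms / 1000 / 3600
print(f"cumulative wait: {hours_before:.0f} h before, {hours_after:.0f} h after")
```

&lt;P&gt;Even a modest per-call improvement compounds into an enormous difference when a chatty multi-tier application makes tens of millions of round trips.&lt;/P&gt;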
&lt;P&gt;Incorrect compute SKU family, storage type, redo log latency and connectivity between database and app server are the most frequent issues we see for customers running Oracle on Azure IaaS.&amp;nbsp; I often discuss the first two in posts, but we’re now going to start to discuss the next tier of performance and how easy it is to identify and resolve with the diagnostic data provided by Oracle and Azure.&lt;/P&gt;
&lt;H2&gt;Performance Improvements&lt;/H2&gt;
&lt;P&gt;Once a PPG is created for the VMs involved in an Oracle “ecosystem”, the wait times clearly decrease and the performance issue, although outside of Oracle’s control, is quickly resolved.&amp;nbsp; The drop in wait times is demonstrated in AWR or Statspack reports (or in queries directly against the Oracle v$ and historical wait event views).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the first customer, the %Total Call Time dropped from 70% to less than 6%, and you’ll notice that the decrease in waits on the log file wait events then pushes up the overall percentage for what is now viewed as the new bottleneck: db file sequential and scattered reads:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As we corrected the SQL*Net waits, those have also decreased drastically, allowing for much better performance, not just experienced by the customer, but seen in our diagnostic data, too.&lt;/P&gt;
&lt;H2&gt;Summary&lt;/H2&gt;
&lt;P&gt;Recommended practice for any Oracle multi-tier system, such as E-Business Suite (EBS), PeopleSoft, Hyperion, Essbase, etc., or Oracle databases/applications deployed across multiple VMs in Azure, is to use a Proximity Placement Group to deter network latency across multiple datacenters in a region or Availability Zone.&amp;nbsp; High Availability and DR should be architected based on Recovery Point Objective (RPO), Recovery Time Objective (RTO) and any SLAs the business is held to if there is an outage or impact to user access.&lt;/P&gt;
      <pubDate>Thu, 16 Jun 2022 00:59:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/the-importance-of-proximity-placement-groups-for-oracle-multi/ba-p/3513587</guid>
      <dc:creator>DBAKevlar</dc:creator>
      <dc:date>2022-06-16T00:59:46Z</dc:date>
    </item>
    <item>
      <title>Bring Vision to Life with Three Horizons, Data Mesh, Data Lakehouse, and Azure Cloud Scale Analytics</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/bring-vision-to-life-with-three-horizons-data-mesh-data/ba-p/3390414</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Introduction:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Many of you may have been focused on analytics for years and have shown audacity, acumen, and determination in improving your decision-making and the decisions of those around you.&amp;nbsp; You have personified Teddy Roosevelt’s "the person in the arena" as opposed to sitting on the sidelines and watching.&amp;nbsp; Whether you have been in the arena, are new to the space, or can’t wait to get on the field, I hope this will inspire thought, discussion, debate, and action.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There have always been organizational, architectural, and technical blockers to successfully building, deploying, maintaining, and using analytical systems.&amp;nbsp; I have experienced many of these firsthand as an implementer and user since I wrote my first SQL statement back in 1993.&amp;nbsp; So much has changed in the last 29 years, but so much has also stayed the same.&amp;nbsp; We are still getting, preparing, modeling, analyzing, and visualizing data regardless of role (Data Analyst, Data Scientist, Data Engineer, Data Consumer) and tools (BI, AI/ML, Pipelines/Notebooks, Report/Dashboards) used.&amp;nbsp; It’s now just with more diverse personnel using more affordable, intuitive, and scalable tools than in past decades.&amp;nbsp; I am going to consider analytical system architecture from a strategic, organizational, architectural, and technical point of view.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Modeling an organization’s strategic vision for the future requires changing the organization in incremental, evolutionary, and revolutionary ways.&amp;nbsp; Analytics is an important catalyst for change, innovation, and transformation.&amp;nbsp; People using analytical tools can help the organization make better data-driven decisions that support the transition from where an organization sits today to somewhere on and beyond the horizon.&amp;nbsp; Getting started involves tactical assessment of the current state (As-Is) architecture, and discussion and prioritization to define the future state (To-Be) architecture.&amp;nbsp; These first steps can lead to prioritizing which systems should initially receive operational enhancements, and what data, tools, and processes should be used to drive innovation.&amp;nbsp; Essentially, transforming complex systems is part of strategy, and you need a strategy to cast a vision that hooks others into moving towards the destination.&amp;nbsp; I think that the Data Mesh and Data Lakehouse approaches could help tactically drive change and realize that vision.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The methodology I propose to explore and start down the path towards realization of this analytic strategic vision is through the composition of these concepts:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Three Horizon Model/Framework – strategic&lt;/LI&gt;
&lt;LI&gt;Data Mesh Sociotechnical Paradigm – organizational&lt;/LI&gt;
&lt;LI&gt;Data Lakehouse Architecture - architectural&lt;/LI&gt;
&lt;LI&gt;Azure Cloud Scale Analytics Platform – technological&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Exploration:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Like exploratory data analysis (EDA) performed on a dataset, I decided to spend some time exploring these concepts during a recent 7-day camping trip to &lt;A href="https://www.clallam.net/Parks/SaltCreek.html" target="_blank" rel="noopener"&gt;Salt Creek Recreation Area&lt;/A&gt; near Port Angeles, WA.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;I recommend staying in this beautiful setting with small islands, tidal pools, trees, bald eagles, and snowcapped mountains.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By the time I reached the midpoint of camping I felt great about Data Mesh and Data Lakehouse.&amp;nbsp; I had been coming to the conclusion for a while that there are better-together synergies, and the more I read &lt;A href="https://learning.oreilly.com/library/view/data-mesh/9781492092384/" target="_blank" rel="noopener"&gt;Data Mesh&lt;/A&gt; by Zhamak Dehghani (Data Mesh founder – Thoughtworks) the more it appeared that Data Mesh and Data Lakehouse need each other.&amp;nbsp; Watching Matei Zaharia’s (Chief Technologist &amp;amp; Cofounder – Databricks) YouTube video &lt;A href="https://www.youtube.com/watch?v=a00rdlNtW98" target="_blank" rel="noopener"&gt;Data Mesh and Lakehouse&lt;/A&gt; solidified this point further.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Three Horizons Model&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;I have been reading about the McKinsey Three Horizons model introduced in the book &lt;A href="https://www.getabstract.com/en/summary/the-alchemy-of-growth/10083" target="_blank" rel="noopener"&gt;The Alchemy of Growth - Practical Insights for Building the Enduring Enterprise&lt;/A&gt; by authors Mehrdad Baghai, Stephen Coley and David White.&amp;nbsp; Three Horizons, like analytics, has been seasoned over multiple decades.&amp;nbsp; This strategy-focused methodology appears promising for aligning stakeholders from the business (management and leadership outside of Information Technology), helping them envision, embrace, and invest in innovation and transformation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Looking at the diagram below from &lt;A href="https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/enduring-ideas-the-three-horizons-of-growth" target="_blank" rel="noopener"&gt;McKinsey’s Enduring Ideas: The three horizons of growth&lt;/A&gt; you can get an idea of how an organization can drive business value creation (growth and profit) over time by reaching: Horizon 1 - improving performance of existing core businesses, Horizon 2 - building new emerging businesses, and Horizon 3 - creating viable new options through research projects or pilot programs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is a similar diagram in the context of operational and analytical systems related to cloud maturity and accumulated total returns.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;This diagram is another view of 3 Horizons that illustrates H1 operational enhancements to existing business systems and its importance to setting up H2 innovation and H3 transformation possibilities.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The reason I like the 3 Horizons model is that strategy is important to the leadership in companies and the support of leadership is essential for architecting analytical systems.&amp;nbsp; Every successful project or program needs some sustained BANTR – Budget, Authority, Need, Timing, and Resources (not the Ted Lasso fictional dating app). &amp;nbsp;As we move to looking at Data Mesh you can see that some strategic vision is going to be applied because it will require a new approach to how teams are organized, and what the team’s priorities are.&amp;nbsp; A similar shift has already taken place with the distribution and scaling of operational systems that use microservices, domain-based teams (2 pizza teams – small enough that they can be fed with two extra-large pizzas), and distributed architectures enabled by containers and orchestrators like Kubernetes.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Data Mesh&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As defined in the book &lt;A href="https://learning.oreilly.com/library/view/data-mesh/9781492092384/" target="_blank" rel="noopener"&gt;Data Mesh&lt;/A&gt; by Zhamak Dehghani, Data Mesh is a socio-technical paradigm based on four principles:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Principle of Domain Ownership - decentralization and distribution of data responsibility to people who are closest to the data&lt;/LI&gt;
&lt;LI&gt;Principle of Data as a Product - the analytical data provided by the domains is treated as a product, and the consumers of that data should be treated as customers&lt;/LI&gt;
&lt;LI&gt;Principle of the Self-serve Data Platform - self-serve data platform services that empower domains’ cross-functional teams to share data&lt;/LI&gt;
&lt;LI&gt;Principle of Federated Computational Governance - a decision-making model led by the federation of domain data product owners and data platform product owners, with autonomy and domain-local decision-making power, while creating and adhering to a set of global rules&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Domains make data products out of domain data that can be Source-aligned domain data (the facts and reality of the business), Aggregate domain data (long-standing aggregate data composed from multiple source-aligned domains), or Consumer-aligned domain data (the applications that consume the data, or data users such as data scientists or analysts).&lt;/P&gt;
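&lt;P&gt;To make “data as a product” concrete, here is a minimal sketch of a domain-owned product catalog.&amp;nbsp; The domain names, fields, and registry shape are hypothetical illustrations, not part of any Data Mesh specification or Azure service:&lt;/P&gt;

```python
from dataclasses import dataclass, field

# Toy catalog illustrating Data Mesh's "data as a product" principle:
# each domain team publishes products with discoverability metadata.
# All names below are hypothetical.
@dataclass
class DataProduct:
    domain: str          # owning domain team, e.g. "sales"
    name: str            # product name consumers discover
    kind: str            # "source-aligned", "aggregate", or "consumer-aligned"
    owner: str           # accountable product owner
    tags: list = field(default_factory=list)

catalog = [
    DataProduct("sales", "orders", "source-aligned", "sales-team"),
    DataProduct("finance", "monthly-revenue", "aggregate", "finance-team"),
    DataProduct("marketing", "churn-features", "consumer-aligned", "ds-team"),
]

# A consumer browses the catalog by kind, treating data like a product line.
aggregates = [p.name for p in catalog if p.kind == "aggregate"]
print(aggregates)  # ['monthly-revenue']
```

&lt;P&gt;The point of the sketch is the ownership boundary: each entry names the domain team accountable for it, rather than a central Data Warehouse team.&lt;/P&gt;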
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With Data Mesh you organize teams in a distributed way by Domain (focused on the Business or Activity being supported) vs. centralized teams of Data Warehouse specialists for analytics.&amp;nbsp; In my way of thinking, if we are going to build Data Lakehouses, we need more skilled resources. The construction business scales to meet the demand for building and maintaining structures, so why shouldn’t analytics scale to do the same with data structures? The Data Mesh approach can help with the bottlenecks of centralized organizational structures and centralized technical architectures.&amp;nbsp; Data Mesh could also limit the power struggles where the Data Warehouse team is the only standard, bullying everyone else around, blocking entry into the Data Warehouse, or throttling query access when you want to get data out.&amp;nbsp; If a group wants to share data, let them get some cloud-based storage and compute, and not be blocked by another group.&amp;nbsp; Proprietary storage and table formats, along with limited tools for query access (just SQL), are the legacy of on-premises MPP Data Warehousing appliances.&amp;nbsp; Cloud-based Data Warehouses have changed this dynamic, but the organizational structure and the consolidate-into-one-place technological approach may still be far from distributed.&amp;nbsp; Not because it could not be, but because the team and/or organization wants it that way.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Data Mesh Domains have the same team responsible for the operational (OLTP) system and the analytical (OLAP) system.&amp;nbsp; Synapse Link HTAP (Hybrid Transaction/Analytical Processing) has features for &lt;A href="https://docs.microsoft.com/en-us/azure/cosmos-db/synapse-link" target="_blank" rel="noopener"&gt;Cosmos DB&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/azure/architecture/example-scenario/data/azure-sql-htap" target="_blank" rel="noopener"&gt;Azure SQL Database&lt;/A&gt;. &amp;nbsp;For teams that are responsible for Source-aligned domain data in the Data Mesh world, Synapse Link HTAP could be an approach.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When it comes to self-serve data platforms, I think that tools like Power BI have done a great job of providing self-service for Data consumers and Data Analysts.&amp;nbsp; Databricks and Synapse Notebooks, and Data Factory and Synapse Pipelines or Databricks Delta Live Tables are great tools for Data Engineers.&amp;nbsp; Also Databricks Notebooks, Azure Machine Learning Notebooks, and AutoML have Data Scientists covered.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When it comes to Data Governance, Data Officers must go on both offense to increase access to data, and on defense to protect data.&amp;nbsp; Tools like M365 Information Protection, Power BI Sensitivity Labels, and Azure Purview access policy seem to help with the Governance principle.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;I really see strong synergies between the Three Horizons Model and the Data Mesh Paradigm&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Data Lakehouse Architecture&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;img /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://databricks.com/blog/2020/01/30/what-is-a-data-lakehouse.html" target="_blank" rel="noopener"&gt;What is a Lakehouse? - The Databricks Blog&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A Data Lakehouse is a new, open architecture that combines the best elements of data lakes and data warehouses.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Some of the characteristics of a Lakehouse include the use of:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Low-cost Object Storage (like ADLS) decoupled from compute&lt;/LI&gt;
&lt;LI&gt;Batch and Stream processing&lt;/LI&gt;
&lt;LI&gt;Diverse Workloads
&lt;UL&gt;
&lt;LI&gt;Data science&lt;/LI&gt;
&lt;LI&gt;AI and Machine learning&lt;/LI&gt;
&lt;LI&gt;BI, SQL and analytics&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Diverse and open data formats and diverse data types
&lt;UL&gt;
&lt;LI&gt;File formats: Avro, CSV, Json, ORC, Parquet&lt;/LI&gt;
&lt;LI&gt;Table formats: Delta, Hudi, Iceberg, Hive Acid&lt;/LI&gt;
&lt;LI&gt;Tables, images, video, audio, semi-structured data, and text.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Metadata Layers
&lt;UL&gt;
&lt;LI&gt;ACID Transactions&lt;/LI&gt;
&lt;LI&gt;Versioning and Time travel&lt;/LI&gt;
&lt;LI&gt;Governance&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Diverse Engines, Tools, and Languages
&lt;UL&gt;
&lt;LI&gt;Presto, Dremel, RDBMS External tables&lt;/LI&gt;
&lt;LI&gt;Great SQL performance with Photon engine&lt;/LI&gt;
&lt;LI&gt;PySpark, Scala, Java language support&lt;/LI&gt;
&lt;LI&gt;Jupyter, Tensorflow tool support&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Sharing and governance
&lt;UL&gt;
&lt;LI&gt;Delta Sharing&lt;/LI&gt;
&lt;LI&gt;Unity Catalog&lt;/LI&gt;
&lt;LI&gt;Schema enforcement&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
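&lt;P&gt;The metadata layer is what turns files on object storage into tables with ACID commits and time travel.&amp;nbsp; As a rough sketch of the idea only (a toy in-memory model, not how Delta, Hudi, or Iceberg are actually implemented), each commit produces a new immutable version that readers can still address later:&lt;/P&gt;

```python
# Toy illustration of the metadata-layer idea behind Lakehouse table formats:
# every commit creates a new immutable version, and readers can "time travel"
# to any earlier version. Not a real table format, just the concept.
class VersionedTable:
    def __init__(self):
        self._versions = []              # each commit snapshots the full table

    def commit(self, rows):
        snapshot = list(self._versions[-1]) if self._versions else []
        snapshot.extend(rows)
        self._versions.append(snapshot)
        return len(self._versions) - 1   # version number of this commit

    def read(self, version=None):
        if not self._versions:
            return []
        if version is None:
            version = len(self._versions) - 1   # latest by default
        return self._versions[version]          # time travel to any version

t = VersionedTable()
v0 = t.commit([{"id": 1}])
t.commit([{"id": 2}])
print(len(t.read(v0)), len(t.read()))  # 1 2
```

&lt;P&gt;Real formats like Delta store this log as files next to the data, which is how one copy on low-cost object storage can serve SQL, BI, and ML workloads consistently.&lt;/P&gt;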
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Data Mesh and Data Lakehouse better together&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;IMO Data Mesh using a Data Lakehouse Architecture, built on Azure Data &amp;amp; AI and Data Governance Services, can really help organizations.&amp;nbsp; I see the domain-driven design foundation of Data Mesh and the data-driven decisions enabled by Data Lakehouse as a great one-two punch.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Azure Cloud Scale Analytics&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Microsoft Azure has accelerated analytics at a faster rate than I experienced in the first 20 years of my career.&amp;nbsp; Cloud competition has become a space race that accelerates transformation.&amp;nbsp; When building a Cloud Scale Analytics architecture to define, model, and reach an organization’s strategic horizons, architects must consider everything from how teams are organized (see Data Mesh) to how to simplify and unify operational and analytical data systems (see Data Lakehouse, Synapse Link HTAP).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure is a great Cloud Scale Platform for architecting Analytical Systems.&amp;nbsp; It includes Azure Data Lake Storage (ADLS) object storage, Azure Data Factory batch and Event Hub, Azure Databricks and Azure Synapse Spark pool streaming pipelines for data engineering.&amp;nbsp; Data Science AI and Machine Learning tools in Azure Machine Learning, Azure Databricks, and Azure Synapse. Data Management capabilities in ADLS, Azure Databricks, and Azure Synapse SQL and Spark pools.&amp;nbsp; Data Analyst tools in Power BI and Excel.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following image shows some examples of various cloud services that could be used by teams representing Source-aligned, Aggregate, and Consumer-aligned data domains.&amp;nbsp; These domains could be governed by Azure Purview and Azure Active Directory in support of access policy and discovery and classification of data existing in the organization.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Balancing Current Realities - But just&lt;/STRONG&gt;&lt;STRONG&gt; a minute:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The week after camping I was back to the realities of work and the challenges of modernizing an existing house.&amp;nbsp; It is not a Lakehouse or a Warehouse, but a residential house.&amp;nbsp; We were getting new windows and exterior doors, which needed replacing on the 30-year-old structure.&amp;nbsp; I am thinking pragmatically here because most organizations might be working with legacy systems for operational and analytical data that could be decades old.&amp;nbsp; To balance my fondness for Data Mesh and Data Lakehouse (two new shiny objects) I need to measure them against the analytics concepts that have come before.&amp;nbsp; Data Mesh and Data Lakehouse are still relatively new.&amp;nbsp; Therefore, I am going to balance my hope for the To-Be future of analytical system architectures with the reality of keeping the lights on with the current As-Is analytical system architectures. As a result, I will discuss another concept, Data Fabric, which is also a new emerging trend.&amp;nbsp; Data Mesh is going to play nice with new operational systems based on microservices and distributed event streaming, but how will it do with legacy monolithic on-premises apps?&amp;nbsp; Tactically, what projects are candidates for Data Mesh and Data Lakehouse, versus what should be left alone and integrated through alternative means?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Keep the lights on and leverage what an organization already has – Modernize the House&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Bonus concepts:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;What about Legacy Analytics and keeping the lights on – Use Data Fabric&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;IMO Data Fabric is an umbrella term that covers some emerging technology, as well as some seasoned technologies that have been around for decades before Data Mesh and Data Lakehouse.&amp;nbsp; Initially, some of the data virtualization technologies previously known as Enterprise Information Integration (EII) were marketed as logical analytics systems that don’t move or copy data from sources but get it from sources without using ETL and Data Warehousing - an alternative approach.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;According to Gartner data fabric is a design concept which requires multiple data management technologies to work with the aim of supporting “frictionless access and sharing of data in a distributed network environment”.&amp;nbsp; I have also seen Data Fabric described as modernizing existing investments like databases, data warehouses, data lakes, data catalogs, and data integration platforms.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Data Fabric helps make data the fabric of our lives.&amp;nbsp; For me Data Fabric is at least three things:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;New naming of data management technologies that have been available in some cases for 2 or more decades: BI, ETL, OLAP/In-memory, Data Virtualization, Replication, Change Data Capture (CDC)&lt;/LI&gt;
&lt;LI&gt;Newer emerging data management technologies like Data Sharing, Distributed event streaming, Distributed query engines like Spark, Presto and Dremel; and External table access to files using RDBMS systems&lt;/LI&gt;
&lt;LI&gt;Some newer AI enabled features and catalogs that are showing up as new vendor features&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The first Decision Support Systems (DSS) or BI tools I used had pipelines (Metaphor capsules), semantic layers (Metaphor, Brio, Business Objects), and OLAP databases (Essbase, Analysis Services).&amp;nbsp; These tools (as Power BI and Tableau are today) were sold as solutions that could work with or instead of data warehouses.&amp;nbsp; Business Objects semantic layers (universes) could be created on top of an OLTP database schema, an Operational Data Store (ODS), or star (ROLAP) schemas from Data Marts or Enterprise Data Warehouses (EDW).&amp;nbsp; Sales, marketing, and finance are examples of departments that were not going to wait for the EDW to be built.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the case of Metaphor, it was marketed as an Executive Information System (EIS), but it looked like an ETL tool that could run pipelines on demand.&amp;nbsp; Data preparation/ETL tools could bring data into Lotus or Excel, much the way Power Query does in Excel today.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;CDC and Replication tools were used to create an ODS that served as a data hub for integration and a source for reporting.&amp;nbsp; Early EII tools like Nimble Technology were acquired by reporting vendors like &lt;A href="https://www.crn.com/news/applications-os/18824829/actuate-acquires-nimble-technology.htm" target="_blank" rel="noopener"&gt;Actuate&lt;/A&gt; to become their semantic layers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Emerging examples of Data Fabric include Data Sharing like Databricks Delta Sharing, Snowflake Secure Data Sharing, and Azure Data Share. &amp;nbsp;Distributed query engines like Spark, Presto and Dremel are examples of Data Fabric that can make Data Lakehouse and Data Mesh work.&lt;/P&gt;
&lt;P&gt;External tables on RDBMS systems like Azure Synapse Analytics Serverless or Dedicated pools can be used for Lakehouse and Data Warehouse queries or for loading DW tables, and &lt;A href="https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/data-virtualization-overview" target="_blank" rel="noopener"&gt;Azure SQL Managed Instance&lt;/A&gt; can query open file formats like Parquet.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Examples of Data Fabric/Data Virtualization types (not an exhaustive list, and my opinion):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Except for Data Replication (which maintains the source schema) and Distributed Event Processing (which has a schema registry), most of the items in the table above create a data layer on top of data sources that allows for the integration of data.&amp;nbsp; The reason I have included Data Replication and Distributed Event Processing in the table is that these are practical ways of getting data out of the operational systems into changing copies that data layers can be created on top of.&amp;nbsp; I am not sure that practical copying of data can be avoided when it comes to integration, security, and retrieval of data.&amp;nbsp; There is always going to be a copy, whether it be a replicated transactional database, events sent to multiple places by a distributed event processing system like Kafka, a landing zone for files that are processed and rewritten, or a BI tool with data cached in an in-memory database that is refreshed on a BI service like Power BI.&lt;/P&gt;
&lt;P&gt;Improving access to data by consumer-aligned custom applications, data scientists, data analysts, and data consumers justifies the copies.&amp;nbsp; The data product needs consumers.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;IMO Data Mesh and Data Lakehouse make better To-Be data sources for Data Fabric technologies because much of the logic is built into the outputs vs just pointing to the original sources.&amp;nbsp; Find the right use case, business process, or domain and build your own Data Mesh, Data Lakehouse, and Data Fabric with today’s As-Is tools as the To-Be market evolves.&amp;nbsp; Prioritize business critical operational systems and modernize those operational systems with unification to analytical systems as part of the design.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;An aside about Data Marts vs Data Warehouses&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What is appealing to me about Data Mesh and Data Lakehouse is that they are distributed.&amp;nbsp; I have always been a distributed fan.&amp;nbsp; I was a Ralph Kimball follower and a data mart practitioner who started building data marts in 1996.&amp;nbsp; The tooling has always been the same for data marts and data warehouses (Data Integration - ETL/ELT, Relational Database, BI Tool), but the philosophy was different – data marts were bottom-up and had some department autonomy, while enterprise data warehouses were top-down and centralized.&amp;nbsp; Data marts were on a central RDBMS to combine the data needed for the business process, but they were still distributed.&amp;nbsp; Microsoft SQL Server 7.0 and later versions were great places to build data marts.&amp;nbsp; The challenge with data marts was that you could not fit all the fact tables on a single RDBMS, and not every group would use the same database vendor for their data mart.&amp;nbsp; BI and data integration tools could glue it all together, but it was siloed.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Initially the goal was to deliver data marts in 90 days, but this did not scale because:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Data Marts never scaled the teams up or out to additional teams&lt;/LI&gt;
&lt;LI&gt;After delivering a couple of Data Marts, the team could collapse under the technical debt of maintaining the existing Data Marts and be unable to create new ones&lt;/LI&gt;
&lt;LI&gt;The more that were built the higher the complexity got&lt;/LI&gt;
&lt;LI&gt;Data Mart teams had no authority over the source applications, so they continually needed to fix data quality that was never fixed upstream in the source system&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Did data marts fail because they were distributed?&amp;nbsp; Or did they fail because the architecture was monolithic?&amp;nbsp; Did they fail, or did BI tools consume the approach and just do more of what the data mart did like data integration, data modeling, data storage, and data access?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I think the centralized attitudes in Data Warehousing were one of the reasons that Data teams stayed small and specialized as controlled communities.&amp;nbsp; If you believe in one version of the truth, you keep the community small.&amp;nbsp; If you accept the ambiguity that exists in life, you let more people participate - it can’t be centralized.&amp;nbsp; All the spread-marts, Microsoft Access databases, Budgeting and Planning OLAP Cubes, SQL Server Data Marts, Analysis Services Cubes, and Reporting Services, Brio, Business Objects, Cognos, Microstrategy, Nquire (OBIEE), Tableau, Power Pivot, and Power BI assets have happened regardless of whether a Data Warehouse existed or not.&amp;nbsp; Will the concepts of Data Mesh and Data Lakehouse, fully staffed, become the domain data sources that Data Marts and Data Warehouses had mixed success achieving?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;Customer Scenarios&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here are a couple of customer scenarios I have recently seen that are reasons to think about distributed vs. centralized approaches:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Not treating Data as a Product&lt;/STRONG&gt;: The IT Pros responsible for the analytical source-aligned domain data are apprehensive to improve the integration into operational data stores (ODS) and don’t see the aggregate domain data team as a customer.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Not defining Domain Ownership:&lt;/STRONG&gt; IT Leadership is eager to expand to an enterprise data warehouse focus despite the Institutional Research (IR) team’s desire and need to get going on a project that is more data-mart worthy.&amp;nbsp; The team must take on the burden and technical debt of a centralized approach.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The first scenario needs the Data Mesh and Data Lakehouse approach to restore the trust between the IT team and the analytics team.&amp;nbsp; IT needs to create some source-aligned domain data (SADD) teams that are more responsive to changes in the source systems, so the analytics team can be an aggregate domain data (ADD) team.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The second scenario needs the Data Mesh and Data Lakehouse approach because the centralized approach is not going to make anyone happy, may take too long, and may be too expensive (you can’t distribute the costs).&amp;nbsp; IT Leadership needs to build a SADD Data Mart, Data Warehouse, or Data Lakehouse so the IR team can become an ADD Data Mart or Data Lakehouse.&amp;nbsp; IR does not need to be a Data Warehouse.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following tables show some of the options a Source-aligned Domain Data (SADD) team can use to provide Data Products, and what an Aggregate Domain Data (ADD) team would need to do based on what the SADD team provided.&amp;nbsp; A Consumer-aligned Domain Data (CADD) team could get data from either the SADD or ADD.&amp;nbsp; I would say that the replication and Operational Data Store (ODS) or Data Lake are the most common ways that data sources are provided for analytics currently.&amp;nbsp; It would be the exception currently that a SADD team would provide Data Mart, Data Warehouse, or Data Lakehouse functionality to potential ADD teams.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Source-aligned Domain Data (SADD) Product Options&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;img /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Aggregate Domain Data (ADD) Sources Options&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;img /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Consumer-aligned Domain Data (CADD)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Apps that consume the data or data users such as Data Scientists, Data Analysts, or Data Consumers.&amp;nbsp; Using queries, notebooks, custom apps, portals, reports, dashboards.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Get data from the most mature ADD or SADD product.&amp;nbsp; The least mature product would be SADD Replication, and the most mature would be an ADD Data Lakehouse.&amp;nbsp; I am categorizing Data Lakehouse as more mature than Data Mart or Data Warehouse because the user could query across multiple SADD or ADD Data Lakehouse tables using a query spanning multiple Delta Tables (colocation is important to make this practical).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;The drawing above only depicts the Data Lakehouse as storage leaving out the fact that compute would be needed to move data out of the data sources and do the data processing required to transform the data (by perhaps a data engineer in the aggregate domain).&amp;nbsp; The data consumers in the consumer-aligned domain would also need compute to get data out of the Data Lakehouse but they could do this in a serverless way vs needing to pay to persist the data.&lt;/P&gt;
&lt;P&gt;This really seems like a decent way of sharing the costs of usage with the teams that are doing the work of creating their data product.&amp;nbsp; If the economics of cost to value don’t work out, then that group can adjust their usage appropriately.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;Roles involved, and tools and processes&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here I would like to look at the Data Analyst, Data Scientist, Data Engineer, and Data Consumer roles, the tools they might use, and the processes they might follow.&amp;nbsp; In general, the Data Consumer uses artifacts created by the three other roles.&amp;nbsp; The other three roles all follow similar processes and could use similar tools.&amp;nbsp; The Data Analysts and Data Scientists can leverage work done by the Data Engineer or go do their own data processing.&amp;nbsp; This is where the friction exists between distributed autonomy and distributed teams.&amp;nbsp; The better the SADD and ADD teams are, the less likely the CADD team or individual is to go do data processing themselves.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The diagram from the &lt;A href="https://learning.oreilly.com/library/view/implementing-a-smart/9781491983492/" target="_blank" rel="noopener"&gt;O’Reilly report “Implementing a Smart Data Platform”&lt;/A&gt; shows the closed loop steps of data processing.&amp;nbsp; My premise is that a data analyst can perform the steps in Power BI, the data scientist can do much of this in a Jupyter notebook, and the data engineer can use pipelines and Spark notebooks that augment the efforts of the other two roles.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Closed loop of data processing (figure courtesy of Wenfeng Xiao) from the O’Reilly report “Implementing a Smart Data Platform”&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Concluding Thoughts&lt;/STRONG&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This brings me to my back issue.&amp;nbsp; I was not sure whether to ice it, use a heating pad, take anti-inflammatories, stretch, or lie flat on my back.&amp;nbsp; While looking for ideas I found out that the advice for treating soft-tissue injuries had changed.&amp;nbsp; RICE had moved to POLICE, and then on to PEACE and LOVE.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.verywellhealth.com/the-police-principle-for-acute-sprains-and-strains-2696549" target="_blank" rel="noopener"&gt;The P.O.L.I.C.E. Principle Emergency Treatment for Acute Injuries&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://bjsm.bmj.com/content/54/2/72.long" target="_blank" rel="noopener"&gt;Soft-tissue injuries simply need PEACE and LOVE&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The basic metaphor here is that things change.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When to use &lt;STRONG&gt;DW&lt;/STRONG&gt; (RICE) vs &lt;STRONG&gt;DF&lt;/STRONG&gt; (POLICE) vs &lt;STRONG&gt;DM&lt;/STRONG&gt; and &lt;STRONG&gt;DL&lt;/STRONG&gt; (PEACE and LOVE)?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Below is a slide I used in a recent presentation.&amp;nbsp; I see some definite advantages for Data Mesh and Data Lakehouse related to diverse data types, AI/ML and BI support, the use of low-cost object storage, diverse tools and languages, and options for both serverless and persistent compute.&amp;nbsp; For Data Warehouse you may have only the query engine of the DW service itself, the data may be in a proprietary format (though it could map to open formats as external tables), the storage might be proprietary (though it may use low-cost object storage), and the compute may be persistent (though it could be paused or scaled as needed).&amp;nbsp; My broad definition of Data Fabric leverages existing systems, which could lead to performance issues on the source system requiring at least replication of the database; I am not convinced that “don’t move or copy the data” is realistic, that using the existing systems’ storage would work, or that the storage and caching of data could avoid persistent compute.&amp;nbsp; I am much more sure of data layers for BI semantic layers than I am of data layers for integration that are not persisted by a copy in OLAP/In-memory, a Data Mart, Data Warehouse, or Data Lakehouse.&amp;nbsp; This is definitely getting more blurred with query engines on top of Parquet or Delta tables, but someone did a bunch of work to create those files and table formats ahead of time, and the data was copied.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Guidance&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Do you put new windows in the old house or move to a Lakehouse?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Data Warehouse, Data Fabric, Data Mesh, and Data Lakehouse &lt;STRONG&gt;can be complementary&lt;/STRONG&gt; but also compete based on ideology/belief&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;If you need RDBMS familiar features like Row-level security, use Data Warehouse or OLAP/In-memory Data Fabric&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;For existing Analytics architectures, use Data Fabric &lt;/STRONG&gt;to connect, compose, and consume what you already have and &lt;STRONG&gt;selectively migrate priority Source-aligned domain data to Data Mesh and Data Lakehouse&lt;/STRONG&gt;, because rip and replace is expensive&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;For new (green field) Cloud Scale Analytics projects, use Data Lakehouse and Data Mesh (DL and DM work well together) &lt;/STRONG&gt;To Be your new doors and windows; then see if the neighbors with the 30-year-old houses follow suit.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Remember technical debt requires scaling workers &lt;/STRONG&gt;(code development, modification, and maintenance) and is a bigger lock-in than vendor lock-in&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;More people and more talent development when capital is decentralized&lt;/STRONG&gt;.&amp;nbsp; Less bureaucracy, more staffing, less expert bias, and a little friendly competition (new windows)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Building organizational and technical maturity &lt;/STRONG&gt;takes time and participation&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Recognize the existing politics between OLTP and Analytics (OLAP) software developers &lt;/STRONG&gt;on which is priority&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Milk Carton Rule&lt;/STRONG&gt;: “Instead of dwelling on what you can’t change, focus on what you can.”&amp;nbsp; This includes others’ views of centralized and distributed.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Like Waterfall to Agile&lt;/STRONG&gt;, centralized Data Warehouse to distributed Data Mesh with Data Lakehouse is going to take some time for adoption and transformation&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;IMO Azure Synapse Guidance for DW, DM, and DL&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;When using a Synapse Dedicated MPP pool, use the RDBMS as a high-performance query endpoint, not an ELT tool&lt;/LI&gt;
&lt;LI&gt;For Data Integration leverage Pipelines, Spark pools, and Spark Notebooks for ETL/ELT on ADLS using Parquet/Delta&lt;/LI&gt;
&lt;LI&gt;For Power BI in import mode, refresh datasets using the Lakehouse and Serverless SQL; for DirectQuery mode, use a Dedicated pool&lt;/LI&gt;
&lt;LI&gt;For telemetry captured in Azure use Data Explorer (Kusto), for example data collected by Azure Application Insights or Azure Monitor&lt;/LI&gt;
&lt;LI&gt;For teams that are responsible for Source-aligned domain data, consider Synapse Link HTAP (Hybrid Transaction/Analytical Processing) for Apps backed by &lt;A href="https://docs.microsoft.com/en-us/azure/cosmos-db/synapse-link" target="_blank" rel="noopener"&gt;Cosmos DB&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/azure/architecture/example-scenario/data/azure-sql-htap" target="_blank" rel="noopener"&gt;Azure SQL Database&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;For streaming applications delivering data using Kafka, Event Hub, or IoT Hub consider using Spark Structured Streaming in Azure Databricks, or Azure Synapse Spark pools.&amp;nbsp; Captured data in ADLS container can be queried in Spark, Synapse Serverless SQL, or Azure Databricks SQL endpoints.&lt;/LI&gt;
&lt;LI&gt;For AI/ML needs use Lakehouse and Spark (Databricks or Synapse Spark pools and Notebooks) and/or Azure ML and Cognitive Services&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Last week, as I sat in the Building 34 Café on the Redmond campus between my chiropractic and acupuncture appointments, I felt like my back issue was actually improving.&amp;nbsp; Sometimes it takes multiple disciplines (Chiropractic, Acupuncture, Physical Therapy, Pharmacology) and modalities to diagnose, treat, rehabilitate, and prevent an injury.&amp;nbsp; I think medical professionals make decisions just like decision makers in other domains.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hopefully this post can help your organization justify investment in analytics and exploration of some of these concepts discussed to break the pain-spasm cycle that exists between the spread-mart wild wild west and the locked down Enterprise Data Warehouse!&amp;nbsp; Please let me know how you feel about my depiction of Three Horizons, Data Mesh, Data Lakehouse, Cloud Scale Analytics, Data Fabric, Data Marts, and Data Warehouse; the customer scenarios, roles involved, and tools and processes used to architect them.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you have not categorized me yet, I am a distributed data marts, data mesh, and data lakehouse fan that would surround these concepts with distributed AI/ML, BI, and Custom Apps.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Gazing over the horizon, the improved view will “Bring Vision to Life”!&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Links to Data Mesh, Data Lakehouse, Data Fabric, and Azure Resources&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://powerbi.microsoft.com/en-us/desktop/" target="_blank" rel="noopener"&gt;Power BI Desktop&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://powerbi.microsoft.com/en-us/" target="_blank" rel="noopener"&gt;powerbi.microsoft.com&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://powerquery.microsoft.com/en-us/excel/" target="_blank" rel="noopener"&gt;Excel Power Query&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-introduction-self-service" target="_blank" rel="noopener"&gt;Power BI Dataflows&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/sql/azure-data-studio/download-azure-data-studio?view=sql-server-ver15" target="_blank" rel="noopener"&gt;Azure Data Studio&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/features/storage-explorer/" target="_blank" rel="noopener"&gt;Azure Storage Explorer&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction" target="_blank" rel="noopener"&gt;Azure Data Lake Storage&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/data-factory/" target="_blank" rel="noopener"&gt;Azure Data Factory&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/azure-sql/database/sql-database-paas-overview" target="_blank" rel="noopener"&gt;Azure SQL Database&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/synapse-analytics/" target="_blank" rel="noopener"&gt;Azure Synapse Analytics&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/databricks/" target="_blank" rel="noopener"&gt;Azure Databricks&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/purview/" target="_blank" rel="noopener"&gt;Azure Purview&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_blank" rel="noopener"&gt;Azure Machine Learning&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/event-hubs/" target="_blank" rel="noopener"&gt;Azure Event Hub&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/data-share/" target="_blank" rel="noopener"&gt;Azure Data Share&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/microsoft/Unlocked-Cloud-Scale-Analytics/blob/main/presentations-pdf/03-a-microsoft-hybrid-approach-to-data-mesh-and-data-fabric.pdf" target="_blank" rel="noopener"&gt;Unlocked: Cloud Scale Analytics slides&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.youtube.com/watch?v=TE8gFOirLe0&amp;amp;t=3080s" target="_blank" rel="noopener"&gt;Unlocked: Cloud Scale Analytics video&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/" target="_blank" rel="noopener"&gt;Cloud-scale analytics - Microsoft Cloud Adoption Framework for Azure - Cloud Adoption Framework | Microsoft Docs&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/architectures/what-is-data-mesh" target="_blank" rel="noopener"&gt;What is a data mesh?&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 19 May 2022 11:14:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/bring-vision-to-life-with-three-horizons-data-mesh-data/ba-p/3390414</guid>
      <dc:creator>DarwinSchweitzer</dc:creator>
      <dc:date>2022-05-19T11:14:39Z</dc:date>
    </item>
    <item>
      <title>Using Oracle AWR and Infra Info to Give Customers Complete Solutions to Performance Problems</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/using-oracle-awr-and-infra-info-to-give-customers-complete/ba-p/3361648</link>
      <description>&lt;P&gt;One of the best allocations of an Oracle SME specialist at Microsoft is when there is a complex data/infra issue for one of our customers.&amp;nbsp; We have a unique set of skills, understanding relational workloads along with deep infrastructure knowledge combined to identify issues that may be missed without these skills.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For one customer this last week, they were receiving poor IO performance on one of their Oracle databases running on a VM in Azure.&amp;nbsp; It’s quite easy to just blame storage or scale up the VM, but that not only costs money, it doesn’t get to the real culprit, which so often has nothing to do with the Azure cloud but rather with a lack of understanding of the interdependency between database and infrastructure.&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Collect the Information&lt;/H1&gt;
&lt;P&gt;For this type of troubleshooting, there are two things required:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Information about the specific Azure VM SKU and the storage configuration the database is running on.&lt;/LI&gt;
&lt;LI&gt;An Automatic Workload Repository (AWR) report from the window in which the performance problem was experienced.&amp;nbsp; As AWR snapshots are taken hourly by default, manual snapshots can be issued at the beginning and end to isolate the performance window.&lt;/LI&gt;
&lt;/OL&gt;
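&lt;P&gt;As a rough sketch of why the snapshot window matters, the hypothetical helper below (not part of any Oracle tooling) picks the pair of snapshot IDs that bracket a problem window.&amp;nbsp; With only the default hourly snapshots, a 20-minute slowdown gets averaged across a full hour of activity, which is why manual snapshots at the start and end of the incident give a much sharper report.&lt;/P&gt;

```python
import bisect
from datetime import datetime

def bracket_snapshots(snapshots, window_start, window_end):
    """Pick the AWR begin/end snapshot IDs bracketing a problem window:
    the latest snapshot at or before the start, and the earliest at or
    after the end.  snapshots is a time-ordered list of (snap_id, time)."""
    times = [t for _, t in snapshots]
    begin_idx = bisect.bisect_right(times, window_start) - 1
    end_idx = bisect.bisect_left(times, window_end)
    return snapshots[begin_idx][0], snapshots[end_idx][0]

# Default hourly snapshots taken on the hour, 08:00 through 12:00.
snaps = [(100 + i, datetime(2022, 5, 1, 8 + i)) for i in range(5)]

# A slowdown that ran 10:20-10:40 still forces the report to span the
# whole 10:00-11:00 hour (snapshots 102 -> 103); manual snapshots taken
# at the start and end of the incident would bracket it exactly.
begin_id, end_id = bracket_snapshots(
    snaps, datetime(2022, 5, 1, 10, 20), datetime(2022, 5, 1, 10, 40))
```

&lt;P&gt;The snapshot IDs and times here are invented for illustration; in practice you would read them from DBA_HIST_SNAPSHOT.&lt;/P&gt;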
&lt;P&gt;Armed with this information, we can then start to put together a picture of what has occurred.&lt;/P&gt;
&lt;P&gt;The VM SKU the database is running on is important for us to &lt;A href="https://docs.microsoft.com/en-us/azure/virtual-machines/sizes" target="_blank" rel="noopener"&gt;attain detailed information&lt;/A&gt; about what resources are available and what limits are set:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Each VM series has its own category link; spend the time to understand the differences between storage configured to take advantage of host-level caching vs. uncached and network-attached storage.&amp;nbsp; For our customers, the limits highlighted in yellow are what they are held to.&lt;/P&gt;
&lt;P&gt;This VM is a good fit for the customer.&amp;nbsp; It has 64 vCPUs, and if they use host-level caching, they can get upwards of 4000MBps.&amp;nbsp; This was part of the problem, though.&amp;nbsp; I was sent the storage used for the Linux VM:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;No host-level read-only caching was turned on for any of the P30 disks that could have taken advantage of it.&lt;/P&gt;
&lt;P&gt;There was more shown here than just this, but the important thing to know is that the OS was on a premium SSD P10 disk (good) and the rest of the data resided on P30 disks and one P70 disk.&lt;/P&gt;
&lt;H1&gt;Optimize the Storage&lt;/H1&gt;
&lt;P&gt;I know that most would say, “Go Ultra SSD,” but ignore Ultra disk unless you have redo latency; Ultra SSD has a use case for redo logs, but for most large Oracle databases it is an expensive choice that is still limited to the max uncached IO of the VM chosen.&amp;nbsp; In this case, that means that whether you use Premium SSD or Ultra SSD, you can only get a max of 1735 MBps WITHOUT caching.&amp;nbsp; Our customer had the P30s and a P70…&amp;nbsp; If we view this data in a different way, I think it might help explain why I want a slightly different choice of disk:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We talk a lot about bursting, but relational workloads need a consistent, highly performant storage solution.&amp;nbsp; With premium SSD we have something called read-only host-level caching.&amp;nbsp; In the last column for the P30-P50 disks, you may notice that this caching option offers a free boost in performance for high-read workloads, which is something we see often with Oracle.&amp;nbsp; Yes, customers could pay for bursting, and although we’re satisfied with it when testing, bursting has to be available, and in reality most workloads in the cloud need to burst at the same time, so it results in inconsistent performance for Oracle.&amp;nbsp; Cached read-only workloads do fantastically, and if you are deploying multiple P30 (1TB) disks and having IO problems, why wouldn’t you go to P40s for half the number of disks and get 250MBps more throughput with a free feature you can turn on?&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The reason the customer was using the P70 was to get the throughput, but they didn’t actually need all the storage, so when shown they could get 750MBps with the smaller 2TB P40 disks, this made great economic sense, too.&lt;/P&gt;
&lt;P&gt;First step:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Replace the P30 and P70 disks with P40 Premium SSD.&lt;/LI&gt;
&lt;LI&gt;Turn on read-only host-level caching on each P40 disk.&lt;/LI&gt;
&lt;/UL&gt;
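&lt;P&gt;The storage swap above can be sanity-checked with simple arithmetic. Here is a minimal sketch comparing the two layouts; the per-tier figures are assumptions based on Azure’s published Premium SSD limits, not measurements from this customer’s environment:&lt;/P&gt;

```python
# Aggregate capacity/throughput/IOPS for a set of Premium SSD tiers.
# Per-tier figures are assumptions based on Azure's published Premium
# SSD limits, not measurements from this customer's environment.
DISK_SPECS = {
    # tier: (size_TiB, uncached_throughput_MBps, uncached_IOPS)
    "P30": (1, 200, 5000),
    "P40": (2, 250, 7500),
    "P70": (16, 750, 18000),
}

def layout_totals(layout):
    """Sum capacity, uncached throughput, and IOPS for a list of disk tiers."""
    size = sum(DISK_SPECS[t][0] for t in layout)
    mbps = sum(DISK_SPECS[t][1] for t in layout)
    iops = sum(DISK_SPECS[t][2] for t in layout)
    return size, mbps, iops

# Three 2TB P40s match a single 16TB P70's 750 MBps of uncached
# throughput, and each P40 can also have read-only host caching enabled.
print(layout_totals(["P70"]))                # (16, 750, 18000)
print(layout_totals(["P40", "P40", "P40"]))  # (6, 750, 22500)
```

&lt;P&gt;The point of the sketch: when the P70 was chosen for throughput rather than capacity, several smaller P40s reach the same uncached throughput with more aggregate IOPS and the host-caching option on top.&lt;/P&gt;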
&lt;H1&gt;Optimize the Database to Work with the Infrastructure&lt;/H1&gt;
&lt;P&gt;Now that we’ve identified what is needed for the storage, we need to identify anything in the database that may be contributing to the issues and the AWR quickly shows we do have a problem:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It might be easy to say, “We fixed the storage, so those User IO problems will go away.”&amp;nbsp; But we really need to understand what is causing them, and although the undersized System Global Area (SGA) is also something we may want to investigate, the undersized Program Global Area (PGA) is more important.&amp;nbsp; The SGA and PGA are the two memory areas used by Oracle databases. For this situation, we have to ask, “What happens when we don’t have enough PGA to allocate to a given process in Oracle?”&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The PGA varied drastically during the window when performance degraded, but was it enough?&amp;nbsp; The foreground wait events quickly let us know what happens when there isn’t enough PGA to perform sorts, hashes, etc. in memory:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;That sorting and hashing must then run in temp, and the temp tablespace is disk IO.&amp;nbsp; We can see in the AWR report how many times the PGA had to “swap” to temp, along with how much disk had to be used to perform the task:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The AWR also separates the single passes to temp vs. multiple passes to the temp tablespace during the small window of performance degradation.&amp;nbsp; This spilling is also driving a lot of extra IO to the datafiles, requiring more SGA as well as more direct reads.&amp;nbsp; Even though it appears farther down the list, it’s the driving issue.&lt;/P&gt;
&lt;P&gt;The last issue we want to take on for this customer’s degradation is that the initialization parameters show the customer isn’t using Oracle Automatic Storage Management (ASM), which would provide diskgroups that could be balanced and deliver better performance across the storage.&amp;nbsp; In the top objects section, it’s apparent that the same objects are concurrently required for the processing and could benefit from ASM.&lt;/P&gt;
&lt;P&gt;At the database level, the recommendations are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Raise the PGA to 55G to contain all but 2% of PGA processing (which fits within the memory available on the current VM).&lt;/LI&gt;
&lt;LI&gt;Implement ASM as part of the migration to the new storage on the P40 disks.&lt;/LI&gt;
&lt;/UL&gt;
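&lt;P&gt;The 55G figure comes from the kind of trade-off the AWR’s PGA Memory Advisory lays out. Here is a hypothetical sketch of that reasoning; the advisory rows below are illustrative numbers, not this customer’s actual advisory:&lt;/P&gt;

```python
# Hypothetical PGA advisory rows: (pga_target_GB, estimated_spill_to_temp_pct).
# Illustrative numbers only; real values come from the AWR PGA Memory Advisory.
advisory = [
    (16, 35.0),
    (32, 12.0),
    (48, 4.0),
    (55, 2.0),   # matches the "all but 2%" recommendation in the post
    (64, 2.0),
]

def smallest_target_within(advisory, max_spill_pct):
    """Pick the smallest PGA target whose estimated spill to temp
    stays at or under the acceptable threshold."""
    for target_gb, spill_pct in sorted(advisory):
        if max_spill_pct >= spill_pct:
            return target_gb
    return None

print(smallest_target_within(advisory, 2.0))  # 55
```

&lt;P&gt;The design choice here is deliberate: past the knee of the advisory curve (55G in this illustration), adding more PGA buys almost nothing, so sizing to the knee keeps memory available for the SGA and the OS.&lt;/P&gt;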
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Follow All of the Data&lt;/H1&gt;
&lt;P&gt;For relational database workloads running on Infrastructure as a Service (IaaS), a data/infra specialist can offer a complete picture of performance issues or beneficial migration strategies vs. a single side of the coin, data or infra.&lt;/P&gt;</description>
      <pubDate>Wed, 11 May 2022 23:41:31 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/using-oracle-awr-and-infra-info-to-give-customers-complete/ba-p/3361648</guid>
      <dc:creator>DBAKevlar</dc:creator>
      <dc:date>2022-05-11T23:41:31Z</dc:date>
    </item>
    <item>
      <title>Oracle Licensing in Azure for Virtual Machines- Sales vs. Reality</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/oracle-licensing-in-azure-for-virtual-machines-sales-vs-reality/ba-p/3282222</link>
      <description>&lt;P&gt;This post will address an ongoing licensing myth that has just enough complexity around it, that it really trips up customers and account teams alike. As part of this post we’ll reference &lt;A href="https://techcommunity.microsoft.com/t5/data-architecture-blog/constrained-vcpu-and-oracle-licensing-in-azure/ba-p/3041537" target="_blank" rel="noopener"&gt;my vCPU licensing post&lt;/A&gt;, which I’d hoped would help answer this challenge, but for some reason, we still have folks confounded by Oracle sales people who, as sales will do, try to create friction that may cause people to hesitate from doing what they want with their workloads vs. doing what the Oracle sales people want them to do.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/virtual-machines/constrained-vcpu" target="_blank" rel="noopener"&gt;Constrained vCPU machines&lt;/A&gt;, as I’ve stated in the past, are in my opinion, mis-named, as it appears that someone could have access to more vCPU than what is available, (&lt;EM&gt;which the answer is NO, they don’t have access to anything other than what is listed and that was the purpose of my previous blog post demonstrated.)&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Myths vs Reality&lt;/H3&gt;
&lt;P&gt;There shouldn’t be an issue here, but we should accept that naming the offering “Constrained vCPU” will continue to drive this debate and challenge for customers.&amp;nbsp; If you’re curious where this myth or confusion comes from, please see the following &lt;A href="https://www.oracle.com/assets/cloud-licensing-070579.pdf" target="_blank" rel="noopener"&gt;Oracle documentation&lt;/A&gt; for licensing in a third-party cloud for reference as we go forward into the discussion.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To close the debate about whether Oracle could SOMEHOW license the customer for more vCPUs than what is available on a constrained vCPU VM, the key is in the documentation from &lt;STRONG&gt;Oracle&lt;/STRONG&gt;, in which they used the term &lt;STRONG&gt;available&lt;/STRONG&gt; instead of the term &lt;STRONG&gt;allocated&lt;/STRONG&gt;.&amp;nbsp; We should always focus on the licensing documentation and not what “someone said” or “sales claimed” or anything else verbally stated.&lt;/P&gt;
&lt;H3&gt;Contracts are Contracts&lt;/H3&gt;
&lt;P&gt;The only vCPUs that are &lt;STRONG&gt;available&lt;/STRONG&gt; on the constrained VM skus are the ones which are &lt;STRONG&gt;active&lt;/STRONG&gt;.&amp;nbsp; vCPUs that are inactive on a constrained VM are&lt;STRONG&gt; not&lt;/STRONG&gt; &lt;STRONG&gt;available&lt;/STRONG&gt; in any sense of the word.&amp;nbsp; The vCPU may have been &lt;STRONG&gt;allocated&lt;/STRONG&gt; to the virtual machine, but they are not &lt;STRONG&gt;available&lt;/STRONG&gt; to the virtual machine, which is common for any virtual compute in a public cloud.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The only wording by which Oracle sales could have achieved their licensing objective would be &lt;STRONG&gt;&lt;FONT color="#CF3600"&gt;&lt;EM&gt;customers are required to count the maximum &lt;STRIKE&gt;available&lt;/STRIKE&gt; allocated vCPUs of an instance type&lt;/EM&gt;&lt;/FONT&gt;&lt;/STRONG&gt;.&amp;nbsp; Because the inactive vCPUs have only been &lt;STRONG&gt;allocated&lt;/STRONG&gt;, and are certainly not &lt;STRONG&gt;available&lt;/STRONG&gt; in any way, shape, or form, this is a myth and shouldn’t be entertained.&amp;nbsp; I doubt they’d win many customers choosing this avenue, as they’re already losing significant customers, as can be seen in the percentage of new relational database projects by platform.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;We Said What We Said&lt;/H3&gt;
&lt;P&gt;For us Oracle SMEs for Azure, this avenue of addressing the constrained vCPU issue closes the discussion and makes it a moot point. &amp;nbsp;In legal matters – and Oracle licensing is indeed a contract – we shouldn’t admit imprecise or incorrect verbiage to the conversation.&amp;nbsp; There’s no reason for us to debate it.&lt;/P&gt;
&lt;P&gt;When any account team or customer is faced with this myth or misunderstanding from an Oracle salesperson, we should advise them of Oracle’s mistake and move on to migrating them to Azure.&amp;nbsp; Contractual issues are not subject to the whims of the Oracle sales team; they fall to the global &lt;A href="https://www.oracle.com/corporate/license-management-services/" target="_blank" rel="noopener"&gt;License Management Services (LMS)&lt;/A&gt; organization, which must rule upon them &lt;STRONG&gt;in writing&lt;/STRONG&gt;.&amp;nbsp; Engaging with LMS would result in an assessment similar to the one in the post referenced at the beginning of this one, listing in writing the number of vCPUs liable for licensing, which are the available vCPUs on a constrained vCPU VM.&lt;/P&gt;
&lt;H3&gt;Licensing Specialists&lt;/H3&gt;
&lt;P&gt;If your customer is looking for a partner to assist them with a licensing assessment for Oracle, especially an overall one since their Oracle licensing contract is between the customer and Oracle and not between them and Microsoft, I can’t recommend the following partners enough:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://palisadecompliance.com/" target="_blank" rel="noopener"&gt;Palisade&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.remend.com/oracle/license-management/" target="_blank" rel="noopener"&gt;Remend&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://p.miroconsulting.com/oracle-licensing-guide--ft?utm_campaign=Awareness%7COracle%7CBMM&amp;amp;utm_source=bing&amp;amp;utm_medium=cpc&amp;amp;utm_content=&amp;amp;utm_term=oracle%20license&amp;amp;msclkid=ec93567b6a2310a2586c342f0cc5552b" target="_blank" rel="noopener"&gt;Miro Consulting&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://seanmcintosh.org/" target="_blank" rel="noopener"&gt;SeanMcintosh Specialists&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 11 Apr 2022 18:03:06 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/oracle-licensing-in-azure-for-virtual-machines-sales-vs-reality/ba-p/3282222</guid>
      <dc:creator>DBAKevlar</dc:creator>
      <dc:date>2022-04-11T18:03:06Z</dc:date>
    </item>
    <item>
      <title>New White Paper- Recommended Practices for Oracle on Azure IaaS</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/new-white-paper-recommended-practices-for-oracle-on-azure-iaas/ba-p/3247796</link>
      <description>&lt;P&gt;I know this is a long-awaited white paper and I would like to get this out into the hands of the public to help those who simply want to know the recommended practices for successfully assessing, sizing, architecting and migrating Oracle onto Azure Infrastructure as a Service (IaaS).&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This white paper provides guidance from beginning to end that can be used by any organization to assess an Oracle workload and begin its journey to successfully migrating it to Oracle on Azure IaaS.&amp;nbsp; The goal is to centralize all the recommended practices into one document, and yes, it will become a Microsoft document eventually, (we're still in the early stages of redesigning all the Oracle documentation for Microsoft) but the Data Architecture Blog provides me a quick way to get content to customers in a very easy and consumable manner.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This will also be linked in the upcoming &lt;A href="https://arcade.sqlbits.com/sessions/" target="_self"&gt;SQLBits&lt;/A&gt; and &lt;A href="https://mvp.microsoft.com/en-us/Summit" target="_self"&gt;MVP Summit&lt;/A&gt; presentations around Oracle to Azure migration strategy, but the paper is available now for all those ready to bring Oracle workloads to Azure.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/Oracle-Workloads-for-Azure/blob/main/Oracle%20on%20Azure%20IaaS%20Recommended%20Practices%20for%20Success.pdf" target="_self"&gt;Recommended Practices for Success with Oracle on Azure IaaS&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 11 Apr 2022 16:29:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/new-white-paper-recommended-practices-for-oracle-on-azure-iaas/ba-p/3247796</guid>
      <dc:creator>DBAKevlar</dc:creator>
      <dc:date>2022-04-11T16:29:37Z</dc:date>
    </item>
    <item>
      <title>Understanding AWR Data for Exadata to Azure IaaS Migrations, Part I</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/understanding-awr-data-for-exadata-to-azure-iaas-migrations-part/ba-p/3067771</link>
      <description>&lt;P&gt;High IO workloads in Azure are a topic of common interest and of those workloads, Oracle Exadata tops the list. I’m going to begin to post about how the Oracle on Azure SMEs in the Cloud Architecture and Engineering team handle those.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Exadata is a unique beast- it’s not an appliance or a single database, but an &lt;STRONG&gt;&lt;EM&gt;engineered system&lt;/EM&gt;&lt;/STRONG&gt;.&amp;nbsp; It is a collection of hardware and software intelligence designed to run Oracle workloads, especially those with high IO, efficiently.&amp;nbsp; I’ve built out a high level Exadata architecture and added the main features that we need to address most when going to Azure.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There is no way to architect an Exadata in the Azure cloud, and since there are most often NUMEROUS Oracle databases running on an Exadata, it’s important to identify the workload needs for each database and then migrate each workload to appropriately architected IaaS in Azure.&amp;nbsp; Rarely do I recommend refactoring these out of the gate, as each database is a collection of schemas that serve an individual application/function, and migrating them directly off of Oracle is riddled with pitfalls.&amp;nbsp; Migrating them to Azure IaaS with Oracle, then later identifying what schemas can be refactored to another platform for those customers hoping to eliminate some of their Oracle footprint, is the best course of action, (in my experience.)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What Oracle Says Exadata Can Provide&lt;/H2&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the above diagram, we see the following features available on an Exadata:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Multitenant&lt;/LI&gt;
&lt;LI&gt;In-Memory DB&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#993300"&gt;&lt;STRONG&gt;Real Application Clusters&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;Active Data Guard&lt;/LI&gt;
&lt;LI&gt;Partitioning&lt;/LI&gt;
&lt;LI&gt;Advanced Compression&lt;/LI&gt;
&lt;LI&gt;Advanced Security, DB Vault&lt;/LI&gt;
&lt;LI&gt;Real Application Testing&lt;/LI&gt;
&lt;LI&gt;Advanced Analytics&lt;/LI&gt;
&lt;LI&gt;Management Packs for Oracle&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#993300"&gt;&lt;STRONG&gt;Offload SQL to Storage&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#993300"&gt;&lt;STRONG&gt;InfiniBand Fabric&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#993300"&gt;&lt;STRONG&gt;Smart Flash cache/log&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#993300"&gt;&lt;STRONG&gt;Storage Indexes&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#993300"&gt;&lt;STRONG&gt;Columnar Flash Cache&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#993300"&gt;&lt;STRONG&gt;Hybrid Columnar Compression&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;IO Resource Management&lt;/LI&gt;
&lt;LI&gt;Network Resource Management&lt;/LI&gt;
&lt;LI&gt;In-memory fault tolerance&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;FONT color="#993300"&gt;Exafusion direct-to-wire Protocol, (This is a benefit of RAC, so put this in the RAC bucket)&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;With the list above, the items highlighted in &lt;STRONG&gt;&lt;FONT color="#993300"&gt;Dark Red&lt;/FONT&gt;&lt;/STRONG&gt; are Exadata specific.&amp;nbsp; All others are standard Oracle features and can be run in Azure, not just on Exadata.&amp;nbsp; Cross the ones that aren’t highlighted off your list as a concern.&amp;nbsp; If you license for it, you can build it out and license it in Azure, too.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Oracle Real Application Clusters, (RAC) is a moot point.&amp;nbsp; You can’t build it in a third-party cloud and honestly, it’s rarely something we even need.&amp;nbsp; We are now architecting for the Azure cloud, (please see the blog post on HA for Oracle on Azure at the bottom of this article for more information).&lt;/P&gt;
&lt;P&gt;That leaves us with just this to address:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Offload SQL to Storage&lt;/LI&gt;
&lt;LI&gt;InfiniBand Fabric&lt;/LI&gt;
&lt;LI&gt;Smart Flash cache/log&lt;/LI&gt;
&lt;LI&gt;Storage Indexes&lt;/LI&gt;
&lt;LI&gt;Columnar Flash Cache&lt;/LI&gt;
&lt;LI&gt;Hybrid Columnar Compression&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;So, let’s step back and discuss how we identify the workloads and decide the solution on the Azure side.&lt;/P&gt;
&lt;H2&gt;Go for the Gold- AWR&lt;/H2&gt;
&lt;P&gt;Some folks like to boil the ocean- the problem is, the fish will die, and there’s just no reason to do it when simply trying to identify and size the workload.&amp;nbsp; It’s all about what size of bucket it’s going to fit in.&amp;nbsp; If your estimate is off by 100 IOPS/100MBps, it doesn’t mean you’re going to mis-size the workload when you decide which sizing it falls into.&amp;nbsp; It’s all about small, medium, large and extra-large architectures to begin with.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Automatic Workload Repository, (AWR) fits this purpose and grants you valuable information about the Exadata feature use per database.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For Oracle 12.2 and above, there are significant enhancements to the AWR report that can be used to help us determine the amount of work that it will be to “decouple” from the Exadata.&lt;/P&gt;
&lt;P&gt;For our example, we’re going to use a 4-node RAC database running on an Exadata.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;At the very top of the report, we can see the database information and the number of nodes, but even more importantly, how busy it is, comparing the elapsed time to the DB Time.&amp;nbsp; Of course, we expect an OLTP system to have higher numbers for average active sessions, whereas this is Exadata, whose primary bread and butter is OLAP workloads, with fewer beneficial features for OLTP.&lt;/P&gt;
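&lt;P&gt;That busyness comparison is just a ratio: DB Time accumulated across all sessions divided by the wall-clock length of the snapshot window gives average active sessions (AAS). A minimal sketch, with hypothetical figures rather than numbers from this report:&lt;/P&gt;

```python
# Average active sessions (AAS) from the top of an AWR report:
# AAS = DB time / elapsed time. The figures below are hypothetical.
def average_active_sessions(db_time_min, elapsed_min):
    """DB time accumulated across all sessions divided by wall-clock time
    for the snapshot window; higher means a busier database."""
    return db_time_min / elapsed_min

# e.g. 480 minutes of DB time accrued over a 60-minute snapshot window
print(average_active_sessions(480.0, 60.0))  # 8.0
```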
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I’m not going to focus on the sizing of a database workload.&amp;nbsp; You can learn how to do that in another link at the bottom of this post, but we’re going to identify the parts of the AWR report that can tell you what you’re up against to move it off of Exadata.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;An AWR report is quite large and after 12.2, you will notice that Oracle [often] included the ASH, (Active Session History) and the ADDM, (Automatic Database Diagnostic Monitor) report in with the AWR.&amp;nbsp; Past these two report additions to the AWR data, you will then see a section for Exadata:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the Top IO Reasons by MB for Cell Usage, we can quickly see how each of the cell nodes is being used by the database hosts for offloading:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The majority of the work by the cell nodes is smart scans, i.e., offloading, and pretty consistently across the cell nodes at around 85%.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Which Database Are You Targeting?&lt;/H2&gt;
&lt;P&gt;We can then go to the next section of interest and see, broken down by database, (top ten databases, after that, the rest are just aggregated and put in under “other”) what percentage of throughput is used by each:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This table lists each database, and we can see that the database identified by DBID 696978140, which we’ll refer to as DB01 from this point forward, is using 70% of the total IO throughput, along with how many MB per second it uses on average.&amp;nbsp; We don’t size by the value shown in this section, but we can easily see which database is our “whale in the pond” and most likely the reason for the Exadata investment.&amp;nbsp; The next database down is less than 11%, so you can easily see the resource demand difference between DB01 and the others.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the section following this one, we will even see the breakdown of percentage and IOPS per database to each cell node.&amp;nbsp; It often falls very close to the total you see above, with minor differences.&amp;nbsp;&lt;/P&gt;
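&lt;P&gt;Ranking the databases by their share of throughput, as the AWR section above does, is easy to reproduce once the per-database MBps figures are extracted. A sketch with hypothetical numbers; only the dominant ~70% share of DBID 696978140 reflects the report discussed here, the other DBIDs and values are made up for illustration:&lt;/P&gt;

```python
# Hypothetical per-database average throughput (MBps), as extracted from
# the AWR "throughput by database" section. Only DBID 696978140's dominant
# share mirrors the report discussed in the post; the rest is illustrative.
db_mbps = {
    696978140: 1400,   # "DB01", the whale in the pond
    123456789: 210,
    987654321: 150,
    111222333: 140,
    444555666: 100,
}

total = sum(db_mbps.values())
shares = {dbid: round(100 * mbps / total, 1) for dbid, mbps in db_mbps.items()}

# The whale is the database with the largest throughput share.
whale = max(shares, key=shares.get)
print(whale, shares[whale])  # 696978140 70.0
```

&lt;P&gt;Everything other than the whale is a small fraction of the load, which is exactly why the smaller databases make better POC candidates.&lt;/P&gt;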
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What Don't You Need to Worry About?&lt;/H2&gt;
&lt;P&gt;One of the nice things about the AWR is that it will also tell you when a feature is NOT being used, such as Columnar Flash Cache:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this post, I’ve demonstrated how to help identify the ease or challenge to migrate an Oracle database from Exadata to Azure.&amp;nbsp; If you are migrating DB01, then you are going to spend more time on the IO demands for this database.&amp;nbsp; If you’re moving one of the other 8 databases, (the line that appears blank is actually the ASM instance managing the storage layer) then it’s going to be a much easier time and I’d recommend aiming at one of those for a POC, not the "whale in the pond"!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For each one of the additional features in Exadata:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Smart Flash cache/log&lt;/LI&gt;
&lt;LI&gt;Storage Indexes&lt;/LI&gt;
&lt;LI&gt;Flash Cache&lt;/LI&gt;
&lt;LI&gt;Hybrid Columnar Compression (HCC)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;There is a separate section in the AWR report that corresponds to each and to the database/Exadata workload’s use of it.&amp;nbsp; For Hybrid Columnar Compression, it will tell you if the feature is in use; if so, you can query the space savings and prepare for the additional storage and IO throughput required when leaving Exadata.&amp;nbsp; This is commonly the reason for us to bring in a high IO storage solution.&amp;nbsp; Depending on settings for Query Low, Query High, etc., the savings and performance can vary. If they need thin clones and deep compression, then I might suggest Silk to make up for the loss of those features on the Exadata.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;The Kitchen Sink&lt;/H2&gt;
&lt;P&gt;Other data that comes in handy and to think about:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The data provided for flash caching and single block reads is very helpful.&amp;nbsp; When you see high single block reads, it’s important to have an Oracle SME identify the database-level usage behind them.&amp;nbsp; It often offers insight into Oracle workloads that will BENEFIT from not being on Exadata.&amp;nbsp; This is a longer discussion than I will go into here, but it’s a less optimal use of an Exadata and worth the discussion, (another day, I promise!)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Last, but not least, is TEMP tablespace usage.&amp;nbsp; Exadata can’t fix your temp problem- I’ve talked about that for years now and it’s still an issue.&amp;nbsp; Temp usage is due to numerous things, but often it’s because the database is not using PGA allocation correctly, due to inefficient code or database design, and temp is simply disk.&amp;nbsp; The faster the disk, the faster the performance, but needless to say, you’re “swapping” (the best description), and performance would benefit from fitting in the Program Global Area (PGA).&amp;nbsp; Slow storage will only worsen this challenge, so it’s important to know how much “swapping” to temp the database in question is doing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In my next series in these posts, I’ll explain what options there are to deal with:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Offloading calculations&lt;/LI&gt;
&lt;LI&gt;IO Explosion after loss of HCC&lt;/LI&gt;
&lt;LI&gt;Flash logging&lt;/LI&gt;
&lt;LI&gt;High IO Storage solutions for Exadata&lt;/LI&gt;
&lt;LI&gt;InfiniBand for network latency&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to learn more than what was provided in this blog about Oracle on Azure IaaS, please see the following posts:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/t5/data-architecture-blog/estimate-tool-for-sizing-oracle-workloads-to-azure-iaas-vms/ba-p/1427183" target="_blank" rel="noopener"&gt;Estimate Tool for Sizing Oracle Workloads to Azure IaaS VMs - Microsoft Tech Community&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/t5/data-architecture-blog/how-to-save-on-oracle-core-licensing-on-azure-cloud/ba-p/2960527" target="_blank" rel="noopener"&gt;How to Save on Oracle Core Licensing on Azure Cloud - Microsoft Tech Community&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/t5/data-architecture-blog/oracle-ha-in-azure-options/ba-p/2281896" target="_blank" rel="noopener"&gt;Oracle HA in Azure- Options - Microsoft Tech Community&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Jan 2022 23:23:35 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/understanding-awr-data-for-exadata-to-azure-iaas-migrations-part/ba-p/3067771</guid>
      <dc:creator>DBAKevlar</dc:creator>
      <dc:date>2022-01-20T23:23:35Z</dc:date>
    </item>
    <item>
      <title>Constrained vCPU and Oracle Licensing in Azure</title>
      <link>https://techcommunity.microsoft.com/t5/data-architecture-blog/constrained-vcpu-and-oracle-licensing-in-azure/ba-p/3041537</link>
      <description>&lt;P&gt;Numerous times I’ve experienced misunderstanding on licensing around&lt;A href="https://docs.microsoft.com/en-us/azure/virtual-machines/constrained-vcpu" target="_self"&gt; constrained vCPU VMs&lt;/A&gt;.&amp;nbsp; Sometimes the confusion is on the term, “constrained”, (I would have named this VM type, “Same CPU, bigger chassis”.) We’ll go over how we can confirm how many vCPU is on the VM, including comparing a standard VM with 16 vCPU and a constrained 8-vCPU with the 16-vCPU chassis, demonstrating the vCPU count validation between both of them.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We’ll also discuss the continual confusion around the terms hyperthreading and multithreading.&amp;nbsp; Oracle recently updated their documentation around third-party cloud support and licensing, replacing hyperthreading with the more generic term for any CPU that can produce &lt;A href="https://thunderboltlaptop.com/what-is-a-multi-threaded-cpu/" target="_blank"&gt;multiple threads per CPU&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The third topic stems from the need to distinguish between enabled and disabled in the licensing documentation.&amp;nbsp; &lt;A href="https://docs.microsoft.com/en-us/azure/virtual-machines/mitigate-se" target="_blank"&gt;Microsoft Support can disable hyperthreading&lt;/A&gt; (or multithreading) on our vCPUs, which some customers choose to do to save on licensing, at the cost of performance and a higher monthly infrastructure bill.&amp;nbsp; This, again, is not to be confused with the constrained VM series, as the licensing only covers the vCPUs that are available and can be counted on the VM.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following statement from Oracle licensing has nothing to do with constrained vCPUs and everything to do with hyperthreaded, now referred to as multithreaded, vCPUs:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;"Microsoft Azure – count &lt;STRONG&gt;two &lt;/STRONG&gt;vCPUs as equivalent to one Oracle Processor license if &lt;STRONG&gt;multi-threading of processor cores is enabled, &lt;/STRONG&gt;and one vCPU as equivalent to&lt;STRONG&gt; one&lt;/STRONG&gt; Oracle Processor license if &lt;STRONG&gt;multi-threading of processor cores is not enabled."&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If there is a question about the validity of the core counts for licensing for any customer, that’s easy to resolve- Ask for an audit!&amp;nbsp; Oracle is going to use the correct procedure to audit and count the vCPUs, just as I will demonstrate below.&lt;/P&gt;
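&lt;P&gt;The audit math itself is mechanical. Here is a sketch of the counting rule quoted above, with Python’s os.cpu_count() standing in for the Linux commands an audit team would actually run; the rule is two vCPUs per Oracle Processor license when multithreading is enabled, one-to-one when it is not:&lt;/P&gt;

```python
import math
import os

def oracle_processor_licenses(vcpus, multithreading_enabled=True):
    """Apply Oracle's third-party-cloud counting rule for Azure:
    count two vCPUs as one Oracle Processor license when multithreading
    of processor cores is enabled, one vCPU per license when it is not."""
    if multithreading_enabled:
        return math.ceil(vcpus / 2)
    return vcpus

# A constrained E16-8ds v4 exposes only 8 vCPUs to the OS, so only those
# 8 are countable -- the 16-vCPU chassis behind it is irrelevant.
print(oracle_processor_licenses(8))          # 4 licenses
print(oracle_processor_licenses(8, False))   # 8 licenses
print(oracle_processor_licenses(os.cpu_count()))  # whatever this host exposes
```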
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Oracle licensing is based on the CPU or vCPU count that is available to the host.&amp;nbsp; Below I have chosen the most common ways you can gather the core/CPU count from a Linux host.&amp;nbsp; Oracle pulls the data from the host, and I’ve used these commands to demonstrate that, although poorly named, constrained vCPU VMs have a bigger chassis but only the number of vCPUs that are listed, and those are the ONLY vCPUs that Oracle can charge licensing on.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One or more of these commands would most likely be used by any audit team.&amp;nbsp; To prove the results, I built two Oracle VMs using Oracle Linux in my own subscription: one with 16 vCPUs, and then the same chassis with a constrained 8-vCPU VM SKU.&lt;/P&gt;
&lt;P&gt;What do the numbers show?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;E16ds v4 16-vCPU&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;TABLE width="550"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;&lt;STRONG&gt;Command&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="68"&gt;&lt;STRONG&gt;Total CPU&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="17"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD width="68"&gt;&lt;STRONG&gt;LSCPU INFO&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;1. lscpu command&lt;/TD&gt;
&lt;TD&gt;16&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;CPU(s):&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 16&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;2. cat /proc/cpuinfo | grep processor | wc -l&lt;/TD&gt;
&lt;TD&gt;16&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD colspan="2"&gt;On-line CPU(s) list:&amp;nbsp;&amp;nbsp; 0-15&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;3. top or htop with "1" option&lt;/TD&gt;
&lt;TD&gt;16&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD colspan="2"&gt;Thread(s) per core:&amp;nbsp;&amp;nbsp;&amp;nbsp; 2&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;4. nproc&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;16&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD colspan="2"&gt;Core(s) per socket:&amp;nbsp;&amp;nbsp;&amp;nbsp; 8&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;5. hwinfo command**&lt;/TD&gt;
&lt;TD&gt;16&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Socket(s):&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;1&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;6. dmidecode -t processor | grep "Status: Populated, Enabled" &amp;gt; cpu.lst&lt;/TD&gt;
&lt;TD&gt;16&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;7. getconf _NPROCESSORS_ONLN&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;16&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;8. grep -c processor /proc/cpuinfo&lt;/TD&gt;
&lt;TD&gt;16&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;E16-8ds v4 8-vCPU Constrained&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;TABLE width="550"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;&lt;STRONG&gt;Command&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="68"&gt;&lt;STRONG&gt;Total CPU&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="17"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD width="68"&gt;&lt;STRONG&gt;LSCPU INFO&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;1. lscpu command&lt;/TD&gt;
&lt;TD&gt;8&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;CPU(s):&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 8&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;2. cat /proc/cpuinfo | grep processor | wc -l&lt;/TD&gt;
&lt;TD&gt;8&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD colspan="2"&gt;On-line CPU(s) list:&amp;nbsp;&amp;nbsp; 0-7&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;3. top or htop with "1" option&lt;/TD&gt;
&lt;TD&gt;8&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD colspan="2"&gt;Thread(s) per core:&amp;nbsp;&amp;nbsp;&amp;nbsp; 2&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;4. nproc&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;8&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD colspan="2"&gt;Core(s) per socket:&amp;nbsp;&amp;nbsp;&amp;nbsp; 4&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;5. hwinfo command**&lt;/TD&gt;
&lt;TD&gt;8&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Socket(s):&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;6. dmidecode -t processor | grep "Status: Populated, Enabled" &amp;gt; cpu.lst&lt;/TD&gt;
&lt;TD&gt;8&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;7. getconf _NPROCESSORS_ONLN&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;8&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="310"&gt;8. grep -c processor /proc/cpuinfo&lt;/TD&gt;
&lt;TD&gt;8&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD width="87"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;**hwinfo is not available in the default yum repositories; both epel-release and hwinfo had to be installed separately.&lt;/EM&gt;&lt;/P&gt;
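&lt;P&gt;The counting commands from the tables above can be run together as a quick consistency check. This is a minimal sketch (not an official audit tool), using only standard coreutils/util-linux commands; on a healthy host every source should report the same number.&lt;/P&gt;

```shell
#!/bin/sh
# Sketch: gather the vCPU count from several independent sources on a
# Linux host, as an audit team might, and flag any disagreement.

nproc_count=$(nproc)
getconf_count=$(getconf _NPROCESSORS_ONLN)
procfs_count=$(grep -c ^processor /proc/cpuinfo)

echo "nproc:         $nproc_count"
echo "getconf:       $getconf_count"
echo "/proc/cpuinfo: $procfs_count"

# All three sources should agree; a mismatch warrants investigation
# (e.g. CPU affinity or cgroup limits) before relying on the numbers.
if [ "$nproc_count" = "$getconf_count" ] && [ "$nproc_count" = "$procfs_count" ]; then
    echo "CONSISTENT: $nproc_count vCPUs visible to the OS"
else
    echo "MISMATCH: investigate before relying on these numbers"
fi
```

&lt;P&gt;On the constrained E16-8ds v4 VM, every one of these sources reports 8, matching the table above.&lt;/P&gt;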
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As you can clearly see, the E16ds v4 has 16 vCPUs (8 cores with hyper-threading, for a total of 16) to be licensed by Oracle.&amp;nbsp; The E16-8ds v4 shows 8 vCPUs (4 cores with hyper-threading, for a total of 8) allowed to be licensed by Oracle.&amp;nbsp; If someone were to attempt to license vCPUs that can’t be counted with any tool, legal would have a party with them.&amp;nbsp; It’s essential in this type of conversation, where either the customer or the salesperson is making Oracle licensing sound more restrictive than it really is, to deeply understand the terms used in the documentation (it’s not always easy to translate).&lt;/P&gt;
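&lt;P&gt;To make the arithmetic concrete: under Oracle’s published cloud licensing policy for authorized cloud environments (which include Azure), two vCPUs count as one processor license when hyper-threading is enabled. A minimal sketch of that conversion, assuming that policy applies to your agreement:&lt;/P&gt;

```shell
#!/bin/sh
# Sketch: convert the vCPU count visible to the OS into the Oracle
# processor licenses required under the cloud policy (2 vCPUs with
# hyper-threading = 1 processor license; round up for odd counts).

vcpus=$(nproc)
licenses=$(( (vcpus + 1) / 2 ))
echo "vCPUs visible: $vcpus, processor licenses required: $licenses"
```

&lt;P&gt;Run against the two VMs above, this yields 8 licenses for the E16ds v4 and 4 for the constrained E16-8ds v4, which is exactly why the constrained SKU exists: same memory and chassis, half the licensing exposure.&lt;/P&gt;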
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hopefully this is helpful and happy holidays!&lt;/P&gt;</description>
      <pubDate>Fri, 17 Dec 2021 23:47:40 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/data-architecture-blog/constrained-vcpu-and-oracle-licensing-in-azure/ba-p/3041537</guid>
      <dc:creator>DBAKevlar</dc:creator>
      <dc:date>2021-12-17T23:47:40Z</dc:date>
    </item>
  </channel>
</rss>

