Database Platform CSE – SQL
Azure SQL Managed Instance – Sync Agent Jobs and Logins in Failover Groups
In an Azure SQL Managed Instance setup with a failover group, the schema, data, and database-level users are always synced between the primary and secondary instances. As of October 2021, however, there is no built-in sync mechanism for SQL Server Agent jobs or server-level logins and roles, because system databases are not replicated. This article provides guidance and scripts to implement a workaround that keeps Agent jobs and server logins/roles in sync between the primary and secondary instances.
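The complete scripts are in the article itself; purely as a hedged illustration of the login-sync half of the workaround, the sketch below re-creates a SQL login on the secondary instance with the same SID it has on the primary, so that database users replicated by the failover group remain correctly mapped after a failover. The login name, password placeholder, and SID value are illustrative, not taken from the article.

-- On the primary instance: capture the SID of the SQL login that the database users map to.
SELECT name, sid
FROM sys.sql_logins
WHERE name = N'app_login';   -- illustrative login name

-- On the secondary instance: re-create the login with the same password and the SID captured
-- above, so the replicated database users stay mapped after failover.
CREATE LOGIN [app_login]
    WITH PASSWORD = N'<StrongPasswordHere>',
         SID = 0x01060000000000640000000000000000;  -- placeholder; use the value from the primary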
Oracle to Azure SQL Post Migration Optimization: Handling Oracle Function-Based Indexes in Azure SQL

Why Post-Migration Optimization Matters

For customers migrating from Oracle to Azure SQL Database, understanding Azure SQL optimization techniques can make a significant difference in overall database performance. During the application stress-testing phase, it is essential to identify query patterns, evaluate index usage, and refine queries to prevent performance bottlenecks and ensure system efficiency. While Intelligent Query Processing (IQP) in Azure SQL Database can significantly improve performance with minimal intervention, certain complex scenarios may still require manual query rewrites to achieve the desired level of performance.

Leveraging SARGable Queries in SQL Server

One such optimization technique involves ensuring that queries in SQL Server are SARGable (Search ARGument-able). SARGable queries enable the SQL engine to use indexes efficiently, significantly reducing resource consumption and execution time. Small adjustments, such as replacing column transformations with range-based conditions, creating computed columns, and adding indexes on those computed columns, can have a significant impact on query performance.

Metric | Non-SARGable Query | SARGable Query
Execution Plan | Index Scan (usually less efficient) | Index Seek (usually more efficient)
IO Cost | High (reads many pages) | Low (reads fewer pages)
CPU Time | Higher (function evaluated per row) | Lower (direct use of the index)
Logical Reads | High (entire index/table scanned) | Low (only relevant index pages accessed)

Handling Function-Based Indexes: Oracle vs. SQL Server

Oracle: In Oracle, function-based indexes allow indexing on the result of a function or expression. This enables efficient query execution when functions or expressions are used in WHERE or JOIN conditions. The TRUNC(APPLN_START_DT, 'YYYY') expression is precomputed and stored in the index, so queries can read the value of the expression from the index instead of computing it. The following is an example of this in practice (note that the indexed expression must match the expression used in the predicate):

-- Create a function-based index on the TRUNC(APPLN_START_DT, 'YYYY') expression
CREATE INDEX idx_appln_start_dt ON appln_data ( trunc(appln_start_dt, 'YYYY') );

-- Filter rows where the year of APPLN_START_DT matches the current year.
SELECT *
FROM appln_data
WHERE trunc(appln_start_dt, 'YYYY') = trunc(sysdate, 'YYYY');

SQL Server: SQL Server takes a slightly different approach to indexing and query optimization, particularly when working with large tables containing millions of rows. The optimizer excels when queries are SARGable, meaning the predicates are written so that indexes can be used directly, without transformations applied to the columns. Let's analyze two versions of a query against a table [APPLN_DATA] with 50 million rows to see how query formulation impacts performance, resource utilization, and execution efficiency.

Comparing SARGable and Non-SARGable Queries

Non-SARGable query:

-- Filter rows where the year of APPLN_START_DT matches the current year.
SELECT *
FROM [dbo].[APPLN_DATA]
WHERE YEAR(APPLN_START_DT) = YEAR(GETDATE());

Execution Plan: Index Scan
Logical Reads: 928,804
Physical Reads: 2
CPU Time: 7,562 ms
Elapsed Time: 63,130 ms

Why it's inefficient: The YEAR() function transforms the APPLN_START_DT column, which prevents the optimizer from performing an Index Seek and forces an Index Scan.
The execution plan confirms this by showing an Index Scan, where SQL Server reads through all rows instead of seeking directly to the relevant ones using the index. This behavior increases resource consumption: logical reads, CPU usage, I/O cost, and overall query execution time.

SARGable query:

SELECT *
FROM [dbo].[APPLN_DATA]
WHERE APPLN_START_DT >= CAST(YEAR(GETDATE()) AS CHAR(4)) + '-01-01'
  AND APPLN_START_DT <  CAST(YEAR(GETDATE()) + 1 AS CHAR(4)) + '-01-01';

Execution Plan: Index Seek
Logical Reads: 173,041
CPU Time: 3,547 ms
Elapsed Time: 52,092 ms

Why it's better: The query uses range-based conditions (>= and <) directly on the APPLN_START_DT column without applying any functions. This makes the query SARGable, allowing SQL Server to perform an Index Seek, as shown in the execution plan. An Index Seek retrieves only the relevant rows, significantly reducing logical reads, CPU usage, and overall execution time.

Understanding the Key Differences and Impact of SARGable and Non-SARGable Queries

Aspect | Non-SARGable | SARGable
Query Example | WHERE YEAR(APPLN_START_DT) = YEAR(GETDATE()) | WHERE APPLN_START_DT >= '2024-01-01'
Index Usage | Index Scan (due to the YEAR() function; inefficient) | Index Seek (efficient)
Logical Reads | High (entire index/table scanned) | Low (only relevant index pages accessed)
CPU Time | High | Low
Elapsed Time | Longer | Shorter
Query Optimization | Applies a function to the column | Uses range conditions without transformations

Additional Optimization Techniques

A computed column with an index was not used here because the query retrieves rows for an entire year, making range-based conditions more efficient. Computed-column indexes are better suited to scenarios where a specific transformation is applied to a smaller subset of data, such as calculations or filtering for a particular month or day. This example uses SELECT * for simplicity, but real-world workloads typically retrieve specific columns based on application requirements; selecting only the required columns reduces I/O and improves query performance. Covering indexes (indexes that include all queried columns) should also be considered to eliminate key lookups for frequently executed queries. If a table has billions of rows, even efficient Index Seeks can take considerable time; filtered indexes, indexed views, and partitioning the table on a logical key (for example, date ranges) can improve query performance further.

Key Takeaways for a Successful Heterogeneous Migration

- Oracle's function-based indexes can be adapted in SQL Server using range-based conditions, computed columns with indexes, indexed views, or application-level optimizations.
- Aim to write SARGable queries to leverage SQL Server's indexing capabilities effectively, and avoid applying functions in query predicates.
- Evaluate and consider the use of indexed views and filtered indexes.
- During the application stress-testing phase, iteratively update index statistics, evaluate index access patterns, review histograms, monitor query plans and fragmentation, and rebuild indexes as needed to achieve optimal performance.
- Collaborate with application developers to refactor query logic when needed.
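Where a computed column does make sense (for example, filtering on a specific month), the following sketch shows how the Oracle function-based index pattern could be emulated in SQL Server. It reuses the APPLN_DATA table from the example above; the column and index names are illustrative.

-- Illustrative sketch: emulate a function-based index with an indexed, persisted computed column.
ALTER TABLE dbo.APPLN_DATA
    ADD APPLN_START_MONTH AS DATEFROMPARTS(YEAR(APPLN_START_DT), MONTH(APPLN_START_DT), 1) PERSISTED;

CREATE NONCLUSTERED INDEX IX_APPLN_DATA_START_MONTH
    ON dbo.APPLN_DATA (APPLN_START_MONTH);

-- Referencing the computed column keeps the predicate SARGable, so the optimizer can seek.
SELECT *
FROM dbo.APPLN_DATA
WHERE APPLN_START_MONTH = DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1);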
References

- Intelligent query processing details - SQL Server | Microsoft Learn
- Create indexed views - SQL Server | Microsoft Learn
- Create filtered indexes - SQL Server | Microsoft Learn
- Migrating Oracle Databases to SQL Server (OracleToSQL) - SQL Server | Microsoft Learn
- Automatic tuning - SQL Server | Microsoft Learn

Feedback and suggestions

If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Ninja Engineering Team (datasqlninja@microsoft.com). Thanks for your support!

Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide.

Troubleshooting page-related performance issues in Azure SQL
Azure SQL is a family of managed, secure, and intelligent products built on the SQL Server database engine in the Azure cloud. Although Azure SQL uses the familiar SQL Server engine, there are some differences between SQL Server and Azure SQL, including the availability of certain diagnostic commands such as DBCC PAGE. DBCC PAGE is a very useful command in SQL Server for troubleshooting and inspecting the internal structure of data pages, but it is not supported in Azure SQL because of differences in the underlying infrastructure and management model. This limitation can present challenges for database administrators and developers who depend on DBCC PAGE for troubleshooting. Nevertheless, Azure SQL provides alternative methods and tools for database troubleshooting, so DBAs can still achieve effective results without DBCC PAGE. This article explores these alternatives, though they do not fully replace DBCC PAGE.
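The article walks through the alternatives in detail. As one hedged illustration (not necessarily the exact queries the article uses), sys.dm_db_page_info can surface page-header information that would otherwise require DBCC PAGE; the database, file, and page IDs below are placeholders.

-- Inspect the header of a specific page (placeholder file and page IDs).
SELECT *
FROM sys.dm_db_page_info(DB_ID(), 1, 1000, 'DETAILED');

-- Combined with sys.dm_exec_requests and sys.fn_PageResCracker, it can identify the object
-- behind a page that an active request is waiting on.
SELECT r.session_id, r.wait_type, pi.object_id, pi.index_id, pi.page_type_desc
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.fn_PageResCracker(r.page_resource) AS prc
CROSS APPLY sys.dm_db_page_info(prc.db_id, prc.file_id, prc.page_id, 'DETAILED') AS pi;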
Data Archiving Strategies for SQL Server

As databases grow, keeping them performing well becomes increasingly difficult. Often, pruning data that is no longer required is a better strategy. This blog explores, at a high level, different techniques to remove or archive data. Some of the techniques depend on the flavor of SQL Server you are running (the box product, Azure SQL DB/Hyperscale, or Azure SQL Managed Instance), but others apply to all.
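The blog covers the techniques themselves; as a minimal, hedged sketch of one widely used pruning pattern (the table, column, and retention period are illustrative), deleting aged rows in small batches keeps transaction log usage and blocking manageable on any of the SQL Server flavors mentioned above.

-- Illustrative batched purge: remove rows older than the retention window in small chunks.
DECLARE @BatchSize int = 5000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.SalesHistory
    WHERE OrderDate < DATEADD(YEAR, -7, SYSUTCDATETIME());

    -- Stop once a partial batch indicates there is nothing (or little) left to prune.
    IF @@ROWCOUNT < @BatchSize
        BREAK;
END;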
Part 1 - Azure SQL DB Hyperscale Table Partitioning - Best Practices & Recommendations

Implementing table partitioning on an exceptionally large table in Azure SQL Database Hyperscale is not trivial, because of the large data movement operations involved and the potential downtime needed to accomplish them efficiently. To help customers implement partitioning on these large tables, this two-part article goes over the details. Part 1 covers table partitioning fundamentals alongside other database features and concepts, in preparation for implementing partitioning on an actual table, which is covered in Part 2.
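For orientation only, here is a minimal sketch of the partitioning objects such an implementation revolves around; the object names and date boundaries are illustrative, and Part 2 of the article covers how to apply this to an existing large table.

-- Partition function and scheme (Azure SQL Database supports only the PRIMARY filegroup).
CREATE PARTITION FUNCTION pf_OrderDateMonthly (date)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

CREATE PARTITION SCHEME ps_OrderDateMonthly
    AS PARTITION pf_OrderDateMonthly ALL TO ([PRIMARY]);

-- A table created directly on the scheme, partitioned by OrderDate.
CREATE TABLE dbo.Orders_Partitioned
(
    OrderId   bigint NOT NULL,
    OrderDate date   NOT NULL,
    Amount    money  NOT NULL,
    CONSTRAINT PK_Orders_Partitioned PRIMARY KEY CLUSTERED (OrderDate, OrderId)
)
ON ps_OrderDateMonthly (OrderDate);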
Database Migration / Reverse Migration between Azure SQL (DB/MI) and SQL Server using SmartBulkCopy

The purpose of this blog post is to accelerate and automate database migrations between SQL Server source and target pairs that are out of scope for first-party migration tooling such as DMA/DMS. There is often a need to migrate back to on-premises from Azure SQL (for compliance or regulatory reasons), and this blog covers how to accomplish this using SmartBulkCopy for data movement.

Resolving ADF Copy Upsert Failures in SQL DB for XML Columns
Customers modernizing their Oracle, mainframe/midrange Db2, Sybase, and other databases to Azure SQL often use Azure Data Factory (ADF) for data transfer. For large datasets, an initial copy is made a few days prior, followed by incremental data copies during cutover. The ADF Copy activity's Upsert method is common for these incremental transfers, but it fails for tables that contain XML columns. This blog explains how to use a stored procedure as the sink behavior in the Copy activity to perform the upsert into Azure SQL Database. The same method can be used when the target is Azure SQL MI or SQL Server on an Azure VM.
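The blog walks through the ADF configuration; as a hedged sketch of the database side of the stored-procedure sink (the table, table type, procedure, and column names here are illustrative, not the blog's actual objects), the Copy activity passes the incoming rows to the procedure through a table-valued parameter, and the procedure performs the upsert with MERGE, which works for XML columns that the built-in Upsert method cannot handle.

-- Table type that the ADF Copy activity sink would be configured to use.
CREATE TYPE dbo.CustomerDocsType AS TABLE
(
    CustomerId  int          NOT NULL,
    DocPayload  xml          NULL,
    ModifiedAt  datetime2(3) NOT NULL
);
GO

-- Stored procedure used as the sink; ADF supplies the batch of rows in @CustomerDocs.
CREATE OR ALTER PROCEDURE dbo.usp_UpsertCustomerDocs
    @CustomerDocs dbo.CustomerDocsType READONLY
AS
BEGIN
    SET NOCOUNT ON;

    MERGE dbo.CustomerDocs AS tgt
    USING @CustomerDocs AS src
        ON tgt.CustomerId = src.CustomerId
    WHEN MATCHED THEN
        UPDATE SET tgt.DocPayload = src.DocPayload,
                   tgt.ModifiedAt = src.ModifiedAt
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerId, DocPayload, ModifiedAt)
        VALUES (src.CustomerId, src.DocPayload, src.ModifiedAt);
END;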
Database migrations to Azure SQL Managed Instance - Restore with Full and Differential backups

This blog guides organizations in accelerating migrations to Azure SQL Managed Instance from on-premises, IaaS, or other relevant SQL Server implementations, with little downtime, when full and differential backups are the only options available from the source (for example, when log backups are not possible).
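The blog describes the end-to-end process; purely as a hedged sketch of the restore sequence it builds on (the storage URLs, database name, and SAS secret are placeholders, and this assumes the target instance supports restoring WITH NORECOVERY), the T-SQL looks roughly like this:

-- Credential so the managed instance can read the backup files from Blob Storage.
CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token-without-leading-question-mark>';

-- Restore the full backup but leave the database restoring, so differentials can be applied.
RESTORE DATABASE [SalesDB]
FROM URL = 'https://<storageaccount>.blob.core.windows.net/<container>/SalesDB_full.bak'
WITH NORECOVERY;

-- At cutover, restore the latest differential and bring the database online.
RESTORE DATABASE [SalesDB]
FROM URL = 'https://<storageaccount>.blob.core.windows.net/<container>/SalesDB_diff.bak'
WITH RECOVERY;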
Seamless Homogeneous SQL Family Migration via Azure Data Factory

Migrating data across SQL platforms, be it SQL Server, Azure SQL Database, SQL Managed Instance, or SQL Server on IaaS, often involves operational complexity and potential downtime. Azure Data Factory (ADF) removes those barriers by enabling seamless, logical data movement across these services in either direction. Whether using SQL Change Data Capture (CDC) for near-zero downtime or traditional batch-based strategies, ADF ensures data consistency and operational continuity throughout the process. While physical data migration strategies remain valuable in many scenarios, this blog focuses on how ADF delivers a unified, scalable approach to logical database migration, modernizing database environments with minimal downtime.

Prerequisites

NOTE: Please make sure to go through the limitations of CDC, as this blog doesn't cover those: SQL CDC Limitations, Known Issues with CDC.

Before proceeding, please ensure you have the following prerequisites:

- An Azure subscription.
- Access to Azure Data Factory.
- Source and target databases, such as SQL Server, Azure SQL Database, Azure SQL MI, etc.
- Change Data Capture (CDC) enabled on the source database for online migration. CDC captures changes such as insert, update, and delete (DML) operations in the source database, allowing near real-time replication to a target database with minimal latency. To enable CDC, run:

-- Enable CDC on the database
EXEC sys.sp_cdc_enable_db;

-- Enable CDC on the source table
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'SourceTable',
    @role_name     = NULL;

Azure Data Factory provisioning: ADF should be provisioned to provide a runtime environment for executing the pipelines.

Self-hosted Integration Runtime (SHIR): SHIR is required to connect to a data source or destination that is not natively reachable by Azure (e.g., on-premises, in a private VNET, or behind a firewall).

Linked services: These should be created to connect to the source and target.

Datasets: Datasets identify data within different data stores, such as tables, files, folders, and documents.

Performance optimization: To speed up the process, primary keys, non-clustered indexes, and constraints should be dropped on the target to reduce blocking/deadlocks and minimize resource contention.

Script components

Adf_source.sql: This script should be deployed on the source SQL Server. It populates the dbo.data_extraction_config_adf table used to run the Change Data Capture (CDC) and initial-load pipelines.

Adf_target.sql: This script should be deployed on the target SQL Server. It creates the stored procedures that merge CDC changes, along with the other objects needed to run the pipelines smoothly.

Master tables

- dbo.cdc_watermark_adf contains the last watermark for the CDC tables.
- dbo.data_extraction_config_adf contains information about the heap tables for the initial load and the CDC tables.
- dbo.sqlqueries contains information about the clustered tables for the initial load.

Let's deep dive into the pipelines that handle the different scenarios.

Pipeline 1: ClusteredTableMigration_Initial

This pipeline migrates data only from clustered tables. The dbo.sqlqueries table is automatically populated with clustered table information by the pipeline (Stored Procedure activity). Ensure the source table schema matches the target table schema. To run the pipeline for specific tables, set the IsActive flag to 0 (inactive) or 1 (active) in the dbo.sqlqueries table, or add the table name in the Lookup activity.
Pipeline 2: HeapTableMigration_Initial

This pipeline migrates heap tables. Before executing it, ensure that the heap table information has been added to the dbo.data_extraction_config_adf table and that the source table schema is synchronized with the target table schema. To execute the pipeline for a subset of tables, set the IsActive flag to 0 (inactive) or 1 (active) in the dbo.data_extraction_config_adf table.

Pipeline 3: CDCTableMigration

This pipeline migrates clustered tables that have Change Data Capture (CDC) enabled. Before executing it, ensure that the relevant information for these tables is entered in the dbo.data_extraction_config_adf table. Ensure the table schema is synchronized with the target schema, and that all tables intended for CDC synchronization have a primary key and a matching schema definition on the target (excluding constraints and non-clustered indexes). To execute the pipeline for specific tables, set the IsActive flag to 0 (inactive) or 1 (active) in the dbo.data_extraction_config_adf table.

Schedule the pipeline (for the CDC load only)

Create a trigger: Create a trigger to schedule the pipeline to run at regular intervals (e.g., every 5-30 minutes, based on application requirements) to capture and apply changes incrementally.

Monitor the pipeline: Monitor the pipeline runs to verify that the data is being migrated and synchronized accurately.

Cutover and cleanup

Once the delta changes are fully synchronized between the source and target databases, initiate the cutover: set the source database to read-only (see the sketch at the end of this post), change the connection strings of the application (and any other affected apps, agent jobs, etc.) to use the new target database, and then clean up by deleting the stored procedures in the target database, stopping CDC, and removing the helper tables and stored procedures in the source database.

Conclusion

Azure Data Factory allows both online and offline data migration with minimal downtime, ensuring consistency between source and target databases. Change Data Capture enables near real-time data migration, suitable for environments requiring continuous data synchronization.

Note: To get the ADF pipelines and T-SQL queries mentioned in this blog, please reach out to our team alias: datasqlninja@microsoft.com
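As referenced in the cutover step above, here is a minimal, hedged sketch of freezing the source before repointing connection strings; the database name is illustrative, and the LSN query is only a sanity check that the CDC capture job has caught up with the tail of the log.

-- Confirm the CDC capture job has processed the tail of the log (compare against the last
-- LSN applied on the target by the CDC pipeline).
SELECT sys.fn_cdc_get_max_lsn() AS max_captured_lsn;

-- Freeze the source database before switching application connection strings.
USE [master];
ALTER DATABASE [SourceDb] SET READ_ONLY WITH ROLLBACK IMMEDIATE;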
Enhancing SQL PaaS Security: Automating Azure Blob SAS Key Rotation Using Azure Runbooks

In today's cloud environment, security and efficient management of storage resources are paramount for organizations. Azure Blob Storage, known for its scalability and robust security, is a popular solution for handling large volumes of unstructured data. However, ensuring secure access to this data requires careful management, particularly of Shared Access Signature (SAS) keys, which provide time-limited access to storage resources. To mitigate the risk of unauthorized access, it is essential to rotate SAS keys regularly. Automating this process not only enhances security by ensuring the timely retirement of outdated keys but also guarantees the secure creation and updating of new keys.

This guide shows how to use Azure Automation PowerShell Runbooks to automate the rotation of Azure Blob Storage SAS keys. The solution also updates the credentials used in SQL to access the blob storage, simplifying access management while reducing manual intervention and the potential for human error. By adopting this automated approach, organizations can strengthen their data security practices, align with key management best practices, and ensure uninterrupted, secure access for users, all while minimizing administrative effort. This article walks through configuring the necessary resources and setting up permissions to ensure smooth execution.

Note: This article uses SQL Managed Instance as an example to outline the steps in detail; the same approach can also be applied to Azure SQL Database.

Prerequisites

Before proceeding with the implementation, ensure you meet the following prerequisites:

- An active Azure subscription.
- An Azure Automation Account with sufficient privileges to perform administrative tasks.
- Access to the Azure Storage Account, Azure Key Vault, and Azure SQL Managed Instances involved in the automation process.

Summary

- Configure Azure Automation Account: set up an Automation Account in Azure and import the required modules.
- Grant required permissions: assign the necessary roles to the Automation Account for access to Azure Key Vault, Storage Accounts, and SQL Managed Instances.
- Create a stored procedure in SQL Managed Instance: implement a stored procedure to perform the database credential updates.
- Create a login and grant permissions: create a login for the Automation Account and grant the permissions required to execute operations in the SQL Managed Instance.

Setup Documentation

Configure Azure Automation Account

To start, configure an Azure Automation Account to run the automation scripts and manage resources.

Create an Azure Automation Account: Go to the Azure portal and create a new Azure Automation Account. This account will host the PowerShell Runbooks that automate the SAS key rotation.

Import the SQL Server module: Once the Automation Account is created, import the SQLServer module into the Automation Account. This module allows the Runbook to interact with SQL Managed Instances and perform the necessary database operations.

a) Go to the Azure portal and navigate to your Automation Account.
b) Select "Modules" under the "Shared Resources" section of the Automation Account.
c) Click "Browse Gallery" and search for "SQLServer" in the gallery.
d) Select the "SQLServer" module and click Import.
e) Wait for the import to complete and verify that the module appears under Installed Modules.

Runbook setup

Create a new Runbook, select PowerShell as the type, choose the recommended runtime version, and enter the script shared below (SASKeyRotationAutomationScript.ps1) as the content.

Grant Required Permissions

Ensure that the Automation Account has the appropriate permissions to interact with the dependent resources: Azure Key Vault, the Storage Account, and the SQL Managed Instances.

- Key Vault: Assign the "Key Vault Secrets Officer" role to the Automation Account on the Key Vault. This role allows the account to manage secrets in the Key Vault, which is necessary for storing the SAS keys.
- Storage Account: Assign the "Storage Account Key Operator Service Role" to the Automation Account on the relevant Storage Account. This role enables the Automation Account to manage keys for the Storage Account, including generating new SAS tokens.
- SQL Managed Instances: Assign the "Reader" role to the Automation Account on each SQL Managed Instance. This role allows the Automation Account to read the database configuration and settings required for the SAS key update process.
- Subscription level: Assign the "Reader" role at the subscription level so the Automation Account can access all the resources it needs across the subscription.

Create Stored Procedure in SQL Managed Instance

Next, create a stored procedure in the SQL Managed Instance to handle updates to the database scoped credentials as part of the SAS key rotation. Implement the stored procedure UpdateDatabaseScopedCredential.sql (shown at the end of this article) within your SQL Managed Instance; it is invoked by the Automation Account during execution of the Runbook.

Create Login and Grant Permissions

Create a login for the Automation Account within the SQL Managed Instance and assign the permissions needed to execute the stored procedure and other required tasks:

USE [master]
GO
CREATE LOGIN [AutomationAccount] FROM EXTERNAL PROVIDER;
GO
USE [DBA]
GO
CREATE USER [AutomationAccount] FOR LOGIN [AutomationAccount];
GRANT EXECUTE ON dbo.UpdateDatabaseScopedCredential TO [AutomationAccount];
GRANT CREATE TABLE TO [AutomationAccount];

Explanation: The commands above create a login for the Automation Account from an external provider (Microsoft Entra ID / Azure AD), map it to a database user, and grant the permissions required to execute the stored procedure and, if necessary, create tables during the process.

With these steps completed, your Azure environment is ready to automate the SAS key rotation and securely manage credentials through Azure Automation. Next, run the Runbook manually to test its functionality and confirm that it performs as expected. After confirming successful execution, schedule the Runbook according to your requirements.
PowerShell Script File for SAS Key Rotation

SASKeyRotationAutomationScript.ps1

#________________________________
# Input parameters
#________________________________
$subscriptionId = "xxxxx-459a-409f-8339-xxxxxxxx"   # subscription ID (redacted)
$resourceGroupName = "ProdRG"
$storageAccountName = "Prodstorageaccount"
$keyVaultName = "Prodkeyvault"
$secretName = "SASForHRTK"
$SASexpiryDate = 90 # days
$automationAccountName = "AutomationAccount"
$global:NewSASKey = ""
$database = "master"

Connect-AzAccount -Identity
Set-AzContext -SubscriptionId $subscriptionId | Out-Null *>$null

# Retrieve SQL Managed Instances and construct their public endpoint DNS names
$sqlInstances = Get-AzSqlInstance | Select-Object ManagedInstanceName, DnsZone
$SQLMIList = $sqlInstances | ForEach-Object {
    $dnsName = "$($_.ManagedInstanceName).public.$($_.DnsZone).database.windows.net,3342"
    [PSCustomObject]@{ DNSName = $dnsName }
}

#_______________________________________________________________
# Log the variables used
#_______________________________________________________________
Write-Output "[$(Get-Date)]: __________________ Beginning of SAS Key Rotation Script __________________"
Write-Output "[$(Get-Date)]: Input Parameters for Execution:"
Write-Output "`n[$(Get-Date)]: Subscription ID : $subscriptionId"
Write-Output "[$(Get-Date)]: Resource Group Name : $resourceGroupName"
Write-Output "[$(Get-Date)]: Storage Account Name : $storageAccountName"
Write-Output "[$(Get-Date)]: Key Vault Name : $keyVaultName"
Write-Output "[$(Get-Date)]: Secret Name : $secretName"
Write-Output "[$(Get-Date)]: SAS Expiry Date : $SASexpiryDate"
Write-Output "[$(Get-Date)]: User Principal Name : $UserPrincipalName"
Write-Output "[$(Get-Date)]: Connected to subscription: [$( (Get-AzSubscription -SubscriptionId $subscriptionId).Name )]`n"
Write-Output "[$(Get-Date)]: SQL Managed Instances identified in the current subscription `n"
Write-Output ("_" * 60 + "`n"); $SQLMIList.DNSName; Write-Output ("`n" + ("_" * 60))

#_______________________________________________________________
# Generate a new SAS token and update Key Vault
#_______________________________________________________________
function GenerateSASToken_And_UpdateKeyVault {
    $context = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey ((Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -WarningAction SilentlyContinue)[0].Value) -WarningAction SilentlyContinue
    $expiryTime = (Get-Date).AddDays($SASexpiryDate).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")

    try {
        $sasToken = New-AzStorageAccountSASToken -Service Blob -ResourceType Container,Object -Permission "rwdlact" -ExpiryTime $expiryTime -Context $context
        Write-Output "[$(Get-Date)]: SAS Token generated successfully"
        $global:NewSASKey = $sasToken
    }
    catch {
        $ErrorOnSASkeyGeneration = $True
        Write-Output "[$(Get-Date)]: Failed to generate SAS Token. Error: $_"
        return ("ErrorOnSASkeyGeneration")
    }

    $securesasToken = $sasToken | ConvertTo-SecureString -AsPlainText -Force

    if ($ErrorOnSASkeyGeneration -ne $True) {
        # Check if the secret exists in Key Vault
        $secret = Get-AzKeyVaultSecret -VaultName $keyVaultName -Name $secretName -ErrorAction SilentlyContinue
        try {
            if ($secret) {
                # Update the existing secret with the new SAS token
                $res = Set-AzKeyVaultSecret -VaultName $keyVaultName -Name $secretName -SecretValue $securesasToken -Expires $expiryTime -ContentType ("SASToken_Created_" + (Get-Date -Format "yyyy-MM-dd_HH:mm:ss"))
                Write-Output "[$(Get-Date)]: Secret updated in Key Vault - $keyVaultName with key - $secretName."
            }
            else {
                # Create a new secret in Key Vault
                $res = Set-AzKeyVaultSecret -VaultName $keyVaultName -Name $secretName -SecretValue $securesasToken -Expires $expiryTime -ContentType ("SASToken_Created_" + (Get-Date -Format "yyyy-MM-dd_HH:mm:ss"))
                Write-Output "[$(Get-Date)]: New Secret created in Key Vault - $keyVaultName with key - $secretName."
            }
        }
        catch {
            # Handle the error
            Write-Output "[$(Get-Date)]: An error occurred while updating or creating the secret in Key Vault."
            Write-Output "Error Message: $_"
            $exception = $True
        }
    }
}

#_______________________________________________________________
# Update database scoped credentials on each SQL Managed Instance
#_______________________________________________________________
function UpdateDatabaseScopeCredential {
    param ([string]$sqlmi)

    $myCred = Get-AutomationPSCredential -Name "CredentialforMI"
    Write-Output "[$(Get-Date)]: Connecting to the Server : $sqlmi"

    try {
        # Using the managed identity to obtain an access token for the instance
        $secureToken = Get-AzAccessToken -ResourceUrl "https://database.windows.net/" -AsSecureString
        $token = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secureToken.Token))

        $Conn = New-Object System.Data.SqlClient.SqlConnection("Server=$sqlmi;Database=$database;Encrypt=True;Connection Timeout=30;")
        $Conn.AccessToken = $token
        $Conn.Open()

        $Command = $Conn.CreateCommand()
        $Command.CommandText = "EXEC $database.[dbo].[UpdateDatabaseScopedCredential] '$storageAccountName', '$NewSASKey'"
        $rowsAffected = $Command.ExecuteNonQuery()

        if ($rowsAffected -eq -1) {
            Write-Output "[$(Get-Date)]: The command executed successfully."
        }
        else {
            Write-Output "[$(Get-Date)]: $rowsAffected rows were affected."
        }
        Write-Output "[$(Get-Date)]: Updated the SASKey on $sqlmi"
    }
    catch {
        Write-Output "[$(Get-Date)]: An error occurred: $_"
    }
    finally {
        if ($Conn.State -eq 'Open') { $Conn.Close() }
    }
}

Write-Output "[$(Get-Date)]: Attempting to generate a new SASToken for Storage account :[$storageAccountName] and update Keyvault :[$keyVaultName]"
GenerateSASToken_And_UpdateKeyVault

Write-Output "[$(Get-Date)]: Attempting to update the database scope credential for SQL Managed instances"
Write-Output "$SQLMIList`n"
foreach ($SQLMI in $SQLMIList) {
    UpdateDatabaseScopeCredential -sqlmi $SQLMI.DNSName
}

Write-Output "[$(Get-Date)]: __________________ End of SAS Key Rotation Script __________________"

SQL Script to Update a Database Scoped Credential

UpdateDatabaseScopedCredential.sql

USE [DBA]
GO
CREATE OR ALTER PROCEDURE UpdateDatabaseScopedCredential
    @Storactaccount NVARCHAR(128),
    @NewKey NVARCHAR(MAX)
WITH ENCRYPTION
AS
BEGIN
    SET NOCOUNT ON;

    -- Declare variables for dynamic SQL
    DECLARE @sql NVARCHAR(MAX);
    DECLARE @dbName NVARCHAR(128);
    DECLARE @credentialName NVARCHAR(128);

    IF OBJECT_ID('dbo.SASKeyRotationLogging', 'U') IS NULL
    BEGIN
        CREATE TABLE SASKeyRotationLogging (Timestamp DATETIME DEFAULT GETUTCDATE(), Details NVARCHAR(MAX));
    END

    PRINT REPLICATE('_', 100) + CHAR(13) + CHAR(10) + '[' + CONVERT(NVARCHAR(30), GETUTCDATE(), 126) + '] Starting the script to update the credentials on Server [' + @@SERVERNAME + '].';

    INSERT INTO master..SASKeyRotationLogging (Details)
    VALUES ('[' + CONVERT(NVARCHAR(30), GETUTCDATE(), 126) + '] Starting the script to update the credentials on Server [' + @@SERVERNAME + '].');

    INSERT INTO master..SASKeyRotationLogging (Details)
    SELECT '[' + CONVERT(NVARCHAR(30), GETUTCDATE(), 126) + '] Database name: ' + name
    FROM sys.databases
    WHERE name LIKE '%_ems';

    -- Cursor to iterate over databases
    DECLARE db_cursor CURSOR FOR
        SELECT name FROM sys.databases WHERE name LIKE '%_ems';

    OPEN db_cursor;
    FETCH NEXT FROM db_cursor INTO @dbName;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Construct the dynamic SQL to be executed in each database
        SET @sql = '
        USE [' + @dbName + '];
        DECLARE @Result TABLE (CredentialName NVARCHAR(128));

        -- Find the database scoped credentials used by blob external data sources for this storage account
        INSERT INTO @Result (CredentialName)
        SELECT c.name AS CredentialName
        FROM sys.database_scoped_credentials AS c
        INNER JOIN sys.external_data_sources AS ds
            ON ds.credential_id = c.credential_id
        WHERE c.credential_identity = ''SHARED ACCESS SIGNATURE''
          AND ds.type = 5
          AND ds.location LIKE ''%' + @Storactaccount + '%'';

        -- Update the credentials
        DECLARE @credName NVARCHAR(128);
        DECLARE cred_cursor CURSOR FOR SELECT CredentialName FROM @Result;
        OPEN cred_cursor;
        FETCH NEXT FROM cred_cursor INTO @credName;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Construct the ALTER DATABASE SCOPED CREDENTIAL command
            DECLARE @alterSql NVARCHAR(MAX);
            SET @alterSql = ''ALTER DATABASE SCOPED CREDENTIAL ['' + @credName + ''] WITH IDENTITY = ''''SHARED ACCESS SIGNATURE'''', SECRET = ''''' + @NewKey + ''''';''
            EXEC sp_executesql @alterSql;

            PRINT ''['' + CONVERT(NVARCHAR(30), GETUTCDATE(), 126) + ''] [Database: ' + @dbName + '] Updated the key for credential ['' + @credName + ''].''
            INSERT INTO master..SASKeyRotationLogging (Details)
            VALUES (''['' + CONVERT(NVARCHAR(30), GETUTCDATE(), 126) + ''] [Database: ' + @dbName + '] Updated the key for credential ['' + @credName + ''].'');

            FETCH NEXT FROM cred_cursor INTO @credName;
        END

        -- Release the inner cursor so it can be re-declared for the next database
        CLOSE cred_cursor;
        DEALLOCATE cred_cursor;
        ';

        -- Execute the dynamic SQL
        EXEC sp_executesql @sql, N'@Storactaccount NVARCHAR(128)', @Storactaccount;

        FETCH NEXT FROM db_cursor INTO @dbName;
    END;

    CLOSE db_cursor;
    DEALLOCATE db_cursor;

    PRINT '[' + CONVERT(NVARCHAR(30), GETUTCDATE(), 126) + '] End of script to update the credentials on Server [' + @@SERVERNAME + '].' + CHAR(13) + CHAR(10) + REPLICATE('_', 100);

    INSERT INTO master..SASKeyRotationLogging (Details)
    VALUES ('[' + CONVERT(NVARCHAR(30), GETUTCDATE(), 126) + '] Ending the script to update the credentials on Server [' + @@SERVERNAME + '].');
END