Azure SQL

Configure Auditing for Azure SQL Database series - Part 2
In the previous blog, we covered how to configure server-level and database-level auditing for Azure SQL Database with default settings. In this blog we will see how you can manage auditing using PowerShell cmdlets.

PowerShell cmdlets to manage auditing for Azure SQL Database:

- Get-AzSqlServerAudit - get the server auditing policy
- Get-AzSqlDatabaseAudit - get the database auditing policy
- Set-AzSqlServerAudit - create or update the server auditing policy
- Set-AzSqlDatabaseAudit - create or update the database auditing policy
- Remove-AzSqlServerAudit - remove the server auditing policy
- Remove-AzSqlDatabaseAudit - remove the database auditing policy

Refer to the manage Azure SQL Database auditing using Azure PowerShell documentation for more details.

Sometimes you might have a requirement to customize auditing for Azure SQL Database to meet compliance requirements, achieve parity with your on-premises servers, or audit specific details to meet business needs. Using these PowerShell cmdlets, you can configure auditing to audit specific action groups and use predicate expressions to filter by queries, logins, schemas, and databases.

Review the existing configuration

Before we modify the audit settings, let us verify the existing server and database audit configuration.

1. Set the context to your subscription:

Set-AzContext -Subscription "xxx-xxxx-xxxx-xxxx"

2. Check the server audit configuration:

Get-AzSqlServerAudit -ResourceGroupName "resource group name" -ServerName "servername"

You can see the server audit is enabled with the default action groups and a Log Analytics workspace as the target.

3. Check the database audit configuration:

Get-AzSqlDatabaseAudit -ResourceGroupName "resource group name" -ServerName "servername" -DatabaseName "database name"

The database audit is disabled: no action groups are configured and all targets are in the disabled state.

Below we customize the audit settings based on a few sample scenarios; you can extend them further based on your requirements. Because the Set cmdlets overwrite the existing configuration (see the note after the scenarios), it is worth capturing the current settings first, as the sketch below shows.
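A minimal sketch of that capture-and-reapply pattern, assuming the same placeholder resource names used above; the AuditActionGroup and PredicateExpression property names are those returned by Get-AzSqlServerAudit:

# Capture the current server audit policy so the existing action groups
# can be re-applied alongside a new predicate expression.
$audit = Get-AzSqlServerAudit -ResourceGroupName "resource group name" -ServerName "servername"
$audit.AuditActionGroup       # the action groups currently audited
$audit.PredicateExpression    # empty if no filter is configured yet

# Re-apply the captured groups together with a new filter.
Set-AzSqlServerAudit -ResourceGroupName "resource group name" -ServerName "servername" `
    -AuditActionGroup $audit.AuditActionGroup `
    -PredicateExpression "[server_principal_name] != 'dba'"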
Scenario 1: Exclude a specific login from auditing

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" -BlobStorageTargetState Enabled -StorageAccountResourceId "xxxxxx" -PredicateExpression "[server_principal_name] != 'dba'"

Scenario 2: Exclude a login and a specific database from auditing

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" -BlobStorageTargetState Enabled -StorageAccountResourceId "xxxxxx" -PredicateExpression "[server_principal_name] != 'dbadmin' and [database_name] != 'audit'"

Scenario 3: Exclude SELECT statements from auditing

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "sravstestdb" -PredicateExpression "statement not like '[select ]%'"

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "sravstestdb" -PredicateExpression "statement <> 'exec sp_executesql N''SELECT%'''"

Scenario 4: Audit different action groups instead of the default ones

Set-AzSqlServerAudit -ResourceGroupName "test" -ServerName "test" -LogAnalyticsTargetState Enabled -WorkspaceResourceId "/subscriptions/fa58cf66-caaf-xxxxxxx-xxxxxxxxx/resourceGroups/test/providers/Microsoft.OperationalInsights/workspaces/test" -AuditActionGroup "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP","FAILED_DATABASE_AUTHENTICATION_GROUP","DATABASE_OBJECT_CHANGE_GROUP"

Scenario 5: Use a predicate expression to exclude transaction-related actions from the batch completed action group

Set-AzSqlDatabaseAudit -ResourceGroupName "test" -ServerName "test" -LogAnalyticsTargetState Enabled -WorkspaceResourceId "/subscriptions/fa58cf66-caaf-xxxxxxx-xxxxxxxxx/resourceGroups/test/providers/Microsoft.OperationalInsights/workspaces/test" -AuditActionGroup "BATCH_COMPLETED_GROUP" -PredicateExpression "additional_information not like '%<transaction_info>%'"

Scenario 6: Use a predicate expression to exclude a specific action from a given action group

The example below filters RPC completed events out of the batch completed action group. To get the action id, refer to Filter SQL Server Audit on action_id.

Set-AzSqlServerAudit -ResourceGroupName "RESOURCE_GROUP" -ServerName "SERVER_NAME" -BlobStorageTargetState Enabled -StorageAccountResourceId "STORAGE_ACCOUNT_RESOURCE_ID" -PredicateExpression "action_id != 541934418"

Note: Both Set-AzSqlServerAudit and Set-AzSqlDatabaseAudit overwrite the existing configuration, so when you modify it you need to re-add the default action groups if you still want to audit them.

Review the server audit settings again: auditing is now configured with the chosen action groups and filters. In this blog we have configured a server audit with the required audit action groups and filter expressions.

Convert geo-replicated databases to Hyperscale
Update: On 22 October 2025 we announced the general availability of this improvement.

We're excited to introduce the next improvement in Hyperscale conversion: a new feature that allows customers to convert Azure SQL databases to Hyperscale while keeping active geo-replication or failover group configurations intact. This builds on our earlier improvements and directly addresses one of the most requested capabilities from customers. With this improvement, customers can now modernize their database architecture with Hyperscale while maintaining business continuity.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers told us about the complex steps they needed to perform to convert a database to Hyperscale when the database is geo-replicated by active geo-replication or failover groups. Previously, converting to Hyperscale required tearing down geo-replication links and recreating them after the conversion. Now, that's no longer necessary. This improvement preserves your cross-region disaster recovery or read scale-out configuration during the conversion to Hyperscale, minimizing downtime and operational complexity.

This feature is especially valuable for applications that rely on failover group endpoints for connectivity. Previously, if the application needed to remain available during conversion, the connection string had to be modified as part of the conversion because the failover group and its endpoints had to be removed. With this new improvement, the conversion process is optimized for minimal disruption, with telemetry showing the majority of cutover times under one minute. Even with a geo-replication configuration in place, you can still choose between automatic and manual cutover modes, offering flexibility in scheduling the transition. Progress tracking is now more granular, giving customers better visibility into each stage of the conversion, including the conversion of the geo-secondary to Hyperscale.

Customer feedback

Throughout the preview phase, we received overwhelmingly positive feedback from several customers about this improvement. Viktoriia Kuznetcova, Senior Automation Test Engineer at Nintex, says:

"We needed a low-downtime way to move our databases from the Premium tier to Azure SQL Database Hyperscale, and this new feature delivered perfectly, allowing us to complete the migration in our test environments safely and smoothly, even while the service remained under continuous load, without any issues and without needing to break the failover group. We're looking forward to the public release so we can use it in production, where Hyperscale's ability to scale storage both up and down will help us manage peak loads without overpaying for unused capacity."

Get started

The good news is that there are no changes needed to the conversion process. The workflow automatically detects that a geo-secondary is present and converts it to Hyperscale. There are no new parameters, and the method remains the same as the existing conversion process for non-geo-replicated databases. All you need to do is make sure that:

- You have only one geo-secondary replica, because Hyperscale doesn't support more than one geo-secondary replica (the sketch after this list shows how to check).
- Any chained geo-replication configuration is removed before starting the conversion. Creating a geo-replica of a geo-replica (also known as "geo-replica chaining") isn't supported in Hyperscale.
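Before starting, you can list the existing replication links to confirm there is a single geo-secondary and no chaining. A minimal sketch, assuming the sample names used below and a hypothetical partner resource group ResourceGroup02:

# One link should be returned per geo-secondary; more than one link, or a
# partner that is itself a secondary, means the topology must be simplified first.
Get-AzSqlDatabaseReplicationLink -ResourceGroupName "ResourceGroup01" `
    -ServerName "contososerver" -DatabaseName "WideWorldImporters" `
    -PartnerResourceGroupName "ResourceGroup02"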
Once the above requirements are satisfied, you can use any of the following methods to initiate the conversion process. Conversion to Hyperscale must be initiated from the primary geo-replica. The following commands convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore Hyperscale database with the manual cutover option.

T-SQL:

ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8') WITH MANUAL_CUTOVER;

PowerShell:

Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_Gen5_8" -ManualCutover

Azure CLI:

az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_Gen5_8 --manual-cutover

Here are some notable details of this improvement:

- The geo-secondary database is automatically converted to Hyperscale with the same service level objective as the primary.
- All database configurations such as maintenance window, zone resiliency, and backup redundancy remain as before; both geo-primary and geo-secondary inherit their own earlier configuration.
- A planned failover isn't possible while the conversion to Hyperscale is in progress. A forced failover is possible; however, depending on the state of the conversion when the forced failover occurs, the new geo-primary after failover might use either the Hyperscale service tier or its original service tier.
- If the geo-secondary database is in an elastic pool before conversion, it is taken out of the pool and might need to be added back to a Hyperscale elastic pool separately after the conversion.

The deployment of the feature is in progress and is expected to be completed in all Azure regions in a few weeks. If you see the error "Update to service objective '<SLO name>' with source DB geo-replicated is not supported for entity '<Database Name>'" while converting a primary to Hyperscale, wait for this new improvement to become available in your region. If you don't want to use this capability, remove any geo-replication configuration before converting your databases to Hyperscale.
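If you started the conversion with the manual cutover option, you can also watch progress and complete the cutover from PowerShell. A hedged sketch; the -PerformCutover switch assumes a recent Az.Sql release, so verify it against your module version:

# Track the conversion operation on the primary database.
Get-AzSqlDatabaseActivity -ResourceGroupName "ResourceGroup01" `
    -ServerName "contososerver" -DatabaseName "WideWorldImporters"

# When the operation reports it is ready for cutover, complete it.
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" `
    -DatabaseName "WideWorldImporters" -PerformCutover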
Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering simpler steps, less downtime, and a geo-secondary that stays available during the conversion. We encourage you to try this capability, provide your valuable feedback, and help us refine this feature. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Geo-Replication and Transparent Data Encryption Key Management in Azure SQL Database

Transparent data encryption (TDE) in Azure SQL with customer-managed key (CMK) enables the Bring Your Own Key (BYOK) scenario for data protection at rest. With customer-managed TDE, the customer is responsible for, and in full control of, key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing of operations on keys. Geo-replication and failover groups in Azure SQL Database create readable secondary replicas across different regions to support disaster recovery and high availability. There are several considerations when configuring geo-replication for Azure SQL logical servers that use TDE with CMK. This blog post provides detailed information about setting up geo-replication for TDE-enabled databases.

Understanding the difference between a TDE protector and a server/database key

To understand the geo-replication considerations, I first need to explain the roles of the TDE protector and a server key. The TDE protector is the key responsible for encrypting the database encryption key (DEK), which in turn encrypts the actual database files and transaction logs. It sits at the top of the encryption hierarchy. A server or database key is a broader concept that refers to any key registered at the server or database level in Azure SQL. One of the registered server or database keys is designated as the TDE protector. Multiple keys can be registered, but only one acts as the active protector at any given time.

Geo-replication considerations

Azure Key Vault considerations

In active geo-replication and failover group scenarios, the primary and secondary SQL logical servers can be linked to an Azure Key Vault in any region; they do not need to be located in the same region. Connecting both SQL logical servers to the same key vault reduces the potential for discrepancies in key material that can occur when using separate key vaults. Azure Key Vault incorporates multiple layers of redundancy to ensure the continued availability of keys and key vaults in the event of service or regional failures. The following diagram represents a paired-region configuration (primary and secondary) with Azure Key Vault cross-region failover and Azure SQL set up for geo-replication using a failover group.

Azure SQL considerations

The primary consideration is to ensure that the server or database keys are present on both the primary and secondary SQL logical servers or databases, and that appropriate permissions have been granted for these keys within Azure Key Vault. The TDE protector used on the primary does not need to be identical to the one used on the secondary; it is sufficient for the same key material to be available on both. You can add keys to a SQL logical server with the Azure portal, PowerShell, the Azure CLI, or the REST API. Assign user-assigned managed identities (UMI) to the primary and secondary SQL servers for flexibility across regions and failover scenarios. Grant Get, WrapKey, and UnwrapKey permissions to these identities on the key vault. For Azure Key Vaults using Azure RBAC for access configuration, the server identity needs the Key Vault Crypto Service Encryption User role to use the key for encryption and decryption operations.
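To make the permissions step concrete, here is a minimal sketch granting a logical server's identity access to a key vault, shown for both access models. The vault name contoso-kv is a placeholder; the server names match the example in the next section:

# Look up the server's managed identity (shown for a system-assigned identity;
# with a UMI, use that identity's own principal ID instead).
$server = Get-AzSqlServer -ResourceGroupName "TDEDemo" -ServerName "tdesql"

# Access policy model: grant the three key permissions TDE needs.
Set-AzKeyVaultAccessPolicy -VaultName "contoso-kv" `
    -ObjectId $server.Identity.PrincipalId `
    -PermissionsToKeys get,wrapKey,unwrapKey

# Azure RBAC model: assign the encryption role at vault scope instead.
New-AzRoleAssignment -ObjectId $server.Identity.PrincipalId `
    -RoleDefinitionName "Key Vault Crypto Service Encryption User" `
    -Scope (Get-AzKeyVault -VaultName "contoso-kv").ResourceId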
Different encryption key combinations

Based on interactions with customers and analysis of livesite incidents involving geo-replication with TDE CMK, we noticed that many clients do not configure the primary and secondary servers with the same TDE protector, due to differing compliance requirements. In this chapter, I will explain how you can set up a failover group with two different TDE protectors. This scenario uses the TDECMK key as the TDE protector on the primary server (tdesql) and the TDECMK2 key as the TDE protector on the secondary server (tdedr). As a side note, you can also enable the Auto-rotate key option to allow end-to-end, zero-touch rotation of the TDE protector.

Both logical SQL servers have access to the Azure Key Vault keys. The next step is to create the failover group and replicate one database, called ContosoHR. The setup of the failover group failed because the CMK key from the primary server was (intentionally) not added to the secondary server. Adding a server key can easily be done with the Add-AzSqlServerKeyVaultKey command. For example:

Add-AzSqlServerKeyVaultKey -KeyId 'https://xxxxx.vault.azure.net/keys/TDECMK/01234567890123456789012345678901' -ServerName 'tdedr' -ResourceGroupName 'TDEDemo'

After the failover group setup, the sample database on the secondary server becomes available and uses the primary's CMK (TDECMK), regardless of the secondary's CMK configuration. You can verify this by running the following query on both the primary and the secondary server.

SELECT
    DB_NAME(dek.database_id) AS database_name,
    dek.encryption_state,
    dek.encryption_state_desc,   -- SQL 2019+ / Azure SQL
    dek.key_algorithm,
    dek.key_length,
    dek.encryptor_type,          -- CERTIFICATE (service-managed) or ASYMMETRIC KEY (BYOK/CMK)
    dek.encryptor_thumbprint
FROM sys.dm_database_encryption_keys AS dek
WHERE dek.database_id <> 2;

database_name | encryption_state_desc | encryptor_type | encryptor_thumbprint
ContosoHR | ENCRYPTED | ASYMMETRIC KEY | 0xC8F041FB93531FA26BF488740C9AC7D3B5827EF5

The encryptor_type column shows ASYMMETRIC KEY, which means the database uses a CMK for encryption. The encryptor_thumbprint should match on both the primary and secondary servers, indicating that the secondary database is encrypted using the CMK from the primary server. The TDE protector on the secondary server (TDECMK2) becomes active only in the event of a failover, when the primary and secondary server roles are reversed. As illustrated in the image below, the roles of my primary and secondary servers have been reversed. If the query above is executed again following a failover, the encryptor_thumbprint value is different, indicating that the database is now encrypted using the TDE protector (TDECMK2) from the secondary server.

database_name | encryption_state_desc | encryptor_type | encryptor_thumbprint
ContosoHR | ENCRYPTED | ASYMMETRIC KEY | 0x788E5ACA1001C87BA7354122B7D93B8B7894918D

As previously mentioned, ensure that the server or database keys are available on both the primary and secondary SQL logical servers or databases, and verify that the appropriate permissions for these keys have been granted within Azure Key Vault. This scenario is comparable to other configurations, such as:

- The primary server uses a service-managed key (SMK) while the secondary server uses a CMK.
- The primary server uses a CMK while the secondary server uses an SMK.
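Alongside the DMV query, the server-side view of the keys can be checked from PowerShell. A minimal sketch using the sample servers above: the first cmdlet shows the active protector on each side, and the second lists the registered key material that must exist on both servers:

# Compare the active TDE protector on the primary and the secondary.
Get-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName "TDEDemo" -ServerName "tdesql"
Get-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName "TDEDemo" -ServerName "tdedr"

# List all server keys registered on the secondary.
Get-AzSqlServerKeyVaultKey -ResourceGroupName "TDEDemo" -ServerName "tdedr"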
Conclusion

Understanding the distinction between the TDE protector and server keys is essential for geo-replication with Azure SQL. The TDE protector encrypts the database encryption key, while server keys refer to any key registered at the server or database level in Azure SQL. For successful geo-replication setup and failover, all necessary keys must be created and available on both the primary and secondary servers. It is possible, and in certain cases required, to configure different TDE protectors on replicas, as long as the key material is available on each server.

🔐 Public Preview: Backup Immutability for Azure SQL Database LTR Backups
The Ransomware Threat Landscape

Ransomware attacks have become one of the most disruptive cybersecurity threats in recent years. These attacks typically follow a destructive pattern:

- Attackers gain unauthorized access to systems.
- They encrypt or delete critical data.
- They demand ransom in exchange for restoring access.

Organizations without secure, tamper-proof backups are often left with no choice but to pay the ransom or suffer significant data loss. This is where immutable backups play a critical role in defense.

🛡️ What Is Backup Immutability?

Backup immutability ensures that once a backup is created, it cannot be modified or deleted for a specified period. This guarantees:

- Protection against accidental or malicious deletion.
- Assurance that backups remain intact and trustworthy.
- Compliance with regulatory requirements for data retention and integrity.

🚀 Azure SQL Database LTR Backup Immutability (Public Preview)

Microsoft has introduced backup immutability for long-term retention (LTR) backups in Azure SQL Database, now available in public preview. This feature allows organizations to apply Write Once, Read Many (WORM) policies to LTR backups stored in Azure Blob Storage.

Key features:

- Time-based immutability: locks backups for a defined duration (e.g., 30 days).
- Legal hold immutability: retains backups indefinitely until a legal hold is explicitly removed.
- Tamper-proof storage: backups cannot be deleted or altered, even by administrators.

This ensures that LTR backups remain secure and recoverable, even in the event of a ransomware attack.

📜 Regulatory Requirements for Backup Immutability

Many global regulations mandate immutable storage to ensure data integrity and auditability. Here are some key examples:

Region | Regulation | Requirement
USA | SEC Rule 17a-4(f) | Requires broker-dealers to store records in WORM-compliant systems.
USA | FINRA | Mandates financial records be preserved in a non-rewriteable, non-erasable format.
USA | HIPAA | Requires healthcare organizations to ensure the integrity and availability of electronic health records.
EU | GDPR | Emphasizes data integrity and the ability to demonstrate compliance through audit trails.
Global | ISO 27001, PCI-DSS | Require secure, tamper-proof data retention for audit and compliance purposes.

Azure's immutable storage capabilities help organizations meet these requirements by ensuring that backup data remains unchanged and verifiable.

🕒 Time-Based vs. Legal Hold Immutability

⏱️ Time-based immutability

- Locks data for a predefined period (e.g., 30 days).
- Ideal for routine compliance and operational recovery.
- Automatically expires after the retention period.

📌 Legal hold immutability

- Retains data indefinitely until the hold is explicitly removed.
- Used in legal investigations, audits, or regulatory inquiries.
- Overrides time-based policies to ensure data preservation.

Both types can be applied to Azure SQL LTR backups, offering flexibility and compliance across different scenarios.

🧩 How Immutability Protects Against Ransomware

Immutable backups are a critical component of a layered defense strategy:

- Tamper-proof: even if attackers gain access, they cannot delete or encrypt immutable backups.
- Reliable recovery: organizations can restore clean data from immutable backups without paying ransom.
- Compliance-ready: meets regulatory requirements for data retention and integrity.

By enabling immutability for Azure SQL LTR backups, organizations can significantly reduce the risk of data loss and ensure business continuity.
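For context on what a WORM policy looks like in practice, here is a hedged sketch using the general-purpose Azure Blob Storage immutability cmdlets. This illustrates the underlying mechanism only; the LTR preview manages immutability for you, and the account and container names below are placeholders:

# Time-based immutability: lock a container's blobs for 30 days.
Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName "rg" `
    -StorageAccountName "backupaccount" -ContainerName "ltrbackups" `
    -ImmutabilityPeriod 30

# Legal hold: retain indefinitely until the tag is explicitly cleared.
Add-AzRmStorageContainerLegalHold -ResourceGroupName "rg" `
    -StorageAccountName "backupaccount" -ContainerName "ltrbackups" -Tag "audit2025"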
✅ Final Thoughts

The public preview of backup immutability for Azure SQL Database LTR backups is a major step forward in ransomware resilience and regulatory compliance. With support for both time-based and legal hold immutability, Azure empowers organizations to:

- Protect critical data from tampering or deletion.
- Meet global compliance standards.
- Recover quickly and confidently from cyberattacks.

Immutability is not just a feature; it is a foundational pillar of modern data protection. Documentation is available at Backup Immutability for Long-Term Retention Backups - Azure SQL Database | Microsoft Learn.

SSMS 21/22 Error Upload BACPAC file to Azure Storage
Hello All

In SSMS 20, I can use "Export Data-tier Application" to export a BACPAC file of an Azure SQL database and upload it to Azure Storage from the same machine. SSMS 21 gives an error message when doing the same export: it creates the BACPAC file but fails on the last step, "Uploading BACPAC file to Microsoft Azure Storage". The error message is:

"Could not load file or assembly 'System.IO.Hashing, Version=6.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' or one of its dependencies. The system cannot find the file specified. (Azure.Storage.Blobs)"

I tried a fresh installation of SSMS 21 on a brand-new machine (Windows 11): same issue. Can anyone advise? Thanks
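One possible workaround while the assembly issue is open: let SSMS export the .bacpac to a local file only, then upload it to blob storage yourself. A minimal sketch with Az.Storage; the account, container, and path are placeholders:

# Upload a locally exported .bacpac to a blob container.
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -UseConnectedAccount
Set-AzStorageBlobContent -Context $ctx -Container "bacpacs" `
    -File "C:\exports\mydb.bacpac" -Blob "mydb.bacpac"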
The Microsoft.Build.Sql project SDK is now generally available

Your database should be part of a holistic development process, where iterative development tools are coupled with automation for validation and deployment. As previously announced, the Microsoft.Build.Sql project SDK provides a cross-platform framework for your database-as-code such that the database objects are ready to be checked into source control and deployed via pipelines like any other modern application component. Today Microsoft.Build.Sql enters general availability as another step in the evolution of SQL database development.

Standardized SQL database as code

SQL projects are a .NET-based project type for SQL objects, compiling a folder of SQL scripts into a database artifact (.dacpac) for manual or continuous deployments. As a developer working with SQL projects, you're creating the T-SQL scripts that define the objects in the database. While the development framework around SQL projects presents a clear build and deploy process, there's no wrong way to incorporate SQL projects into your development cycle. The SQL objects in the project can be manually written or generated via automation, including through the graphical schema compare interfaces or the SqlPackage extract command.

Whether you're developing with SQL Server, Azure SQL, or SQL in Fabric, database development standardizes on a shared project format and the ecosystem of tooling around SQL projects. The same SQL projects tools, like the SqlPackage CLI, can be used either to deploy objects to a database or to update those object scripts from a database. Free development tools for SQL projects, like the SQL Database Projects extension for VS Code and SQL Server Data Tools in Visual Studio, bring the whole development team together.

The database model validation of a SQL project build provides early verification of the SQL syntax used in the project, before code is checked in or deployed. Code analysis for antipatterns that impact database design and performance can be enabled as part of the project build and extended. This code analysis capability adds in-depth feedback to your team's continuous integration or pre-commit checks. Objects in a SQL project are database objects you can have confidence in before they're deployed across your environments.

Evolving from original SQL projects

SQL projects converted to the Microsoft.Build.Sql SDK benefit from support for .NET 8, enabling cross-platform development and automation environments. While the original SQL project file format explicitly lists each SQL file, SDK-style projects are significantly simplified by including any .sql file in the SQL project's folder structure. Database references enable SQL projects to be constructed for applications where a single project isn't an effective representation, whether the database includes cross-database references or multiple development cycles contribute to the same database. Incorporate additional objects into a SQL project with database references through project references, .dacpac artifact references, and, new to Microsoft.Build.Sql, package references. Package references for database objects improve the agility and manageability of the release cycle of your database through improved visibility into versioning and simplified management of the referenced artifacts.
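Whatever reference style you use, the day-to-day loop stays short. A minimal sketch, assuming a hypothetical project file named MyDatabase.sqlproj and a standalone SqlPackage installation; the connection string is a placeholder:

# Build the SDK-style project into a .dacpac.
dotnet build ./MyDatabase.sqlproj    # emits bin/Debug/MyDatabase.dacpac

# Deploy the artifact with the SqlPackage CLI.
SqlPackage /Action:Publish `
    /SourceFile:"bin/Debug/MyDatabase.dacpac" `
    /TargetConnectionString:"Server=tcp:myserver.database.windows.net;Initial Catalog=MyDatabase;Authentication=Active Directory Default;"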
Converting existing projects

The Microsoft.Build.Sql project SDK is a superset of the functionality of the original SQL projects, enabling you to convert your current projects on a timeline that works best for you. The original SQL projects in SQL Server Data Tools (SSDT) continue to be supported through the Visual Studio lifecycle, providing years of support for your existing projects. Converting an existing SQL project to a Microsoft.Build.Sql project is currently a manual process: add a single line to the project file and remove several groups of lines. The resulting Microsoft.Build.Sql project file is generally easier to understand and iteratively develop, with significantly fewer merge conflicts than the original SQL projects. A command line tool, DacpacVerify, is now available to validate that your project conversion has completed without degrading the output .dacpac file. By creating a .dacpac before and after you upgrade the project file, you can use DacpacVerify to confirm the database model, database options, pre/post-deployment scripts, and SQLCMD variables match.

The road ahead

With SQL Server 2025 on the horizon, support for the SQL Server 2025 target platform will be introduced in a future Microsoft.Build.Sql release along with additional improvements to the SDK references. Many Microsoft.Build.Sql releases will coincide with releases of the DacFx .NET library and the SqlPackage CLI, with preview releases ahead of general availability releases several times a year. Feature requests and bug reports for the DacFx ecosystem, including Microsoft.Build.Sql, are managed through the GitHub repository.

With the v1 GA of Microsoft.Build.Sql, we're also looking ahead to continued iteration in the development tooling. In Visual Studio, the preview of SDK-style SSDT continues with new features introduced in each Visual Studio release. Plans for Visual Studio include project upgrade assistance in addition to the overall replacement of the existing SQL Server Data Tools. In the SQL Database Projects extension for VS Code, we're both ensuring SQL projects capabilities from Azure Data Studio are introduced and increasing the robustness of the VS Code project build experience.

The Microsoft.Build.Sql project SDK empowers database development to integrate with the development cycle, whether you're focused on reporting, web development, AI, or anything else. Use Microsoft.Build.Sql projects to branch, build, commit, and ship your database. Get started today from an existing database or with a new project, and get to know SQL projects from the documentation and DevOps samples.

Introducing new update policy for Azure SQL Managed Instance
We're excited to introduce the new SQL Server 2025 update policy for Azure SQL Managed Instance. Now in preview, the SQL Server 2025 update policy brings you the latest SQL engine innovation while retaining database portability to the new major release of SQL Server. The update policy is an instance configuration option that provides flexibility and allows you to choose between instant access to the latest SQL engine features and a fixed SQL engine feature set corresponding to the 2022 and 2025 major releases of SQL Server.

Regardless of the update policy chosen, you continue to benefit from Azure SQL platform innovation. New features and capabilities not related to the SQL engine, everything that makes Azure SQL Managed Instance a true PaaS service, are successively delivered to your Azure SQL Managed Instance resources.

Update policy for each modernization strategy

Always-up-to-date is a "perpetual" update policy. It has no end of lifetime and brings new SQL engine features to instances as soon as they are available in Azure. It enables you to always be at the forefront: to quickly adopt new yet production-ready SQL engine features, benefit from them in everyday operations, and keep a competitive edge without waiting for the next major release of SQL Server.

In contrast, the SQL Server 2022 and SQL Server 2025 update policies contain fixed sets of SQL engine features corresponding to the respective releases of SQL Server. They're optimized to fulfill regulatory compliance, contractual, or other requirements for database/workload portability from managed instance to SQL Server. Over time, they get security patches, fixes, and incremental functional improvements in the form of cumulative updates, but not new SQL engine features. They also have a limited lifetime, aligned with the period of mainstream support of the corresponding SQL Server releases. As the end of mainstream support for an update policy approaches, you should upgrade instances to a newer policy. Instances are automatically upgraded to the next more recent policy at the end of mainstream support of their existing update policy.

What's new in the SQL Server 2025 update policy

In short, instances with the SQL Server 2025 update policy benefit from all the SQL engine features that were gradually added to the Always-up-to-date policy over the past few years and are not available in the SQL Server 2022 update policy. Here are a few of the most notable features, with the complete list available in the update policy documentation:

- Optimized locking
- Mirroring in Fabric
- Regular expression functions
- Vector data type and functions
- JSON data type and aggregate functions
- Invoking HTTP REST endpoints

Best practices with the update policy feature

- Plan for the end of lifetime of the SQL Server 2022 update policy if you're using it today, and upgrade to a newer policy on your terms before the automatic upgrade kicks in.
- Make sure to add the update policy configuration to your deployment templates and scripts, so that you don't rely on system defaults that may change in the future (see the sketch after this list).
- Be aware that using some of the newly introduced features may require changing the database compatibility level.
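One way to pin the policy in automation, sketched with a generic ARM call because the dedicated cmdlet surface may vary by module version: the update policy is exposed on the managed instance resource as the databaseFormat property ("AlwaysUpToDate", "SQLServer2022"). The "SQLServer2025" value and the api-version below are assumptions to confirm against the update policy documentation:

# PATCH only the databaseFormat property of an existing managed instance.
# Subscription, resource group, and instance names are placeholders.
Invoke-AzRestMethod -Method PATCH `
    -Path "/subscriptions/<sub-id>/resourceGroups/rg/providers/Microsoft.Sql/managedInstances/mymi?api-version=2023-08-01-preview" `
    -Payload (@{ properties = @{ databaseFormat = "SQLServer2025" } } | ConvertTo-Json -Depth 5)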
Summary and next steps

Azure SQL Managed Instance just got the new SQL Server 2025 update policy. It brings the same set of SQL engine features that exist in the new SQL Server 2025 (currently in preview). Consider it if you have regulatory compliance, contractual, or other reasons for database/workload portability from Azure SQL Managed Instance to SQL Server 2025. Otherwise, use the Always-up-to-date policy, which always provides the latest features and benefits available to Azure SQL Managed Instance.

For more details, visit the update policy documentation. To stay up to date with the latest feature additions to Azure SQL Managed Instance, subscribe to the Azure SQL video channel, subscribe to the Azure SQL Blog feed, or bookmark the What's new in Azure SQL Managed Instance article, which is regularly updated.