Azure SQL

SSMS 21/22 Error Upload BACPAC file to Azure Storage
Hello All,

In SSMS 20, I can use "Export Data-tier Application" to export a BACPAC file of an Azure SQL database and upload it to Azure Storage from the same machine. SSMS 21 gives an error message when doing the same export: it creates the BACPAC file but fails on the last step, "Uploading BACPAC file to Microsoft Azure Storage". The error message is "Could not load file or assembly 'System.IO.Hashing, Version=6.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' or one of its dependencies. The system cannot find the file specified. (Azure.Storage.Blobs)". I tried a fresh installation of SSMS 21 on a brand-new machine (Windows 11), same issue. Can anyone advise? Thanks

The Microsoft.Build.Sql project SDK is now generally available
Your database should be part of a holistic development process, where iterative development tools are coupled with automation for validation and deployment. As previously announced, the Microsoft.Build.Sql project SDK provides a cross-platform framework for your database-as-code such that the database objects are ready to be checked into source control and deployed via pipelines like any other modern application component. Today Microsoft.Build.Sql enters general availability as another step in the evolution of SQL database development.

Standardized SQL database as code

SQL projects are a .NET-based project type for SQL objects, compiling a folder of SQL scripts into a database artifact (.dacpac) for manual or continuous deployments. As a developer working with SQL projects, you’re creating the T-SQL scripts that define the objects in the database. While the development framework around SQL projects presents a clear build and deploy process, there’s no wrong way to incorporate SQL projects into your development cycle. The SQL objects in the project can be manually written or generated via automation, including through the graphical schema compare interfaces or the SqlPackage extract command.

Whether you’re developing with SQL Server, Azure SQL, or SQL in Fabric, database development standardizes on a shared project format and the ecosystem of tooling around SQL projects. The same SQL projects tools, like the SqlPackage CLI, can be used to either deploy objects to a database or update those object scripts from a database. Free development tools for SQL projects, like the SQL database projects extension for VS Code and SQL Server Data Tools in Visual Studio, bring the whole development team together. The database model validation of a SQL project build provides early verification of the SQL syntax used in the project, before code is checked in or deployed.
Code analysis for antipatterns that impact database design and performance can be enabled as part of the project build and extended. This code analysis capability adds in-depth feedback to your team’s continuous integration or pre-commit checks as part of SQL projects. Objects in a SQL project are database objects you can have confidence in before they’re deployed across your environments.

Evolving from original SQL projects

SQL projects converted to the Microsoft.Build.Sql SDK benefit from support for .NET 8, enabling cross-platform development and automation environments. While the original SQL project file format explicitly lists each SQL file, SDK-style projects are significantly simplified by including any .sql file in the SQL project’s folder structure. Database references enable SQL projects to be constructed for applications where a single project isn’t an effective representation, whether the database includes cross-database references or multiple development cycles contribute to the same database. Incorporate additional objects into a SQL project with database references through project references, .dacpac artifact references, and, new to Microsoft.Build.Sql, package references. Package references for database objects improve the agility and manageability of your database’s release cycle through improved visibility into versioning and simplified management of the referenced artifacts.

Converting existing projects

The Microsoft.Build.Sql project SDK is a superset of the functionality of the original SQL projects, enabling you to convert your current projects on a timeline that works best for you. The original SQL projects in SQL Server Data Tools (SSDT) continue to be supported through the Visual Studio lifecycle, providing years of support for your existing original projects. Converting an existing SQL project to a Microsoft.Build.Sql project is currently a manual process: add a single line to the project file and remove several groups of lines.
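A converted SDK-style project file can be as small as the following sketch. The project name and target platform value here are illustrative; check the Microsoft.Build.Sql documentation for the current SDK version number:

```xml
<Project Sdk="Microsoft.Build.Sql/1.0.0">
  <PropertyGroup>
    <Name>MyDatabase</Name>
    <!-- Target platform, e.g. Azure SQL Database -->
    <DSP>Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider</DSP>
  </PropertyGroup>
  <!-- No explicit <Build Include="..."/> items are needed: every .sql file
       under the project folder is included in the build automatically. -->
</Project>
```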
The resulting Microsoft.Build.Sql project file is generally easier to understand and iteratively develop, with significantly fewer merge conflicts than the original SQL projects. A command line tool, DacpacVerify, is now available to validate that your project conversion has completed without degrading the output .dacpac file. By creating a .dacpac before and after you upgrade the project file, you can use DacpacVerify to confirm the database model, database options, pre/post-deployment scripts, and SQLCMD variables match.

The road ahead

With SQL Server 2025 on the horizon, support for the SQL Server 2025 target platform will be introduced in a future Microsoft.Build.Sql release along with additional improvements to the SDK references. Many Microsoft.Build.Sql releases will coincide with releases of the DacFx .NET library and the SqlPackage CLI, with preview releases ahead of general availability releases several times a year. Feature requests and bug reports for the DacFx ecosystem, including Microsoft.Build.Sql, are managed through the GitHub repository.

With the v1 GA of Microsoft.Build.Sql, we’re also looking ahead to continued iteration in the development tooling. In Visual Studio, the preview of SDK-style SSDT continues with new features introduced in each Visual Studio release. Plans for Visual Studio include project upgrade assistance in addition to the overall replacement of the existing SQL Server Data Tools. In the SQL projects extension for VS Code, we’re both ensuring SQL projects capabilities from Azure Data Studio are introduced and increasing the robustness of the VS Code project build experience.

The Microsoft.Build.Sql project SDK empowers database development to integrate with the development cycle, whether you're focused on reporting, web development, AI, or anything else. Use Microsoft.Build.Sql projects to branch, build, commit, and ship your database – get started today from an existing database or with a new project.
Get to know SQL projects from the documentation and DevOps samples.

🔐 Public Preview: Backup Immutability for Azure SQL Database LTR Backups
The Ransomware Threat Landscape

Ransomware attacks have become one of the most disruptive cybersecurity threats in recent years. These attacks typically follow a destructive pattern:

- Attackers gain unauthorized access to systems.
- They encrypt or delete critical data.
- They demand ransom in exchange for restoring access.

Organizations without secure, tamper-proof backups are often left with no choice but to pay the ransom or suffer significant data loss. This is where immutable backups play a critical role in defense.

🛡️ What Is Backup Immutability?

Backup immutability ensures that once a backup is created, it cannot be modified or deleted for a specified period. This guarantees:

- Protection against accidental or malicious deletion.
- Assurance that backups remain intact and trustworthy.
- Compliance with regulatory requirements for data retention and integrity.

🚀 Azure SQL Database LTR Backup Immutability (Public Preview)

Microsoft has introduced backup immutability for Long-Term Retention (LTR) backups in Azure SQL Database, now available in public preview. This feature allows organizations to apply Write Once, Read Many (WORM) policies to LTR backups stored in Azure Blob Storage.

Key features:

- Time-based immutability: Locks backups for a defined duration (e.g., 30 days).
- Legal hold immutability: Retains backups indefinitely until a legal hold is explicitly removed.
- Tamper-proof storage: Backups cannot be deleted or altered, even by administrators.

This ensures that LTR backups remain secure and recoverable, even in the event of a ransomware attack.

📜 Regulatory Requirements for Backup Immutability

Many global regulations mandate immutable storage to ensure data integrity and auditability. Here are some key examples:

- USA, SEC Rule 17a-4(f): Requires broker-dealers to store records in WORM-compliant systems.
- USA, FINRA: Mandates financial records be preserved in a non-rewriteable, non-erasable format.
- USA, HIPAA: Requires healthcare organizations to ensure the integrity and availability of electronic health records.
- EU, GDPR: Emphasizes data integrity and the ability to demonstrate compliance through audit trails.
- Global, ISO 27001 and PCI-DSS: Require secure, tamper-proof data retention for audit and compliance purposes.

Azure’s immutable storage capabilities help organizations meet these requirements by ensuring that backup data remains unchanged and verifiable.

🕒 Time-Based vs. Legal Hold Immutability

⏱️ Time-based immutability:

- Locks data for a predefined period (e.g., 30 days).
- Ideal for routine compliance and operational recovery.
- Automatically expires after the retention period.

📌 Legal hold immutability:

- Retains data indefinitely until the hold is explicitly removed.
- Used in legal investigations, audits, or regulatory inquiries.
- Overrides time-based policies to ensure data preservation.

Both types can be applied to Azure SQL LTR backups, offering flexibility and compliance across different scenarios.

🧩 How Immutability Protects Against Ransomware

Immutable backups are a critical component of a layered defense strategy:

- Tamper-proof: Even if attackers gain access, they cannot delete or encrypt immutable backups.
- Reliable recovery: Organizations can restore clean data from immutable backups without paying ransom.
- Compliance-ready: Meets regulatory requirements for data retention and integrity.

By enabling immutability for Azure SQL LTR backups, organizations can significantly reduce the risk of data loss and ensure business continuity.

✅ Final Thoughts

The public preview of backup immutability for Azure SQL Database LTR backups is a major step forward in ransomware resilience and regulatory compliance. With support for both time-based and legal hold immutability, Azure empowers organizations to:

- Protect critical data from tampering or deletion.
- Meet global compliance standards.
- Recover quickly and confidently from cyberattacks.
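The two immutability modes described above can be captured in a small model. This is a toy sketch of the WORM semantics, not the Azure implementation: a time-based lock blocks deletion until the retention window expires, and a legal hold blocks deletion indefinitely until it is explicitly removed.

```python
from datetime import datetime, timedelta

class ImmutableBackup:
    """Toy model of WORM semantics for an LTR backup."""

    def __init__(self, created, retention_days=30):
        self.created = created
        self.retention = timedelta(days=retention_days)
        self.legal_hold = False

    def can_delete(self, now):
        # A legal hold overrides any time-based policy.
        if self.legal_hold:
            return False
        # Time-based immutability: deletion is allowed only after
        # the retention window has expired.
        return now >= self.created + self.retention
```

Note how the legal-hold check comes first: even an expired time-based lock does not permit deletion while a hold is in place, mirroring the "overrides time-based policies" behavior above.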
Immutability is not just a feature—it’s a foundational pillar of modern data protection. Documentation is available at Backup Immutability for Long-Term Retention Backups - Azure SQL Database | Microsoft Learn.

Introducing new update policy for Azure SQL Managed Instance
We’re excited to introduce the new SQL Server 2025 update policy for Azure SQL Managed Instance. Now in preview, the SQL Server 2025 update policy brings you the latest SQL engine innovation while retaining database portability to the new major release of SQL Server. Update policy is an instance configuration option that provides flexibility and allows you to choose between instant access to the latest SQL engine features and a fixed SQL engine feature set corresponding to the 2022 and 2025 major releases of SQL Server. Regardless of the update policy chosen, you continue to benefit from Azure SQL platform innovation. New features and capabilities not related to the SQL engine – everything that makes Azure SQL Managed Instance a true PaaS service – are successively delivered to your Azure SQL Managed Instance resources.

Update policy for each modernization strategy

Always-up-to-date is a “perpetual” update policy. It has no end of lifetime and brings new SQL engine features to instances as soon as they are available in Azure. It enables you to always be at the forefront – to quickly adopt new yet production-ready SQL engine features, benefit from them in everyday operations, and keep a competitive edge without waiting for the next major release of SQL Server.

In contrast, the SQL Server 2022 and SQL Server 2025 update policies contain fixed sets of SQL engine features corresponding to the respective releases of SQL Server. They’re optimized to fulfill regulatory compliance, contractual, or other requirements for database/workload portability from managed instance to SQL Server. Over time, they get security patches, fixes, and incremental functional improvements in the form of Cumulative Updates, but not new SQL engine features. They also have a limited lifetime, aligned with the period of mainstream support of SQL Server releases. As the end of mainstream support for the update policy approaches, you should upgrade instances to a newer policy.
Instances will be automatically upgraded to the next more recent policy at the end of mainstream support of their existing update policy.

What’s new in SQL Server 2025 update policy

In short, instances with the SQL Server 2025 update policy benefit from all the SQL engine features that were gradually added to the Always-up-to-date policy over the past few years and are not available in the SQL Server 2022 update policy. Let’s name a few of the most notable features, with the complete list available in the update policy documentation:

- Optimized locking
- Mirroring in Fabric
- Regular expression functions
- Vector data type and functions
- JSON data type and aggregate functions
- Invoking HTTP REST endpoints

Best practices with the update policy feature

- Plan for the end of lifetime of the SQL Server 2022 update policy if you’re using it today, and upgrade to a newer policy on your terms before the automatic upgrade kicks in.
- Make sure to add update policy configuration to your deployment templates and scripts, so that you don’t rely on system defaults that may change in the future.
- Be aware that using some of the newly introduced features may require changing the database compatibility level.

Summary and next steps

Azure SQL Managed Instance just got the new SQL Server 2025 update policy. It brings the same set of SQL engine features that exist in the new SQL Server 2025 (currently in preview). Consider it if you have regulatory compliance, contractual, or other reasons for database/workload portability from Azure SQL Managed Instance to SQL Server 2025. Otherwise, use the Always-up-to-date policy, which always provides the latest features and benefits available to Azure SQL Managed Instance. For more details visit the update policy documentation.
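Following the best practice of pinning the policy in your deployment scripts, an Azure CLI sketch might look like the following. The --database-format parameter is how the update policy is surfaced in existing az sql mi tooling; the SQLServer2025 value is an assumption for the new preview policy, so verify the accepted values in your CLI version:

```shell
# Pin the update policy explicitly instead of relying on defaults.
# <resource-group> and <managed-instance> are placeholders.
az sql mi update \
  --resource-group <resource-group> \
  --name <managed-instance> \
  --database-format SQLServer2025
```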
To stay up to date with the latest feature additions to Azure SQL Managed Instance, subscribe to the Azure SQL video channel, subscribe to the Azure SQL Blog feed, or bookmark the What’s new in Azure SQL Managed Instance article with regular updates.

MSSQL Extension for VS Code: Fabric Integration and GitHub Copilot Slash Commands (Public Preview)
The MSSQL extension for VS Code continues to evolve, delivering features that make SQL development more integrated, more consistent, and more developer-friendly. In version v1.36.0, we’re announcing the Public Preview of Fabric Connectivity (Browse), SQL Database in Fabric Provisioning, and GitHub Copilot Slash Commands — three capabilities that bring Microsoft Fabric and AI-powered assistance directly into your development workflow inside Visual Studio Code.

What’s new in MSSQL extension for VS Code v1.36

This release introduces three major capabilities designed to streamline the SQL development experience:

- Fabric Connectivity (Public Preview) — Browse and connect to Fabric workspaces directly from the Connection dialog using Microsoft Entra ID, with tree-view navigation and search.
- SQL Database in Fabric Provisioning (Public Preview) — Create new SQL databases in Fabric directly from the Deployments page, with instant connection in VS Code.
- GitHub Copilot Slash Commands (Public Preview) — Use structured slash commands in GitHub Copilot Agent Mode to connect, explore schemas, and run queries directly from chat.

In addition to these major features, this release includes multiple quality and performance improvements:

- Enhanced performance and usability of the query results grid.
- Fixed accessibility issues affecting error messages and UI feedback.
- Addressed edge case errors in GitHub Copilot Agent Mode when switching connections.

Fabric Connectivity (Public Preview)

The MSSQL extension for Visual Studio Code now includes Fabric connectivity, making it easier than ever to browse and connect to your Fabric workspaces directly from your development environment. This new integration eliminates the friction of connection strings and brings Fabric resources into the familiar VS Code experience.
With this update, you can authenticate once, browse all your workspaces in an intuitive tree view, and establish connections to your Fabric resources in just a few clicks—all without leaving VS Code.

Key highlights

- Dedicated Fabric Experience – A new “Fabric” connection type joins the existing “Parameters” and “Azure” options in the connection dialog.
- Seamless Authentication – Leverage Microsoft Entra ID for secure, one-click authentication with persistent sign-in.
- Intelligent Workspace Browsing – Explore Fabric workspaces in an intuitive tree view that loads resources on-demand.
- Smart Search & Discovery – Find workspaces quickly with real-time search and relevant results at the top.
- Cross-extension Support – Use the “Open in MSSQL” option directly from the Fabric extension or Portal.

SQL database in Fabric Provisioning (Public Preview)

With Fabric Provisioning, you can now create SQL databases in Fabric without leaving VS Code. The new provisioning flow allows you to authenticate, select or create a workspace, name your database, and connect instantly—all in under three minutes. By bringing provisioning directly into the Deployments page, this experience eliminates the need to use the Fabric Portal or Azure Portal, reducing context switching and making early prototyping significantly faster.

Key highlights

- Simple Provisioning Flow – Authenticate, select or create a workspace, name your database, and create.
- Immediate Connection – The new database is automatically added to your connections for instant querying.
- Consistent Experience – Provisioning flow aligns with other backend options, like local SQL Server containers.
- Capacity Awareness – Disabled workspaces are clearly shown with guidance if capacity isn’t available.

GitHub Copilot Slash Commands (Public Preview)

The MSSQL extension now contributes slash commands to GitHub Copilot Agent Mode, giving you a faster, more structured way to interact with your databases directly in chat.
Instead of typing full prompts, you can invoke commands like /connect, /listDatabases, or /runQuery to perform common tasks with less effort and greater predictability.

Key highlights

- Connection management – Establish, switch, or disconnect database connections with structured commands.
- Schema exploration – List schemas, tables, views, functions, or generate schema diagrams directly in chat.
- Query execution – Run queries or request optimizations with simple commands.

Other updates

- Connection reliability – Fixed issues that prevented users from establishing connections.
- Fabric connectivity stability – Resolved query failures when working with Fabric tables.
- Authentication persistence – Fixed a bug where saved passwords were lost after failed connections.
- Performance & memory – Addressed excessive memory usage and unresponsive queries.

Conclusion

The v1.36 release marks a major milestone by introducing Fabric Connectivity (Browse), SQL Database in Fabric Provisioning, and GitHub Copilot Slash Commands in Public Preview. Together, these capabilities simplify connectivity, accelerate prototyping, and bring AI-powered assistance directly into your SQL development workflows in VS Code. Combined with important reliability fixes, this update makes the MSSQL extension more powerful, stable, and developer-friendly than ever.

If there’s something you’d love to see in a future update, here’s how you can contribute:

- 💬 GitHub discussions – Share your ideas and suggestions to improve the extension
- ✨ New feature requests – Request missing capabilities and help shape future updates
- 🐞 Report bugs – Help us track down and fix issues to make the extension more reliable

Want to see these features in action?

- Fabric Connectivity demo
- SQL database in Fabric provisioning demo
- Full playlist of demos

Thanks for being part of the journey—happy coding! 🚀

Everything you need to know about TDE key management for database restore
Transparent data encryption (TDE) in Azure SQL with customer-managed key (CMK) supports Bring Your Own Key (BYOK) for data protection at rest and facilitates separation of duties in key and data management. With customer-managed TDE, the user manages the lifecycle of keys (creation, upload, rotation, deletion), usage permissions, and auditing of key operations. The key used for encrypting the Database Encryption Key (DEK), known as the TDE protector, is an asymmetric key managed by the customer and stored in Azure Key Vault (AKV).

Once a database is encrypted with TDE using a Key Vault key, new backups are encrypted with the same TDE protector. Changing the TDE protector does not update old backups to use the new protector. To restore a backup encrypted with a Key Vault TDE protector, ensure that the key material is accessible to the target server. The TDE feature was designed with the requirement that both the current and previous TDE protectors are necessary for successful restores. It is recommended to retain all previous versions of the TDE protector in the key vault to enable the restore of database backups. This blog post will provide detailed information on which keys should be available for a database restore and the reasons why they are necessary.

Encryption of the transaction log file

To understand which keys are required for a point-in-time restore, it is necessary to first explain how transaction log encryption operates. The SQL Server Database Engine divides each physical log file into several virtual log files (VLFs). Each VLF has its own header. Encrypting the entire log file in one single sweep is not possible, so each VLF is encrypted individually and the encryptor information is stored in the VLF header. When the log manager needs to read a particular VLF for recovery, it uses the encryptor information in the VLF header to locate the encryptor and decrypt the VLF.
Unencrypted transaction log

Consider the following sequence of blocks as the logical log file, where each block represents a Virtual Log File (VLF). Initially, we are in VLF1, and the current Log Sequence Number (LSN) is within VLF1.

Transparent data encryption enabled

When TDE is enabled on the database, the current VLF is filled with non-operational log records, and a new VLF (VLF2) is created. Each VLF has one header containing the encryptor information, so whenever the encryptor information changes, the log rolls over to the next VLF boundary. The subsequent VLF will contain the new DEK (DEK_1) and the thumbprint of the encryptor of the DEK in the header. Any additions to the log file will be added to VLF2 and will be encrypted. When VLF2 reaches capacity, a new VLF (VLF3) will be generated. Since encryption is enabled, the new VLF will contain the DEK and its information in its header, and it will also be encrypted.

Key rotation

When a new DEK is generated or its encryptor changes, the log rolls over to the next VLF boundary. The new VLF (VLF4) will contain the updated DEK and encryptor information. For example, if a new DEK (DEK_2) is generated via key rotation in the Azure Portal, VLF3 will fill with non-operational commands before VLF4 is created and encrypted by the new DEK.

A database can use multiple keys at a single time

Currently, for server and database level CMK, after a key rotation, some of the active VLFs may still be encrypted with an old key. As key rotations are allowed before these VLFs are flushed, the database can end up using multiple keys simultaneously. To ensure that at a certain point in time the database is using only the current encryption protector (primary generation p) and the old encryption protector (generation p-1), we use the following approach: block a protector change operation when there is any active VLF using an old thumbprint different from the current encryption protector.
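This gate can be expressed as a one-line check. The function below is a hypothetical model for illustration, not the actual engine code:

```python
def can_rotate_protector(active_vlf_thumbprints, current_protector):
    """A protector change is allowed only when every active VLF is already
    encrypted under the current protector; otherwise the change is blocked,
    so the database never depends on more than two protector generations."""
    return all(t == current_protector for t in active_vlf_thumbprints)
```

In the walkthrough that follows, active VLFs still under Thumbprint A block the change to Thumbprint C while the current protector is Thumbprint B.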
When a customer attempts a protector change or the key is being auto-rotated, we verify whether there are any "active" VLFs using the old thumbprint. If so, the protector change is blocked. If there are no active VLFs using the old thumbprint, we take a log backup to flush the inactive VLFs, then rotate the protector and wait for it to fully rotate. This approach ensures that the database will use two keys at any given time.

Example

- Time t0 = DB is created without encryption.
- Time t1 = DB is protected by Thumbprint A.
- Time t2 = DB protector is rotated to Thumbprint B.
- Time t3 = Customer requests a protector change to Thumbprint C. We check the active VLFs; they are using Thumbprint A, so we block the change. This ensures that currently the DB is only using Thumbprint A and Thumbprint B.
- Time t4 = Customer requests a protector change to Thumbprint C. (a) We check the active VLFs, and none of them are using Thumbprint A. (b) We solicit a log backup, which should flush the inactive VLFs using Thumbprint A. (c) We rotate the encryption protector, succeeding the operation only when both (b) and (c) are fully complete. This ensures that after time t4, the DB is only using Thumbprint B and Thumbprint C.

Point-in-time restore

Based on the provided information, it is evident that multiple keys are necessary for a restore if key rotations have taken place. To restore a backup encrypted with a TDE protector from Azure Key Vault, ensure that the key material is accessible to the target server. Therefore, we recommend that you keep the old versions of the TDE protector in Azure Key Vault, so database backups can be restored. To identify missing keys, run the Get-AzSqlServerKeyVaultKey cmdlet for the target server or Get-AzSqlInstanceKeyVaultKey for the target managed instance to return the list of available keys. To ensure all backups can be restored, make sure the target server for the restore has access to all of the keys needed.
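The key requirements for a point-in-time restore can be sketched as a small calculator. This is a hypothetical helper for illustration only; the 8-day window and the p-2/p-3 generations follow the rules described in this post:

```python
from datetime import date, timedelta

def active_thumbprint(rotations, when):
    """Return the thumbprint of the protector active at `when`.
    `rotations` is a date-sorted list of (activation_date, thumbprint)."""
    current = None
    for activated, thumb in rotations:
        if activated <= when:
            current = thumb
        else:
            break
    return current

def required_thumbprints(rotations, restore_point, backup_window_days=8):
    """Thumbprints needed for a restore to `restore_point`: the protector at
    the restore point (p), the protector at the start of the full-backup
    window (p-1), plus the two generations before that (p-2, p-3), since
    VLFs may remain active under earlier keys."""
    window_start = restore_point - timedelta(days=backup_window_days)
    needed = {active_thumbprint(rotations, restore_point),
              active_thumbprint(rotations, window_start)}
    older = [t for d, t in rotations if d <= window_start]
    needed.update(older[-3:-1])  # the two generations preceding p-1
    needed.discard(None)         # no protector yet before first rotation
    return needed
```

Feeding in a rotation history matching the timeline in the example that follows (A, B, C, then D, with D active at the restore point and C active eight days earlier) yields the set {A, B, C, D}.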
These keys don't need to be marked as TDE protector. Backed-up log files remain encrypted with the original TDE protector, even if it was rotated and the database is now using a new TDE protector. At restore time, both keys are needed to restore the database. If the log file is using a TDE protector stored in Azure Key Vault, this key is needed at restore time, even if the database has been changed to use service-managed TDE in the meantime.

Point-in-time restore example

When a customer wants to restore data to a specific point in time (tx), they will need the current encryption protector (p) and the old encryption protector (p-1) from the period [tx-8 days] to [tx]. The reason for using tx-8 is that there is a full backup every 7 days, so we expect to have a complete backup within the last 8 days. Because VLFs may remain active with the earlier key, the system is designed to use the two latest thumbprints (p-2 and p-3) from outside the buffer period.

Consider the following timeline: The PITR request is made for 8/20/2025 (tx), at which point Thumbprint D (p) is active. To ensure we have a full backup, we subtract 8 days, bringing us to 8/12/2025 (tx-8), when Thumbprint C (p-1) is active. Since VLFs might still be active with the previous key, we also need Thumbprint B (p-2) and Thumbprint A (p-3). The required thumbprints for this point-in-time restore are A, B, C and D.

Conclusion

To restore a backup encrypted with a Key Vault TDE protector, it is essential to ensure that the key material is accessible to the target server. It is recommended to retain all old versions of the TDE protector in the key vault to facilitate the restore of database backups.

Introducing the Azure SQL hub: A simpler, guided entry into Azure SQL
Choosing the right Azure SQL service can be challenging. To make this easier, we built the Azure SQL hub, a new home for everything related to Azure SQL in the Azure portal. Whether you’re new to Azure SQL or an experienced user, the hub helps you find the right service quickly and decide, without disrupting your existing workflows.

For existing users: Your current workflows remain unchanged. The only visible update is a streamlined navigation pane where you access Azure SQL resources.

For new users: Start from the Azure SQL hub home page. Get personalized recommendations by answering a few quick questions or chatting with Azure portal Copilot. Or compare services side by side and explore key resources, all without leaving the portal.

Searching for "azure sql" in the main search box or the marketplace is also an efficient way to get to the Azure SQL hub. Answer a few questions to get our recommendation, and use Copilot to refine your requirements. Get a detailed side-by-side comparison without leaving the hub.

Still deciding? Explore a selection of Azure SQL services for free. This option takes you straight to the resource creation page with a pre-applied free offer.

Try the Azure SQL hub today in the Azure portal, and share your feedback in the comments!

How to take secure, on-demand backups on SQL Managed Instance
In this blog we discuss a couple of Azure technologies that can be used to establish a secure connection between Azure SQL Managed Instance and an Azure Storage blob container for the purpose of taking native, copy-only backups.

Preparing for the Deprecation of TLS 1.0 and 1.1 in Azure Databases
Microsoft announced the retirement of TLS 1.0 and TLS 1.1 for Azure services, including Azure SQL Database, Azure SQL Managed Instance, Cosmos DB, and Azure Database for MySQL, by August 31, 2025. Customers are required to upgrade to a more secure minimum TLS protocol version of TLS 1.2 for client-server connections to safeguard data in transit and meet the latest security standards.

The retirement of TLS 1.0 and 1.1 for Azure databases was originally scheduled for August 2024. To support customers in completing their upgrades, the deadline was extended to August 31, 2025. Starting August 31, 2025, we will force-upgrade servers with a minimum TLS version of 1.0 or 1.1 to TLS 1.2; connections using TLS 1.0 or 1.1 will be disallowed, and connectivity will fail. To avoid potential service interruptions, we strongly recommend customers complete their migration to TLS 1.2 before August 31, 2025.

Why TLS 1.0 and 1.1 Are Being Deprecated

TLS (Transport Layer Security) protocols are vital in ensuring encrypted data transmission between clients and servers. However, TLS 1.0 and 1.1, introduced in 1999 and 2006 respectively, are now outdated and vulnerable to modern attack vectors. By retiring these versions, Microsoft is taking a proactive approach to enhance the security landscape for Azure services such as Azure databases.

Security Benefits of Upgrading to TLS 1.2

- Enhanced encryption algorithms: TLS 1.2 provides stronger cryptographic protocols, reducing the risk of exploitation.
- Protection against known vulnerabilities: Deprecated versions are susceptible to attacks such as BEAST and POODLE, which TLS 1.2 addresses.
- Compliance with industry standards: Many regulations, including GDPR, PCI DSS, and HIPAA, mandate the use of secure, modern TLS versions.
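The client-side counterpart of the server's minimum TLS setting can be enforced in application code as well. As a sketch using Python's standard ssl module, a client can build a context that refuses anything below TLS 1.2, so connections fail fast rather than being rejected at the server:

```python
import ssl

# Build a client-side SSL context that only negotiates TLS 1.2 or later,
# matching the new Azure-side minimum for database connections.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

Most modern database drivers accept such a context (or an equivalent connection-string option), so the same policy can be applied wherever the client stack allows it.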
How to Verify and Update TLS Settings for Azure Database Services

For instructions on how to verify your Azure database is configured with a minimum TLS version of 1.2, or to upgrade the minimum TLS setting to 1.2, follow the respective guide below for your database service.

Azure SQL Database and Azure SQL Managed Instance

The Azure SQL Database and SQL Managed Instance minimum TLS version setting allows customers to choose which version of TLS their database uses.

Azure SQL Database

To identify clients that are connecting to your Azure SQL DB using TLS 1.0 and 1.1, SQL audit logs must be enabled. With auditing enabled you can view client connections: Connectivity settings - Azure SQL Database and SQL database in Fabric | Microsoft Learn

To configure the minimum TLS version for your Azure SQL DB using the Azure portal, Azure PowerShell or Azure CLI: Connectivity settings - Azure SQL Database and SQL database in Fabric | Microsoft Learn

Azure SQL Managed Instance

To identify clients that are connecting to your Azure SQL MI using TLS 1.0 and 1.1, auditing must be enabled. With auditing enabled, you can consume audit logs using Azure Storage, Event Hubs or Azure Monitor Logs to view client connections: Configure auditing - Azure SQL Managed Instance | Microsoft Learn

To configure the minimum TLS version for your Azure SQL MI using Azure PowerShell or Azure CLI: Configure minimal TLS version - managed instance - Azure SQL Managed Instance | Microsoft Learn

Azure Cosmos Database

The minimum service-wide accepted TLS version for Azure Cosmos Database is TLS 1.2, but this selection can be changed on a per account basis.
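As a sketch of the per-account setting, the Azure CLI exposes a minimal TLS parameter on Cosmos DB accounts; verify the parameter name and accepted values against your CLI version:

```shell
# Enforce TLS 1.2 as the minimum accepted version on a Cosmos DB account.
# <resource-group> and <account-name> are placeholders.
az cosmosdb update \
  --resource-group <resource-group> \
  --name <account-name> \
  --minimal-tls-version Tls12
```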
To verify the minimalTlsVersion property on your Cosmos DB account: Self-serve minimum tls version enforcement in Azure Cosmos DB - Azure Cosmos DB | Microsoft Learn

To configure the minimum TLS version for your Cosmos DB account using the Azure Portal, Azure PowerShell, Azure CLI or ARM template: Self-serve minimum tls version enforcement in Azure Cosmos DB - Azure Cosmos DB | Microsoft Learn

Azure Database for MySQL

Azure Database for MySQL supports encrypted connections using TLS 1.2 by default, and all incoming connections with TLS 1.0 and TLS 1.1 are denied by default, though users are allowed to change the setting.

To verify the tls_version server parameter configured for your Azure DB for MySQL server using the MySQL command-line interface: Encrypted Connectivity Using TLS/SSL - Azure Database for MySQL - Flexible Server | Microsoft Learn

To configure the minimum TLS version for your Azure DB for MySQL server: Configure Server Parameters - Azure Portal - Azure Database for MySQL - Flexible Server | Microsoft Learn

If your database is currently configured with a minimum TLS setting of TLS 1.2, no action is required.

Conclusion

The deprecation of TLS 1.0 and 1.1 marks a significant milestone in enhancing the security of Azure databases. By transitioning to TLS 1.2, users can ensure highly secure encrypted data transmission, compliance with industry standards, and robust protection against evolving cyber threats. Upgrade to TLS 1.2 now to prepare for this change and maintain secure, compliant database connectivity settings.