# Azure SQL Security
## Geo-Replication and Transparent Data Encryption Key Management in Azure SQL Database
Transparent data encryption (TDE) in Azure SQL with customer-managed key (CMK) enables the Bring Your Own Key (BYOK) scenario for data protection at rest. With customer-managed TDE, the customer is responsible for, and in full control of, key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing of operations on keys. Geo-replication and failover groups in Azure SQL Database create readable secondary replicas across different regions to support disaster recovery and high availability. There are several considerations when configuring geo-replication for Azure SQL logical servers that use TDE with CMK. This blog post provides detailed information about setting up geo-replication for TDE-enabled databases.

### Understanding the difference between a TDE protector and a server/database key

To understand the geo-replication considerations, I first need to explain the roles of the TDE protector and a server key. The TDE protector is the key responsible for encrypting the Database Encryption Key (DEK), which in turn encrypts the actual database files and transaction logs. It sits at the top of the encryption hierarchy. A server or database key is a broader concept that refers to any key registered at the server or database level in Azure SQL. One of the registered server or database keys is designated as the TDE protector. Multiple keys can be registered, but only one acts as the active protector at any given time.

### Geo-replication considerations

#### Azure Key Vault considerations

In active geo-replication and failover group scenarios, the primary and secondary SQL logical servers can be linked to an Azure Key Vault in any region; they do not need to be located in the same region. Connecting both SQL logical servers to the same key vault reduces the potential for discrepancies in key material that can occur when using separate key vaults. Azure Key Vault incorporates multiple layers of redundancy to ensure the continued availability of keys and key vaults in the event of service or regional failures. The following diagram represents a paired-region configuration (primary and secondary) for an Azure Key Vault cross-failover with Azure SQL set up for geo-replication using a failover group.

#### Azure SQL considerations

The primary consideration is to ensure that the server or database keys are present on both the primary and secondary SQL logical servers or databases, and that appropriate permissions have been granted for these keys within Azure Key Vault. The TDE protector used on the primary does not need to be identical to the one used on the secondary; it is sufficient to have the same key material available on both the primary and secondary systems. You can add keys to a SQL logical server with the Azure portal, PowerShell, Azure CLI, or REST API. Assign user-assigned managed identities (UMI) to the primary and secondary SQL servers for flexibility across regions and failover scenarios, and grant the Get, WrapKey, and UnwrapKey permissions to these identities on the key vault. For Azure Key Vaults using Azure RBAC for access configuration, the server identity needs the Key Vault Crypto Service Encryption User role to use the key for encryption and decryption operations.
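To make this concrete, here is a minimal PowerShell sketch using the Az module. The vault, identity, server, and resource group names (contoso-kv, sql-umi, tdesql, tdedr, TDEDemo) and the key version placeholder are hypothetical, and the access-policy step applies only to vaults that are not using Azure RBAC:

```powershell
# Register the same Key Vault key on both logical servers (names are hypothetical).
$keyId = 'https://contoso-kv.vault.azure.net/keys/TDECMK/<key-version>'
Add-AzSqlServerKeyVaultKey -ResourceGroupName 'TDEDemo' -ServerName 'tdesql' -KeyId $keyId
Add-AzSqlServerKeyVaultKey -ResourceGroupName 'TDEDemo' -ServerName 'tdedr' -KeyId $keyId

# For vaults using access policies: grant the user-assigned managed identity
# the Get, WrapKey, and UnwrapKey key permissions.
$umi = Get-AzUserAssignedIdentity -ResourceGroupName 'TDEDemo' -Name 'sql-umi'
Set-AzKeyVaultAccessPolicy -VaultName 'contoso-kv' -ObjectId $umi.PrincipalId `
    -PermissionsToKeys get, wrapKey, unwrapKey
```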
### Different encryption key combinations

Based on interactions with customers and analysis of live-site incidents involving geo-replication with TDE CMK, we noticed that many clients do not configure the primary and secondary servers with the same TDE protector, due to differing compliance requirements. In this section, I will explain how you can set up a failover group with two different TDE protectors. This scenario uses the TDECMK key as the TDE protector on the primary server (tdesql) and the TDECMK2 key as the TDE protector on the secondary server (tdedr). As a side note, you can also enable the auto-rotate key option to allow end-to-end, zero-touch rotation of the TDE protector.

Both logical SQL servers have access to the Azure Key Vault keys. The next step is to create the failover group and replicate one database, called ContosoHR. The setup of the failover group failed because the CMK key from the primary server was (intentionally) not added to the secondary server. Adding a server key can easily be done with the Add-AzSqlServerKeyVaultKey command. For example:

```powershell
Add-AzSqlServerKeyVaultKey -KeyId 'https://xxxxx.vault.azure.net/keys/TDECMK/01234567890123456789012345678901' `
    -ServerName 'tdedr' -ResourceGroupName 'TDEDemo'
```

After the failover group setup, the sample database on the secondary server becomes available and uses the primary's CMK (TDECMK), regardless of the secondary's CMK configuration. You can verify this by running the following query on both the primary and secondary server.

```sql
SELECT DB_NAME(db_id()) AS database_name,
       dek.encryption_state,
       dek.encryption_state_desc,  -- SQL 2019+ / Azure SQL
       dek.key_algorithm,
       dek.key_length,
       dek.encryptor_type,         -- CERTIFICATE (service-managed) or ASYMMETRIC KEY (BYOK/CMK)
       dek.encryptor_thumbprint
FROM sys.dm_database_encryption_keys AS dek
WHERE database_id <> 2;
```

| database_name | encryption_state_desc | encryptor_type | encryptor_thumbprint |
| --- | --- | --- | --- |
| ContosoHR | ENCRYPTED | ASYMMETRIC KEY | 0xC8F041FB93531FA26BF488740C9AC7D3B5827EF5 |

The encryptor_type column shows ASYMMETRIC KEY, which means the database uses a CMK for encryption. The encryptor_thumbprint should match on both the primary and secondary servers, indicating that the secondary database is encrypted using the CMK from the primary server. The TDE protector on the secondary server (TDECMK2) becomes active only in the event of a failover, when the primary and secondary server roles are reversed. As illustrated in the image below, the roles of my primary and secondary servers have been reversed. If the above query is executed again following a failover, the encryptor_thumbprint value will be different, which indicates that the database is now encrypted using the TDE protector (TDECMK2) from the secondary server.

| database_name | encryption_state_desc | encryptor_type | encryptor_thumbprint |
| --- | --- | --- | --- |
| ContosoHR | ENCRYPTED | ASYMMETRIC KEY | 0x788E5ACA1001C87BA7354122B7D93B8B7894918D |

As previously mentioned, please ensure that the server or database keys are available on both the primary and secondary SQL logical servers or databases. Additionally, verify that the appropriate permissions for these keys have been granted within Azure Key Vault. This scenario is comparable to other configurations, such as:

- The primary server uses a service-managed key (SMK) while the secondary server uses CMK.
- The primary server uses CMK while the secondary server uses SMK.
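The walkthrough above can also be scripted end to end. The following sketch shows one possible ordering with the Az module; the server and key names mirror the hypothetical demo setup (tdesql, tdedr, TDECMK, TDECMK2), and the failover group name tde-fg and key versions are made up:

```powershell
# Set a different TDE protector on each logical server.
Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName 'TDEDemo' `
    -ServerName 'tdesql' -Type AzureKeyVault `
    -KeyId 'https://contoso-kv.vault.azure.net/keys/TDECMK/<key-version>'
Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName 'TDEDemo' `
    -ServerName 'tdedr' -Type AzureKeyVault `
    -KeyId 'https://contoso-kv.vault.azure.net/keys/TDECMK2/<key-version>'

# Create the failover group and add the ContosoHR database to it.
New-AzSqlDatabaseFailoverGroup -ResourceGroupName 'TDEDemo' -ServerName 'tdesql' `
    -PartnerServerName 'tdedr' -FailoverGroupName 'tde-fg' -FailoverPolicy Automatic
$db = Get-AzSqlDatabase -ResourceGroupName 'TDEDemo' -ServerName 'tdesql' -DatabaseName 'ContosoHR'
Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName 'TDEDemo' -ServerName 'tdesql' `
    -FailoverGroupName 'tde-fg' -Database $db
```

Remember that the key from the primary (TDECMK) must also be registered on the secondary, as shown earlier; otherwise the failover group creation fails.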
### Conclusion

Understanding the distinction between the TDE protector and server keys is essential for geo-replication with Azure SQL. The TDE protector encrypts the database encryption key, while server keys refer to any key registered at the server or database level in Azure SQL. For successful geo-replication setup and failover, all necessary keys must be created and available on both primary and secondary servers. It is possible, and in certain cases required, to configure different TDE protectors on replicas, as long as the key material is available on each server.
## 🔐 Public Preview: Backup Immutability for Azure SQL Database LTR Backups

### The Ransomware Threat Landscape

Ransomware attacks have become one of the most disruptive cybersecurity threats in recent years. These attacks typically follow a destructive pattern:

- Attackers gain unauthorized access to systems.
- They encrypt or delete critical data.
- They demand ransom in exchange for restoring access.

Organizations without secure, tamper-proof backups are often left with no choice but to pay the ransom or suffer significant data loss. This is where immutable backups play a critical role in defense.

### 🛡️ What Is Backup Immutability?

Backup immutability ensures that once a backup is created, it cannot be modified or deleted for a specified period. This guarantees:

- Protection against accidental or malicious deletion.
- Assurance that backups remain intact and trustworthy.
- Compliance with regulatory requirements for data retention and integrity.

### 🚀 Azure SQL Database LTR Backup Immutability (Public Preview)

Microsoft has introduced backup immutability for Long-Term Retention (LTR) backups in Azure SQL Database, now available in public preview. This feature allows organizations to apply Write Once, Read Many (WORM) policies to LTR backups stored in Azure Blob Storage.

Key features:

- Time-based immutability: locks backups for a defined duration (for example, 30 days).
- Legal hold immutability: retains backups indefinitely until a legal hold is explicitly removed.
- Tamper-proof storage: backups cannot be deleted or altered, even by administrators.

This ensures that LTR backups remain secure and recoverable, even in the event of a ransomware attack.

### 📜 Regulatory Requirements for Backup Immutability

Many global regulations mandate immutable storage to ensure data integrity and auditability. Here are some key examples:

| Region | Regulation | Requirement |
| --- | --- | --- |
| USA | SEC Rule 17a-4(f) | Requires broker-dealers to store records in WORM-compliant systems. |
| USA | FINRA | Mandates that financial records be preserved in a non-rewriteable, non-erasable format. |
| USA | HIPAA | Requires healthcare organizations to ensure the integrity and availability of electronic health records. |
| EU | GDPR | Emphasizes data integrity and the ability to demonstrate compliance through audit trails. |
| Global | ISO 27001, PCI-DSS | Require secure, tamper-proof data retention for audit and compliance purposes. |

Azure’s immutable storage capabilities help organizations meet these requirements by ensuring that backup data remains unchanged and verifiable.

### 🕒 Time-Based vs. Legal Hold Immutability

⏱️ Time-based immutability:

- Locks data for a predefined period (for example, 30 days).
- Ideal for routine compliance and operational recovery.
- Automatically expires after the retention period.

📌 Legal hold immutability:

- Retains data indefinitely until the hold is explicitly removed.
- Used in legal investigations, audits, or regulatory inquiries.
- Overrides time-based policies to ensure data preservation.

Both types can be applied to Azure SQL LTR backups, offering flexibility and compliance across different scenarios.

### 🧩 How Immutability Protects Against Ransomware

Immutable backups are a critical component of a layered defense strategy:

- Tamper-proof: even if attackers gain access, they cannot delete or encrypt immutable backups.
- Reliable recovery: organizations can restore clean data from immutable backups without paying ransom.
- Compliance-ready: meets regulatory requirements for data retention and integrity.

By enabling immutability for Azure SQL LTR backups, organizations can significantly reduce the risk of data loss and ensure business continuity.
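LTR backup immutability itself is configured through the Azure SQL long-term retention policy surface, but the underlying WORM semantics are the same ones Azure Blob Storage exposes. Purely as an illustration of the two policy types (not the LTR configuration path), here is what they look like on a storage container you control, with hypothetical resource names:

```powershell
# Illustration only: WORM policies on a blob container you manage.

# Time-based immutability: lock blobs for 30 days after creation.
Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName 'ContosoRG' `
    -StorageAccountName 'contosostorage' -ContainerName 'backups' -ImmutabilityPeriod 30

# Legal hold: retain indefinitely until the hold tag is removed.
Add-AzRmStorageContainerLegalHold -ResourceGroupName 'ContosoRG' `
    -StorageAccountName 'contosostorage' -ContainerName 'backups' -Tag 'audit2025'
```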
### ✅ Final Thoughts

The public preview of backup immutability for Azure SQL Database LTR backups is a major step forward in ransomware resilience and regulatory compliance. With support for both time-based and legal hold immutability, Azure empowers organizations to:

- Protect critical data from tampering or deletion.
- Meet global compliance standards.
- Recover quickly and confidently from cyberattacks.

Immutability is not just a feature; it's a foundational pillar of modern data protection. Documentation is available at Backup Immutability for Long-Term Retention Backups - Azure SQL Database | Microsoft Learn.
## Everything you need to know about TDE key management for database restore

Transparent data encryption (TDE) in Azure SQL with customer-managed key (CMK) supports Bring Your Own Key (BYOK) for data protection at rest and facilitates separation of duties in key and data management. With customer-managed TDE, the user manages the lifecycle of keys (creation, upload, rotation, deletion), usage permissions, and auditing of key operations. The key used for encrypting the Database Encryption Key (DEK), known as the TDE protector, is an asymmetric key managed by the customer and stored in Azure Key Vault (AKV).

Once a database is encrypted with TDE using a Key Vault key, new backups are encrypted with the same TDE protector. Changing the TDE protector does not update old backups to use the new protector. To restore a backup encrypted with a Key Vault TDE protector, ensure that the key material is accessible to the target server. The TDE feature was designed with the requirement that both the current and previous TDE protectors are necessary for successful restores. It is recommended to retain all previous versions of the TDE protector in the key vault to enable the restore of database backups. This blog post provides detailed information on which keys must be available for a database restore and why they are necessary.

### Encryption of the transaction log file

To understand which keys are required for a point-in-time restore, it is necessary to first explain how transaction log encryption operates. The SQL Server Database Engine divides each physical log file into several virtual log files (VLFs), each with its own header. Encrypting the entire log file in one single sweep is not possible, so each VLF is encrypted individually and the encryptor information is stored in the VLF header. When the log manager needs to read a particular VLF for recovery, it uses the encryptor information in the VLF header to locate the encryptor and decrypt the VLF.

#### Unencrypted transaction log

Consider the following sequence of blocks as the logical log file, where each block represents a virtual log file (VLF). Initially, we are in VLF1, and the current Log Sequence Number (LSN) is within VLF1.

#### Transparent data encryption enabled

When TDE is enabled on the database, the current VLF is filled with non-operational log records, and a new VLF (VLF2) is created. Each VLF has one header containing the encryptor information, so whenever the encryptor information changes, the log rolls over to the next VLF boundary. The subsequent VLF will contain the new DEK (DEK_1) and the thumbprint of the encryptor of the DEK in its header. Any additions to the log file will be written to VLF2 and will be encrypted. When VLF2 reaches capacity, a new VLF (VLF3) will be generated. Since encryption is enabled, the new VLF will contain the DEK and its information in its header, and it will also be encrypted.

#### Key rotation

When a new DEK is generated or its encryptor changes, the log rolls over to the next VLF boundary. The new VLF (VLF4) will contain the updated DEK and encryptor information. For example, if a new DEK (DEK_2) is generated via key rotation in the Azure portal, VLF3 fills with non-operational commands before VLF4 is created and encrypted by the new DEK.

#### A database can use multiple keys at a single time

Currently, for server- and database-level CMK, after a key rotation some of the active VLFs may still be encrypted with an old key. As key rotations are allowed before these VLFs are flushed, the database can end up using multiple keys simultaneously.
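You can observe this state yourself. The following query is a sketch that assumes your database exposes the vlf_active and vlf_encryptor_thumbprint columns of sys.dm_db_log_info (available in newer engine versions and Azure SQL); active VLFs still carrying an older thumbprint are exactly what gates a protector change, as described next:

```sql
-- List the VLFs of the current database and the encryptor each one records.
SELECT li.vlf_sequence_number,
       li.vlf_active,
       li.vlf_size_mb,
       li.vlf_encryptor_thumbprint
FROM sys.dm_db_log_info(DB_ID()) AS li
ORDER BY li.vlf_sequence_number;
```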
To ensure that at a certain point in time the database is using only the current encryption protector (generation p) and the old encryption protector (generation p-1), we use the following approach:

- Block a protector change operation when any active VLF uses an old thumbprint different from the current encryption protector. When a customer attempts a protector change, or the key is being auto-rotated, we verify whether any VLFs using the old thumbprint are active. If so, the protector change is blocked.
- If no active VLFs use the old thumbprint, we take a log backup to flush the inactive VLFs, then rotate the protector and wait for it to fully rotate.

This approach ensures that the database uses at most two keys at any given time.

#### Example

- Time t0: the database is created without encryption.
- Time t1: the database is protected by Thumbprint A.
- Time t2: the protector is rotated to Thumbprint B.
- Time t3: the customer requests a protector change to Thumbprint C. We check the active VLFs; they are using Thumbprint A, so we block the change. This ensures that the database is currently using only Thumbprint A and Thumbprint B.
- Time t4: the customer requests a protector change to Thumbprint C. We check the active VLFs, and none of them are using Thumbprint A. We (a) solicit a log backup, which should flush the inactive VLFs still using Thumbprint A, then (b) rotate the encryption protector and (c) wait for the rotation to complete; the operation succeeds only when both (b) and (c) are fully finished. This ensures that after time t4, the database is using only Thumbprint B and Thumbprint C.

### Point-in-time restore

Based on the information above, it is evident that multiple keys are necessary for a restore if key rotations have taken place. To restore a backup encrypted with a TDE protector from Azure Key Vault, ensure that the key material is accessible to the target server. Therefore, we recommend that you keep the old versions of the TDE protector in Azure Key Vault so database backups can be restored. To identify missing keys, run the Get-AzSqlServerKeyVaultKey cmdlet for the target server, or Get-AzSqlInstanceKeyVaultKey for the target managed instance, to return the list of available keys. To ensure all backups can be restored, make sure the target server for the restore has access to all the keys needed. These keys don't need to be marked as the TDE protector.

Backed-up log files remain encrypted with the original TDE protector, even if it was rotated and the database is now using a new TDE protector. At restore time, both keys are needed to restore the database. If the log file is using a TDE protector stored in Azure Key Vault, this key is needed at restore time, even if the database has since been changed to use service-managed TDE.
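As a minimal sketch of that check, with hypothetical server, resource group, and key names:

```powershell
# List the keys currently registered on the target logical server.
Get-AzSqlServerKeyVaultKey -ResourceGroupName 'ContosoRG' -ServerName 'target-server'

# Re-register a missing key version so older encrypted backups remain restorable.
Add-AzSqlServerKeyVaultKey -ResourceGroupName 'ContosoRG' -ServerName 'target-server' `
    -KeyId 'https://contoso-kv.vault.azure.net/keys/TDECMK/<old-key-version>'
```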
### Point-in-time restore example

When a customer wants to restore data to a specific point in time (tx), they need the current encryption protector (p) and the old encryption protector (p-1) from the period [tx-8 days] to [tx]. The reason for using tx-8 is that there is a full backup every 7 days, so we expect a complete backup within the last 8 days. Because VLFs may remain active with the earlier key, the system is designed to also use the two latest thumbprints (p-2 and p-3) from outside this buffer period.

Consider the following timeline: the PITR request is made for 8/20/2025 (tx), at which point Thumbprint D (p) is active. To ensure we have a full backup, we subtract 8 days, bringing us to 8/12/2025 (tx-8), when Thumbprint C (p-1) is active. Since VLFs might still be active with the previous key, we also need Thumbprint B (p-2) and Thumbprint A (p-3). The required thumbprints for this point-in-time restore are A, B, C, and D.

### Conclusion

To restore a backup encrypted with a Key Vault TDE protector, it is essential to ensure that the key material is accessible to the target server. It is recommended to retain all old versions of the TDE protector in the key vault to facilitate the restore of database backups.
## Preparing for the Deprecation of TLS 1.0 and 1.1 in Azure Databases

Microsoft announced the retirement of TLS 1.0 and TLS 1.1 for Azure services, including Azure SQL Database, Azure SQL Managed Instance, Cosmos DB, and Azure Database for MySQL, by August 31, 2025. Customers are required to upgrade to a more secure minimum TLS protocol version of TLS 1.2 for client-server connections to safeguard data in transit and meet the latest security standards.

The retirement of TLS 1.0 and 1.1 for Azure databases was originally scheduled for August 2024. To support customers in completing their upgrades, the deadline was extended to August 31, 2025. Starting August 31, 2025, servers configured with a minimum TLS version of 1.0 or 1.1 will be force-upgraded to TLS 1.2; connections using TLS 1.0 or 1.1 will be disallowed and connectivity will fail. To avoid potential service interruptions, we strongly recommend that customers complete their migration to TLS 1.2 before August 31, 2025.

### Why TLS 1.0 and 1.1 Are Being Deprecated

TLS (Transport Layer Security) protocols are vital in ensuring encrypted data transmission between clients and servers. However, TLS 1.0 and 1.1, introduced in 1999 and 2006 respectively, are now outdated and vulnerable to modern attack vectors. By retiring these versions, Microsoft is taking a proactive approach to enhancing the security landscape for Azure services such as Azure databases.

### Security Benefits of Upgrading to TLS 1.2

- Enhanced encryption algorithms: TLS 1.2 provides stronger cryptographic protocols, reducing the risk of exploitation.
- Protection against known vulnerabilities: deprecated versions are susceptible to attacks such as BEAST and POODLE, which TLS 1.2 addresses.
- Compliance with industry standards: many regulations, including GDPR, PCI DSS, and HIPAA, mandate the use of secure, modern TLS versions.

### How to Verify and Update TLS Settings for Azure Database Services

For instructions on how to verify that your Azure database is configured with a minimum TLS version of 1.2, or to upgrade the minimum TLS setting to 1.2, follow the respective guide below for your database service.

#### Azure SQL Database and Azure SQL Managed Instance

The Azure SQL Database and SQL Managed Instance minimum TLS version setting allows customers to choose which version of TLS their database uses.

Azure SQL Database: To identify clients that are connecting to your Azure SQL DB using TLS 1.0 and 1.1, SQL audit logs must be enabled. With auditing enabled, you can view client connections: Connectivity settings - Azure SQL Database and SQL database in Fabric | Microsoft Learn. To configure the minimum TLS version for your Azure SQL DB using the Azure portal, Azure PowerShell, or Azure CLI: Connectivity settings - Azure SQL Database and SQL database in Fabric | Microsoft Learn.

Azure SQL Managed Instance: To identify clients that are connecting to your Azure SQL MI using TLS 1.0 and 1.1, auditing must be enabled. With auditing enabled, you can consume audit logs using Azure Storage, Event Hubs, or Azure Monitor Logs to view client connections: Configure auditing - Azure SQL Managed Instance | Microsoft Learn. To configure the minimum TLS version for your Azure SQL MI using Azure PowerShell or Azure CLI: Configure minimal TLS version - managed instance - Azure SQL Managed Instance | Microsoft Learn.
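As a quick sketch with hypothetical server names, the same setting can be checked and raised from PowerShell; -MinimalTlsVersion is the relevant Az.Sql parameter:

```powershell
# Check, then raise, the minimum TLS version on a logical server.
(Get-AzSqlServer -ResourceGroupName 'ContosoRG' -ServerName 'contoso-sql').MinimalTlsVersion
Set-AzSqlServer -ResourceGroupName 'ContosoRG' -ServerName 'contoso-sql' -MinimalTlsVersion '1.2'

# The equivalent setting on a managed instance.
Set-AzSqlInstance -ResourceGroupName 'ContosoRG' -Name 'contoso-mi' -MinimalTlsVersion '1.2' -Force
```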
#### Azure Cosmos Database

The minimum service-wide accepted TLS version for Azure Cosmos Database is TLS 1.2, but this selection can be changed on a per-account basis. To verify the minimalTlsVersion property on your Cosmos DB account: Self-serve minimum tls version enforcement in Azure Cosmos DB - Azure Cosmos DB | Microsoft Learn. To configure the minimum TLS version for your Cosmos DB account using the Azure portal, Azure PowerShell, Azure CLI, or an ARM template: Self-serve minimum tls version enforcement in Azure Cosmos DB - Azure Cosmos DB | Microsoft Learn.

#### Azure Database for MySQL

Azure Database for MySQL supports encrypted connections using TLS 1.2 by default, and all incoming connections with TLS 1.0 and TLS 1.1 are denied by default, though users are allowed to change this setting. To verify the tls_version server parameter configured for your Azure DB for MySQL server using the MySQL command-line interface: Encrypted Connectivity Using TLS/SSL - Azure Database for MySQL - Flexible Server | Microsoft Learn. To configure the minimum TLS version for your Azure DB for MySQL server: Configure Server Parameters - Azure Portal - Azure Database for MySQL - Flexible Server | Microsoft Learn.

If your database is currently configured with a minimum TLS setting of TLS 1.2, no action is required.
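For these two services, a comparable PowerShell sketch (hypothetical names again; verify the parameter names against your installed Az module versions, as they were added in relatively recent releases):

```powershell
# Azure Cosmos DB: set the account's minimal TLS version.
Update-AzCosmosDBAccount -ResourceGroupName 'ContosoRG' -Name 'contoso-cosmos' `
    -MinimalTlsVersion 'Tls12'

# Azure Database for MySQL flexible server: raise the tls_version server parameter.
Update-AzMySqlFlexibleServerConfiguration -ResourceGroupName 'ContosoRG' `
    -ServerName 'contoso-mysql' -Name 'tls_version' -Value 'TLSv1.2'
```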
### Conclusion

The deprecation of TLS 1.0 and 1.1 marks a significant milestone in enhancing the security of Azure databases. By transitioning to TLS 1.2, users can ensure highly secure encrypted data transmission, compliance with industry standards, and robust protection against evolving cyber threats. Upgrade to TLS 1.2 now to prepare for this change and maintain secure, compliant database connectivity.

## Breaking Limits: Full-Length SQL Audit Statements Now Supported for Azure SQL DB

We’re excited to announce a major enhancement to SQL auditing in Azure SQL Database: the removal of the 4,000-character truncation limit for audit records.

### What’s Changed?

Previously, audit records were limited to storing 4,000 characters for fields like statement and data_sensitivity_information. Any content beyond that threshold was truncated, potentially leaving out critical details from lengthy queries or sensitive data traces. With this update, statements are no longer truncated: audit logs now capture the entire content of auditable actions, ensuring complete visibility and traceability for security and compliance teams.

### Why It Matters

- Improved security and compliance: full-length statements provide better context for threat detection and forensic analysis.
- Enhanced customer experience: no more guessing what was left out; customers get the full picture.
- Feature parity with SQL Server and Managed Instance: this update brings Azure SQL Database in line with SQL Server and Azure SQL Managed Instance capabilities, ensuring consistency across environments.
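One way to see the change in practice is to look at statement lengths in your audit files. This sketch assumes a hypothetical storage path; before this update, LEN(statement) could never exceed 4,000:

```sql
-- Find the longest captured statements in the audit log.
SELECT TOP (20)
       event_time,
       LEN(statement) AS statement_length,  -- can now exceed 4000
       statement
FROM sys.fn_get_audit_file(
         'https://contosoaudit.blob.core.windows.net/sqldbauditlogs/contoso-sql/master/',
         DEFAULT, DEFAULT)
ORDER BY statement_length DESC;
```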
## Enhanced Server Audit for Azure SQL Database: Greater Performance, Availability and Reliability

We are excited to announce a significant update to the server audit feature for Azure SQL Database. We have re-architected major portions of SQL auditing, resulting in increased availability and reliability of server audits. As an added benefit, we have achieved closer feature alignment with SQL Server and Azure SQL Managed Instance. Database auditing remains unchanged. In the remainder of this blog article, we cover:

- Functional changes
- Changes affecting customers
- Sample queries
- Call for action
- Implementation and notification
- Time-based filtering

### Functional Changes

In the previous design, enabling server audit triggered a database-level audit and executed one audit session for each database. With the new architecture, enabling server audit creates one extended event session at the server level that captures audit events for all databases. This optimizes memory and CPU and is consistent with how auditing works in SQL Server and Azure SQL Managed Instance.

### Changes Affecting Customers

- Folder structure change for the storage account
- Folder structure change for read-only replicas
- Permissions required to view audit logs

One of the primary changes involves the folder structure for audit logs stored in storage account containers. Previously, server audit logs were written to separate folders, one for each database, with the database name serving as the folder name. With the new update, all server audit logs are consolidated into a single folder: the master folder. This behavior is the same as Azure SQL Managed Instance and SQL Server.

For read-only database replicas, whose logs were previously stored in a read-only folder, logs are now also written to the master folder. You can retrieve these logs by filtering on the new column is_secondary_replica_true. Please note that audit logs generated after deployment adhere to the new folder structure, while existing audit logs stay in their current folders until their retention periods expire.

### Sample Queries

To help you adopt these changes in your workflows, here are some sample queries.

To query audit logs for a specific database called "test":

Current:

```sql
SELECT *
FROM sys.fn_get_audit_file(
    'https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/test/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel',
    default, default);
```

New:

```sql
SELECT *
FROM sys.fn_get_audit_file(
    'https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel',
    default, default)
WHERE database_name = 'test';
```

To query audit logs for the test database from a read-only replica:

Current:

```sql
SELECT *
FROM sys.fn_get_audit_file(
    'https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/test/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/RO/07_06_40_590_0.xel',
    default, default);
```

New:

```sql
SELECT *
FROM sys.fn_get_audit_file(
    'https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel',
    default, default)
WHERE is_secondary_replica_true = 'true';
```

Permissions:

| Current | New |
| --- | --- |
| CONTROL DATABASE on the user database | VIEW DATABASE SECURITY AUDIT permission in user databases |

### Implementation and Notifications

We are rolling out this change region by region. Subscription owners will receive notifications with the subject “Update your scripts to point to a new folder for server level audit logs” for each region as the update is implemented.
It is important to update any scripts that retrieve audit logs based on the database-name folder structure for each region as it is updated. Note that this change applies only to server-level auditing; database auditing remains unchanged.

### Call for Action

These actions apply only to customers who are using storage account targets. No action is needed for customers using Log Analytics or Event Hubs.

- Folder references: change the reference for audit logs from the database-name folder to the master folder, and use specific filters to retrieve logs for a required database.
- Read-only database replicas: update references for audit logs from the read-only replica folder to the master folder and filter using the new parameter, as shown in the examples.
- Permissions: ensure you have the necessary CONTROL SERVER permissions to review the audit logs for each database using fn_get_audit_file.
- Manual queries: this update also applies to manual queries where you use fn_get_audit_file to retrieve audit logs from the storage account.

### Time-based filtering

To enhance your ability to query audit logs using filters, consider using efficient time-based filtering with the fn_get_audit_file_v2 function, which retrieves audit log data with improved filtering capabilities. For more details, refer to the official documentation here.
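As a sketch of that time-based filtering, reusing the hypothetical storage path from the earlier examples; the function's arguments are file_pattern, initial_file_name, audit_record_offset, start_time_utc, and end_time_utc:

```sql
-- Retrieve only the audit records for a one-hour UTC window.
SELECT event_time, database_name, statement
FROM sys.fn_get_audit_file_v2(
         'https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/',
         DEFAULT,                -- initial_file_name
         DEFAULT,                -- audit_record_offset
         '2023-01-29 07:00:00',  -- start_time_utc
         '2023-01-29 08:00:00')  -- end_time_utc
WHERE database_name = 'test';
```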
## Announcing General Availability of Enhanced Server Audit for Azure SQL Database

We’re excited to announce the general availability of the enhanced server audit feature for Azure SQL Database, now deployed globally. This update delivers improved performance, availability, and reliability, aligning server auditing more closely with SQL Server and Azure SQL Managed Instance. The new architecture introduces a single extended event session at the server level, capturing audit events across all databases. This design optimizes CPU and memory usage, reduces overhead, and ensures consistent audit behavior across Azure SQL platforms.

### 🔐 Server Audit Action Groups Now Supported

With this release, server-level audit action groups are now available in Azure SQL Database. This enhancement enables customers to meet security and compliance requirements more effectively by capturing broader activity at the server scope. For more information on supported action groups, refer to SQL Server audit action groups and actions: https://learn.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-action-groups-and-actions

For more details on the enhancements, please refer to the blog post.
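If you manage auditing through PowerShell, enabling server auditing with specific action groups looks roughly like the following sketch; resource names are hypothetical and the action-group list is illustrative:

```powershell
$groups = @(
    'SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP',
    'FAILED_DATABASE_AUTHENTICATION_GROUP',
    'BATCH_COMPLETED_GROUP'
)

# Enable server-level auditing to a storage account with the chosen action groups.
Set-AzSqlServerAudit -ResourceGroupName 'ContosoRG' -ServerName 'contoso-sql' `
    -BlobStorageTargetState Enabled `
    -StorageAccountResourceId '/subscriptions/<subscription-id>/resourceGroups/ContosoRG/providers/Microsoft.Storage/storageAccounts/contosoaudit' `
    -AuditActionGroup $groups
```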
## How to enable Auditing in Azure SQL Databases to Storage account and store logs in JSON format

In today's data-driven world, auditing is a crucial aspect of database management. It helps ensure compliance, security, and operational efficiency. Azure SQL databases offer robust auditing capabilities, and in this blog, we'll explore how to enable auditing with a storage target in JSON format. This approach simplifies access to audit logs without the need for specialized tools like SQL Server Management Studio (SSMS).

### Step 1: Enable Database-Level Auditing with Azure Monitor

The first step is to enable database-level auditing using Azure Monitor. This can be achieved through a REST API call:

```http
PUT https://management.azure.com/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/test_rg/providers/Microsoft.Sql/servers/test-sv2/databases/test_db/extendedAuditingSettings/default?api-version=2021-11-01
Host: management.azure.com
Content-Length: 249

{
  "properties": {
    "state": "Enabled",
    "auditActionsAndGroups": [
      "FAILED_DATABASE_AUTHENTICATION_GROUP",
      "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP",
      "BATCH_COMPLETED_GROUP"
    ],
    "isAzureMonitorTargetEnabled": true
  }
}
```

Explanation:

- state: enables the auditing.
- auditActionsAndGroups: specifies the audit groups to capture.
- isAzureMonitorTargetEnabled: enables Azure Monitor integration.

### Step 2: Create a Database-Level Diagnostic Setting

Next, create a diagnostic setting for storing logs in JSON format, using another REST API call:

```http
PUT https://management.azure.com/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/test_rg/providers/Microsoft.Sql/servers/test-sv2/databases/test_db/providers/Microsoft.Insights/diagnosticSettings/testDiagnosti1c?api-version=2021-05-01-preview
Content-type: application/json
Host: management.azure.com
Content-Length: 414

{
  "properties": {
    "storageAccountId": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/test_rg/providers/Microsoft.Storage/storageAccounts/teststorage2",
    "metrics": [],
    "logs": [
      {
        "category": "sqlsecurityauditevents",
        "enabled": true,
        "retentionPolicy": {
          "enabled": false,
          "days": 0
        }
      }
    ]
  }
}
```

Key details:

- storageAccountId: specifies the target storage account.
- category: chooses sqlsecurityauditevents to store SQL audit events.
- retentionPolicy: retention is disabled in this configuration.

### Step 3: Run a Sample Query

To generate audit logs, execute a simple query:

```sql
SELECT 1;
```

### Step 4: Check the JSON Files in the Storage Account

Finally, navigate to the Azure storage account specified in Step 2. Open the insights-logs-sqlsecurityauditevents container and download the generated JSON file to review the audit logs.

Advantages: audit logs are stored in JSON format, making them easily readable without the need for specialized tools like SQL Server Management Studio (SSMS).

Limitations: this approach is only applicable for database-level auditing; server-level auditing is not supported. Retention policy settings are not functional in this configuration.

By following this blog, you can enable auditing for Azure SQL databases that directly generates JSON files in an Azure Storage account. This method streamlines the access and analysis of audit data.
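To read those JSON files programmatically rather than through the portal, a sketch like the following may help. It assumes the storage account from Step 2, data-plane access via your signed-in account, and that each line of the exported file is a single JSON record (the usual shape of diagnostic-log exports; verify against your own files). The blob path placeholder is hypothetical:

```powershell
# Download a diagnostic-log blob and surface a few fields from each record.
$ctx = New-AzStorageContext -StorageAccountName 'teststorage2' -UseConnectedAccount
Get-AzStorageBlobContent -Context $ctx `
    -Container 'insights-logs-sqlsecurityauditevents' `
    -Blob '<path-to-log>/PT1H.json' -Destination '.\audit.json' -Force

Get-Content '.\audit.json' | ForEach-Object {
    $rec = $_ | ConvertFrom-Json
    [pscustomobject]@{
        Time      = $rec.time
        Category  = $rec.category
        Statement = $rec.properties.statement
    }
}
```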
## Enhancing Security with Conditional Access Policies for SQL Managed Instances

Customers using SQL managed instances with Kerberos token-based authentication encountered failures when Conditional Access (CA) location policies were applied. The only workaround was to exclude Azure SQL Managed Instance (SQL MI) from CA location policies, which was not a viable solution from a security perspective. This situation forced customers to either block the use of SQL MI Kerberos or exclude Azure SQL MI from CA policies, compromising their security.

We implemented a solution where the Kerberos ticket records the client IP and sends it back, encrypted, to the user's client machine. When the client sends an authentication request to SQL MI, SQL MI sends an on-behalf-of (OBO) request, which exchanges the Kerberos ticket for an AAD token (JWT). This process uses the client IP from the Kerberos ticket and replaces the original SQL MI IP with the recorded IP, so the CA policy is validated against the correct IP address, ensuring seamless authentication.

We resolved the issue for scenarios where location-based CA policies are present, specifically when the client machine is in an allowed location but the SQL MI is in a non-allowed location. This solution addresses the compatibility issue between Microsoft Entra Kerberos and Entra location-based Conditional Access policies. It ensures that customers can use Kerberos token-based authentication for SQL managed instances without compromising their security policies.
## Native Windows principals for Azure SQL Managed Instance are now generally available

Today we’re announcing the general availability of native Windows principals for Azure SQL Managed Instance. This capability simplifies migration to Azure SQL Managed Instance and unblocks the migration of legacy applications that are tied to Windows logins.

This feature is crucial for the SQL Managed Instance link. While the Managed Instance link facilitates near real-time data replication between SQL Server and Azure SQL Managed Instance, the read-only replica in the cloud restricts the creation of Microsoft Entra principals. The Windows authentication metadata mode allows customers to use an existing Windows login to authenticate to the replica in the event of a failover.

With this feature, the following authentication metadata modes are available for SQL Managed Instance; the mode determines which authentication metadata is used for authentication, along with how the login is created:

- Microsoft Entra (default): authenticates Microsoft Entra users using Microsoft Entra user metadata. To use Windows authentication in this mode, see Windows Authentication for Microsoft Entra principals on Azure SQL Managed Instance.
- Paired (SQL Server default): the default mode for SQL Server authentication.
- Windows (new mode): authenticates Microsoft Entra users using the Windows user metadata within SQL Managed Instance.

The Windows authentication metadata mode is a new mode that allows users to use Windows authentication or Microsoft Entra authentication (using Windows principal metadata) with Azure SQL Managed Instance. This mode is available for Azure SQL Managed Instance only; it isn't available for Azure SQL Database.

To learn more, please refer to the documentation: https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/native-windows-principals