Azure SQL Database
Recovering Missing Rows ("Gaps") in Azure SQL Data Sync — Supported Approaches (and What to Avoid)
Azure SQL Data Sync is commonly used to keep selected tables synchronized between a hub database and one or more member databases. In some cases, you may discover a data "gap": a subset of rows that exist in the source but are missing on the destination for a specific time window, even though synchronization continues afterward for new changes. This post explains supported recovery patterns and what not to do, based on a real support scenario where a customer reported missing rows for a single table within a sync group and requested a way to synchronize only the missing records.

The scenario: Data Sync continues, but some rows are missing

In the referenced case, the customer observed that:

- A specific table had a gap on the member side (missing rows for a period), while newer data continued to sync normally afterward.
- They asked for a Microsoft-supported method to sync only the missing rows, without rebuilding or fully reinitializing the table.

This is a reasonable goal—but the recovery method matters, because Data Sync relies on service-managed tracking artifacts.

Temptation: "Can we push missing data by editing tracking tables or calling internal triggers?"

A frequent idea is to "force" Data Sync to pick up missing rows by manipulating internal artifacts:

- Writing directly into Data Sync tracking tables (for example, tables under the DataSync schema such as *_dss_tracking), or altering provisioning markers.
- Manually invoking Data Sync–generated triggers or relying on their internal logic. The case discussion specifically referenced internal triggers such as _dss_insert_trigger, _dss_update_trigger, and _dss_delete_trigger.

Why this is not recommended / not supported as a customer-facing solution

In the case, the guidance from Microsoft engineering was clear:

- Manually invoking internal Data Sync triggers is not supported and can increase the risk of data corruption, because these triggers are service-generated at runtime and are not intended for manual use.
- Directly manipulating Data Sync tracking/metadata tables is not recommended. The customer thread also highlights that these tracking tables are part of Data Sync internals, and using them for manual "push" scenarios is not a supported approach.

The customer conversation also highlights an important conceptual point: tracking tables are part of how the service tracks changes; they are not meant to be treated as a user-managed replication queue.

Supported recovery option #1 (recommended): Re-drive change detection via the base table

The most supportable approach is to make Data Sync detect the missing rows through its normal change tracking path—by operating on the base/source table, not the service-managed internals.

A practical pattern: "no-op update" to re-fire tracking

In the internal discussion with the product team, the recommended pattern was to update the source/base table (even with a "no-op" assignment) so that Data Sync's normal tracking logic is triggered, without manually invoking internal triggers.

Example pattern (conceptual):

UPDATE t
SET some_column = some_column   -- no-op: value unchanged
FROM dbo.YourTable AS t
WHERE <filter identifying the rows that are missing on the destination>;

This approach is called out explicitly in the thread as a way to "re-drive" change detection safely through supported mechanisms.
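Building on that conceptual pattern, the sketch below shows one way to apply the no-op update in batches once the missing rows are known. It is a minimal illustration rather than the exact commands from the support case: it assumes the key values of the missing rows have already been collected into a temporary table #MissingKeys(KeyValue), and that dbo.YourTable, YourKeyColumn, and some_column are placeholders for your own table, key, and any updatable column.

DECLARE @BatchSize int = 5000;
DECLARE @Batch TABLE (KeyValue int PRIMARY KEY);

WHILE EXISTS (SELECT 1 FROM #MissingKeys)
BEGIN
    DELETE FROM @Batch;

    -- Take the next batch of keys off the work list.
    DELETE TOP (@BatchSize) FROM #MissingKeys
    OUTPUT deleted.KeyValue INTO @Batch;

    -- No-op update against the base table: the stored value does not change,
    -- but the update flows through Data Sync's normal change tracking.
    UPDATE t
    SET t.some_column = t.some_column
    FROM dbo.YourTable AS t
    INNER JOIN @Batch AS b
        ON b.KeyValue = t.YourKeyColumn;
END;

Working through a small key list per iteration keeps each transaction short, which matters on large tables.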
Operational guidance (practical):

- Apply the update in small batches, especially for large tables, to reduce transaction/log impact and avoid long-running operations.
- Validate the impacted row set first (for example, by comparing keys between hub and member).

Supported recovery option #2: Deprovision and re-provision the affected table (safe "reset" path)

If the gap is large, the row set is hard to isolate, or you want a clean realignment of tracking artifacts, the operational approach discussed in the case was:

1. Stop sync.
2. Remove the table from the sync group (so the service deprovisions tracking objects).
3. Fix/clean the destination state as needed.
4. Add the table back and let Data Sync re-provision and sync again.

This option is often the safest when the goal is to avoid touching system-managed artifacts directly.

Note: In production environments, customers may not be able to truncate/empty tables due to operational constraints. In that situation, the sync may take longer because the service might need to do more row-by-row evaluation. This tradeoff was discussed in the case context.

Diagnostics: Use the Azure SQL Data Sync Health Checker

When you suspect metadata drift, missing objects, or provisioning inconsistencies, the case recommended using the AzureSQLDataSyncHealthChecker script. This tool:

- Validates hub/member metadata and scopes against the sync metadata database
- Produces logs that can highlight missing artifacts and other inconsistencies
- Is intended to help troubleshoot Data Sync issues faster

A likely contributor to "gaps": schema changes during Data Sync (snapshot isolation conflict)

In the case discussion, telemetry referenced an error consistent with concurrent DDL/schema changes while the sync process is enumerating changes (snapshot isolation plus metadata changes). A well-known related error is SQL Server error 3961, which occurs when a snapshot isolation transaction fails because metadata was modified by a concurrent DDL statement, since metadata is not versioned. Microsoft documents this behavior and explains why metadata changes conflict with snapshot isolation semantics.

Prevention guidance (practical):

- Avoid running schema deployments (DDL) during active sync windows.
- Use a controlled workflow for schema changes with Data Sync—pause/coordinate changes to prevent mid-sync metadata shifts. (General best practices exist for ongoing Data Sync operations and maintenance.)

Key takeaways

- Do not treat Data Sync tracking tables/triggers as user-managed "replication internals." Manually invoking internal triggers or editing tracking tables is not a supported customer-facing recovery mechanism.
- Do recover gaps via base table operations (insert/update) so the service captures changes through its normal path—the "no-op update" is one practical pattern when you already know the missing row set.
- For large/complex gaps, consider the safe reset approach: remove the table from the sync group and re-add it to re-provision artifacts.
- Use the AzureSQLDataSyncHealthChecker to validate metadata consistency and reduce guesswork.
- If you see intermittent failures around deployments, consider the schema-change + snapshot isolation pattern (e.g., error 3961) as a possible contributor and schedule DDL changes accordingly.

Managed Identity Support for Azure SQL Database Import & Export services (preview)
Today we're announcing a public preview that lets Azure SQL Database Import & Export services authenticate with a user-assigned managed identity. Azure SQL databases can now perform import and export operations with no passwords, storage keys, or SAS tokens. With this preview, customers can choose to use either a single user-assigned managed identity (UAMI) for both SQL and Storage permissions, or assign separate UAMIs, one for the Azure SQL logical server and another for the Storage account, for full separation of duties.

At a glance:

- Run Import/Export using a user-assigned managed identity (UAMI).
- Use one identity for both SQL and Storage, or split them if you prefer tighter scoping.
- Works in the portal, REST, Azure CLI, and PowerShell.

Why this matters:

Managed identity support makes SQL migrations simpler and safer: no passwords, storage keys, or SAS tokens. By leveraging managed identity when integrating Import/Export into a pipeline, you streamline access management and enhance security: permissions are granted directly to the identity, reducing manual credential handling and the risk of exposing sensitive information. This keeps operations efficient and secure, without secrets embedded in scripts.

You've got two straightforward options:

- One UAMI for everything (simplest setup).
- Two UAMIs, one for SQL and one for Storage, recommended if you wish to maintain more strictly defined permissions.

Getting started:

1. Create a user-assigned managed identity (UAMI). Decide up front whether you want one identity end-to-end, or two identities (SQL vs. Storage) for separation of duties.
2. Attach the UAMI to the Azure SQL logical server. On the server Identity blade, add the UAMI so the Import/Export job can run as that identity.
3. Set the server's Microsoft Entra ID admin to the UAMI. In Microsoft Entra ID > Set admin, select the UAMI. This is what lets the workflow authenticate to SQL without a password.
4. Grant Storage access. Use Storage Blob Data Reader for import and Storage Blob Data Contributor for export, assigned in Access control (IAM). If you can, scope the assignment to the container that holds the .bacpac.
5. Pass resource IDs (not names) in your calls. In REST/CLI/PowerShell, you pass the UAMI resource ID as the value of administratorLogin (SQL identity) and storageKey (Storage identity), and set authenticationType / storageKeyType to ManagedIdentity.
   - administratorLogin → UAMI resource ID used for SQL auth
   - storageKey → UAMI resource ID used for Storage auth
   - authenticationType / storageKeyType → ManagedIdentity
6. Run the import/export job. Kick it off from the portal, REST, Azure CLI, or PowerShell. From there, the service uses the identity you selected to reach both SQL and Storage.

Portal experience

In the Azure portal, you can choose Authentication type = Managed identity and select the user-assigned managed identity to use for the operation.

Figure 1: Azure portal Import/Export experience with Managed identity authentication selected.

Notes

- This preview supports user-assigned managed identities (UAMIs).
- For least privilege, scope Storage roles to the specific container used for the .bacpac file and use two user-assigned managed identities (UAMIs), one for SQL and one for Storage.
Sample 1: REST API — Export using one UAMI:

$exportBody = "{ `n `"storageKeyType`": `"ManagedIdentity`", `n `"storageKey`": `"${managedIdentityServerResourceId}`", `n `"storageUri`": `"${storageUri}`", `n `"administratorLogin`": `"${managedIdentityServerResourceId}`", `n `"authenticationType`": `"ManagedIdentity`" `n}"

$export = Invoke-AzRestMethod -Method POST -Path "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Sql/servers/${serverName}/databases/${databaseName}/export?api-version=2024-05-01-preview" -Payload $exportBody

# Poll operation status
Invoke-AzRestMethod -Method GET $export.Headers.Location.AbsoluteUri

Sample 2: REST API — Import to a new server using one UAMI:

$serverName = "sql-mi-demo-target"
$databaseName = "sqldb-mi-demo-target"

# Same UAMI for SQL auth + Storage access
$importBody = "{ `n `"operationMode`": `"Import`", `n `"administratorLogin`": `"${managedIdentityServerResourceId}`", `n `"authenticationType`": `"ManagedIdentity`", `n `"storageKeyType`": `"ManagedIdentity`", `n `"storageKey`": `"${managedIdentityServerResourceId}`", `n `"storageUri`": `"${storageUri}`", `n `"databaseName`": `"${databaseName}`" `n}"

$import = Invoke-AzRestMethod -Method POST -Path "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Sql/servers/${serverName}/databases/${databaseName}/import?api-version=2024-05-01-preview" -Payload $importBody

# Poll operation status
Invoke-AzRestMethod -Method GET $import.Headers.Location.AbsoluteUri

Sample 3: PowerShell — Export using two UAMIs:

# Server UAMI for SQL auth, Storage UAMI for storage access
New-AzSqlDatabaseExport -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -StorageKeyType ManagedIdentity -StorageKey $managedIdentityStorageResourceId -StorageUri $storageUri -AuthenticationType ManagedIdentity -AdministratorLogin $managedIdentityServerResourceId

Sample 4: PowerShell — Import to a new server using two UAMIs:

New-AzSqlDatabaseImport -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -DatabaseMaxSizeBytes $databaseSizeInBytes -StorageKeyType "ManagedIdentity" -StorageKey $managedIdentityStorageResourceId -StorageUri $storageUri -Edition $edition -ServiceObjectiveName $serviceObjectiveName -AdministratorLogin $managedIdentityServerResourceId -AuthenticationType ManagedIdentity

Sample 5: Azure CLI — Export using two UAMIs:

az sql db export -s $serverName -n $databaseName -g $resourceGroupName --auth-type ManagedIdentity -u $managedIdentityServerResourceId --storage-key $managedIdentityStorageResourceId --storage-key-type ManagedIdentity --storage-uri $storageUri

Sample 6: Azure CLI — Import to a new server using two UAMIs:

az sql db import -s $serverName -n $databaseName -g $resourceGroupName --auth-type ManagedIdentity -u $managedIdentityServerResourceId --storage-key $managedIdentityStorageResourceId --storage-key-type ManagedIdentity --storage-uri $storageUri

For more information and samples, please check Tutorial: Use managed identity with Azure SQL import and export (preview).

Why Developers and DBAs love SQL's Dynamic Data Masking (Series-Part 1)
Dynamic Data Masking (DDM) is one of those SQL features (available in SQL Server, Azure SQL Database, Azure SQL Managed Instance, and SQL database in Microsoft Fabric) that both developers and DBAs can rally behind. Why? Because it delivers a simple, built-in way to protect sensitive data—like phone numbers, emails, or IDs—without rewriting application logic or duplicating security rules across layers. With just a single line of T-SQL, you can configure masking directly at the column level, ensuring that non-privileged users see only obfuscated values while privileged users retain full access. This not only streamlines development but also supports compliance with data privacy regulations such as GDPR and HIPAA by minimizing exposure to personally identifiable information (PII).

In this first post of our DDM series, we'll walk through a real-world scenario using the default masking function to show how easy it is to implement and how much development effort it can save.

Scenario: Hiding customer phone numbers from support queries

Imagine you have a support application where agents can look up customer profiles. They need to know if a phone number exists for the customer but shouldn't see the actual digits, for privacy. In a traditional approach, a developer might implement custom logic in the app (or a SQL view) to replace phone numbers with placeholders like "XXXX" for non-privileged users. This adds complexity and duplicates logic across the app.

With DDM's default masking, the database can handle this automatically. By applying a mask to the phone number column, any query by a non-privileged user will return a generic masked value (e.g., "XXXX") instead of the real number. The support agent gets the information they need (that a number is on file) without revealing the actual phone number, and the developer writes zero masking code in the app. This not only simplifies the application codebase but also ensures consistent data protection across all query access paths. As Microsoft's documentation puts it, DDM lets you control how much sensitive data to reveal "with minimal effect on the application layer"—exactly what our scenario achieves.

Using the 'default' mask in T-SQL

The 'default' masking function is the simplest mask: it fully replaces the actual value with a fixed default based on data type. For text data, that default is XXXX. Let's apply this to our phone number example. The following T-SQL snippet works in Azure SQL Database, Azure SQL Managed Instance, and SQL Server:

-- Step 1: Create the table with a default mask on the Phone column
CREATE TABLE SupportCustomers (
    CustomerID INT PRIMARY KEY,
    Name NVARCHAR(100),
    Phone NVARCHAR(15) MASKED WITH (FUNCTION = 'default()') -- Apply default masking
);
GO

-- Step 2: Insert sample data
INSERT INTO SupportCustomers (CustomerID, Name, Phone)
VALUES (1, 'Alice Johnson', '222-555-1234');
GO

-- Step 3: Create a non-privileged user (no login for simplicity)
CREATE USER SupportAgent WITHOUT LOGIN;
GO

-- Step 4: Grant SELECT permission on the table to the user
GRANT SELECT ON SupportCustomers TO SupportAgent;
GO

-- Step 5: Execute a SELECT as the non-privileged user
EXECUTE AS USER = 'SupportAgent';
SELECT Name, Phone FROM SupportCustomers WHERE CustomerID = 1;

Alternatively, you can use the Azure portal to configure masking, as shown in the following screenshot.

Expected result: The query above would return Alice's name and a masked phone number. Instead of seeing 222-555-1234, the Phone column would show XXXX.
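To see the other side of the masking behavior, you can grant the support agent the UNMASK permission and run the same query again. The snippet below is a minimal sketch that continues the example above; the initial REVERT simply exits the SupportAgent impersonation started in Step 5.

REVERT;
GO

-- Grant UNMASK so SupportAgent can read unmasked values (for demonstration only)
GRANT UNMASK TO SupportAgent;
GO

EXECUTE AS USER = 'SupportAgent';
SELECT Name, Phone FROM SupportCustomers WHERE CustomerID = 1; -- now shows 222-555-1234
REVERT;
GO

-- Restore masking for SupportAgent
REVOKE UNMASK FROM SupportAgent;
GO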
Alice's actual number remains safely stored in the database, but it's dynamically obscured for the support agent's query. Meanwhile, privileged users (such as an administrator or a member of db_owner, which holds CONTROL permission on the database) and any user granted the UNMASK permission would see the real phone number when running the same query.

How this helps developers

By pushing the masking logic down to the database, developers and DBAs avoid writing repetitive masking code in every app or report that touches this data. In our scenario, without DDM you might implement a check in the application like: if user_role == "Support", then show "XXXX" for the phone number, else show the full phone number. With DDM, such conditional code isn't needed; the database takes care of it. This means:

- Less application code to write and maintain for masking
- Consistent masking everywhere (whether data is accessed via app, report, or ad-hoc query)
- Quick changes to masking rules in one place if requirements change, without hunting through application code

From a security standpoint, DDM reduces the risk of accidental data exposure and helps in compliance scenarios where personal data must be protected in lower environments or for certain roles, while drastically reducing developer effort.

In the next posts of this series, we'll explore other masking functions (such as Email, Partial, and Random) with different scenarios. By the end, you'll see how each built-in mask can be applied to make data security and compliance more developer-friendly!

Reference links:

- Dynamic Data Masking - SQL Server | Microsoft Learn
- Dynamic Data Masking - Azure SQL Database & Azure SQL Managed Instance & Azure Synapse Analytics | Microsoft Learn

Improving Azure SQL Database reliability with accelerated database recovery in tempdb
We are pleased to announce that accelerated database recovery (ADR) is now enabled in the tempdb database in Azure SQL Database, bringing instant transaction rollback and aggressive log truncation to transactions in tempdb. The same improvement is coming to SQL Server and Azure SQL Managed Instance.
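As a quick check of where ADR is currently active, you can query sys.databases from the server you are connected to. The query below is a minimal sketch; in Azure SQL Database, the rows you see depend on the database you are connected to.

-- Accelerated database recovery status per database, including tempdb
SELECT name,
       is_accelerated_database_recovery_on
FROM sys.databases
ORDER BY name;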
Azure Data Studio Retirement

As of February 6, 2025, we're announcing the upcoming retirement of Azure Data Studio (ADS) as we focus on delivering a modern, streamlined SQL development experience. ADS will remain supported until February 28, 2026, giving developers ample time to transition. This decision aligns with our commitment to simplifying SQL development by consolidating efforts on Visual Studio Code (VS Code) with the MSSQL extension, a powerful and versatile tool designed for modern developers.

Why retire Azure Data Studio?

Azure Data Studio has been an essential tool for SQL developers, but evolving developer needs and the rise of more versatile platforms like VS Code have made it the right time to transition. Here's why:

- Focus on innovation: VS Code, widely adopted across the developer community, provides a robust platform for delivering advanced features like cutting-edge schema management and improved query execution.
- Streamlined tools: Consolidating SQL development on VS Code eliminates duplication, reduces engineering maintenance overhead, and accelerates feature delivery, ensuring developers have access to the latest innovations.

Why transition to Visual Studio Code?

VS Code is the #1 developer tool, trusted by millions worldwide. It is a modern, versatile platform that meets the evolving demands of SQL and application developers. By transitioning, you gain access to cutting-edge tools, seamless workflows, and an expansive ecosystem designed to enhance productivity and innovation. We're committed to meeting developers where they are, providing a modern SQL development experience within VS Code. Here's how:

- Modern development environment: VS Code is a lightweight, extensible, and community-supported code editor trusted by millions of developers. It provides:
  - Regular updates.
  - An active extension marketplace.
  - A seamless cross-platform experience for Windows, macOS, and Linux.
- Comprehensive SQL features: With the MSSQL extension in VS Code, you can:
  - Execute queries faster with filtering, sorting, and export options for JSON, Excel, and CSV.
  - Manage schemas visually with Table Designer, Object Explorer, and support for keys, indexes, and constraints.
  - Connect to SQL Server, Azure SQL (all offerings), and SQL database in Fabric using an improved Connection Dialog.
  - Streamline development with scripting, object modifications, and a unified SQL experience.
  - Optimize performance with an enhanced Query Results Pane and execution plans.
  - Integrate with DevOps and CI/CD pipelines using SQL Database Projects.
  Stay tuned for upcoming features—we're continuously building new experiences based on feedback from the community. Make sure to follow the MSSQL repository on GitHub to stay updated and contribute to the project!
- Streamlined workflow: VS Code supports cloud-native development, real-time collaboration, and thousands of extensions to enhance your workflows.

Transitioning to Visual Studio Code: what you need to know

We understand that transitioning tools can raise concerns, but moving from Azure Data Studio (ADS) to Visual Studio Code (VS Code) with the MSSQL extension is designed to be straightforward and hassle-free. Here's why you can feel confident about this transition:

- No loss of functionality: If you use ADS to connect to Azure SQL databases, SQL Server, or SQL database in Fabric, you'll find that the MSSQL extension supports these scenarios seamlessly. Your database projects, queries, and scripts created in ADS are fully compatible with VS Code and can be opened without additional migration steps.
- Familiar features, enhanced experience: VS Code provides advanced tools like improved query execution, modern schema management, and CI/CD integration. Additionally, alternative tools and extensions are available to replace ADS capabilities like SQL Server Agent and Schema Compare.
- Cross-platform and extensible: Like ADS, VS Code runs on Windows, macOS, and Linux, ensuring a consistent experience across operating systems. Its extensibility allows you to adapt it to your workflow with thousands of extensions.

If you have further questions or need detailed guidance, visit the ADS Retirement page. The page includes step-by-step instructions, recommended alternatives, and additional resources.

Continued support

With the Azure Data Studio retirement, we're committed to supporting you during this transition:

- Documentation: Find detailed guides, tutorials, and FAQs on the ADS Retirement page.
- Community support: Engage with the active Visual Studio Code community for tips and solutions. You can also explore forums like Stack Overflow.
- GitHub issues: If you encounter any issues, submit a request or report bugs on the MSSQL extension's GitHub repository.
- Microsoft Support: For critical issues, reach out to Microsoft Support directly through your account.

Transitioning to VS Code opens the door to a more modern and versatile SQL development experience. We encourage you to explore the new possibilities and start your journey today!

Conclusion

Azure Data Studio has served the SQL community well, but the Azure Data Studio retirement marks an opportunity to embrace the modern capabilities of Visual Studio Code. Transitioning now ensures you're equipped with cutting-edge tools and a future-ready platform to enhance your SQL development experience. For a detailed guide on ADS retirement, visit aka.ms/ads-retirement. To get started with the MSSQL extension, check out the official documentation. We're excited to see what you build with VS Code!

ImportExportJobError, SQL72012, SQL72014, SQL72045
When trying to import a .bacpac file via the Azure portal, it fails with the error below:

'code': 'ImportExportJobError', 'message': 'The ImportExport operation with Request Id '' failed due to 'The ImportExport operation with Request Id '' failed due to 'Could not import package.\\nWarning SQL72012: The object [data_0] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.\\nWarning SQL72012: The object [log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.\\nError SQL72014: Framework Mi'.'.'

The DAC framework hit a hard error and failed due to conflicts with existing system objects in the target database. The generated error does not show much information, except a warning without enough detail to pinpoint the root cause.

⚠️ What the warnings mean (SQL72012)

- The object [data_0] exists in the target, but it will not be dropped…
- The object [log] exists in the target, but it will not be dropped…

Retrying the import from the portal doesn't help—the job fails consistently with the same outcome.

To understand the exact cause of the error, you can use the SqlPackage.exe tool and follow the steps below:

1. Download SqlPackage for Windows.
2. Extract the file by right-clicking it in Windows Explorer, selecting 'Extract All...', and choosing the target directory.
3. Open a new terminal window and cd to the location where SqlPackage was extracted.
4. Run the import:

sqlpackage.exe /Action:Import /tsn:ServerName.database.windows.net /tdn:sqlimporttestDB /tu:sql-user /tp:password /sf:"C:\test\DB-file.bacpac" /d:True /df:C:\test\df.txt

Parameters:
- tsn: target server name, where the database will be imported.
- tdn: target database name, the name of the new database that will be created.
- tu: user.
- tp: password.
- sf: source file, where the .bacpac file is located.
- d: diagnostics; this parameter will help us obtain detailed information for the import/export process.
- df: where the diagnostic file will be saved and its name; please change the folder location to the same one used for the source file.

Note: The .bacpac file needs to be present locally on your machine for this command.

In the diagnostic file generated, we got a clear error indicating that the script "CREATE USER [EntraUser1@contoso.com] FOR EXTERNAL PROVIDER" failed to execute, and that only connections established with Entra accounts can create other Entra users:

Microsoft.Data.Tools.Diagnostics.Tracer Error: 0 : 2026-01-22T06:56:50 : Error encountered during import operation
Exception: Microsoft.SqlServer.Dac.DacServicesException: Could not import package.
Error SQL72014: Core Microsoft SqlClient Data Provider: Msg 33159, Level 16, State 1, Line 1
Principal 'EntraUser1@contoso.com' could not be created. Only connections established with Active Directory accounts can create other Active Directory users.
Error SQL72045: Script execution error. The executed script:
CREATE USER [EntraUser1@contoso.com] FOR EXTERNAL PROVIDER;

You'll also notice the database gets created, but none of the schema or tables are deployed, leaving you with an online but completely blank database. SSMS shows the same behavior: the database appears online after creation, but the import ultimately fails and the resulting database is empty.
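Before exporting, it can help to check which Microsoft Entra principals exist in the source database and whether they are contained users or users mapped to server logins, since that distinction drives the behavior described here. The query below is a minimal sketch; the authentication_type_desc interpretation in the comments is a general guideline rather than an exhaustive rule.

-- List Entra (external) principals in the source database.
-- 'EXTERNAL' under authentication_type_desc generally indicates a contained Entra user;
-- 'INSTANCE' indicates a user mapped to a server-level login.
SELECT name,
       type_desc,
       authentication_type_desc
FROM sys.database_principals
WHERE type IN ('E', 'X');   -- E = external user, X = external group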
When you execute that CREATE USER ... FOR EXTERNAL PROVIDER statement directly in the database over a SQL-authenticated connection, you'll face the exact same error.

If the source database contains only contained Entra users, you typically won't see this issue. For contained users, the import job uses a different user-creation script, one that can be executed even when the import connection is established using SQL authentication:

CREATE USER [EntraUser2@contoso.com] WITH SID = 'User-SID-Here', TYPE = E;

The issue occurs when the exported database contains Entra server logins and a corresponding user is created in the user database.

To mitigate this issue:

- Initiate the import request using a Microsoft Entra account, since a SQL authentication account cannot create a user from an external provider (Microsoft Entra in this case).
- Make the Entra users contained in the user database before exporting, and then use a SQL authentication account for importing the database.

The second option is especially useful if you prefer importing through the Azure portal but your Entra account has MFA enforced. In that case, using SQL authentication for the import workflow can be a practical path forward.

References:
- Import a BACPAC File to Create a Database in Azure SQL Database - Azure SQL Database & Azure SQL Managed Instance | Microsoft Learn
- SqlPackage Import - SQL Server | Microsoft Learn

Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide
The Secure Future Initiative is Microsoft's strategic framework for embedding security into every layer of the data platform, from infrastructure to identity. As part of this initiative, Microsoft Entra authentication for Azure SQL Database offers a modern, password-less approach to access control that aligns with Zero Trust principles. By leveraging Entra identities, customers benefit from stronger security postures through multifactor authentication, centralized identity governance, and seamless integration with managed identities and service principals. Onboarding Entra authentication enables organizations to reduce reliance on passwords, simplify access management, and improve auditability across hybrid and cloud environments. With broad support across tools and platforms, and growing customer adoption, Entra authentication is a forward-looking investment in secure, scalable data access.

Migration steps overview

Organizations using SQL authentication can strengthen database security by migrating to Entra ID-based authentication. The following steps outline the process:

1. Identify your logins and users: review the existing SQL databases, along with all related users and logins, to assess what's needed for migration.
2. Enable Entra authentication on Azure SQL logical servers by assigning a Microsoft Entra admin.
3. Identify all permissions associated with the SQL logins and database users.
4. Recreate SQL logins and users with Microsoft Entra identities.
5. Upgrade application drivers and libraries to the minimum versions, and update application connections to SQL databases to use Entra-based managed identities.
6. Update deployments for SQL logical server resources to have Microsoft Entra-only authentication enabled.
7. For all existing Azure SQL databases, flip to Entra-only after validation.
8. Enforce Entra-only for all Azure SQL databases with Azure Policy (deny).

Step 1: Identify your logins and users - use SQL auditing

Consider using SQL Audit to monitor which identities are accessing your databases. Alternatively, you may use other methods, or skip this step if you already have full visibility of all your logins.

Configure server-level SQL auditing. For more information on turning on server-level auditing, see: Configure Auditing for Azure SQL Database series - part 1 | Microsoft Community Hub

SQL Audit can be enabled on the logical server, which will enable auditing for all existing and new user databases. When you set up auditing, the audit log will be written to your storage account in the SQL Database audit log format. Use sys.fn_get_audit_file_v2 to query the audit logs in SQL. You can join the audit data with sys.server_principals and sys.database_principals to view users and logins connecting to your databases.
The following query is an example of how to do this:

SELECT
    (CASE WHEN database_principal_id > 0 THEN dp.type_desc ELSE NULL END) AS db_user_type
  , (CASE WHEN server_principal_id > 0 THEN sp.type_desc ELSE NULL END) AS srv_login_type
  , server_principal_name
  , server_principal_sid
  , server_principal_id
  , database_principal_name
  , database_principal_id
  , database_name
  , SUM(CASE WHEN succeeded = 1 THEN 1 ELSE 0 END) AS successful_logins
  , SUM(CASE WHEN succeeded = 0 THEN 1 ELSE 0 END) AS failed_logins
FROM sys.fn_get_audit_file_v2(
    '<Storage_endpoint>/<Container>/<ServerName>',
    DEFAULT, DEFAULT,
    '2023-11-17T08:40:40Z', '2023-11-17T09:10:40Z')
-- join on database principals (users) metadata
LEFT OUTER JOIN sys.database_principals dp
    ON database_principal_id = dp.principal_id
-- join on server principals (logins) metadata
LEFT OUTER JOIN sys.server_principals sp
    ON server_principal_id = sp.principal_id
-- filter to actions DBAF (Database Authentication Failed) and DBAS (Database Authentication Succeeded)
WHERE (action_id = 'DBAF' OR action_id = 'DBAS')
GROUP BY server_principal_name
  , server_principal_sid
  , server_principal_id
  , database_principal_name
  , database_principal_id
  , database_name
  , dp.type_desc
  , sp.type_desc

Step 2: Enable Microsoft Entra authentication (assign admin)

Follow this guidance to enable Entra authentication and assign a Microsoft Entra admin at the server. This is mixed mode; existing SQL auth continues to work.

WARNING: Do NOT enable Entra-only (azureADOnlyAuthentications) yet. That comes in Step 7.

Entra admin recommendation: For production environments, it is advisable to use a PIM-enabled Entra group as the server administrator for enhanced access control.

Step 3: Identify and document existing permissions (SQL logins and users)

Retrieve a list of all your SQL auth logins. Make sure to run this on the master database:

SELECT * FROM sys.sql_logins

List all SQL auth users. Run the query below on all user databases; it lists the users per database:

SELECT * FROM sys.database_principals WHERE TYPE = 'S'

Note: You may need only the 'name' column to identify the users.

List permissions per SQL auth user:

SELECT database_principals.name
  , database_principals.principal_id
  , database_principals.type_desc
  , database_permissions.permission_name
  , CASE
        WHEN class = 0 THEN 'DATABASE'
        WHEN class = 3 THEN 'SCHEMA: ' + SCHEMA_NAME(major_id)
        WHEN class = 4 THEN 'Database Principal: ' + USER_NAME(major_id)
        ELSE OBJECT_SCHEMA_NAME(database_permissions.major_id) + '.' + OBJECT_NAME(database_permissions.major_id)
    END AS object_name
  , columns.name AS column_name
  , database_permissions.state_desc AS permission_type
FROM sys.database_principals AS database_principals
INNER JOIN sys.database_permissions AS database_permissions
    ON database_principals.principal_id = database_permissions.grantee_principal_id
LEFT JOIN sys.columns AS columns
    ON database_permissions.major_id = columns.object_id
    AND database_permissions.minor_id = columns.column_id
WHERE type_desc = 'SQL_USER'
ORDER BY database_principals.name

Step 4: Create SQL users for your Microsoft Entra identities

You can create users (preferred) for all Entra identities. Learn more on Create user.

The "FROM EXTERNAL PROVIDER" clause in T-SQL distinguishes Entra users from SQL authentication users. The most straightforward approach to adding Entra users is to use a managed identity for Azure SQL and grant the required three Graph API permissions. These permissions are necessary for Azure SQL to validate Entra users:
- User.Read.All: allows access to Microsoft Entra user information.
- GroupMember.Read.All: allows access to Microsoft Entra group information.
- Application.Read.All: allows access to Microsoft Entra service principal (application) information.

For creating Entra users with non-unique display names, use OBJECT_ID in the CREATE USER T-SQL:

-- Retrieve the object ID from the Entra blade in the Azure portal.
CREATE USER [myapp4466e] FROM EXTERNAL PROVIDER
WITH OBJECT_ID = 'aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb'

For more information on finding the Entra object ID, see: Find tenant ID, domain name, user object ID - Partner Center | Microsoft Learn

Alternatively, if granting these API permissions to SQL is undesirable, you may add Entra users directly using the T-SQL commands provided below. In these scenarios, Azure SQL will bypass Entra user validation.

Create SQL user for a managed identity or an application - This T-SQL code snippet creates a SQL user for an application or managed identity. Please substitute the @MSIname and @clientId variables (note: use the client ID, not the object ID) with the display name and client ID of your managed identity or application.

-- Replace the two variables with the managed identity display name and client ID
declare @MSIname sysname = '<Managed Identity/App Display Name>'
declare @clientId uniqueidentifier = '<Managed Identity/App Client ID>';

-- convert the guid to the right type and create the SQL user
declare @castClientId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @clientId), 1);

-- Construct command: CREATE USER [@MSIname] WITH SID = @castClientId, TYPE = E;
declare @cmd nvarchar(max) = N'CREATE USER [' + @MSIname + '] WITH SID = ' + @castClientId + ', TYPE = E;'
EXEC (@cmd)

For more information on finding the Entra client ID, see: Register a client application in Microsoft Entra ID for the Azure Health Data Services | Microsoft Learn

Create SQL user for a Microsoft Entra user - Use this T-SQL to create a SQL user for a Microsoft Entra account. Enter your username and object ID:

-- Replace the two variables with the MS Entra user alias and object ID
declare @username sysname = '<MS Entra user alias>'; -- (e.g., username@contoso.com)
declare @objectId uniqueidentifier = '<User Object ID>';

-- convert the guid to the right type
declare @castObjectId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @objectId), 1);

-- Construct command: CREATE USER [@username] WITH SID = @castObjectId, TYPE = E;
declare @cmd nvarchar(max) = N'CREATE USER [' + @username + '] WITH SID = ' + @castObjectId + ', TYPE = E;'
EXEC (@cmd)

Create SQL user for a Microsoft Entra group - This T-SQL snippet creates a SQL user for a Microsoft Entra group. Set @groupName and @objectId to your values.
-- Replace the two variables with the MS Entra group display name and object ID
declare @groupName sysname = '<MS Entra group display name>'; -- (e.g., ContosoUsersGroup)
declare @objectId uniqueidentifier = '<Group Object ID>';

-- convert the guid to the right type and create the SQL user
declare @castObjectId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @objectId), 1);

-- Construct command: CREATE USER [@groupName] WITH SID = @castObjectId, TYPE = X;
declare @cmd nvarchar(max) = N'CREATE USER [' + @groupName + '] WITH SID = ' + @castObjectId + ', TYPE = X;'
EXEC (@cmd)

For more information on finding the Entra object ID, see: Find tenant ID, domain name, user object ID - Partner Center | Microsoft Learn

Validate SQL user creation - When a user is created correctly, the EntraID column in this query shows the user's original MS Entra ID:

select CAST(sid as uniqueidentifier) AS EntraID, * from sys.database_principals

Assign permissions to Entra-based users - After creating Entra users, assign them SQL permissions to read or write, either by using GRANT statements or by adding them to roles like db_datareader. Refer to your documentation from Step 3, ensuring you include all necessary user permissions for the new Entra SQL users and that security policies remain enforced.

Step 5: Update programmatic connections

Change your application connection strings from SQL authentication to managed identities, and test each app for Microsoft Entra compatibility. Upgrade your drivers to these versions or newer:

- JDBC driver version 7.2.0 (Java)
- ODBC driver version 17.3 (C/C++, COBOL, Perl, PHP, Python)
- OLE DB driver version 18.3.0 (COM-based applications)
- Microsoft.Data.SqlClient 5.2.2+ (ADO.NET)
- Microsoft.EntityFramework.SqlServer 6.5.0 (Entity Framework)

System.Data.SqlClient (SDS) doesn't support managed identity; switch to Microsoft.Data.SqlClient (MDS). If you need to port your applications from SDS to MDS, the following cheat sheet will be helpful: https://github.com/dotnet/SqlClient/blob/main/porting-cheat-sheet.md. Microsoft.Data.SqlClient also takes a dependency on other packages, most notably MSAL for .NET (version 4.56.0+). Here is an example of an Azure web application connecting to Azure SQL using managed identity.

Step 6: Validate no local auth traffic

Be sure to switch all your connections to managed identity before you redeploy your Azure SQL logical servers with Microsoft Entra-only authentication turned on. Repeat the use of SQL Audit, just as you did in Step 1, but now to confirm that every connection has moved away from SQL authentication. Once your server is running with only Entra authentication, any connections still based on SQL authentication will not work, which could disrupt services. Test your systems thoroughly to verify that everything operates correctly.

Step 7: Enable Microsoft Entra-only and disable local auth

Once all your connections and applications use managed identity, you can disable SQL authentication by turning on Entra-only authentication via the Azure portal or using the APIs.

Step 8: Enforce at scale (Azure Policy)

Additionally, after successful migration and validation, it is recommended to deploy the built-in Azure Policy across your subscriptions to ensure that all SQL resources do not use local authentication. During resource creation, Azure SQL instances will be required to have Microsoft Entra-only authentication enabled. This requirement can be enforced through Azure policies.
Best practices for Entra-enabled Azure SQL applications

- Use exponential backoff with decorrelated jitter for retrying transient SQL errors, and set a maximum retry cap to avoid resource drain.
- Separate retry logic for connection setup and query execution.
- Cache and proactively refresh Entra tokens before expiration.
- Use Microsoft.Data.SqlClient v3.0+ with Azure.Identity for secure token management.
- Enable connection pooling and use consistent connection strings.
- Set appropriate timeouts to prevent hanging operations.
- Handle token/auth failures with targeted remediation, not blanket retries.
- Apply least-privilege identity principles; avoid global/shared tokens.
- Monitor retry counts, failures, and token refreshes via telemetry.
- Maintain auditing for compliance and security.
- Enforce TLS 1.2+ (Encrypt=True, TrustServerCertificate=False).
- Prefer pooled over static connections.
- Log SQL exception codes for precise error handling.
- Keep libraries and drivers up to date for the latest features and resilience.

References

- Use this resource to troubleshoot issues with Entra authentication (previously known as Azure AD authentication): Troubleshooting problems related to Azure AD authentication with Azure SQL DB and DW | Microsoft Community Hub
- To add Entra users from an external tenant, invite them as guest users to the Azure SQL Database's Entra administrator tenant. For more information on adding Entra guest users: Quickstart: Add a guest user and send an invitation - Microsoft Entra External ID | Microsoft Learn

Conclusion

Migrating to Microsoft Entra password-less authentication for Azure SQL Database is a strategic investment in security, compliance, and operational efficiency. By following this guide and adopting best practices, organizations can reduce risk, improve resilience, and future-proof their data platform in alignment with Microsoft's Secure Future Initiative.

Alternatives After the Deprecation of the Azure SQL Migration Extension in Azure Data Studio
The Azure SQL Migration extension for Azure Data Studio is being deprecated and will be retired by February 28, 2026. As part of our unified and streamlined migration strategy for Azure SQL, we are consolidating all migration experiences into a consistent, scalable platform. If you are currently using the Azure SQL Migration extension, this blog will guide you through recommended replacement options for every phase of migration, whether you are moving to Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database.

What is happening to the Azure SQL Migration extension in ADS?

As you already know, Azure Data Studio will officially retire on February 28, 2026. The Azure SQL Migration extension will retire along with Azure Data Studio on that date and will no longer be available in the Azure Data Studio marketplace.

What should you use instead?

Below is the updated guidance for migration tools, categorized by migration phase and target.

1) Pre-migration: discovery and assessments

Prior to migration, it is advisable to evaluate the SQL Server environment for readiness and to determine the right-sized Azure SQL SKU. Below are the recommended options:

A) SQL Server enabled by Azure Arc

Use the SQL Server migration experience in the Azure Arc portal for:

- Instance discovery at scale
- Migration assessments at scale, including:
  - Readiness assessment for all Azure SQL targets
  - Performance-based, right-sized target recommendations
  - Projected Azure costs with the recommended target configuration

Reference: Steps to get started with the Azure Arc assessments - Deploy Azure Arc on your servers. SQL Server instances on Arc-enabled servers are automatically connected to Azure Arc. See options to optimize this.

B) Automated assessments at scale using Azure DMS PowerShell and Azure CLI

The Azure DataMigration modules in Azure PowerShell and Azure CLI can be used to automate assessments at scale. Learn more about how to do this. Here are the sample templates to automate the assessment workflow:

- Azure PowerShell DataMigration cmdlets
- DMS Azure CLI commands

C) Azure Migrate

For scenarios where assessments are required at the data center level, covering different types of workloads such as applications, VM servers, and databases, use Azure Migrate to perform discovery and assessments at scale. Learn more about Azure Migrate.

References:
- Review inventory
- Create SQL Assessment
- Review SQL Assessment

2) Migrations

Based on the migration targets, here are the recommended tools you can use to carry out the migration:

A. To Azure SQL Managed Instance

The following options are available for migrating data to Azure SQL Managed Instance:

1. SQL Migration experience in Azure Arc

For migrations to Azure SQL MI, leverage the streamlined SQL Migration experience in Azure Arc, which lets you complete the end-to-end migration journey in a single experience. This experience provides:

- Evergreen assessments and right-fit Azure SQL target recommendations
- Inline Azure SQL target creation
- The free Azure SQL MI Next-generation General Purpose offer, which lets you experience the power of Azure SQL MI for free for 12 months
- Near-zero downtime migration using Managed Instance link, powered by Distributed Availability Group technology
- Secure connectivity

Reference blog: SQL Server migration in Azure Arc
2. Automated migration at scale using Azure DMS PowerShell and Azure CLI

To orchestrate migrations to Azure SQL MI at scale programmatically, use:

- DMS PowerShell cmdlets
- DMS Azure CLI commands

Learn more about how to do this.

B. To SQL Server on Azure Virtual Machines

To migrate to SQL Server on Azure Virtual Machines, use:

1. Azure Database Migration Service (DMS)

DMS supports migrating to SQL Server on Azure Virtual Machines using both online and offline methods. Your SQL Server backups can be in Azure Blob Storage or on a network SMB file share. For details on each option, see:

- Backups stored in Azure Blob Storage
- Backups maintained on network SMB file shares

Note: The migration experience from SQL Server on-premises to SQL Server on Azure VM will soon be available in SQL Server enabled by Azure Arc.

2. Automated migration at scale using Azure DMS PowerShell and Azure CLI

For programmatic migrations to Azure SQL virtual machines, use:

- DMS PowerShell cmdlets
- DMS Azure CLI commands

Learn more about how to do this.

3. SSMS option: SQL Server Management Studio (SSMS) migration component

If you can connect to both SQL Server on-premises and SQL Server running on an Azure VM using SQL Server Management Studio, the migration component in SSMS can help you migrate to SQL Server on Azure VM. For details, see SSMS Migration component.

C. To Azure SQL Database

Migrating a SQL Server database to Azure SQL Database typically involves migrating schema and data separately. Here are the options to perform offline and online migration to Azure SQL Database:

1. Offline migration to Azure SQL Database

a. Azure Database Migration Service (DMS) portal experience

Use the Azure DMS portal to migrate both schema and data. Azure DMS uses Azure Data Factory and leverages the self-hosted integration runtime (SHIR). Installation steps are here.

b. Automated migration at scale using Azure DMS PowerShell and Azure CLI

Use the Azure DMS PowerShell and Azure CLI command lines to orchestrate the schema and data migration to Azure SQL Database at scale:

- DMS PowerShell cmdlets
- DMS Azure CLI commands

Learn more about how to do this.

2. Online migration to Azure SQL Database using Striim

To enable online migration of your mission-critical databases to Azure SQL Database, leverage Striim. Microsoft and Striim have entered a strategic partnership to enable continuous data replication from off-Azure SQL Servers to Azure SQL Database with near-zero downtime. For more details, refer to:

- Zero downtime migration from SQL Server to Azure SQL Database | Microsoft Community Hub
- Removing barriers to migrating databases to Azure with Striim's Unlimited Database Migration program

To leverage the Striim program for migrations, please reach out to your Microsoft contact or submit the feedback below to get started.

Summary

The list below provides a summary of the available alternatives for each migration scenario.
- Pre-Migration (Discovery + Assessment)
  - Guided experience: SQL Migration experience in Azure Arc / Azure Migrate
  - Automation experience: DMS PowerShell / Azure CLI
- To Azure SQL Managed Instance
  - Guided experience: SQL Migration experience in Azure Arc
  - Automation experience: DMS PowerShell / Azure CLI
- To SQL Server on Azure Virtual Machine
  - Guided experience: DMS Azure portal / SSMS migration component
  - Automation experience: DMS PowerShell / Azure CLI
- To Azure SQL Database
  - Guided experience: DMS Azure portal (offline & schema migration) / Striim (online migration)
  - Automation experience: DMS PowerShell / Azure CLI (offline & schema migration)

Final thoughts

To simplify your SQL migration journey and improve migration velocity to all Azure SQL targets, leverage the connected migration experiences in SQL Server enabled by Azure Arc, DMS, and SSMS. For SSMS, as a first step we brought in the capabilities to perform assessment and migration to higher versions of SQL Server, including to SQL Server on Azure Virtual Machines. As a next step, we are bringing cloud migration capabilities into SSMS as well.

Feedback

We love hearing from our customers. If you have feedback or suggestions for the product group, please use the following form: Feedback form

As you begin your migration to Azure, we welcome your feedback. If you do not see suitable alternatives for any migration phase, use the feedback form to let us know so we can update the options accordingly.

Multiple secondaries for failover groups is now in public preview
Failover groups for Azure SQL Database are a business continuity solution that lets you manage the replication and failover of databases to another Azure SQL logical server. With failover groups, you get automatic endpoint redirection, so you don't have to change the connection string for your application after a geo-failover—connections are automatically routed to the current primary.

Until now, Azure SQL failover groups have supported only one secondary. We're excited to announce that Azure SQL Database failover group support for up to four secondaries is now available in public preview. This enhancement gives you greater flexibility for disaster recovery, regional read scale-out, and complex high-availability scenarios.

What's new?

- Create up to four secondaries for each failover group, deployed across the same or different Azure regions.
- Use the additional secondaries to add read scale-out capabilities to additional regions, adding flexibility for read-only workloads.
- Greater flexibility for disaster recovery planning with multiple failover targets.
- Improved resilience by distributing secondaries across multiple geographic regions.
- Facilitate migration to another region without sacrificing existing disaster recovery protection.

How to get started

Getting started with multiple secondaries in Azure SQL failover groups is straightforward. In the Azure portal, the process to create a failover group remains the same. You can add additional secondaries using the process below.

Adding additional secondary servers to a failover group in the portal

1. Go to your Azure SQL Database logical server in the Azure portal.
2. Open the "Failover groups" blade under "Data management".
3. Select an existing failover group.
4. Click the "Add server" menu item to add additional secondary servers. A side panel opens displaying the list of secondary servers and a dropdown to select which server should operate as the read-only listener endpoint target. The additional secondary server can be in the same or a different Azure region as the primary.

   NOTE: The read-only listener endpoint dropdown lists all existing secondary servers as well as the secondary server being added. This allows you to designate which secondary server should receive read-only traffic routed through the <fog-name>.secondary.database.windows.net endpoint. However, the server selected as the read-only listener endpoint target should not be in the same region as the primary server if you intend to serve read workloads with that endpoint.

5. After selecting the additional secondary and specifying your read-only listener endpoint target, click "Select" on the side panel and click "Save" in the main menu to apply your failover group configuration. The additional secondary will be added, and seeding of the databases in the failover group to that additional secondary will begin.

You can modify your read-only listener endpoint target with the "Edit configuration" menu option.

TIP: If you want zone redundancy enabled for the secondary databases, ensure that the secondary servers are in regions that support availability zones and configure the zone redundancy setting appropriately.

Using PowerShell

Creating a failover group with multiple secondaries can also be done with PowerShell.
Example - Create a failover group with multiple secondaries:

New-AzSqlDatabaseFailoverGroup `
    -ResourceGroupName "myrg" `
    -ServerName "primaryserver" `
    -PartnerServerName "secondaryserver1" `
    -FailoverGroupName "myfailovergroup" `
    -FailoverPolicy "Manual" `
    -PartnerServerList @("secondary_uri_1", "secondary_uri_2", "secondary_uri_3", "secondary_uri_4") `
    -ReadOnlyEndpointTargetServer "secondary_uri_1"

where "secondary_uri_n" is in the form below and secondaryserver1 is also included in the list:

"/subscriptions/your_sub_guid/resourceGroups/your_resource_group/providers/Microsoft.Sql/servers/your_server_name"

Example - Add additional secondary servers to an existing failover group:

Set-AzSqlDatabaseFailoverGroup `
    -ResourceGroupName "myrg" `
    -ServerName "primaryserver" `
    -FailoverGroupName "myfailovergroup" `
    -FailoverPolicy "Manual" `
    -PartnerServerList @("secondary_uri_1", "secondary_uri_2", "secondary_uri_3", "secondary_uri_4") `
    -ReadOnlyEndpointTargetServer "secondary_uri_1"

where "secondary_uri_n" is in the form below and secondaryserver1 is also included in the list:

"/subscriptions/your_sub_guid/resourceGroups/your_resource_group/providers/Microsoft.Sql/servers/your_server_name"

Performing a failover

With multiple secondaries, you can choose which secondary to promote to primary during a failover.

Using the portal

1. Navigate to your SQL server's Failover groups blade.
2. Select the failover group you want to fail over.
3. In the servers list, locate the secondary server you want to promote.
4. Click the ellipsis menu (...) next to the server.
5. Select Failover for a planned failover (with full data synchronization) or Forced failover for an unplanned failover (potential data loss).

TIP: The ellipsis menu also includes a Remove server option, allowing you to remove a secondary server from the failover group.

Using PowerShell

For PowerShell, use the Switch-AzSqlDatabaseFailoverGroup cmdlet to perform a failover.

Example:

Switch-AzSqlDatabaseFailoverGroup `
    -ResourceGroupName "myrg" `
    -ServerName "secondaryserver1" `
    -FailoverGroupName "myfailovergroup"

Key benefits

- Enhanced disaster recovery - Multiple geo-secondaries provide additional failover targets, reducing the risk of total service disruption.
- Regional read scale-out - Distribute read-only workloads across multiple regions.
- Flexible HA/DR architecture - Design your high-availability architecture based on your specific business requirements.
- Ease migrations to another region - Leverage the additional secondary to migrate to a different Azure region while maintaining DR protection.

Limitations and notes

- You can create up to four secondaries per failover group.
- Each secondary must be hosted on a different logical server from the primary.
- Secondary servers can be in the same region as the primary or in different regions.
- The read-only listener endpoint target must be in a different region from the primary if you want to make use of the read-only listener for read workloads.
- The failover group name must be globally unique within the .database.windows.net domain.
- Chaining (creating a geo-replica of a geo-replica) is not supported.
- Secondary databases in a failover group inherit the backup storage redundancy and zone redundancy configuration from the primary, depending on the service tier.
  - For non-Hyperscale databases: secondary databases will not have high availability (zone redundancy) enabled by default. Enable it after the failover group is created.
  - For Hyperscale databases: secondary databases inherit the high availability settings from their respective primary databases.

Best practices

- Use paired regions when possible—failover groups in paired regions have better performance compared to unpaired regions.
- Test your failover procedures regularly using planned failovers to ensure your disaster recovery plan works as expected.
- Monitor replication lag using sys.dm_geo_replication_link_status or the Replication Lag metric in Azure Monitor to ensure your secondaries are synchronized (a query sketch is shown at the end of this post).
- Consider your RTO and RPO requirements when designing your failover group architecture.
- Use the read-write listener (<fog-name>.database.windows.net) for write workloads and the read-only listener (<fog-name>.secondary.database.windows.net) for read workloads to take advantage of the automatic endpoint redirection after failovers.
- Use a customer-managed failover group policy to ensure your RTO and RPO are in your control.

Frequently asked questions

What service tiers are supported for multiple secondaries in a failover group?
The following service tiers are supported: Standard, General Purpose, Premium, Business Critical, and Hyperscale.

When there is more than one secondary, how does the read-only endpoint work?
While creating a failover group with more than one secondary, you must designate one of the secondaries as the read-only endpoint target. All read-only connections will be routed to the designated secondary. If a failover group is created with just one secondary, then the read-only endpoint will default to the only available secondary.

If I have created multiple secondaries for a failover group, can I update the read-only endpoint at any time?
Yes, you can use "Edit configuration" in the portal or PowerShell to change the read-only listener endpoint target.

How does Auto DR work when multiple secondaries exist for a failover group?
The primary server (read-write endpoint) and the secondary server designated as the read-only endpoint will be used as a pair for Auto DR failover, and the endpoints will be swapped upon failover.

Learn more

- Failover groups overview & best practices - Azure SQL Database | Microsoft Learn
- Configure a failover group for Azure SQL Database | Microsoft Learn
- Active Geo-Replication - Azure SQL Database | Microsoft Learn
- Business continuity overview - Azure SQL Database | Microsoft Learn
- PowerShell - New Failover Group
- PowerShell - Modify Failover Group
- PowerShell - Perform a failover
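As referenced in the best practices above, here is a minimal sketch of checking replication lag from the primary database with sys.dm_geo_replication_link_status; treat it as a starting point and adjust the column list to what your service tier exposes.

-- Run in the primary database: one row per geo-replication link (i.e., per secondary).
SELECT partner_server,
       partner_database,
       replication_state_desc,
       replication_lag_sec,   -- how many seconds the secondary is behind the primary
       last_replication       -- time of the most recent acknowledged transaction
FROM sys.dm_geo_replication_link_status;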