SQL Server
Azure Data Studio Retirement
We're announcing the upcoming retirement of Azure Data Studio (ADS) on February 6, 2025, as we focus on delivering a modern, streamlined SQL development experience. ADS will remain supported until February 28, 2026, giving developers ample time to transition. This decision aligns with our commitment to simplifying SQL development by consolidating efforts on Visual Studio Code (VS Code) with the MSSQL extension, a powerful and versatile tool designed for modern developers.

Why Retire Azure Data Studio?

Azure Data Studio has been an essential tool for SQL developers, but evolving developer needs and the rise of more versatile platforms like VS Code have made it the right time to transition. Here's why:

Focus on innovation: VS Code, widely adopted across the developer community, provides a robust platform for delivering advanced features like cutting-edge schema management and improved query execution.

Streamlined tools: Consolidating SQL development on VS Code eliminates duplication, reduces engineering maintenance overhead, and accelerates feature delivery, ensuring developers have access to the latest innovations.

Why Transition to Visual Studio Code?

VS Code is the #1 developer tool, trusted by millions worldwide. It is a modern, versatile platform that meets the evolving demands of SQL and application developers. By transitioning, you gain access to cutting-edge tools, seamless workflows, and an expansive ecosystem designed to enhance productivity and innovation. We're committed to meeting developers where they are, providing a modern SQL development experience within VS Code. Here's how:

Modern development environment: VS Code is a lightweight, extensible, and community-supported code editor trusted by millions of developers. It provides regular updates, an active extension marketplace, and a seamless cross-platform experience for Windows, macOS, and Linux.

Comprehensive SQL features: With the MSSQL extension in VS Code, you can:
- Execute queries faster with filtering, sorting, and export options for JSON, Excel, and CSV.
- Manage schemas visually with Table Designer, Object Explorer, and support for keys, indexes, and constraints.
- Connect to SQL Server, Azure SQL (all offerings), and SQL database in Fabric using an improved Connection Dialog.
- Streamline development with scripting, object modifications, and a unified SQL experience.
- Optimize performance with an enhanced Query Results Pane and execution plans.
- Integrate with DevOps and CI/CD pipelines using SQL Database Projects.

Stay tuned for upcoming features: we're continuously building new experiences based on feedback from the community. Make sure to follow the MSSQL repository on GitHub to stay updated and contribute to the project!

Streamlined workflow: VS Code supports cloud-native development, real-time collaboration, and thousands of extensions to enhance your workflows.

Transitioning to Visual Studio Code: What You Need to Know

We understand that transitioning tools can raise concerns, but moving from Azure Data Studio (ADS) to Visual Studio Code (VS Code) with the MSSQL extension is designed to be straightforward and hassle-free. Here's why you can feel confident about this transition:

No Loss of Functionality: If you use ADS to connect to Azure SQL databases, SQL Server, or SQL database in Fabric, you'll find that the MSSQL extension supports these scenarios seamlessly. Your database projects, queries, and scripts created in ADS are fully compatible with VS Code and can be opened without additional migration steps.
Familiar features, enhanced experience: VS Code provides advanced tools like improved query execution, modern schema management, and CI/CD integration. Additionally, alternative tools and extensions are available to replace ADS capabilities like SQL Server Agent and Schema Compare.

Cross-platform and extensible: Like ADS, VS Code runs on Windows, macOS, and Linux, ensuring a consistent experience across operating systems. Its extensibility allows you to adapt it to your workflow with thousands of extensions.

If you have further questions or need detailed guidance, visit the ADS Retirement page. The page includes step-by-step instructions, recommended alternatives, and additional resources.

Continued Support

With the Azure Data Studio retirement, we're committed to supporting you during this transition:

Documentation: Find detailed guides, tutorials, and FAQs on the ADS Retirement page.
Community Support: Engage with the active Visual Studio Code community for tips and solutions. You can also explore forums like Stack Overflow.
GitHub Issues: If you encounter any issues, submit a request or report bugs on the MSSQL extension's GitHub repository.
Microsoft Support: For critical issues, reach out to Microsoft Support directly through your account.

Transitioning to VS Code opens the door to a more modern and versatile SQL development experience. We encourage you to explore the new possibilities and start your journey today!

Conclusion

Azure Data Studio has served the SQL community well, but the Azure Data Studio retirement marks an opportunity to embrace the modern capabilities of Visual Studio Code. Transitioning now ensures you're equipped with cutting-edge tools and a future-ready platform to enhance your SQL development experience. For a detailed guide on ADS retirement, visit aka.ms/ads-retirement. To get started with the MSSQL extension, check out the official documentation. We're excited to see what you build with VS Code!
SQL Server has encountered I/O requests taking longer than 15 seconds

Most of us have seen these errors in the SQL Server error log pointing to an I/O subsystem problem:

2015/06/15 12:43:01 spid8s SQL Server has encountered 1016 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [N:\ABCDATA8\ABCDATA8.ndf] in database [ABC] (5). The OS file handle is 0x000000000000172C. The offset of the latest long I/O is: 0x0000073234

In a customer case this message had to be analyzed in more detail, because the customer's storage team insisted on getting more information, as they did not see these high numbers on the storage system at that time. A deeper investigation reveals these findings:

- the counter is collected and reported for each file separately
- the message is printed by the checkpoint, the background process that kicks in every 5 seconds
- it is printed in a minimum interval of 300 seconds (5 minutes) per file, meaning only one message per 5 minutes per file
- it checks the number of ticks (milliseconds) an I/O request has been active and, if this exceeds 15000 (15 seconds), the I/O request is counted
- it does this for all parallel and asynchronous I/O requests (per file)

The catch is that it is not the number of I/O requests that is counted, but the number of times an I/O request was seen during the reporting time. Imagine you have a single, very long I/O request going on. The reporting kicks in after 5 seconds (by the checkpoint) and after 3 cycles (15 seconds) the I/O request is counted for the first time, but no message is printed yet as the five-minute minimum per file has not been reached. The very same I/O request is counted at each reporting cycle (every 5 seconds) until the 300-second threshold is reached and the message is printed (and the counter is reset). Until then this one I/O request is counted 57 times (300 seconds / 5 seconds reporting interval = 60 cycles, minus the 3 cycles of the first 15 seconds).

So 1000 occurrences means that in the 60 reporting cycles of the 5-minute reporting interval, I/O requests needing more than 15 seconds were seen 1000 times. That might be only a handful of very long-running I/O requests (the minimum is 1000 / 57 = about 17.5 requests) or many shorter requests (e.g. 1000 requests of 15 to 19 seconds). The message is misleading because it talks about occurrences rather than I/O requests, but it still points to a storage problem.

Note well, however: when you see even one such message it means you had at least one event where an I/O took more than 15 SECONDS to complete. This is VERY SLOW and indicates a problematic I/O event. There are not as many I/O requests hanging as the message suggests, but there are certainly some.

I hope this sheds some light on it.
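If you need to correlate these messages with actual per-file latency numbers, the cumulative I/O statistics SQL Server keeps per database file can help. The following query is only a hedged sketch, not part of the error-log mechanism described above; it uses the documented sys.dm_io_virtual_file_stats DMV to compute average read and write stall times per file.

-- Hedged sketch: average I/O stall per database file since instance start.
-- The DMV is cumulative, so compare two snapshots taken some minutes apart
-- to see current behavior rather than lifetime averages.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_stall_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id = vfs.file_id
ORDER BY avg_read_stall_ms DESC;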
SQL Agent Jobs in an AlwaysOn / Mirroring Environment

Running SQL Agent jobs in an AlwaysOn or Database Mirroring environment can be a challenge, as the database might not be accessible at the time the job starts. This is especially the case for all SAP collector and backup jobs. On the other hand, there might be jobs that should not run on the primary server (e.g. DBCC checks), but on a readable secondary or a read-only standby database (Log Shipping). In the worst case the jobs either fail or run on the wrong server. To prevent this, SAP created a small stored procedure to control when these kinds of jobs run. To use the stored procedure, run this script:

use master;
GO
IF (OBJECT_ID(N'dbo.CheckAccess') IS NOT NULL)
    DROP PROCEDURE [dbo].CheckAccess
GO
CREATE PROCEDURE dbo.CheckAccess
    @DBName         AS NVARCHAR(128) = N'master',
    @RunOnPrimary   AS TINYINT = 1,
    @RunOnSecondary AS TINYINT = 0,
    @RunOnReadonly  AS TINYINT = 0,
    @RC             AS TINYINT = 0 OUTPUT
AS
BEGIN
    DECLARE @IsReadOnly TINYINT;
    DECLARE @State TINYINT;
    DECLARE @StandBy TINYINT;
    DECLARE @UserAccess TINYINT;
    DECLARE @AOState TINYINT;
    DECLARE @AORole TINYINT;
    DECLARE @Message NVARCHAR(50);

    SET @RC = 0;
    SET @Message = N'Everything is OK.';

    IF EXISTS(SELECT TOP 1 database_id FROM sys.databases WHERE database_id = DB_ID(@DBName))
    BEGIN
        -- Check for the normal database state
        SELECT @IsReadOnly = is_read_only, @State = state, @StandBy = is_in_standby, @UserAccess = user_access
        FROM sys.databases
        WHERE name = @DBName;

        IF NOT ((@State + @UserAccess) = 0 AND (@IsReadOnly <= @RunOnReadonly) AND (@StandBy <= @RunOnReadonly))
        BEGIN
            SET @RC = 1;
            SET @Message = N'Database is NOT accessible.';
        END;
    END
    ELSE
    BEGIN
        SET @RC = 2;
        SET @Message = N'Database doesn''t exist.';
    END

    -- Check if we are running in an AlwaysOn configuration
    IF EXISTS(SELECT TOP 1 database_id FROM sys.dm_hadr_database_replica_states WHERE database_id = DB_ID(@DBName))
    BEGIN
        SELECT @AOState = db.database_state, @AORole = rs.role
        FROM sys.dm_hadr_database_replica_states AS db
        INNER JOIN sys.dm_hadr_availability_replica_states AS rs ON db.group_id = rs.group_id
        WHERE db.is_local = 1 AND rs.is_local = 1 AND db.database_id = DB_ID(@DBName)

        IF NOT (@AOState = 0 AND ((@RunOnPrimary != 0 AND @RunOnPrimary = @AORole) OR (@RunOnSecondary != 0 AND @RunOnSecondary = @AORole - 1)))
        BEGIN
            SET @RC = 3;
            SET @Message = N'The database is not in the required state for Always-ON access.';
        END;
    END;

    PRINT @Message
END;
GO

Run it on all nodes of the affected system (either AlwaysOn, Database Mirroring or Log Shipping). It will create a stored procedure CheckAccess in the master database. The procedure has 5 parameters:

Parameter         Data Type       Default Value   Description
@DBName           NVARCHAR(128)   N'master'       Database to check.
@RunOnPrimary     TINYINT         1               Should the job run on the primary database?
@RunOnSecondary   TINYINT         0               Should the job run on the secondary database?
@RunOnReadonly    TINYINT         0               Should the job run on the read-only databases (Log Shipping)?
@RC (OUTPUT)      TINYINT         0               Return code (0 = success, non-zero = failure)

With the parameters @RunOnPrimary, @RunOnSecondary and @RunOnReadonly you can control in which state of the AlwaysOn database the job should start. A combination of these parameters is possible, so the job can run on both primary and secondary databases if you set both to 1. Even if you do not use AlwaysOn or read-only databases, or if you set all the options to 0, the stored procedure will still check the availability of the database. It will return a non-zero return code if the database is absent, offline, suspect, in recovery or in any of the other states that won't let you access the database.
A valid call looks like this (check whether the TST database is accessible and whether it is the primary database in an AlwaysOn configuration):

DECLARE @RC TINYINT;
EXEC dbo.CheckAccess N'TST', 1, 0, 0, @RC OUTPUT;

To enable the CheckAccess stored procedure for jobs, we have to change the job a bit. In this example it is an SAP collector job (SAP_SCU_SAPMonitor_TST_MSSQL_COLLECTOR) for the TST SAP database. Open the properties for the job (right click -> Properties), switch to the Steps branch on the left and edit the first step with the name DBCOLL. Change the database to master (as the stored procedure lives there) and change the script from

EXECUTE sap_dbcoll

to

DECLARE @Msg NVARCHAR(128)
DECLARE @RC TINYINT
EXEC dbo.CheckAccess N'TST', 1, 0, 0, @RC OUTPUT
IF @RC = 0
    EXEC ('use [TST] EXECUTE sap_dbcoll')

Ensure that you change the database name accordingly (here TST) for your system. On the Advanced tab, change the 'On failure action' field to 'Quit the job reporting failure', then save and exit the job definition. This ensures that the DBCOLL part will only start when the database is accessible and is the primary database in the AlwaysOn configuration.

You can use this stored procedure for any job when you follow this generic approach:

DECLARE @Msg NVARCHAR(128)
DECLARE @RC TINYINT
EXEC dbo.CheckAccess N'<Database in Question>', 1, 0, 0, @RC OUTPUT
IF @RC = 0
    EXEC ('use [<Database in Question>] <Step definition>')

Regards
Clas
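As an illustration of the generic approach above, here is a hedged sketch of a job step that should run only on a readable secondary, for example a DBCC CHECKDB job as mentioned at the beginning of this post. The database name TST and the parameter values are assumptions for the example, not part of the original SAP job definitions.

-- Hedged sketch: run DBCC CHECKDB only when [TST] is a readable secondary
-- (job step database: master; 'On failure action': Quit the job reporting failure)
DECLARE @RC TINYINT;
EXEC dbo.CheckAccess N'TST', 0, 1, 0, @RC OUTPUT;   -- @RunOnPrimary = 0, @RunOnSecondary = 1, @RunOnReadonly = 0
IF @RC = 0
    EXEC ('use [TST] DBCC CHECKDB WITH NO_INFOMSGS');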
Always On - Synchronize SAP login, jobs and objects

SQL Server AlwaysOn is one of the high availability solutions available for an SAP system. It consists of two or more computers, each hosting a SQL Server with a copy of the SAP database. A listener points to the current primary copy and is used by the SAP system as the only connection point. For details on how to set up and configure an SAP system together with SQL Server AlwaysOn, see this blog post and its referenced blog posts.

During the setup the SAP system is configured from the current primary node, and all non-database related objects such as SQL Server Agent jobs, logins etc. are created only on the current primary database. In case of an (automatic) failover to one of the secondary nodes of AlwaysOn these objects are then missing. Jürgen introduced a script (sap_helprevlogin) in his initial blog post about the database load after setting up AlwaysOn. This script transfers only the logins, but falls short on transferring jobs, server-level permissions and other assignments.

One of the SAP developers working in our team has built a comprehensive PowerShell script (sap_synchronize_always_on.ps1) to perform all these tasks and to transfer all the SAP objects from the initial installation to all the other nodes of the AlwaysOn system. The script connects to the primary instance, reads the configuration of the secondary nodes and then synchronizes the objects and jobs with these nodes. The script must be executed by a domain administrator who has SQL Server sysadmin privileges on all AlwaysOn instances.

The script uses up to three input variables:
- The server name of the SQL Server instance or the listener name of the high-availability group. The default is (local).
- The name of the SAP database, which must be in a high-availability group on the given server.
- Single login (optional): only one login gets copied, along with the SAP CCMS jobs owned by that login. By default all logins mapped to the database are copied.

The script will execute:
- Create a procedure CheckAccess in the master database (see this blog for the details about it).
- Discover which logins are mapped to the database.
- Discover which SAP CCMS jobs belong to those logins. If a job does not use CheckAccess, change the job step to use CheckAccess and run the job step in master.
- Open a connection to each secondary and:
  - Create the procedure CheckAccess in the master database.
  - Create the logins if they don't exist already, using the same SID.
  - Create the jobs if they don't exist already. If a job exists and does not use CheckAccess, change the job step to use CheckAccess and run it in master.

If new SAP CCMS jobs are added because of remote monitoring from a different SAP system using DBACOCKPIT, the script can be re-executed. It will then copy only new objects which have not been copied before.

You can find this useful script attached; it makes the synchronization of the SAP systems in an AlwaysOn environment so much easier. Please ensure that you test the execution in your test environment first, before you run it in production. Neither SAP nor Microsoft takes any responsibility for the use of this script; you run it at your own risk.

Update January 2017: New script version that copies the sid<adm> and SAPService<SID> logins from the SAP system as well.

Best regards | Bless!
Clas & Guðmundur
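The reason the script re-creates logins "using the same SID" is that SQL logins are mapped to database users by SID; a login re-created on a secondary with a new SID leaves the database user orphaned after a failover. The following is only a hedged illustration of that idea (the login name is hypothetical, and the attached PowerShell script handles all of this for you):

-- Hedged illustration: keep the SID when re-creating a SQL login on a secondary.
-- On the primary: read the login's SID
SELECT name, sid FROM sys.sql_logins WHERE name = N'SAPTSTDB';   -- hypothetical login name

-- On the secondary: re-create the login with the same SID
CREATE LOGIN [SAPTSTDB]
    WITH PASSWORD = N'<strong password here>',                    -- or transfer the password hash with the HASHED option
         SID = 0x0102030405060708090A0B0C0D0E0F10;                -- SID copied from the primary (placeholder value)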
Deadlock Analysis (Examples)

As promised in my former blog about deadlock analysis, we proceed today with the deadlock series, looking at examples and their solutions.

Update Locks

The first example is a deadlock between two identical statements doing deletes on table BC_MON_AGREGATES. The statements are:

DELETE FROM BC_MON_AGREGATES WHERE AGR_INTERVAL = @P0 AND START_TIME < ;

As you can see in the deadlock graph, both threads are holding (at least) an X lock on an index named PK__BC_MON_A__3214EC272B5F6B28, the primary key on this table, and both are requesting a U lock on the same index. As U locks are only used for checking whether a row matches the WHERE condition, this constellation is very often seen when the threads are scanning the table or index instead of seeking through an index to find the rows. In this case the solution was to create a fitting index for the WHERE clause so that the two threads seek instead of scanning the index.

RID Locks

In the next case we see that both threads are requesting RID locks on table GENSETC. The statements were:

UPDATE GENSETC SET NAME = ,COMPONENT = ,GENSTATUS = @P3 WHERE PGMID = @P4 AND OBJECT = @P5 AND OBJNAME = @P6

RID locks are only used if there is no clustered index on the table. If there is a clustered index, we will see KEY locks instead. The root cause for this deadlock was the missing clustered index as defined in the SAP dictionary (PGMID, OBJECT, OBJNAME).

PAGE locks

Here we see PAGE locks on table /1OT/FDA12IT2001. Both statements were running a DELETE on that table:

DELETE FROM /1OT/FDA12IT2001 WHERE CLIENT = AND ORDID IN ( ,.....,@P101 )

PAGE locks are used when the SQL Server optimizer expects a lot of rows to process. The PAGE locks are used from the beginning, meaning this is not a lock escalation, as there are only lock escalations from PAGE to TABLE and from ROW to TABLE, but not from ROW to PAGE. In this case the solution can be to disallow PAGE locks on the table (a short sketch of this option is shown further below), to reduce the IN clause, or to create a fitting index for the WHERE clause (an index on ORDID).

Deadlock with a SELECT

In the next deadlock we have an UPDATE and a SELECT deadlocking each other. The SELECT is easy to identify by the S locks it is using; the right one is the UPDATE using a fitting index (no scanning). How can a SELECT be part of a deadlock, when SAP reads most of its data in "READ UNCOMMITTED"? The answer is that this SELECT runs in "READ COMMITTED" mode, meaning it only reads data that is already committed and it will be blocked by U and X locks. The application that issues the UPDATE and SELECT statements has to analyze this deadlock and avoid the parallel execution of these two statements, e.g. by using semaphores.

Multiple Tables

A deadlock does not only happen between two threads; SQL Server can detect deadlocks of any depth. In a deadlock between three threads, SQL Server will kill the thread with the least amount of log used, in this case the left one. In some cases the remaining threads will deadlock again as a two-node deadlock, but most of the time the deadlock is resolved by killing one thread out of the chain. The most impressive deadlock I saw was one with 103 threads. The victim was an UPDATE on BC_JMSQUEUE and all the other waiting threads were running a DELETE on the same table. The root cause was that there was no fitting index for the DELETE statement, which caused the DELETE to scan the table instead of using an index. After creating the index, the problem was resolved.
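Coming back to the PAGE-lock example above: one of the mitigations mentioned there is to disallow page locks on the affected table. A hedged sketch of how that can be done at the index level follows; the table name is taken from the example, but whether this is appropriate for a given SAP table should be verified (and such changes should normally go through the SAP DBA tools).

-- Hedged sketch: disallow PAGE locks so the engine uses ROW locks instead.
-- Caution: ALTER INDEX ... REORGANIZE requires page locks, so index maintenance
-- jobs may need adjusting after this change.
ALTER INDEX ALL ON [/1OT/FDA12IT2001]
    SET (ALLOW_PAGE_LOCKS = OFF);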
Different tables

Deadlocks can happen not only on the same table, but between different tables as well. Here we see a deadlock between tables CBS_ACCESSQUEUE and CBS_LOCK. The statements are UPDATE statements on each table. These kinds of deadlocks are not easy to understand, as the deadlock graph only shows the end of the deadlock. The two statements that led to the two initial locks (the KEY lock on CBS_ACCESSQUEUE held by the victim and the KEY lock on CBS_LOCK held by the winner) are not part of the deadlock graph, but might be part of the problem. How to analyse such deadlocks will be part of the third post of this deadlock series.
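Until that follow-up post, a hedged sketch of how to pull recent deadlock graphs out of the built-in system_health Extended Events session can be useful when you need the full XML report for both statements involved:

-- Hedged sketch: extract xml_deadlock_report events from the system_health ring buffer
SELECT xed.query('.') AS deadlock_graph
FROM (
    SELECT CAST(target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets AS st
    INNER JOIN sys.dm_xe_sessions AS s
            ON s.address = st.event_session_address
    WHERE s.name = N'system_health'
      AND st.target_name = N'ring_buffer'
) AS t
CROSS APPLY t.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(xed);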
Synchronize SQL database between two servers

Questions (to get the synchronization of a SQL database between two servers right):

1. Is SQL Server Enterprise required on both servers for the synchronization to work? Or is it enough to have SQL Server Enterprise on server1 and only SQL Server Express on server2?
2. Which requirements have to be fulfilled or prepared in order to synchronize a SQL database between two servers?
3. Which options exist (when there is more than one possible solution) for synchronizing a SQL database (ultimately between four servers)?
4. Is there a manual describing the synchronization of SQL Server, with settings and steps?

Thanks in advance.
SQL Server AG Failover - Automatic Failover

Hello, I am looking for a straight and definitive answer to a question. I want to trust what Copilot says, but I really need to hear it from Microsoft, and I can't find any documentation from Microsoft confirming it.

My environment:
- 2 replicas in datacenter 1
- 1 replica in datacenter 2
- All three (3) replicas are set to synchronous-commit mode with automatic failover.

I tested the failover manually between all three (3) replicas without issue. When I test the automatic failover, I take down both replicas in datacenter 1 at the same time to simulate a datacenter outage. I look at the replica in datacenter 2 and it just says (Resolving...) next to the replica name. The replica does not come online and the DB is not moved. When I was searching I couldn't find out why. So I turned to Copilot, not to solve the issue, but to see if it could point me in the right direction.

I told Copilot my setup and what happened. Copilot responded that, by design from Microsoft, you cannot have more than two (2) replicas set to synchronous-commit mode with automatic failover in a SQL Server AG. If more than two (2) are set for automatic failover, the SQL Server AG will use the first two (2) replicas it sees in its metadata and ignore the rest. Copilot went into detail about why this is designed this way, but the amount of information would make this post longer than it already is.

If this is true, then when I took down both replicas in datacenter 1, the SQL Server AG only saw those two (2) replicas in datacenter 1 as the available replicas to use for an automatic failover, and that is why the replica in datacenter 2 did not come online and the DB was not moved.

So let's do a test. I brought back up the two (2) replicas in datacenter 1. Then I made a change in the AG properties: I set the 2nd replica in datacenter 1 to manual failover. So in datacenter 1, one replica is set to automatic failover and one replica is set to manual failover, and the replica in datacenter 2 is set to automatic failover. I then take down both replicas in datacenter 1 again to simulate the "outage", and the replica in datacenter 2 comes online and the DB is moved.

So is Copilot right? Can only two (2) replicas have/use automatic failover? I cannot find a definitive answer confirming this. Or is my configuration wrong/missing something, and if it is, could you please point me in the right direction on how to get this resolved?
Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide

The Secure Future Initiative is Microsoft's strategic framework for embedding security into every layer of the data platform, from infrastructure to identity. As part of this initiative, Microsoft Entra authentication for Azure SQL Database offers a modern, password-less approach to access control that aligns with Zero Trust principles. By leveraging Entra identities, customers benefit from stronger security postures through multifactor authentication, centralized identity governance, and seamless integration with managed identities and service principals. Onboarding Entra authentication enables organizations to reduce reliance on passwords, simplify access management, and improve auditability across hybrid and cloud environments. With broad support across tools and platforms, and growing customer adoption, Entra authentication is a forward-looking investment in secure, scalable data access.

Migration Steps Overview

Organizations utilizing SQL authentication can strengthen database security by migrating to Entra ID-based authentication. The following steps outline the process:

1. Identify your logins and users: review the existing SQL databases, along with all related users and logins, to assess what's needed for migration.
2. Enable Entra authentication on Azure SQL logical servers by assigning a Microsoft Entra admin.
3. Identify all permissions associated with the SQL logins and database users.
4. Recreate SQL logins and users with Microsoft Entra identities.
5. Upgrade application drivers and libraries to the minimum versions and update application connections to SQL databases to use Entra-based managed identities.
6. Update deployments for SQL logical server resources to have Microsoft Entra-only authentication enabled.
7. For all existing Azure SQL Databases, flip to Entra-only after validation.
8. Enforce Entra-only for all Azure SQL Databases with Azure Policies (deny).

Step 1: Identify your logins and users - Use SQL Auditing

Consider using SQL Audit to monitor which identities are accessing your databases. Alternatively, you may use other methods, or skip this step if you already have full visibility of all your logins. Configure server-level SQL auditing. For more information on turning on server-level auditing: Configure Auditing for Azure SQL Database series - part1 | Microsoft Community Hub.

SQL Audit can be enabled on the logical server, which enables auditing for all existing and new user databases. When you set up auditing, the audit log is written to your storage account in the SQL Database audit log format. Use sys.fn_get_audit_file_v2 to query the audit logs in SQL. You can join the audit data with sys.server_principals and sys.database_principals to view users and logins connecting to your databases.
The following query is an example of how to do this:

SELECT (CASE WHEN database_principal_id > 0 THEN dp.type_desc ELSE NULL END) AS db_user_type
     , (CASE WHEN server_principal_id > 0 THEN sp.type_desc ELSE NULL END) AS srv_login_type
     , server_principal_name
     , server_principal_sid
     , server_principal_id
     , database_principal_name
     , database_principal_id
     , database_name
     , SUM(CASE WHEN succeeded = 1 THEN 1 ELSE 0 END) AS successful_logins
     , SUM(CASE WHEN succeeded = 0 THEN 1 ELSE 0 END) AS failed_logins
FROM sys.fn_get_audit_file_v2(
        '<Storage_endpoint>/<Container>/<ServerName>',
        DEFAULT, DEFAULT,
        '2023-11-17T08:40:40Z', '2023-11-17T09:10:40Z')
-- join on database principals (users) metadata
LEFT OUTER JOIN sys.database_principals dp ON database_principal_id = dp.principal_id
-- join on server principals (logins) metadata
LEFT OUTER JOIN sys.server_principals sp ON server_principal_id = sp.principal_id
-- filter to actions DBAF (Database Authentication Failed) and DBAS (Database Authentication Succeeded)
WHERE (action_id = 'DBAF' OR action_id = 'DBAS')
GROUP BY server_principal_name
       , server_principal_sid
       , server_principal_id
       , database_principal_name
       , database_principal_id
       , database_name
       , dp.type_desc
       , sp.type_desc

Step 2: Enable Microsoft Entra authentication (assign admin)

Follow this to enable Entra authentication and assign a Microsoft Entra admin at the server. This is mixed mode; existing SQL authentication continues to work.

WARNING: Do NOT enable Entra-only (azureADOnlyAuthentications) yet. That comes in Step 7.

Entra admin recommendation: For production environments, it is advisable to use a PIM-enabled Entra group as the server administrator for enhanced access control.

Step 3: Identify & document existing permissions (SQL logins & users)

Retrieve a list of all your SQL authentication logins. Make sure to run this on the master database:

SELECT * FROM sys.sql_logins

List all SQL authentication users by running the query below on all user databases. This lists the users per database (you may need only the 'name' column to identify the users):

SELECT * FROM sys.database_principals WHERE TYPE = 'S'

List permissions per SQL authentication user:

SELECT database_principals.name
     , database_principals.principal_id
     , database_principals.type_desc
     , database_permissions.permission_name
     , CASE WHEN class = 0 THEN 'DATABASE'
            WHEN class = 3 THEN 'SCHEMA: ' + SCHEMA_NAME(major_id)
            WHEN class = 4 THEN 'Database Principal: ' + USER_NAME(major_id)
            ELSE OBJECT_SCHEMA_NAME(database_permissions.major_id) + '.' + OBJECT_NAME(database_permissions.major_id)
       END AS object_name
     , columns.name AS column_name
     , database_permissions.state_desc AS permission_type
FROM sys.database_principals AS database_principals
INNER JOIN sys.database_permissions AS database_permissions
        ON database_principals.principal_id = database_permissions.grantee_principal_id
LEFT JOIN sys.columns AS columns
       ON database_permissions.major_id = columns.object_id
      AND database_permissions.minor_id = columns.column_id
WHERE type_desc = 'SQL_USER'
ORDER BY database_principals.name

Step 4: Create SQL users for your Microsoft Entra identities

You can create users (preferred) for all Entra identities. Learn more on Create user. The "FROM EXTERNAL PROVIDER" clause in T-SQL distinguishes Entra users from SQL authentication users. The most straightforward approach to adding Entra users is to use a managed identity for Azure SQL and grant the required three Graph API permissions. These permissions are necessary for Azure SQL to validate Entra users:
- User.Read.All: allows access to Microsoft Entra user information.
- GroupMember.Read.All: allows access to Microsoft Entra group information.
- Application.Read.All: allows access to Microsoft Entra service principal (application) information.

For creating Entra users with non-unique display names, use OBJECT_ID in the CREATE USER T-SQL:

-- Retrieve the Object ID from the Entra blade in the Azure portal.
CREATE USER [myapp4466e] FROM EXTERNAL PROVIDER
    WITH OBJECT_ID = 'aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb'

For more information on finding the Entra Object ID: Find tenant ID, domain name, user object ID - Partner Center | Microsoft Learn

Alternatively, if granting these API permissions to SQL is undesirable, you may add Entra users directly using the T-SQL commands provided below. In these scenarios, Azure SQL will bypass Entra user validation.

Create SQL user for a managed identity or an application - This T-SQL code snippet establishes a SQL user for an application or managed identity. Substitute the @MSIname and @clientId variables (note: use the client ID, not the object ID) with the display name and client ID of your managed identity or application.

-- Replace the two variables with the managed identity display name and client ID
declare @MSIname sysname = '<Managed Identity/App Display Name>';
declare @clientId uniqueidentifier = '<Managed Identity/App Client ID>';

-- convert the guid to the right type and create the SQL user
declare @castClientId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @clientId), 1);

-- Construct command: CREATE USER [@MSIname] WITH SID = @castClientId, TYPE = E;
declare @cmd nvarchar(max) = N'CREATE USER [' + @MSIname + '] WITH SID = ' + @castClientId + ', TYPE = E;'
EXEC (@cmd)

For more information on finding the Entra Client ID: Register a client application in Microsoft Entra ID for the Azure Health Data Services | Microsoft Learn

Create SQL user for a Microsoft Entra user - Use this T-SQL to create a SQL user for a Microsoft Entra account. Enter your user name and object ID:

-- Replace the two variables with the MS Entra user alias and object ID
declare @username sysname = '<MS Entra user alias>'; -- (e.g., username@contoso.com)
declare @objectId uniqueidentifier = '<User Object ID>';

-- convert the guid to the right type
declare @castObjectId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @objectId), 1);

-- Construct command: CREATE USER [@username] WITH SID = @castObjectId, TYPE = E;
declare @cmd nvarchar(max) = N'CREATE USER [' + @username + '] WITH SID = ' + @castObjectId + ', TYPE = E;'
EXEC (@cmd)

Create SQL user for a Microsoft Entra group - This T-SQL snippet creates a SQL user for a Microsoft Entra group. Set @groupName and @objectId to your values.
-- Replace the two variables with the MS Entra group display name and object ID
declare @groupName sysname = '<MS Entra group display name>'; -- (e.g., ContosoUsersGroup)
declare @objectId uniqueidentifier = '<Group Object ID>';

-- convert the guid to the right type and create the SQL user
declare @castObjectId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @objectId), 1);

-- Construct command: CREATE USER [@groupName] WITH SID = @castObjectId, TYPE = X;
declare @cmd nvarchar(max) = N'CREATE USER [' + @groupName + '] WITH SID = ' + @castObjectId + ', TYPE = X;'
EXEC (@cmd)

For more information on finding the Entra Object ID: Find tenant ID, domain name, user object ID - Partner Center | Microsoft Learn

Validate SQL user creation - When a user is created correctly, the EntraID column in this query shows the user's original MS Entra ID:

select CAST(sid as uniqueidentifier) AS EntraID, * from sys.database_principals

Assign permissions to Entra-based users - After creating Entra users, assign them SQL permissions to read or write, either by using GRANT statements or by adding them to roles like db_datareader (see the short sketch after Step 8 below). Refer to your documentation from Step 3, ensuring you include all necessary user permissions for the new Entra SQL users and that security policies remain enforced.

Step 5: Update Programmatic Connections

Change your application connection strings from SQL authentication to managed identities and test each app for Microsoft Entra compatibility. Upgrade your drivers to these versions or newer:

- JDBC driver version 7.2.0 (Java)
- ODBC driver version 17.3 (C/C++, COBOL, Perl, PHP, Python)
- OLE DB driver version 18.3.0 (COM-based applications)
- Microsoft.Data.SqlClient 5.2.2+ (ADO.NET)
- Microsoft.EntityFramework.SqlServer 6.5.0 (Entity Framework)

System.Data.SqlClient (SDS) doesn't support managed identity; switch to Microsoft.Data.SqlClient (MDS). If you need to port your applications from SDS to MDS, the following cheat sheet will be helpful: https://github.com/dotnet/SqlClient/blob/main/porting-cheat-sheet.md. Microsoft.Data.SqlClient also takes a dependency on additional packages, most notably MSAL for .NET (version 4.56.0+). Here is an example of an Azure web application connecting to Azure SQL using managed identity.

Step 6: Validate No Local Auth Traffic

Be sure to switch all your connections to managed identity before you redeploy your Azure SQL logical servers with Microsoft Entra-only authentication turned on. Repeat the use of SQL Audit, just as you did in Step 1, but now to confirm that every connection has moved away from SQL authentication. Once your server is running with only Entra authentication, any connections still based on SQL authentication will not work, which could disrupt services. Test your systems thoroughly to verify that everything operates correctly.

Step 7: Enable Microsoft Entra-only & disable local auth

Once all your connections and applications use managed identity, you can disable SQL authentication by turning on Entra-only authentication via the Azure portal or by using the APIs.

Step 8: Enforce at scale (Azure Policy)

Additionally, after successful migration and validation, it is recommended to deploy the built-in Azure Policy across your subscriptions to ensure that all SQL resources do not use local authentication. During resource creation, Azure SQL instances will be required to have Microsoft Entra-only authentication enabled. This requirement can be enforced through Azure policies.
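To close the loop on the permission-assignment step above, here is a hedged sketch of granting access to a newly created Entra-based user, either through fixed database roles or an explicit GRANT. The user name reuses the [myapp4466e] example from Step 4; substitute the users you actually created and the permissions you documented in Step 3.

-- Hedged sketch: re-apply the permissions documented in Step 3 to the new Entra user
ALTER ROLE db_datareader ADD MEMBER [myapp4466e];
ALTER ROLE db_datawriter ADD MEMBER [myapp4466e];

-- or grant specific permissions instead of fixed roles:
GRANT SELECT, INSERT, UPDATE ON SCHEMA::dbo TO [myapp4466e];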
Best Practices for Entra-Enabled Azure SQL Applications

- Use exponential backoff with decorrelated jitter for retrying transient SQL errors, and set a max retry cap to avoid resource drain.
- Separate retry logic for connection setup and query execution.
- Cache and proactively refresh Entra tokens before expiration.
- Use Microsoft.Data.SqlClient v3.0+ with Azure.Identity for secure token management.
- Enable connection pooling and use consistent connection strings.
- Set appropriate timeouts to prevent hanging operations.
- Handle token/auth failures with targeted remediation, not blanket retries.
- Apply least-privilege identity principles; avoid global/shared tokens.
- Monitor retry counts, failures, and token refreshes via telemetry.
- Maintain auditing for compliance and security.
- Enforce TLS 1.2+ (Encrypt=True, TrustServerCertificate=False).
- Prefer pooled over static connections.
- Log SQL exception codes for precise error handling.
- Keep libraries and drivers up to date for the latest features and resilience.

References

- Use this resource to troubleshoot issues with Entra authentication (previously known as Azure AD authentication): Troubleshooting problems related to Azure AD authentication with Azure SQL DB and DW | Microsoft Community Hub
- To add Entra users from an external tenant, invite them as guest users to the Azure SQL Database's Entra administrator tenant. For more information on adding Entra guest users: Quickstart: Add a guest user and send an invitation - Microsoft Entra External ID | Microsoft Learn

Conclusion

Migrating to Microsoft Entra password-less authentication for Azure SQL Database is a strategic investment in security, compliance, and operational efficiency. By following this guide and adopting best practices, organizations can reduce risk, improve resilience, and future-proof their data platform in alignment with Microsoft's Secure Future Initiative.
TLS 1.3 in SQL Server 2025: What It Really Changes for Security

Why TLS 1.3 matters

TLS (Transport Layer Security) is the protocol that encrypts traffic between clients and servers. For many years, most SQL Server environments have relied on TLS 1.2, which dates back to 2008. TLS 1.3, finalized in 2018, is a significant evolution. It brings three main benefits:

Better performance
- With fewer roundtrips and faster negotiation, secure connections are established much more quickly in practice.

Stronger security
- Old and weak algorithms are removed (RC4, SHA1, MD5, DES, 3DES, ...).
- Perfect Forward Secrecy (PFS) is mandatory. Even if a private key is compromised later, past sessions remain protected.
- Simpler cipher suites: less configuration complexity, fewer chances to misconfigure.

Easier compliance
- Recommended by NIST.
- Strongly encouraged by PCI DSS 4.0.
- Fewer exceptions to justify during audits.

What SQL Server 2025 adds

SQL Server 2022 introduced TLS 1.3 through the new TDS 8.0 (Tabular Data Stream) protocol, and SQL Server 2025 makes it broadly usable across the SQL Server ecosystem; it is not limited to client connections only.

Components that can use TLS 1.3
- Client connections (ODBC, OLE DB, .NET)
- SQL Server Agent
- bcp utility
- sqlcmd utility
- Always On Availability Groups
- Always On Failover Cluster Instances
- Log Shipping
- Transactional replication
- Merge replication
- Peer-to-peer replication
- Snapshot replication
- Linked Servers
- PolyBase
- SQL Writer Service

Prerequisites

Valid certificates on all endpoints
- Self-signed certificates are acceptable for testing.
- In production, use certificates from a trusted CA or an internal PKI.

Updated drivers
- ODBC Driver 18+ for SQL Server
- OLE DB Driver 19+ for SQL Server
- Microsoft.Data.SqlClient 5.0+
Older drivers will either fall back to older TLS versions or fail to connect.

SQL Server configuration
- You can enforce encryption on all connections (Force Encryption).
- You can configure TDS 8.0 as the minimum protocol version (Force Strict Encryption).

How Copilot can help

The MSSQL extension with GitHub Copilot can assist you when planning and rolling out TLS 1.3. For example:

"I'm setting up log shipping between two SQL Server 2025 instances with TLS 1.3 enabled. What certificate requirements should I verify?"

Auditing the current state
"Analyze my current SQL Server encryption configuration. What TLS version are my connections using? Are there any connections using deprecated protocols?"
Copilot can generate and explain the queries that inspect the relevant DMVs and interpret the results (a hedged example of such a query follows this section).

Generating configuration scripts
"Generate a script to configure SQL Server 2025 to require TLS 1.3 for all incoming connections, including the certificate configuration steps."

Diagnosing compatibility issues
"My application fails to connect after enabling TDS 8.0. The error mentions "SSL Provider". What should I check and how do I fix it?"

Reviewing linked servers
"Review my linked server configurations and identify which ones might have issues with TLS 1.3 requirements."

Documenting the changes
"Generate documentation for our security team explaining the TLS configuration changes we're implementing for PCI-DSS compliance."

It is possible that the AI-generated content is incorrect. You remain responsible for reviewing, validating, and approving it before any use. Do not rely on this output without thorough human verification. Not intended for production use.
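As a starting point for the auditing step mentioned above, here is a hedged sketch of a query against the connection-level DMVs. It shows whether each current session is encrypted and how it authenticated; note that the exact TLS protocol version is not exposed by this DMV on every SQL Server build, so treat it as a first check rather than a complete audit.

-- Hedged sketch: which current connections are encrypted and how they authenticated
SELECT c.session_id,
       s.program_name,
       c.net_transport,
       c.protocol_type,
       c.encrypt_option,      -- TRUE when the connection is encrypted
       c.auth_scheme,         -- SQL, NTLM, KERBEROS, ...
       c.client_net_address
FROM sys.dm_exec_connections AS c
INNER JOIN sys.dm_exec_sessions AS s
        ON s.session_id = c.session_id
WHERE c.session_id <> @@SPID;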
Things to watch out for

Linked Servers to legacy systems
You may have linked servers that point to:
- Older SQL Server versions (2016 and earlier)
- ODBC sources that do not support TLS 1.2/1.3
- Third-party servers with outdated TLS stacks
If you enforce TLS 1.3 everywhere, these connections can fail. Possible approaches:
- Upgrade the remote systems
- Use temporary exceptions (not recommended long-term in production)
- Introduce a proxy or gateway that can handle protocol translation

Replication to older versions
Transactional or merge replication to subscribers running SQL Server 2019 or earlier may require mixed configurations.

Legacy applications
Applications using:
- .NET Framework with older System.Data.SqlClient versions
- Old ODBC/OLE DB drivers
- Third-party tools that are not updated
may need to be upgraded or reconfigured before you enforce TLS 1.3.

Suggested deployment strategy
These steps are indicative; adapt them to your environment and change management process.

Phase 1: Audit
- Inventory all inbound and outbound connections
- Identify client types and driver versions
- Document linked servers and replication paths

Phase 2: Testing
- Enable TLS 1.3 in a staging or preproduction environment
- Test all critical applications and administrative tools
- Identify and resolve compatibility issues

Phase 3: Progressive rollout
- Enable TLS 1.3 as an option without forcing it
- Monitor which connections still negotiate older TLS versions
- Once all key systems are compatible, enforce TLS 1.3

References
- TDS 8.0 - SQL Server
- Configure TLS 1.3 on SQL Server 2022 and later - SQL Server
- Encrypt Connections by Importing a Certificate - SQL Server
- Database Engine: Breaking Changes - SQL Server 2025
- Certificate Requirements for SQL Server - SQL Server