mssql-python 1.6: Unblocking Your Threads
The last two mssql-python releases shipped big features: Bulk Copy in 1.4 for high-throughput data loading, and Apache Arrow in 1.5 for zero-copy analytics. Version 1.6 is about what happens next: you take those features into production, scale up your thread pool, and find out where the driver was quietly holding you back. This release unblocks your threads during connection setup, fixes crashes and incorrect results in common cursor patterns, and hardens security for passwords with special characters and log file paths.

```
pip install --upgrade mssql-python
```

Your threads can run while connections are opening

If you're running mssql-python behind Flask, FastAPI, Django, or any WSGI/ASGI server with thread-based workers, this one matters.

Opening a database connection is slow. There's DNS resolution, a TCP handshake, TLS negotiation, and SQL Server authentication. In previous versions, every other Python thread in your process was frozen while that happened, because the driver held the Global Interpreter Lock (GIL) during the entire operation. One thread opening a connection meant no other thread could serve requests, process data, or do anything at all.

Version 1.6 releases the GIL during connect and disconnect. Your other threads keep running while the network round-trip completes. If you have a multi-threaded web server handling concurrent requests, this removes a serialization bottleneck you may not have realized you had.

The connection pool was also reworked to stay safe under this change. Previously, the pool held an internal lock while calling connect, which would have created a deadlock now that connect releases the GIL. The pool now reserves a slot first, connects outside the lock, and rolls back the reservation if the connection fails.

Decimal parameters work with setinputsizes

If you use cursor.setinputsizes() to declare parameter types for performance-sensitive batch inserts, you may have hit a crash when specifying SQL_DECIMAL or SQL_NUMERIC. This is fixed.
Decimal values now bind correctly whether you're using execute() or executemany():

```python
cursor.setinputsizes([
    (mssql_python.SQL_WVARCHAR, 100, 0),
    (mssql_python.SQL_INTEGER, 0, 0),
    (mssql_python.SQL_DECIMAL, 18, 2),
])
cursor.executemany(
    "INSERT INTO Products (Name, CategoryID, Price) VALUES (?, ?, ?)",
    [
        ("Widget", 1, Decimal("19.99")),
        ("Gadget", 2, Decimal("29.99")),
    ],
)
```

Iterating catalog results with fetchone()

If you've used cursor.tables(), cursor.columns(), or other catalog methods and tried to walk the results with fetchone(), you may have gotten incorrect data. Row tracking was broken for catalog result sets. This now works the way you'd expect:

```python
cursor.tables(tableType="TABLE")
while True:
    row = cursor.fetchone()
    if row is None:
        break
    print(row.table_name)
```

This also applies to primaryKeys(), foreignKeys(), statistics(), procedures(), and getTypeInfo().

Reusing prepared statements without reset

If you call cursor.execute() with reset_cursor=False to reuse a prepared statement across calls, this no longer raises an "Invalid cursor state" error.

Passwords with special characters stay masked in logs

If your SQL Server password contains semicolons, braces, or other ODBC-special characters (e.g., PWD={Top;Secret}), previous versions could accidentally leak part of it in sanitized log output. The password masking logic has been rewritten to correctly handle all ODBC connection string formats. If the connection string can't be parsed at all, the entire string is now redacted rather than partially exposed.

The logging system also now rejects log file paths that attempt directory traversal, preventing setup_logging(log_file_path="../../somewhere/else.log") from writing outside the intended directory.

Better type checker support for executemany

If your type checker flagged executemany() when you passed dictionaries as parameter rows, that warning is gone. The type annotations now correctly accept Mapping types, matching the DB API 2.0 spec for named parameters.
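The GIL release described earlier is easy to see without a real server. The sketch below is illustrative only: it uses time.sleep, which releases the GIL just as the new connect path does, as a stand-in for the network round-trip; no mssql-python API is involved.

```python
import threading
import time

def open_connection_simulated(delay=0.2):
    """Stand-in for a connect() call whose blocking I/O releases the GIL.
    time.sleep releases the GIL, just as the 1.6 connect path now does
    during the real network round-trip."""
    time.sleep(delay)
    return "connected"

def open_many(n_threads=4, delay=0.2):
    """Open n simulated connections from separate threads; return wall time."""
    threads = [
        threading.Thread(target=open_connection_simulated, args=(delay,))
        for _ in range(n_threads)
    ]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = open_many()
    # Because the blocking call does not hold the GIL, four 0.2s "connects"
    # overlap and finish in roughly 0.2s total rather than 0.8s serialized.
    print(f"4 concurrent connects took {elapsed:.2f}s")
```

If the blocking call held the GIL for its full duration (as in previous driver versions), the four openings would serialize and the wall time would approach the sum of the delays.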
Get started

```
pip install --upgrade mssql-python
```

For questions or issues, file them on GitHub or email mssql-python@microsoft.com.

mssql-django 1.7.1: Microsoft Fabric Support and Migration Fixes
We just shipped mssql-django 1.7.1 with two fixes that matter if you're running Django on Microsoft Fabric or using descending indexes in your migrations.

JSONField Now Works on Microsoft Fabric

SQL Database in Microsoft Fabric reports itself as EngineEdition 12, which our backend didn't previously recognize. The result: JSONField queries, hash functions, collation introspection, and test teardown all broke on Fabric because the backend couldn't correctly identify the server capabilities.

In 1.7.1, we added full detection for Fabric's engine edition. The backend now correctly treats Fabric as an Azure SQL-class database, which means JSONField, MD5, SHA1, SHA224, SHA256, SHA384, SHA512, and collation-dependent lookups all work as expected. We also combined the ProductVersion and EngineEdition queries into a single round trip, so connection setup is faster too.

If you've been waiting to use Django with SQL Database in Microsoft Fabric, this is the release that makes it work.

Descending Index Migrations No Longer Crash

If you had a model with a descending index and ran an AlterField migration on one of the indexed columns, Django would crash with FieldDoesNotExist. The issue was in how our schema editor looked up fields during index reconstruction: it was reading index.fields (which only contains field names for simple indexes) instead of index.fields_orders (which correctly handles the (field_name, order) tuples that descending indexes use).

This was a one-line fix, but it blocked anyone whose migrations touched fields covered by descending indexes. If you've been working around this, upgrade and your migrations will run cleanly.

SQL Server 2025 in CI

We upgraded our Windows CI pipeline to run against SQL Server 2025, so every commit is now tested against the latest version.
Combined with our existing coverage across SQL Server 2016-2022, Azure SQL Database, Azure SQL Managed Instance, and now Microsoft Fabric, you can be confident the backend works across the full Microsoft data platform.

Upgrade

```
pip install --upgrade mssql-django
```

Full compatibility:

| Component | Supported |
| --- | --- |
| Django | 3.2, 4.0, 4.1, 4.2, 5.0, 5.1, 5.2, 6.0 |
| Python | 3.8 - 3.14 (Django 6.0 requires 3.12+) |
| SQL Server | 2016, 2017, 2019, 2022, 2025 |
| Azure SQL | Database, Managed Instance, SQL Database in Fabric |
| ODBC Driver | Microsoft ODBC Driver 17 or 18 |

Questions, bugs, or contributions? Find us on GitHub. mssql-django is open source under the BSD license. Built and maintained by Microsoft.

Introducing Pacemaker HA Agent v2 for SQL Server on Linux (In Preview)
We are excited to introduce the next generation of high availability (HA) agent for SQL Server on Linux: Pacemaker HA Agent v2. This release is a major step forward, designed to reduce planned and unplanned failover times compared to the previous agent, based on internal engineering improvements.

Why Pacemaker Is Required for SQL Server HA on Linux

For users new to Linux, it's important to understand how high availability works on this platform. On Windows Server, Always On availability groups use an underlying Windows Server Failover Cluster (WSFC) to:

- Monitor node health
- Detect failures
- Orchestrate automatic failovers

Always On availability groups on Linux rely on an external cluster orchestrator for health monitoring and failover coordination. Pacemaker is one such orchestrator, responsible for:

- Monitoring node and application health
- Coordinating failover decisions
- Helping mitigate split-brain scenarios through improved write-lease evaluation
- Managing resources such as availability groups and listeners

The Pacemaker HA Agent is the integration layer that allows Pacemaker to understand SQL Server health and manage availability groups safely.

Evolution of the SQL Server Pacemaker HA Agent

With SQL Server 2025 CU3 and later, Pacemaker HA Agent v2 is available in preview for Red Hat Enterprise Linux and Ubuntu through the mssql-server-ha package. Pacemaker HA Agent v2 uses a service-based architecture. The agent runs as a dedicated system service named mssql-pcsag, which is responsible for handling SQL Server-specific high availability operations and communication with Pacemaker. You can manage the mssql-pcsag service with standard service controls to start, restart, stop, or check the status of the service using the operating system's service manager (for example, systemctl).
```
# Start the mssql-pcsag service
sudo systemctl start mssql-pcsag

# Restart the mssql-pcsag service
sudo systemctl restart mssql-pcsag

# Check the status of the mssql-pcsag service
sudo systemctl status mssql-pcsag

# Stop the mssql-pcsag service
sudo systemctl stop mssql-pcsag
```

Limitations of Pacemaker HA Agent v1

While the original agent enabled SQL Server HA on Linux, customers running production workloads encountered several challenges:

- Failover delays of 30 seconds to 2 minutes during planned or unplanned events
- Limited health detection, missing conditions such as I/O stalls and memory pressure
- Rigid failover behavior, unlike the flexible policies available on Windows (WSFC)
- Incomplete write-lease handling, requiring custom logic
- No support for TLS 1.3 for Pacemaker and SQL Server communications

How Pacemaker HA Agent v2 Addresses These Gaps

Pacemaker HA Agent v2 is a ground-up improvement, designed to improve the reliability characteristics of SQL Server HA on Linux.

1. Faster & Smarter Failover Decisions

The new agent introduces a service-based health monitoring architecture, moving beyond basic polling. This allows SQL Server to report detailed diagnostic signals, improving detection speed and helping reduce failover delays in supported configurations.

2. Flexible Automatic Failover Policies Inspired by the WSFC Health Model

Pacemaker HA Agent v2 supports failure-condition levels (1-5) and a health-check timeout model aligned with those available in Always On availability groups on Windows. This provides:

- Fine-grained control over failover sensitivity, allowing administrators to tune when failover should occur.
- Improved detection of internal SQL Server conditions, such as memory pressure, internal deadlocks, orphaned spinlocks, and other engine-level failures.
Failover decisions are now driven by detailed diagnostics from sp_server_diagnostics, enabling faster and more accurate response to unhealthy states and providing enhanced resiliency capabilities for SQL Server availability groups on Linux. You can configure the failure condition level and health check timeout using the following commands:

```sql
-- Setting failure condition level
ALTER AVAILABILITY GROUP pacemakerag SET (FAILURE_CONDITION_LEVEL = 2);

-- Setting health check timeout
ALTER AVAILABILITY GROUP pacemakerag SET (HEALTH_CHECK_TIMEOUT = 60000);
```

After applying the configuration, validate the settings using the sys.availability_groups DMV.

3. Robust Write Lease Validity Handling

To prevent split-brain scenarios, SQL Server on Linux uses an external write-lease mechanism. In v1, lease information was not fully integrated into failover decisions. In v2, the agent actively evaluates write-lease validity before initiating transitions. This supports controlled role changes and improved data consistency behavior during failover events, depending on cluster configuration.

4. TLS 1.3 Support

Pacemaker HA Agent v2 includes design updates to support TLS 1.3-based communication for health checks and failover operations, when TLS 1.3 is enabled.

Supported Versions & Distributions

Pacemaker HA Agent v2 supports:

- SQL Server 2025 CU3 or later
- RHEL 9 or later
- Ubuntu 22.04 or later

Preview Upgrade & Migration Guidance for Non-Production Environments

New or existing non-production deployments running SQL Server 2025 (17.x) can migrate from Pacemaker HA Agent v1 to v2 using the following approach.

Drop the existing AG resource:

```
sudo pcs resource delete <NameForAGResource>
```

This temporarily pauses AG synchronization but does not delete the availability group (AG). After the resource is recreated, Pacemaker resumes management and AG synchronization automatically.
Create a new AG resource using the v2 agent (ocf:mssql:agv2):

```
sudo pcs resource create <NameForAGResource> ocf:mssql:agv2 ag_name=<AGName> meta failure-timeout=30s promotable notify=true
```

Validate cluster health:

```
sudo pcs status
```

Then resume normal operations.

References

- Create and Configure an Availability Group for SQL Server on Linux - SQL Server | Microsoft Learn

Thank you, Engineering: David Liao, Attinder Pal Singh

Creating a Contained Availability Group and Enabling Database Creation via CAG Listener
A Contained Availability Group (CAG) is designed to simplify high availability and disaster recovery by encapsulating system databases (master, msdb) within the availability group itself. This means that logins, jobs, credentials, and other metadata are automatically replicated across replicas, eliminating the need for manual synchronization and reducing operational complexity. Starting with SQL Server 2025 CU1, you can create or restore databases directly through the CAG listener - without connecting to the physical instance - by enabling a session context key.

Why Contained AGs Matter

- Self-contained HA unit: Each CAG has its own copy of master and msdb, making it independent of the physical SQL instance.
- Simplified failover: When the AG fails over, all associated metadata moves with it, ensuring applications continue to function without manual intervention.
- Improved automation: Supports scenarios where direct access to the SQL instance is restricted, enabling operations through the AG listener.
- Enhanced security: Reduces exposure by limiting instance-level access; operations can be scoped to the AG context.
- Streamlined management: Eliminates the need for login and job replication scripts across replicas.

Step 1: Prepare the Database for the Availability Group

This example uses SQL Server on Linux; however, the steps are identical for SQL Server on Windows. In this walkthrough, an existing database named CAGDB is added to a Contained Availability Group (CAG). Before adding the database to the CAG, verify that it is configured for FULL recovery mode and perform a full database backup.

```sql
ALTER DATABASE CAGDB SET RECOVERY FULL;
GO
BACKUP DATABASE CAGDB TO DISK = N'/var/opt/mssql/backups/CAGDB.bak'
WITH INIT, COMPRESSION;
GO
```

Note: On Linux, ensure that the backup target directory exists and is owned by the mssql user.
```
sudo mkdir -p /var/opt/mssql/backups
sudo chown mssql:mssql /var/opt/mssql/backups
```

Step 2: Create the Contained Availability Group

On your Linux SQL Server nodes, run:

```sql
CREATE AVAILABILITY GROUP [CAGDemo]
WITH (
    CLUSTER_TYPE = EXTERNAL,
    CONTAINED
)
FOR DATABASE [CAGDB]
REPLICA ON
    N'node1' WITH (
        ENDPOINT_URL = N'tcp://node1:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = EXTERNAL,
        SEEDING_MODE = AUTOMATIC
    ),
    N'node2' WITH (
        ENDPOINT_URL = N'tcp://node2:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = EXTERNAL,
        SEEDING_MODE = AUTOMATIC
    );
GO
```

Connect to the secondary replicas and join the AG:

```sql
ALTER AVAILABILITY GROUP [CAGDemo] JOIN WITH (CLUSTER_TYPE = EXTERNAL);
ALTER AVAILABILITY GROUP [CAGDemo] GRANT CREATE ANY DATABASE;
```

Step 3: Configure Listener and Connect

Create a listener and connect using SSMS or sqlcmd:

```sql
ALTER AVAILABILITY GROUP [CAGDemo]
ADD LISTENER N'CAGDemoListener' (
    WITH IP ( (N'*.*.*.*', N'255.255.255.0') ),
    PORT = 1453
);
GO
```

Step 4: Connect to CAGDemoListener and Attempt Database Creation (Failure)

When connected through the listener:

```sql
CREATE DATABASE TestCAGDB;
```

Result:

Msg 262, Level 14, State 1: CREATE DATABASE is not allowed in this context.

This happens because database creation is blocked by default in a contained AG session.

Step 5: Enable Database Creation in the CAG Session

While connected to master in the CAG context, run:

```sql
EXEC sp_set_session_context @key = N'allow_cag_create_db', @value = 1;
```

This enables database creation for your current session.

Step 6: Retry Database Creation (Success)

```sql
CREATE DATABASE TestCAGDB;
```

Result: Database created successfully within the contained AG context. Users with the dbcreator role in the CAG context can perform this action.

Step 7: Back up the database.
```sql
ALTER DATABASE TestCAGDB SET RECOVERY FULL;
BACKUP DATABASE TestCAGDB TO DISK = N'/var/opt/mssql/data/backups/TestCAGDB.bak';
```

Step 8: Add the database to the CAG

```sql
ALTER AVAILABILITY GROUP [CAGDemo] ADD DATABASE [TestCAGDB];
```

Optional: To automate Steps 4 - 8, you can create a stored procedure (for example: [dbo].[sp_cag_create_db]) and execute it while connected through the Contained Availability Group (CAG) listener context.

```sql
CREATE OR ALTER PROCEDURE [dbo].[sp_cag_create_db]
    @database_name sysname,
    @createdb_sql NVARCHAR(MAX) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @fIsContainedAGSession int;
    EXECUTE @fIsContainedAGSession = sys.sp_MSIsContainedAGSession;

    IF (@fIsContainedAGSession = 1)
    BEGIN
        DECLARE @SQL NVARCHAR(MAX);

        EXEC sp_set_session_context @key = N'allow_cag_create_db', @value = 1;

        IF @createdb_sql IS NULL
            SET @SQL = 'CREATE DATABASE ' + QUOTENAME(@database_name);
        ELSE
            SET @SQL = @createdb_sql;
        PRINT @SQL;
        EXEC sp_executesql @SQL;

        SET @SQL = 'ALTER DATABASE ' + QUOTENAME(@database_name) + ' SET RECOVERY FULL';
        PRINT @SQL;
        EXEC sp_executesql @SQL;

        SET @SQL = 'BACKUP DATABASE ' + QUOTENAME(@database_name) + ' TO DISK = N''NUL''';
        PRINT @SQL;
        EXEC sp_executesql @SQL;

        DECLARE @AG_Name sysname;
        SET @AG_Name = (SELECT name
                        FROM sys.availability_groups ags
                        INNER JOIN sys.dm_exec_sessions des
                            ON ags.group_id = des.contained_availability_group_id
                        WHERE @@SPID = des.session_id);

        SET @SQL = 'USE master; ALTER AVAILABILITY GROUP ' + QUOTENAME(@AG_Name)
                 + ' ADD DATABASE ' + QUOTENAME(@database_name);
        PRINT @SQL;
        EXEC sp_executesql @SQL;

        EXEC sp_set_session_context @key = N'allow_cag_create_db', @value = 0;
    END
    ELSE
    BEGIN
        RAISERROR('This can only be used with a contained availability group connection.', 16, 1);
    END
END
GO
```

At a high level, this procedure simplifies database onboarding into a CAG by orchestrating the following actions as a single workflow:

- Creating (or restoring) the database
- Setting the recovery model to FULL
- Taking the initial backup required for availability group seeding
- Adding the database to the target Contained Availability Group

Example: Create a database

Execute the following command to create NewTestDB and add it to the target CAG.

```sql
EXEC [dbo].[sp_cag_create_db] @database_name = N'NewTestDB';
```

Example: Restore a database

The stored procedure also supports an optional parameter, @createdb_sql, which allows you to provide a custom SQL statement to create or restore a database (for example, restoring from a backup). Once the database backup file exists and is accessible to SQL Server, you can use this parameter to perform the operation.

Important: When @createdb_sql is used, the procedure executes that SQL statement directly. Ensure the SQL statement comes from a secured and trusted source.

```sql
DECLARE @restoreSql NVARCHAR(MAX);
SET @restoreSql = N'
RESTORE DATABASE AdventureWorks2022
FROM DISK = ''/var/opt/mssql/backups/AdventureWorks2022.bak''
WITH
    MOVE ''AdventureWorks2022'' TO ''/var/opt/mssql/data/AdventureWorks2022.mdf'',
    MOVE ''AdventureWorks2022_log'' TO ''/var/opt/mssql/data/AdventureWorks2022_log.ldf'',
    RECOVERY;
';

EXEC dbo.sp_cag_create_db
    @database_name = N'AdventureWorks2022',
    @createdb_sql = @restoreSql;
```

Conclusion

Contained Availability Groups represent a major step forward in simplifying high availability for SQL Server. By encapsulating system databases within the AG context, they eliminate the complexity of synchronizing logins, jobs, and credentials across replicas. With the new capability in SQL Server 2025 CU1 to create or restore databases directly through the AG listener using sp_set_session_context, organizations can streamline automation and reduce operational overhead.

References

- What Is a Contained Availability Group
- Deploy a Pacemaker Cluster for SQL Server on Linux

Cumulative Update #4 for SQL Server 2025 RTM
The 4th cumulative update release for SQL Server 2025 RTM is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates. To learn more about the release or servicing model, please visit:

- CU4 KB Article: https://learn.microsoft.com/troubleshoot/sql/releases/sqlserver-2025/cumulativeupdate4
- Starting with SQL Server 2017, we adopted a new modern servicing model. Please refer to our blog for more details on the Modern Servicing Model for SQL Server.
- Microsoft® SQL Server® 2025 RTM Latest Cumulative Update: https://www.microsoft.com/download/details.aspx?id=108540
- Update Center for Microsoft SQL Server: https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates

Security Update for SQL Server 2025 RTM CU3
The Security Update for SQL Server 2025 RTM CU3 is now available for download at the Microsoft Download Center and Microsoft Update Catalog sites. This package cumulatively includes all previous security fixes for SQL Server 2025 RTM CUs, plus the new security fixes detailed in the KB Article.

- Security Bulletin: CVE-2026-32176 - Security Update Guide - Microsoft - Microsoft SQL Server Denial of Service Vulnerability
- Security Update of SQL Server 2025 RTM CU3 KB Article: KB5083245
- Microsoft Download Center: https://www.microsoft.com/download/details.aspx?familyid=7c9659b3-52aa-4693-9221-6e720ecafc01
- Microsoft Update Catalog: https://www.catalog.update.microsoft.com/Search.aspx?q=5083245
- Latest Updates for Microsoft SQL Server: https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates

Security Update for SQL Server 2025 RTM
The Security Update for SQL Server 2025 RTM GDR is now available for download at the Microsoft Download Center and Microsoft Update Catalog sites. This package cumulatively includes all previous security fixes for SQL Server 2025 RTM, plus the new security fixes detailed in the KB Article.

- Security Bulletin: CVE-2026-32176 - Security Update Guide - Microsoft - Microsoft SQL Server Denial of Service Vulnerability
- Security Update of SQL Server 2025 RTM GDR KB Article: KB5084814
- Microsoft Download Center: https://www.microsoft.com/download/details.aspx?familyid=023f8cbe-945a-463e-aaf9-a9154037fa62
- Microsoft Update Catalog: https://www.catalog.update.microsoft.com/Search.aspx?q=5084814
- Latest Updates for Microsoft SQL Server: https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates

mssql-python 1.5: Apache Arrow, sql_variant, and Native UUIDs
We're excited to announce the release of mssql-python 1.5.0, the latest version of Microsoft's official Python driver for SQL Server, Azure SQL Database, and SQL databases in Fabric. This release delivers Apache Arrow fetch support for high-performance data workflows, first-class sql_variant and native UUID support, and a collection of important bug fixes.

```
pip install --upgrade mssql-python
```

Apache Arrow fetch support

If you're working with pandas, Polars, DuckDB, or any Arrow-native data framework, this release changes how you get data out of SQL Server. The new Arrow fetch API returns query results as native Apache Arrow structures, using the Arrow C Data Interface for zero-copy handoff directly from the C++ layer to Python.

This is a significant performance improvement over the traditional fetchall() path, which converts every value through Python objects. With Arrow, columnar data stays in columnar format end-to-end, and your data framework can consume it without any intermediate copies.
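To make "columnar end-to-end" concrete, here is a driver-free sketch of the difference between the row-wise shape fetchall() returns and the columnar shape Arrow preserves. The function and data are purely illustrative, not part of mssql-python:

```python
def rows_to_columns(rows, names):
    """Illustrative only: convert a row-wise result (what fetchall() gives
    you) into a columnar layout (the shape Arrow keeps end-to-end)."""
    return {name: [row[i] for row in rows] for i, name in enumerate(names)}

rows = [(1, "Widget", 19.99), (2, "Gadget", 29.99)]
cols = rows_to_columns(rows, ["id", "name", "price"])

# Columnar layout groups all values of one column together, which is what
# lets Arrow hand whole buffers to pandas or Polars without per-value copies.
print(cols["price"])  # → [19.99, 29.99]
```

The fetchall() path effectively does this conversion value by value through Python objects; the Arrow path skips it entirely because the driver builds the columnar buffers in C++.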
Three methods for different workflows

cursor.arrow() fetches the entire result set as a PyArrow Table:

```python
import mssql_python

conn = mssql_python.connect(
    "SERVER=myserver.database.windows.net;"
    "DATABASE=AdventureWorks;"
    "UID=myuser;PWD=mypassword;"
    "Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM Sales.SalesOrderDetail")

# Get the full result as a PyArrow Table
table = cursor.arrow()

# Convert directly to pandas - zero-copy where possible
df = table.to_pandas()

# Or to Polars - also zero-copy
import polars as pl
df = pl.from_arrow(table)
```

cursor.arrow_batch() fetches a single RecordBatch of a specified size, useful when you want fine-grained control over memory:

```python
cursor.execute("SELECT * FROM Production.TransactionHistory")

# Process in controlled chunks
while True:
    batch = cursor.arrow_batch(batch_size=10000)
    if batch.num_rows == 0:
        break
    # Process each batch individually
    process(batch.to_pandas())
```

cursor.arrow_reader() returns a streaming RecordBatchReader, which integrates directly with frameworks that accept readers:

```python
cursor.execute("SELECT * FROM Production.TransactionHistory")
reader = cursor.arrow_reader(batch_size=8192)

# Write directly to Parquet with streaming - no need to load everything into memory
import pyarrow.parquet as pq
pq.write_table(reader.read_all(), "output.parquet")

# Or iterate batches manually
for batch in reader:
    process(batch)
```

How it works under the hood

The Arrow integration is built directly into the C++ pybind11 layer.
When you call any Arrow fetch method, the driver:

1. Allocates columnar Arrow buffers based on the result set schema
2. Fetches rows from SQL Server in batches using bound column buffers
3. Converts and packs values directly into the Arrow columnar format
4. Exports the result via the Arrow C Data Interface as PyCapsule objects
5. PyArrow imports the capsules with zero copy

Every SQL Server type maps to the appropriate Arrow type: INT to int32, BIGINT to int64, DECIMAL(p,s) to decimal128(p,s), DATE to date32, TIME to time64[ns], DATETIME2 to timestamp[us], UNIQUEIDENTIFIER to large_string, VARBINARY to large_binary, and so on. LOB columns (large VARCHAR(MAX), NVARCHAR(MAX), VARBINARY(MAX), XML, UDTs) are handled transparently by falling back to row-by-row GetData fetching while still assembling the result into Arrow format.

Community contribution

The Arrow fetch support was contributed by @ffelixg. This is a substantial contribution spanning the C++ pybind layer, the Python cursor API, and comprehensive tests. Thank you, Felix Graßl, for an outstanding contribution that brings high-performance data workflows to mssql-python.

sql_variant type support

SQL Server's sql_variant type stores values of various data types in a single column. It's commonly used in metadata tables, configuration stores, and EAV (Entity-Attribute-Value) patterns. Version 1.5 adds full support for reading sql_variant values with automatic type resolution.
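Automatic type resolution can be pictured as tag dispatch. The sketch below is a simplified illustration, not the driver's wire-format parser; the tag names and converter table are hypothetical, chosen only to show the idea of mapping an inner type tag to a Python type:

```python
from datetime import date
from decimal import Decimal

# Hypothetical tag -> converter table. The real driver reads the inner type
# tag from the sql_variant wire format in C++ and dispatches similarly.
CONVERTERS = {
    "int": int,
    "float": float,
    "decimal": Decimal,
    "nvarchar": str,
    "date": date.fromisoformat,
}

def resolve_variant(tag, raw):
    """Return the Python value for a (tag, raw) pair, mimicking how a
    sql_variant value resolves to the type its inner tag declares."""
    if raw is None:  # SQL NULL comes back as None regardless of tag
        return None
    return CONVERTERS[tag](raw)

print(resolve_variant("int", "5"))            # → 5
print(resolve_variant("date", "2026-01-15"))  # → 2026-01-15
```

The point is that the column's declared type no longer decides the Python type; each value carries its own tag, so a single result column can yield str, int, float, and date values side by side.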
The driver reads the inner type tag from the sql_variant wire format and returns the appropriate Python type:

```python
cursor.execute("""
    CREATE TABLE #config (
        [key] NVARCHAR(50) PRIMARY KEY,
        value SQL_VARIANT
    )
""")
cursor.execute("INSERT INTO #config VALUES ('max_retries', CAST(5 AS INT))")
cursor.execute("INSERT INTO #config VALUES ('timeout', CAST(30.5 AS FLOAT))")
cursor.execute("INSERT INTO #config VALUES ('app_name', CAST('MyApp' AS NVARCHAR(50)))")
cursor.execute("INSERT INTO #config VALUES ('start_date', CAST('2026-01-15' AS DATE))")

cursor.execute("SELECT value FROM #config ORDER BY [key]")
rows = cursor.fetchall()

# Each value comes back as the correct Python type
assert rows[0][0] == "MyApp"             # str
assert rows[1][0] == 5                   # int
assert rows[2][0] == date(2026, 1, 15)   # datetime.date
assert rows[3][0] == 30.5                # float
```

All 23+ base types are supported, including int, float, Decimal, bool, str, date, time, datetime, bytes, uuid.UUID, and None.

Native UUID support

Previously, UNIQUEIDENTIFIER columns were returned as strings, requiring manual conversion to uuid.UUID. Version 1.5 changes the default: UUID columns now return native uuid.UUID objects.

```python
import uuid

cursor.execute("SELECT NEWID() AS id")
row = cursor.fetchone()

# Native uuid.UUID object - no manual conversion needed
assert isinstance(row[0], uuid.UUID)
print(row[0])  # e.g., UUID('550e8400-e29b-41d4-a716-446655440000')
```

UUID values also bind natively as input parameters:

```python
my_id = uuid.uuid4()
cursor.execute("INSERT INTO Users (id, name) VALUES (?, ?)", my_id, "Alice")
```

Migration compatibility

If you're migrating from pyodbc and your code expects string UUIDs, you can opt out at the module or connection level:

```python
# Module level - affects all connections
mssql_python.native_uuid = False

# Connection level - affects all cursors on this connection
conn = mssql_python.connect(conn_str, native_uuid=False)
```

When native_uuid=False, UUID columns return strings as before.
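For code that has to run against both defaults during a migration, a small normalizer avoids branching at every call site. This helper is illustrative, not part of mssql-python:

```python
import uuid

def as_uuid(value):
    """Accept either driver representation of a UNIQUEIDENTIFIER value:
    a native uuid.UUID (the 1.5 default) or a string (native_uuid=False),
    and always return a uuid.UUID (or None for SQL NULL)."""
    if value is None:
        return None
    if isinstance(value, uuid.UUID):
        return value
    return uuid.UUID(value)

canonical = "550e8400-e29b-41d4-a716-446655440000"
# Both representations normalize to the same uuid.UUID
assert as_uuid(canonical) == as_uuid(uuid.UUID(canonical))
assert as_uuid(None) is None
```

Once every consumer goes through a shim like this, flipping native_uuid back to the default becomes a no-op for your application code.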
Row class export

The Row class is now publicly exported from the top-level mssql_python module. This makes it easy to use in type annotations and isinstance checks:

```python
from mssql_python import Row

cursor.execute("SELECT 1 AS id, 'Alice' AS name")
row = cursor.fetchone()

assert isinstance(row, Row)
print(row[0])    # 1 (index access)
print(row.name)  # "Alice" (attribute access)
```

Bug fixes

Qmark false positive fix

The parameter style detection logic previously misidentified ? characters inside SQL comments, string literals, bracketed identifiers, and double-quoted identifiers as qmark parameter placeholders. A new context-aware scanner correctly skips over these SQL quoting contexts:

```python
# These no longer trigger false qmark detection:
cursor.execute("SELECT [is this ok?] FROM t")
cursor.execute("SELECT 'what?' AS col")
cursor.execute("SELECT /* why? */ 1")
```

NULL VARBINARY parameter fix

Fixed NULL parameter type mapping for VARBINARY columns, which previously could fail when passing None as a binary parameter.

Bulk copy auth fix

Fixed stale authentication fields being retained in the bulk copy context after token acquisition. This could cause Entra ID-authenticated bulk copy operations to fail on subsequent calls.

Explicit module exports

Added explicit __all__ exports from the main library module to prevent import resolution issues in tools like mypy and IDE autocompletion.

Credential cache fix

Fixed the credential instance cache to correctly reuse and invalidate cached credential objects, preventing unnecessary re-authentication.

datetime.time microseconds fix

Fixed datetime.time values incorrectly having their microseconds component set to zero when fetched from TIME columns.
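The scanner itself lives in the driver's native layer, but the idea behind the qmark fix can be sketched in a few lines of Python. This is a simplified illustration, not the actual implementation:

```python
def find_qmark_placeholders(sql):
    """Simplified sketch of context-aware placeholder scanning: report only
    the ? characters that sit outside string literals, bracketed identifiers,
    double-quoted identifiers, and comments."""
    positions = []
    i, n = 0, len(sql)
    while i < n:
        ch = sql[i]
        if ch == "'":                      # skip 'string literals' ('' escapes)
            i += 1
            while i < n:
                if sql[i] == "'":
                    if i + 1 < n and sql[i + 1] == "'":
                        i += 2
                        continue
                    break
                i += 1
        elif ch == '"':                    # skip "quoted identifiers"
            i += 1
            while i < n and sql[i] != '"':
                i += 1
        elif ch == '[':                    # skip [bracketed identifiers]
            i += 1
            while i < n and sql[i] != ']':
                i += 1
        elif sql.startswith('--', i):      # skip -- line comments
            while i < n and sql[i] != '\n':
                i += 1
        elif sql.startswith('/*', i):      # skip /* block comments */
            end = sql.find('*/', i + 2)
            i = n - 1 if end == -1 else end + 1
        elif ch == '?':
            positions.append(i)
        i += 1
    return positions

assert find_qmark_placeholders("SELECT [is this ok?] FROM t") == []
assert find_qmark_placeholders("SELECT 'what?' AS col") == []
assert find_qmark_placeholders("SELECT /* why? */ 1") == []
assert len(find_qmark_placeholders("SELECT * FROM t WHERE id = ?")) == 1
```

A naive scan that just counts ? characters would flag all four statements; tracking the quoting context is what eliminates the false positives.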
The road to 1.5

| Release | Date | Highlights |
| --- | --- | --- |
| 1.0.0 | November 2025 | GA release - DDBC architecture, Entra ID auth, connection pooling, DB API 2.0 compliance |
| 1.1.0 | December 2025 | Parameter dictionaries, Connection.closed property, Copilot prompts |
| 1.2.0 | January 2026 | Param-as-dict, non-ASCII path handling, fetchmany fixes |
| 1.3.0 | January 2026 | Initial BCP implementation (internal), SQLFreeHandle segfault fix |
| 1.4.0 | February 2026 | BCP public API, spatial types, Rust core upgrade, encoding & stability fixes |
| 1.5.0 | April 2026 | Apache Arrow fetch, sql_variant, native UUIDs, qmark & auth fixes |

Get started today

```
pip install --upgrade mssql-python
```

- Documentation: github.com/microsoft/mssql-python/wiki
- Release notes: github.com/microsoft/mssql-python/releases
- Roadmap: github.com/microsoft/mssql-python/blob/main/ROADMAP.md
- Report issues: github.com/microsoft/mssql-python/issues
- Contact: mssql-python@microsoft.com

We'd love your feedback. Try the new Arrow fetch API with your data workflows, let us know how it performs, and file issues for anything you run into. This driver is built for the Python data community, and your input directly shapes what comes next.

Microsoft.Data.SqlClient 7.0 Is Here: A Leaner, More Modular Driver for SQL Server
Today we're shipping the general availability release of Microsoft.Data.SqlClient 7.0, a major milestone for the .NET data provider for SQL Server. This release tackles the single most requested change in the repository's history, introduces powerful new extensibility points for authentication, and adds protocol-level features for Azure SQL Hyperscale, all while laying the groundwork for a more modular driver architecture.

If you take away one thing from this post: the core SqlClient package is dramatically lighter now. Azure dependencies have been extracted into a separate package, and you only pull them in if you need them.

```
dotnet add package Microsoft.Data.SqlClient --version 7.0.0
```

The #1 Request: A Lighter Package

For years, the most upvoted issue in the SqlClient repository asked the same question: "Why does my console app that just talks to SQL Server pull in Azure.Identity, MSAL, and WebView2?"

With 7.0, it doesn't anymore. We've extracted all Azure / Microsoft Entra authentication functionality into a new Microsoft.Data.SqlClient.Extensions.Azure package. The core driver no longer carries Azure.Core, Azure.Identity, Microsoft.Identity.Client, or any of their transitive dependencies. If you connect with SQL authentication or Windows integrated auth, your bin folder just got dramatically smaller.

For teams that do use Entra authentication, the migration is straightforward. Add one package reference and you're done:

```
dotnet add package Microsoft.Data.SqlClient.Extensions.Azure
```

No code changes. No configuration changes. You can also now update Azure dependency versions on your own schedule, independent of driver releases. This is something library authors and enterprise teams have been asking for.

Pluggable Authentication with SspiContextProvider

Integrated authentication in containers and cross-domain environments has always been a pain point. Kerberos ticket management, sidecar processes, domain trust configuration: the workarounds were never simple.
Version 7.0 introduces a new public SspiContextProvider API on SqlConnection that lets you take control of the authentication handshake. You provide the token exchange logic; the driver handles everything else.

```csharp
var connection = new SqlConnection(connectionString);
connection.SspiContextProvider = new MyKerberosProvider();
connection.Open();
```

This opens the door to scenarios the driver never natively supported: authenticating across untrusted domains, using NTLM with explicit credentials, or implementing custom Kerberos negotiation in Kubernetes pods. A sample implementation is available in the repository.

Async Read Performance: Packet Multiplexing (Preview)

One of the most community-driven features in 7.0 is packet multiplexing, a change to how the driver processes TDS packets during asynchronous reads. Originally contributed by community member Wraith2, this work delivers a significant leap in async read performance for large result sets.

Packet multiplexing was first introduced in 6.1 and has been refined across the 7.0 preview cycle with additional bug fixes and stability improvements. In 7.0, it ships behind two opt-in feature switches so we can gather broader real-world feedback before making it the default:

```csharp
AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityAsyncBehaviour", false);
AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityProcessSni", false);
```

Setting both switches to false enables the new async processing path. By default, the driver uses the existing (compatible) behavior.

We need your help. If your application performs large async reads (ExecuteReaderAsync with big result sets, streaming scenarios, or bulk data retrieval), please try enabling these switches and let us know how it performs in your environment. File your results on GitHub Issues to help us move this toward on-by-default in a future release.
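As a concrete sketch of the opt-in, the switches can be set once at startup, before any SqlClient type is touched (otherwise the driver has already captured the default behavior). The connection string, table, and query below are placeholders, not part of the driver:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

class MultiplexingDemo
{
    static async Task Main(string[] args)
    {
        // Opt in to the new multiplexed async path. Both switches must be set
        // before the first use of any SqlClient type.
        AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityAsyncBehaviour", false);
        AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityProcessSni", false);

        // Only touch the database when a connection string is supplied.
        if (args.Length > 0)
            await StreamRowsAsync(args[0]);
    }

    // A large async read -- the scenario the new path is meant to speed up.
    // Table and column names here are hypothetical.
    static async Task StreamRowsAsync(string connectionString)
    {
        await using var connection = new SqlConnection(connectionString);
        await connection.OpenAsync();
        await using var command = new SqlCommand("SELECT Id, Payload FROM dbo.LargeTable", connection);
        await using var reader = await command.ExecuteReaderAsync();
        while (await reader.ReadAsync())
        {
            // Process each row as it streams in.
        }
    }
}
```

Because the switches are process-wide, a single call site at startup covers every connection in the application.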
Enhanced Routing for Azure SQL

Azure SQL environments with named read replicas and gateway-based load balancing can now take advantage of enhanced routing, a TDS protocol feature that lets the server redirect connections to a specific server and database during login.

This is entirely transparent to your application. No connection string changes, no code changes. The driver negotiates the capability automatically when the server supports it.

.NET 10 Ready

SqlClient 7.0 compiles and tests against the .NET 10 SDK, so you're ready for the next major .NET release on day one. Combined with continued support for .NET 8, .NET 9, .NET Framework 4.6.2+, and .NET Standard 2.0 (restored in 6.1), the driver covers the full spectrum of active .NET runtimes.

ActiveDirectoryPassword Is Deprecated: Plan Your Migration

As Microsoft moves toward mandatory multifactor authentication across its services, we've deprecated SqlAuthenticationMethod.ActiveDirectoryPassword (the ROPC flow). The method still works in 7.0, but it's marked [Obsolete] and will generate compiler warnings. Now is the time to move to a stronger alternative:

| Scenario | Recommended Authentication |
|---|---|
| Interactive / desktop apps | Active Directory Interactive |
| Service-to-service | Active Directory Service Principal |
| Azure-hosted workloads | Active Directory Managed Identity |
| Developer / CI environments | Active Directory Default |

Quality of Life Improvements

Beyond the headline features, 7.0 includes a collection of improvements that make the driver more reliable and easier to work with in production.

Better retry logic. The new SqlConfigurableRetryFactory.BaselineTransientErrors property exposes the built-in transient error codes, so you can extend the default list with your own application-specific codes instead of copy-pasting error numbers from source.

More app context switches.
You can now set MultiSubnetFailover=true globally, ignore server-provided failover partners in Basic Availability Groups, and control async multi-packet behavior, all without modifying connection strings.

Better diagnostics on .NET Framework. SqlClientDiagnosticListener is now enabled for SqlCommand on .NET Framework, closing a long-standing observability gap.

Connection performance fix. A regression where SPN generation was unnecessarily triggered for SQL authentication connections on the native SNI path has been resolved.

Performance improvements. Allocation reductions across Always Encrypted scenarios, SqlStatistics timing, and key store providers.

Upgrading from 6.x

For most applications, upgrading is a package version bump:

```shell
dotnet add package Microsoft.Data.SqlClient --version 7.0.0
```

If you use Microsoft Entra authentication, also add:

```shell
dotnet add package Microsoft.Data.SqlClient.Extensions.Azure
```

If you use ActiveDirectoryPassword, you'll see a compiler warning. Start planning your migration to a supported auth method.

Review the full release notes in release-notes/7.0 for the complete list of changes across all preview releases.

Thank You to Our Contributors

Open-source contributions are central to SqlClient's development. We'd like to recognize the community members who contributed to the 7.0 release: edwardneal · ErikEJ · MatthiasHuygelen · ShreyaLaxminarayan · tetolv · twsouthwick · Wraith2

What's Next

We're continuing to invest in performance, modularity, and modern .NET alignment. Stay tuned for updates on the roadmap, and keep the feedback coming. Your issues and discussions directly shape what we build.

- NuGet: Microsoft.Data.SqlClient 7.0.0
- GitHub: dotnet/SqlClient
- Issues & Feedback: github.com/dotnet/SqlClient/issues
- Docs: Microsoft.Data.SqlClient on Microsoft Learn

Announcing Preview of bulkadmin role support for SQL Server on Linux
Bulk data import using operations like BULK INSERT and OPENROWSET(BULK...) (see BULK INSERT (Transact-SQL) - SQL Server | Microsoft Learn) is fundamental to ETL and data ingestion workflows. On SQL Server running on Linux, these operations have traditionally required sysadmin privileges, making it difficult to follow least-privilege security practices.

With the preview of bulkadmin role support for SQL Server on Linux, this gap is addressed. Starting with SQL Server 2025 (17.x) CU3 and SQL Server 2022 CU24, administrators can grant the bulkadmin role or the ADMINISTER BULK OPERATIONS permission to enable bulk imports without full administrative access. This capability has long been available on SQL Server on Windows and is now extended to Linux, bringing consistent and more secure bulk data operations across platforms.

Bulk import operations

Bulk import operations enable fast, large-volume data loading into SQL Server tables by reading data directly from external files instead of row-by-row inserts. Learn more: Use BULK INSERT or OPENROWSET (BULK...) to Import Data to SQL Server - SQL Server | Microsoft Learn

Who this is for

DBAs, data engineers, ETL developers, and application engineers who want to perform bulk data imports without over-privileging users.

Why this matters

- Improved security posture: Eliminates the need for sysadmin access for bulk operations, enforcing the least-privilege security principle and reducing risk.
- Better operational flexibility: Allows DBAs to safely delegate bulk data ingestion to application, ETL, and operational teams without expanding the attack surface.
- Parity with SQL Server on Windows: Closes a long-standing gap between SQL Server on Windows and Linux, simplifying cross-platform administration.

Designed with layered security controls on Linux

Bulk operations on Linux continue to enforce additional security checks beyond SQL permissions.
Administrators must explicitly configure:

- Linux file system permissions for the SQL Server service account
- Approved bulk load directories using mssql-conf (via the bulkadmin.allowedpathslist setting)

This ensures SQL Server can only read data from explicitly allowed locations, reducing the risk of unauthorized file access.

Learn more

For a quick overview, the typical flow to enable bulk import operations on SQL Server on Linux looks like this:

1. Install SQL Server 2025 (17.x) CU3
2. Grant the bulkadmin role or the ADMINISTER BULK OPERATIONS permission
3. Configure allowed directories and the required filesystem permissions
4. Run bulk import operations using BULK INSERT or OPENROWSET (BULK...)

For detailed guidance and examples, refer to the official documentation:
👉 Configure bulk import operations for SQL Server on Linux

Summary

With bulkadmin role support on SQL Server for Linux, customers can now enable bulk data imports without compromising security. This enhancement delivers better role separation, security best practices, and a smoother operational experience for SQL Server on Linux. We encourage customers to explore this capability and adopt least-privileged bulk data workflows in their Linux environments.
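As a client-side sketch of the final step, a load might look like the following. The table name, CSV path, and connection string are hypothetical; the file must live under a directory approved via bulkadmin.allowedpathslist, and the login only needs bulkadmin or ADMINISTER BULK OPERATIONS, not sysadmin:

```csharp
using System;
using Microsoft.Data.SqlClient;

public class BulkImportDemo
{
    // Builds a BULK INSERT statement for a file under an approved directory.
    // Table name and file path here are illustrative placeholders.
    public static string BuildBulkInsertSql(string table, string dataFile) =>
        $"BULK INSERT {table} FROM '{dataFile}' " +
        "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n', FIRSTROW = 2);";

    public static void Main(string[] args)
    {
        string sql = BuildBulkInsertSql("dbo.Orders", "/var/opt/mssql/bulkdata/orders.csv");
        Console.WriteLine(sql);

        // Execute only when a connection string is supplied (requires a live
        // SQL Server on Linux with the role and allowed paths configured).
        if (args.Length > 0)
        {
            using var connection = new SqlConnection(args[0]);
            connection.Open();
            using var command = new SqlCommand(sql, connection);
            command.ExecuteNonQuery();
        }
    }
}
```

If the target file sits outside the allowed-paths list, the server rejects the statement even when the login holds bulkadmin, which is exactly the layered control described above.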