# mssql-python 1.6: Unblocking Your Threads
The last two mssql-python releases shipped big features: Bulk Copy in 1.4 for high-throughput data loading, and Apache Arrow in 1.5 for zero-copy analytics. Version 1.6 is about what happens next: you take those features into production, scale up your thread pool, and find out where the driver was quietly holding you back. This release unblocks your threads during connection setup, fixes crashes and incorrect results in common cursor patterns, and hardens security around passwords with special characters and log file paths.

```
pip install --upgrade mssql-python
```

## Your threads can run while connections are opening

If you're running mssql-python behind Flask, FastAPI, Django, or any WSGI/ASGI server with thread-based workers, this one matters.

Opening a database connection is slow. There's DNS resolution, a TCP handshake, TLS negotiation, and SQL Server authentication. In previous versions, every other Python thread in your process was frozen while that happened, because the driver held the Global Interpreter Lock (GIL) for the entire operation. One thread opening a connection meant no other thread could serve requests, process data, or do anything at all.

Version 1.6 releases the GIL during connect and disconnect. Your other threads keep running while the network round-trip completes. If you have a multi-threaded web server handling concurrent requests, this removes a serialization bottleneck you may not have realized you had.

The connection pool was also reworked to stay safe under this change. Previously, the pool held an internal lock while calling connect, which would have created a deadlock now that connect releases the GIL. The pool now reserves a slot first, connects outside the lock, and rolls back the reservation if the connection fails.

## Decimal parameters work with setinputsizes

If you use cursor.setinputsizes() to declare parameter types for performance-sensitive batch inserts, you may have hit a crash when specifying SQL_DECIMAL or SQL_NUMERIC. This is fixed.
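As a quick refresher on what the precision and scale arguments in a SQL_DECIMAL declaration mean, Python's standard decimal module can derive both for any value. This standalone snippet needs no driver or server:

```python
from decimal import Decimal

price = Decimal("19.99")
sign, digits, exponent = price.as_tuple()

precision = len(digits)  # total significant digits -> SQL precision
scale = -exponent        # digits after the decimal point -> SQL scale

# Decimal("19.99") has 4 significant digits, 2 of them after the point.
print(precision, scale)  # 4 2
```

A DECIMAL(18, 2) declaration therefore accommodates any value with up to 18 total digits, 2 of them after the decimal point.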
Decimal values now bind correctly whether you're using execute() or executemany():

```python
cursor.setinputsizes([
    (mssql_python.SQL_WVARCHAR, 100, 0),
    (mssql_python.SQL_INTEGER, 0, 0),
    (mssql_python.SQL_DECIMAL, 18, 2),
])
cursor.executemany(
    "INSERT INTO Products (Name, CategoryID, Price) VALUES (?, ?, ?)",
    [
        ("Widget", 1, Decimal("19.99")),
        ("Gadget", 2, Decimal("29.99")),
    ],
)
```

## Iterating catalog results with fetchone()

If you've used cursor.tables(), cursor.columns(), or other catalog methods and tried to walk the results with fetchone(), you may have gotten incorrect data. Row tracking was broken for catalog result sets. This now works the way you'd expect:

```python
cursor.tables(tableType="TABLE")
while True:
    row = cursor.fetchone()
    if row is None:
        break
    print(row.table_name)
```

This also applies to primaryKeys(), foreignKeys(), statistics(), procedures(), and getTypeInfo().

## Reusing prepared statements without reset

If you call cursor.execute() with reset_cursor=False to reuse a prepared statement across calls, it no longer raises an "Invalid cursor state" error.

## Passwords with special characters stay masked in logs

If your SQL Server password contains semicolons, braces, or other ODBC-special characters (e.g., PWD={Top;Secret}), previous versions could accidentally leak part of it in sanitized log output. The password-masking logic has been rewritten to correctly handle all ODBC connection string formats. If the connection string can't be parsed at all, the entire string is now redacted rather than partially exposed.

The logging system also now rejects log file paths that attempt directory traversal, preventing setup_logging(log_file_path="../../somewhere/else.log") from writing outside the intended directory.

## Better type checker support for executemany

If your type checker flagged executemany() when you passed dictionaries as parameter rows, that warning is gone. The type annotations now correctly accept Mapping types, matching the DB API 2.0 spec for named parameters.
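The impact of the headline change, releasing the GIL during connect, is easy to demonstrate with a standalone simulation. Here time.sleep stands in for the slow network round-trips of a real connect call (like the 1.6 driver, sleep releases the GIL while it waits), so no SQL Server is needed:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def open_connection(worker: str) -> str:
    # Stand-in for a real mssql_python.connect(...) call: time.sleep
    # releases the GIL while it waits, just as the 1.6 driver now does
    # during DNS, TCP, TLS, and authentication.
    time.sleep(0.2)
    return f"{worker}: connected"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(open_connection, ["w1", "w2", "w3", "w4"]))
elapsed = time.monotonic() - start

# The four 0.2 s waits overlap, so wall time is roughly 0.2 s rather
# than the 0.8 s you would see if one thread blocked all the others.
print(f"{len(results)} connections opened in {elapsed:.2f}s")
```

With a pre-1.6 driver holding the GIL for the whole operation, the same four opens would effectively serialize; with 1.6 they overlap, which is exactly the bottleneck removal described above.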
## Get started

```
pip install --upgrade mssql-python
```

For questions or issues, file them on GitHub or email mssql-python@microsoft.com.

# mssql-django 1.7.1: Microsoft Fabric Support and Migration Fixes
We just shipped mssql-django 1.7.1 with two fixes that matter if you're running Django on Microsoft Fabric or using descending indexes in your migrations.

## JSONField Now Works on Microsoft Fabric

SQL Database in Microsoft Fabric reports itself as EngineEdition 12, which our backend didn't previously recognize. The result: JSONField queries, hash functions, collation introspection, and test teardown all broke on Fabric because the backend couldn't correctly identify the server's capabilities.

In 1.7.1, we added full detection for Fabric's engine edition. The backend now correctly treats Fabric as an Azure SQL-class database, which means JSONField, MD5, SHA1, SHA224, SHA256, SHA384, SHA512, and collation-dependent lookups all work as expected. We also combined the ProductVersion and EngineEdition queries into a single round trip, so connection setup is faster too.

If you've been waiting to use Django with SQL Database in Microsoft Fabric, this is the release that makes it work.

## Descending Index Migrations No Longer Crash

If you had a model with a descending index and ran an AlterField migration on one of the indexed columns, Django would crash with FieldDoesNotExist. The issue was in how our schema editor looked up fields during index reconstruction: it was reading index.fields (which only contains field names for simple indexes) instead of index.fields_orders (which correctly handles the (field_name, order) tuples that descending indexes use).

This was a one-line fix, but it blocked anyone whose migrations touched fields covered by descending indexes. If you've been working around this, upgrade and your migrations will run cleanly.

## SQL Server 2025 in CI

We upgraded our Windows CI pipeline to run against SQL Server 2025, so every commit is now tested against the latest version.
Combined with our existing coverage across SQL Server 2016-2022, Azure SQL Database, Azure SQL Managed Instance, and now Microsoft Fabric, you can be confident the backend works across the full Microsoft data platform.

## Upgrade

```
pip install --upgrade mssql-django
```

Full compatibility:

| Component | Supported |
| --- | --- |
| Django | 3.2, 4.0, 4.1, 4.2, 5.0, 5.1, 5.2, 6.0 |
| Python | 3.8 - 3.14 (Django 6.0 requires 3.12+) |
| SQL Server | 2016, 2017, 2019, 2022, 2025 |
| Azure SQL | Database, Managed Instance, SQL Database in Fabric |
| ODBC Driver | Microsoft ODBC Driver 17 or 18 |

Questions, bugs, or contributions? Find us on GitHub. mssql-django is open source under the BSD license. Built and maintained by Microsoft.

# Introducing Pacemaker HA Agent v2 for SQL Server on Linux (In Preview)
We are excited to introduce the next generation of high availability (HA) agent for SQL Server on Linux: Pacemaker HA Agent v2. This release is a major step forward, designed to reduce planned and unplanned failover times compared to the previous agent, based on internal engineering improvements.

## Why Pacemaker Is Required for SQL Server HA on Linux

For users new to Linux, it's important to understand how high availability works on this platform. On Windows Server, Always On availability groups use an underlying Windows Server Failover Cluster (WSFC) to:

- Monitor node health
- Detect failures
- Orchestrate automatic failovers

On Linux, Always On availability groups rely on an external cluster orchestrator for health monitoring and failover coordination. Pacemaker is one such orchestrator, responsible for:

- Monitoring node and application health
- Coordinating failover decisions
- Helping mitigate split-brain scenarios through improved write-lease evaluation
- Managing resources such as availability groups and listeners

The Pacemaker HA agent is the integration layer that allows Pacemaker to understand SQL Server health and manage availability groups safely.

## Evolution of the SQL Server Pacemaker HA Agent

With SQL Server 2025 CU3 and later, Pacemaker HA Agent v2 is available in preview for Red Hat Enterprise Linux and Ubuntu through the mssql-server-ha package. The v2 agent uses a service-based architecture: it runs as a dedicated system service named mssql-pcsag, which handles SQL Server–specific high availability operations and communication with Pacemaker. You can start, restart, check the status of, and stop the mssql-pcsag service using the operating system's service manager (for example, systemctl):
```shell
# Start the mssql-pcsag service
sudo systemctl start mssql-pcsag

# Restart the mssql-pcsag service
sudo systemctl restart mssql-pcsag

# Check the status of the mssql-pcsag service
sudo systemctl status mssql-pcsag

# Stop the mssql-pcsag service
sudo systemctl stop mssql-pcsag
```

## Limitations of Pacemaker HA Agent v1

While the original agent enabled SQL Server HA on Linux, customers running production workloads encountered several challenges:

- Failover delays of 30 seconds to 2 minutes during planned or unplanned events
- Limited health detection, missing conditions such as I/O stalls and memory pressure
- Rigid failover behavior, unlike the flexible policies available on Windows (WSFC)
- Incomplete write-lease handling, requiring custom logic
- No support for TLS 1.3 for Pacemaker and SQL Server communications

## How Pacemaker HA Agent v2 Addresses These Gaps

Pacemaker HA Agent v2 is a ground-up improvement, designed to improve the reliability characteristics of SQL Server HA on Linux.

### 1. Faster & Smarter Failover Decisions

The new agent introduces a service-based health monitoring architecture, moving beyond basic polling. This allows SQL Server to report detailed diagnostic signals, improving detection speed and helping reduce failover delays in supported configurations.

### 2. Flexible Automatic Failover Policies Inspired by the WSFC Health Model

Pacemaker HA Agent v2 supports failure-condition levels (1–5) and a health-check timeout model aligned with those available in Always On availability groups on Windows. This provides:

- Fine-grained control over failover sensitivity, allowing administrators to tune when failover should occur.
- Improved detection of internal SQL Server conditions, such as memory pressure, internal deadlocks, orphaned spinlocks, and other engine-level failures.
Failover decisions are now driven by detailed diagnostics from sp_server_diagnostics, enabling faster and more accurate responses to unhealthy states and providing enhanced resiliency for SQL Server availability groups on Linux.

You can configure the failure condition level and health check timeout using the following commands:

```sql
-- Set the failure condition level
ALTER AVAILABILITY GROUP pacemakerag SET (FAILURE_CONDITION_LEVEL = 2);

-- Set the health check timeout
ALTER AVAILABILITY GROUP pacemakerag SET (HEALTH_CHECK_TIMEOUT = 60000);
```

After applying the configuration, validate the settings by querying the sys.availability_groups DMV.

### 3. Robust Write-Lease Validity Handling

To prevent split-brain scenarios, SQL Server on Linux uses an external write-lease mechanism. In v1, lease information was not fully integrated into failover decisions. In v2, the agent actively evaluates write-lease validity before initiating transitions. This supports controlled role changes and improved data-consistency behavior during failover events, depending on cluster configuration.

### 4. TLS 1.3 Support

Pacemaker HA Agent v2 includes design updates to support TLS 1.3–based communication for health checks and failover operations, when TLS 1.3 is enabled.

## Supported Versions & Distributions

Pacemaker HA Agent v2 supports:

- SQL Server 2025 CU3 or later
- RHEL 9 or later
- Ubuntu 22.04 or later

## Preview Upgrade & Migration Guidance for Non-Production Environments

New or existing non-production deployments running SQL Server 2025 (17.x) can migrate from Pacemaker HA Agent v1 to v2 using the following approach.

Drop the existing AG resource:

```shell
sudo pcs resource delete <NameForAGResource>
```

This temporarily pauses AG synchronization but does not delete the availability group (AG). After the resource is recreated, Pacemaker resumes management and AG synchronization automatically.
Create a new AG resource using the v2 agent (ocf:mssql:agv2):

```shell
sudo pcs resource create <NameForAGResource> ocf:mssql:agv2 ag_name=<AGName> meta failure-timeout=30s promotable notify=true
```

Validate cluster health:

```shell
sudo pcs status
```

Then resume normal operations.

## References

- Create and Configure an Availability Group for SQL Server on Linux - SQL Server | Microsoft Learn

Thank you! Engineering: David Liao, Attinder Pal Singh

# Writing a great session abstract for FabCon & SQLCon
## Important Dates

- FabCon/SQLCon Europe in Barcelona runs September 28 - October 1, 2026
- Workshop Call for Content: open February 17, 2026 to March 23, 2026
- Breakout Session Call for Content: open February 17, 2026 to April 17, 2026

When submitting a session to a conference, consider the following:

## Title

The title should answer the attendee's question: what's in it for me? Why should I attend this session? Is it going to make me better at my job? Will it save my company money? Will it make my reports more organized, or my database faster, more secure, or more modern?

The title needs to make sense; it needs to inform. It doesn't need to be funny or contain a dad joke. It CAN, but that's secondary. It shouldn't just be the name of a product, or even "Learn [product name]", because you can't teach me everything about it in 60 minutes or less.

## Abstract

The abstract should contain 3 things:

- It should define the problem you're trying to solve.
- It should introduce the solution.
- It should briefly describe what attendees will learn about the solution, in the time allotted.

The problem can be that the attendee doesn't know how to create great visuals, performance tune, connect to a Lakehouse, etc. If so, consider titles like "Creating Great Visuals Using...", "Cutting Costs by Optimizing Your...", or "Understanding Governance in...".

The abstract can introduce a new feature or concept. If so, then the problem is "there's a new thing that you don't know about yet." The solution is "this feature does X, Y & Z." Then tell them that they'll learn everything about X, or a little bit about X, Y & Z.

If you include an acronym, make sure you spell it out first. Not everyone is going to know what it means, and you don't want a room full of only the attendees who already know everything. Avoid using many buzzwords in your title or abstract.

Drop it into Copilot for a final check and use prompts to help improve your work. Use "make it more precise" if it's over the character count.
Try "make it more professional" if you think it's more casual than you intended.

## Final check

Before you hit submit, run your abstract by a friend:

- Does it make sense to the technical and/or non-technical? Do the grammar & spelling check out? With allowances for English not being your first language, the abstract should show that you can effectively communicate the topic to an audience.
- Does the level assigned match the abstract?
- Is it in the right track? Most calls for content provide definitions of what's a good fit in each topic. You can find the topic definitions for FabCon/SQLCon Europe here.
- Can you do ALL THIS in 60 minutes?
- Does it fit within the character count?

## Lastly, a few "don'ts"

- Don't include your name in the abstract if you know the review is done blind.
- Don't add it to every category or track. Be mindful.
- Don't email organizers to say you've submitted a session. They know.
- Don't demand feedback the very day you're declined. Well, never demand it, but waiting a week to politely ask never hurts. Respect the decision if they aren't able to offer you feedback. Remember that 800 people might be asking the same question.
- Don't be entitled. No one *stole* your spot. It was never yours to begin with.
- Don't use AI to write your entire abstract. Reviewers typically know when it's not written by a human, or there are tools that help them check. If you can't convey a concept on your own in 400 characters, how can we trust that you can speak on the concept for a full hour?

# SSMA Copilot for SAP ASE (Sybase)
## Introduction

SAP ASE (formerly Sybase) continues to power mission-critical workloads across financial services, telecom, and large-scale enterprise applications. However, as modernization accelerates, more customers are actively seeking reliable and automated paths to migrate Sybase workloads to Microsoft SQL Server and Azure SQL targets.

While Sybase's T-SQL dialect shares several similarities with Microsoft SQL Server's, its procedural code contains deep complexities: extended syntax, non-standard constructs, legacy system tables, and database-scoped behaviors that often break traditional rule-based converters. These nuances make stored procedures, triggers, and packages some of the hardest assets to migrate.

To address these challenges, the SQL Migration team is expanding Copilot-based code conversion capabilities to SSMA for SAP ASE, following the same intuitive user flow as the Oracle-to-SQL Copilot released earlier. This new AI-assisted experience dramatically reduces manual fix-up effort, boosts conversion accuracy, and empowers users to modernize complex Sybase assets with confidence.

## Why We Built This

SSMA's rule engine already auto-converts a significant portion of Sybase code, typically around 70% for standard workloads. But rule-based systems hit limitations when faced with:

- Proprietary Sybase syntax variations
- Conditional logic or cursors expressed in non-standard forms
- System-level commands not supported in SQL Server
- Ambiguous constructs requiring contextual interpretation

These gaps often force users into tedious manual rewriting. By bringing agentic AI into the equation, Copilot attempts to fill the missing 30%, providing syntactically correct, context-aware, and fully explained code conversions. Instead of relying solely on static rules, Copilot understands intent, identifies root causes of failures, and generates SQL Server-compatible alternatives with transparent reasoning.
This combination of a deterministic rule engine and adaptive AI unlocks a far more complete, scalable, and user-friendly migration experience.

## Authentication Methods

SSMA for SAP ASE offers two simple ways to authenticate with Copilot, giving customers flexibility based on their security and infrastructure needs.

### Option 1: Bring Your Own Key (Azure OpenAI)

Connect SSMA to your own Azure OpenAI resource using your deployment details and key. This option is ideal for organizations that already manage Azure OpenAI or require strict control over their AI environment.

### Option 2: Microsoft-Managed Endpoint (Preview)

A new, seamless experience where no API key is needed. Users simply sign in with Microsoft Entra ID, and SSMA handles authentication through a secure browser-based flow.

For detailed setup steps and prerequisites for both authentication options, refer to the SSMA Copilot Learn documentation.

## What Copilot Offers

When the "Fix with Copilot" button is triggered, SSMA opens a structured tri-pane experience designed for clarity and trust:

### 1. Errors to Fix

Shows issues that the rule engine could not convert, whether due to unsupported syntax, parse failures, or ambiguous constructs. This helps users quickly understand where the rule engine struggled.

### 2. Explanation

Provides a detailed, human-readable breakdown of:

- Why the conversion failed
- What the Copilot-generated fix means
- How the logic differs between Sybase and SQL Server

This section builds trust, making AI-generated code fully interpretable.

### 3. Code Review Window

Displays a side-by-side diff:

- Left: SSMA-generated output
- Right: Copilot-converted SQL code

Changes are highlighted so users can validate improvements, understand transformations, and decide whether to apply the Copilot output.

From an implementation and architecture point of view, this is similar to the SSMA Oracle-to-SQL code conversion Copilot. To learn more about how the AI model has been trained, refer to this blog.
## Sample Use Case (as illustrated in the GIF)

In the example shown in the blog GIF, a Sybase stored procedure fails conversion because:

- It uses set switch on drop_system_tables with override, no_info, a Sybase-only command unsupported in SQL Server.
- The procedure definition contains create or replace procedure, which is not valid T-SQL syntax.
- The rule engine cannot parse the affected block, causing SSMA to output the original Sybase procedure as a commented fallback.

When Copilot is invoked, it:

- Identifies the unsupported keywords
- Suggests correct SQL Server equivalents (e.g., translating create or replace into IF EXISTS ... DROP + CREATE PROCEDURE)
- Generates a complete, runnable T-SQL procedure
- Explains why each fix was made

This allows users to resolve previously conversion-blocking issues instantly. To learn more, see Copilot in SSMA.

## Real-World Impact

With SSMA's SAP ASE Copilot, teams can migrate Sybase workloads with significantly less manual effort. Developers, DBAs, and architects gain:

- Faster conversion cycles
- Higher code accuracy
- Clear explanations that improve learning and long-term maintainability
- Independence from long, multi-step manual rewrite processes

This Copilot experience transforms complex procedural conversions into guided, high-confidence workflows, making modernization more accessible for organizations of all sizes.

# Join us at SQLCon 2026
SQLCon 2026, the premier Microsoft SQL Community Conference, is co-located with the Microsoft Fabric Community Conference (FabCon) from March 16-20, 2026! Register today!

This year, your Azure Data Community worlds collide, bringing the SQL and Fabric communities together for a brand-new experience. If you're passionate about SQL Server, Azure SQL, SQL in Fabric, SQL Tools, migration and modernization, database security, or building AI-powered apps with SQL, SQLCon 2026 is the place for you. We're bringing you an amazing keynote in the State Farm Arena, 50+ breakout sessions, and 4 expert-led workshops designed to help you optimize, innovate, and connect, on SQL alone!

Join us in Atlanta, where we're going to renew the SQL Community spirit: rebuild and reconnect with your networks, friendships, and career-building opportunities. SQLCon is just one part of our renewed commitment to connect at conferences, user groups, online, and at regional events. In a few words: we miss you.

Why are SQLCon + FabCon better together?

- One registration, double the value: Register for either conference and get full access to both. Mix and match sessions, keynotes, and community events to fit your interests.
- Shared spaces, shared energy: Enjoy the same expo hall, registration desk, conference app, and community lounge. Network with peers across the data platform spectrum.
- Unforgettable experiences: Join us for both keynotes at the State Farm Arena and celebrate at the legendary attendee party at the Georgia Aquarium.

Register today and save $200 with code SQLCMTY200. Register Now!

Let's build the future of data together. See you at SQLCon + FabCon!