# 2025 Year in Review: What’s new across SQL Server, Azure SQL and SQL database in Fabric
What a year 2025 has been for SQL! ICYMI, and if you are looking for some hype, might I recommend you start with this blog from Priya Sathy, the product leader for all of SQL at Microsoft: One consistent SQL: The launchpad from legacy to innovation. In this blog post, Priya explains how we have developed and continue to develop one consistent SQL which “unifies your data estate, bringing platform consistency, performance at scale, advanced security, and AI-ready tools together in one seamless experience and creates one home for your SQL workloads in the era of AI.”

For the FIFTH(!!) year in a row (my heart is warm with the number, I love SQL and #SQLfamily, and time is flying), I am sharing my annual Year in Review blog with all the SQL Server, Azure SQL and SQL database in Fabric news this year. Of course, you can catch weekly episodes related to what’s new and diving deeper on the Azure SQL YouTube channel at aka.ms/AzureSQLYT. This year, in addition to Data Exposed (52 new episodes and over 70K views!), we saw many new series land in the channel in areas like GitHub Copilot, SSMS, VS Code, and Azure SQL Managed Instance.

## Microsoft Ignite announcements

Of course, if you’re looking for the latest announcements from Microsoft Ignite, Bob Ward and I compiled this slide of highlights.

## Comprehensive list of 2025 updates

You can read this blog (or use AI to reference it later) to get all the updates and references from the year (so much happened at Ignite, but before it too!). Here are all the updates from the year:

### SQL Server, Arc-enabled SQL Server, and SQL Server on Azure VMs

**Generally Available**
- SQL Server 2025 is Now Generally Available
- Backup/Restore capabilities in SQL Server 2025
- SQL Server 2025: Deeply Integrated and Feature-rich on Linux
- Resource Governor for Standard Edition
- Reimagining Data Excellence: SQL Server 2025 Accelerated by Pure Storage
- Security Update for SQL Server 2022 RTM CU21
- Cumulative Update #22 for SQL Server 2022 RTM
- Backup/Restore enhancements in SQL Server 2025
- Unified configuration and governance
- Expanding Azure Arc for Hybrid and Multicloud Management
- US Government Virginia region support
- I/O Analysis for SQL Server on Azure VMs
- NVIDIA Nemotron RAG Integration

**Preview**
- Azure Arc resource discovery in Azure Migrate
- Multicloud connector support for Google Cloud

### Migrations

**Generally Available**
- SQL Server migration in Azure Arc
- Azure Database Migration Service Hub Experience
- SQL Server Migration Assistant (SSMA) v10.3, including Db2 SKU recommendation (preview)
- Database Migration Service: PowerShell, Azure CLI, and Python SDK
- SQL Server Migration Assistant (SSMA) v10.4, including SQL Server 2025 support, Oracle conversion Copilot
- Schema migration support in Azure Database Migration Service

**Preview**
- Azure Arc resource discovery in Azure Migrate

### Azure SQL Managed Instance

**Generally Available**
- Next-gen General Purpose Service Tier
- Improved connectivity types in Azure SQL Managed Instance
- Improved resiliency with zone redundancy for general purpose, improved log rate for business critical
- Apply reservation discount for zone redundant Business Critical databases
- Free offer
- Use Windows principals to simplify migrations
- Data exfiltration improvements

**Preview**
- Windows Authentication for Cloud-Native Identities
- New update policy for Azure SQL Managed Instance

### Azure SQL Database

**Generally Available**
- LTR Backup Immutability
- Free Azure SQL Database Offer updates
- Move to Hyperscale while preserving existing geo-replication or failover group settings
- Improved redirect connection type to require only port 1433 and promote to default
- Bigint support in DATEADD for extended range calculations
- Restart your database from the Azure portal
- Replication lag metric
- Enhanced server audit and server audit action groups
- Read-access geo-zone redundant storage (RA-GZRS) as a backup storage type for non-Hyperscale
- Improved cutover experience to Hyperscale
- SLA-compliant availability metric
- Use database shrink to reduce allocated space for Hyperscale databases
- Identify causes of auto-resuming serverless workloads

**Preview**
- Multiple geo-replicas for Azure SQL Hyperscale
- Backup immutability for Azure SQL Database LTR backups

### Updates across SQL Server, Azure SQL and Fabric SQL database

**Generally Available**
- Regex support and fuzzy-string matching
- Geo-replication and Transparent Data Encryption key management
- Optimized locking v2
- Azure SQL hub in the Azure portal
- UNISTR intrinsic function and ANSI SQL concatenation operator (||)
- New vector data type
- JSON index
- JSON data type and aggregates

**Preview**
- Stream data to Azure Event Hubs with Change Event Streaming (Azure SQL DB Public Preview/Fabric SQL Private Preview)
- DiskANN vector indexing
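Several of the engine items above are plain T-SQL surface area. As a quick illustration only (a minimal sketch against a database where these features are enabled; the table and column names are made up, not from any of the linked posts), the new json and vector data types, UNISTR, the ANSI concatenation operator, and REGEXP_LIKE look roughly like this:

```sql
-- Minimal sketch of several 2025 T-SQL additions (hypothetical table/column names).
CREATE TABLE dbo.Articles
(
    id        int IDENTITY PRIMARY KEY,
    body      json,         -- new native json data type
    embedding vector(3)     -- new vector data type (3 dimensions for brevity)
);

-- Vectors can be inserted from a JSON-array string representation.
INSERT INTO dbo.Articles (body, embedding)
VALUES (N'{"title": "Hello", "tags": ["sql"]}', '[0.1, 0.2, 0.3]');

-- UNISTR resolves Unicode escape sequences; || is the ANSI concatenation operator.
SELECT UNISTR(N'caf\00E9') || N' latte' AS drink_name;

-- REGEXP_LIKE filters rows by regular expression.
SELECT id
FROM dbo.Articles
WHERE REGEXP_LIKE(JSON_VALUE(body, '$.title'), '^H[a-z]+$');
```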
### SQL database in Microsoft Fabric and Mirroring

**Generally Available**
- Fabric Databases: SQL database in Fabric
- Unlocking Enterprise ready SQL database in Microsoft Fabric: ALM improvements, Backup customizations and retention, Copilot enhancements & more (update details)
- Mirroring for SQL Server
- Mirroring for Azure SQL Managed Instance in Microsoft Fabric
- Connect to your SQL database in Fabric using Python Notebook
- Updates to database development tools for SQL database in Fabric
- Using Fast Copy for data ingestion
- Copilot for SQL analytics endpoint

Any updates across Microsoft Fabric that apply to the SQL analytics endpoint are generally supported in mirrored databases and Fabric SQL databases via the SQL analytics endpoint. This includes many exciting areas, like Data Agents. See the Fabric blog to get inspired.

**Preview**
- Data virtualization support
- Workspace level Private Link support (Private Preview)
- Customer-managed keys in Fabric SQL Database
- Auditing for Fabric SQL Database
- Fabric CLI: Create a SQL database in Fabric
- SQL database workload in Fabric with Terraform
- Spark Connector for SQL databases

### Tools and developer

Blog to read: How the Microsoft SQL team is investing in SQL tools and experiences

#### SQL Server Management Studio (SSMS) 22.1
- GitHub Copilot Walkthrough (Preview): guided onboarding from the Copilot badge.
- Copilot right-click actions (Preview): Document, Explain, Fix, and Optimize.
- Bring your own model (BYOM) support in Copilot (Preview).
- Copilot performance: improved response time after the first prompt in a thread.
- Fixes: addressed Copilot “Run ValidateGeneratedTSQL” loop and other stability issues.

#### SQL Server Management Studio (SSMS) 22
- Support for SQL Server 2025.
- Modern connection dialog as default + Fabric browsing on the Browse tab.
- Windows Arm64 support (initial) for core scenarios (connect + query).
- GitHub Copilot in SSMS (Preview), available via the AI Assistance workload in the VS Installer.
- T-SQL/UX improvements: open execution plan in new tab, JSON viewer, results grid zoom.
- New index support: create JSON and Vector indexes from Object Explorer.

#### SQL Server Management Studio (SSMS) 21
- Installation and automatic updates via Visual Studio Installer.
- Workloads/components model: smaller footprint + customizable install.
- Git integration, available via the Code tools workload.
- Modern connection dialog experience (Preview).
- New customization options (e.g., vertical tabs, tab coloring, results grid NULL styling).
- Always Encrypted Assessment in the Always Encrypted Wizard.
- Migration assistance via the Hybrid and Migration workload.

#### Drivers — notes on drivers released/updated in 2025 (recap)
- mssql-python driver
- ODBC: Microsoft ODBC Driver 18.5.2.1 for SQL Server
- OLE DB: Microsoft OLE DB Driver 19.4.1 for SQL Server
- JDBC (latest train): Microsoft JDBC Driver for SQL Server 13.2.1. Also updated in 2025: supported JDBC branches received multiple servicing updates (including Oct 13, 2025, security fixes). See the same JDBC release notes for the full list.
- .NET: Microsoft.Data.SqlClient 6.0.2

#### MSSQL extension for VS Code 1.37.0
- GitHub Copilot integration: Ask/Agent modes, slash commands, onboarding.
- Edit Data: interactive grid for editing table data (requires mssql.enableExperimentalFeatures: true).
- Data-tier Application dialog: deploy/extract .dacpac and import/export .bacpac (requires mssql.enableExperimentalFeatures: true).
- Publish SQL Project dialog: deploy .sqlproj to an existing DB or a local SQL dev container.
- Added “What’s New” panel + improved query results grid stability/accessibility.

#### MSSQL extension for VS Code 1.36.0
- Fabric connectivity: browse Fabric workspaces and connect to SQL DBs / SQL analytics endpoints.
- SQL database in Fabric provisioning: create Fabric SQL databases from Deployments.
- GitHub Copilot slash commands: connection, schema exploration, query tasks.
- Schema Compare extensibility: new run command for external extensions/SQL Projects (incl. Update Project from Database support).
- Query results performance/reliability improvements (incremental streaming, fewer freezes, better settings handling).

#### SqlPackage 170.0.94 release notes (April 2025)
- Vector: support for vector data type in Azure SQL Database target platform (import/export/extract/deploy/build).
- SQL projects: default compatibility level for Azure SQL Database and SQL database in Fabric set to 170.
- Parquet: expanded supported types (including json, xml, and vector) + bcp fallback for unsupported types.
- Extract: unpack a .dacpac to a folder via /Action:Extract.
- Platform: removed .NET 6 support; .NET Framework build updated to 4.7.2.

#### SqlPackage 170.1.61 release notes (July 2025)
- Data virtualization (Azure SQL DB): added support for data virtualization objects in import/export/extract/publish.
- Deployment: new publishing properties /p:IgnorePreDeployScript and /p:IgnorePostDeployScript.
- Permissions: support for ALTER ANY EXTERNAL MIRROR (Azure SQL DB + SQL database in Fabric) for exporting mirrored tables.
- SQL Server 2025 permissions: support for CREATE ANY EXTERNAL MODEL, ALTER ANY EXTERNAL MODEL, and ALTER ANY INFORMATION PROTECTION.
- Fixes: improved Fabric compatibility (e.g., avoid deploying unsupported server objects; fixes for Fabric extraction scripting).

#### SqlPackage 170.2.70 release notes (October 2025)
- External models: support for external models in Azure SQL Database and SQL Server 2025.
- AI functions: support for AI_GENERATE_CHUNKS and AI_GENERATE_EMBEDDINGS.
- JSON: support for JSON indexes + functions JSON_ARRAYAGG, JSON_OBJECTAGG, JSON_QUERY.
- Vector: vector indexes + VECTOR_SEARCH and expanded vector support for SQL Server 2025.
- Regex: support for REGEXP_LIKE.

#### Microsoft.Build.Sql 1.0.0 (SQL database projects SDK)
- Breaking: .NET 8 SDK required for dotnet build (Visual Studio build unchanged).
- Globalization support.
- Improved SDK/Templates docs (more detailed README + release notes links).
- Code analyzer template defaults to DevelopmentDependency.
- Build validation: check for duplicate build items.

#### Microsoft.Build.Sql 2.0.0 (SQL database projects SDK)
- Added SQL Server 2025 target platform (Sql170DatabaseSchemaProvider).
- Updated DacFx version to 170.2.70.
- .NET SDK targets imported by default (includes newer .NET build features/fixes; avoids full rebuilds with no changes).

#### Azure Data Studio
- Azure Data Studio retirement announcement (retirement February 28, 2026)

## Anna’s Pick of the ~~Month~~ Year

It’s hard to pick a highlight representative of the whole year, so I’ll take the cheesy way out: people. I get to work with great people working on a great set of products for great people (like you) solving real world problems for people. So, thank YOU and you’re my pick of the year 🧀

## Until next time…

That’s it for now! We release new episodes on Thursdays and new #MVPTuesday episodes on the last Tuesday of every month at aka.ms/azuresqlyt. The team has been producing a lot more video content outside of Data Exposed, which you can find at that link too! Having trouble keeping up? Be sure to follow us on Twitter to get the latest updates on everything, @AzureSQL. And if you lose this blog, just remember aka.ms/newsupdate2025.

We hope to see you next YEAR, on Data Exposed!

--Anna and Marisa
# Identify causes of auto-resuming serverless workloads in Azure SQL Database

We are pleased to announce that telemetry is now available in the Azure Monitor activity log to identify the causes of auto-resuming serverless workloads in Azure SQL Database. Before this telemetry was exposed, correlating specific auto-resume causes with database activity could be time consuming and imperfect, with no programmatic solution.

## Serverless auto-pausing and auto-resuming

Serverless in SQL Database automatically scales compute based on workload demand and bills for compute used per second. In the General Purpose tier, serverless also provides an option to automatically pause the database during idle usage periods, when only storage related costs are billed. The more a database is idle, the more auto-pausing can help reduce compute costs. Automatic resuming occurs when database activity returns or when certain management related or system operations are performed. Some examples of auto-resume triggers include logins, vulnerability assessment, modification of security settings like data masking rules, and service updates. A comprehensive description of auto-resume triggers is documented in the learning reference for serverless.

## Activity log for auto-pause and auto-resume events

The Azure Monitor activity log keeps a record of all auto-pause and auto-resume events for serverless databases. Auto-resume causes are reported in the activity log for "Resume Databases" operations under the “Caller” property of the "Succeeded" event, and latencies for each event are reported under “EventProperties”. This event can be monitored to quickly and deterministically identify auto-resume causes without resorting to inefficient guesswork.

[Figure: Activity log in the Azure portal showing an auto-resume event, including the cause and latency.]

In this example, the serverless database is auto-resumed in around 38 seconds in order to perform a security related vulnerability assessment. Understanding the causes of auto-resuming can help in optimizing database access patterns to minimize auto-resume occurrences, keep the database paused for longer, and reduce compute costs even further.
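Since the post emphasizes that this is now programmatic, here is a rough sketch of retrieving these events from the activity log with PowerShell. This is not an official sample: it assumes the Az.Monitor module is installed and you are signed in, `$dbResourceId` is a placeholder, and the `'resume'` OperationName filter is an assumption you may need to adjust to the exact operation name shown in your activity log.

```powershell
# Sketch: list recent successful auto-resume events for a serverless database
# and show the cause (the "Caller" property) reported in the activity log.
$dbResourceId = '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<server>/databases/<db>'

Get-AzActivityLog -ResourceId $dbResourceId -StartTime (Get-Date).AddDays(-7) |
    Where-Object { $_.OperationName.Value -match 'resume' -and $_.Status.Value -eq 'Succeeded' } |
    Select-Object EventTimestamp, Caller, Properties   # Properties carries the event latency
```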
## Learn more

For more information, please see Azure SQL Database serverless and Azure Monitor activity log.

# Windows Authentication for Cloud-Native Identities: Modernizing Azure SQL Managed Instance (Preview)

Organizations moving to the cloud often face a critical challenge: maintaining seamless authentication for legacy applications without compromising security or user experience. Today, we’re excited to announce support for Windows Authentication for Microsoft Entra principals on Azure SQL Managed Instance, enabling cloud-native identities to authenticate using familiar Windows credentials.

## Why This Matters

Traditionally, Windows Authentication relied on on-premises Active Directory, making it difficult for businesses adopting a cloud-only strategy to preserve existing authentication models. With this new capability:

- Hybrid Identity Support: Users synchronized between on-premises AD DS and Microsoft Entra ID can continue using a single set of credentials for both environments.
- Cloud-Only Identity (Preview): Identities that exist only in Microsoft Entra ID can now leverage Kerberos-based Windows Authentication for workloads like Azure SQL Managed Instance—without requiring domain controllers.

This means organizations can modernize infrastructure while maintaining compatibility with legacy apps, reducing friction during migration.

## Key Benefits

- Seamless Migration: Move legacy applications to Azure SQL Managed Instance without rewriting authentication logic.
- Passwordless Security: Combine Windows Authentication with modern credentials like Windows Hello for Business or FIDO2 keys, enabling MFA and reducing password-related risks.
- Cloud-Native Integration: Microsoft Entra Kerberos acts as a cloud-based Key Distribution Center (KDC), issuing Kerberos tickets for cloud resources such as Azure SQL Managed Instance and Azure Files.

## Breaking Barriers to Cloud Migration

Many enterprises hesitate to migrate legacy apps because they depend on Windows Authentication. By extending this capability to cloud-native identities, we remove a major barrier—allowing customers to modernize at their own pace while leveraging familiar authentication models.
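To make the “without rewriting authentication logic” point concrete: the whole idea is that a legacy application keeps its classic integrated-authentication connection string. The following is only a sketch under assumptions (placeholder server name; the full setup described in the linked overview, including enabling the feature and client Kerberos configuration, is not shown):

```powershell
# Sketch: a legacy-style integrated-auth connection to Azure SQL Managed Instance.
# The application code is unchanged; with Microsoft Entra Kerberos configured,
# the Kerberos ticket is issued by Entra ID instead of an on-prem domain controller.
$conn = New-Object System.Data.SqlClient.SqlConnection(
    'Server=myinstance.public.abcd1234.database.windows.net,3342;Integrated Security=SSPI;Encrypt=True')
$conn.Open()
(New-Object System.Data.SqlClient.SqlCommand('SELECT SUSER_SNAME();', $conn)).ExecuteScalar()
$conn.Close()
```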
## Learn More

- https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/winauth-azuread-overview?view=azuresql
- Microsoft Entra Kerberos Overview

# Lesson Learned #526: How to Identify the REST API Version Used by Your PowerShell Commands?

A few days ago, I wanted to understand which REST API version was being used behind each PowerShell command I was running to create Azure SQL Database servers. To investigate, I picked a simple command: Get-AzSqlServer -ResourceGroupName "ResourceGroupName" -ServerName "servername". Remember that the Get-AzSqlServer cmdlet is a PowerShell wrapper over the Azure REST API for the Microsoft.Sql/servers resource. Internally, it makes a call to the ARM endpoint documented here, passing the required api-version. The actual version used depends on the installed Az.Sql module and may vary from one environment to another.

I found that after setting the variable $DebugPreference = "Continue" in my PowerShell script, PowerShell prints detailed internal debug output, including the exact REST API call sent to Azure Resource Manager (ARM). Checking the output, I found the section called:

Absolute Uri: https://management.azure.com/subscriptions/xxx-xxxx--613ccd2df306/resourceGroups/ResourceGroupName/providers/Microsoft.Sql/servers/servername?api-version=2023-02-01-preview

So, by running this command, we can see this information. Even though you don’t explicitly define the api-version when using cmdlets like Get-AzSqlServer, the Azure platform requires it under the hood. The version used determines which properties are available or supported, what operations behave differently across versions, whether the call will succeed once older versions are retired, etc. For example, by using Azure CLI from the portal, I was able to see the most recent API versions. It’s also important to note that, if your organization has .NET code managing Azure SQL Database environments, the underlying REST API calls might still be using an outdated preview version.
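Putting the steps from this post together, a minimal sketch looks like the following (server and resource group names are placeholders; the exact debug text can vary across Az module versions):

```powershell
# Sketch: surface the ARM api-version used by an Az cmdlet via the debug stream.
$DebugPreference = 'Continue'          # make Az cmdlets emit HTTP request details

# Redirect the debug stream (stream 5) to a file, then search for the Absolute Uri.
Get-AzSqlServer -ResourceGroupName 'ResourceGroupName' -ServerName 'servername' 5> debug.txt
Select-String -Path debug.txt -Pattern 'api-version='

$DebugPreference = 'SilentlyContinue'  # restore the default
```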
# Step-by-Step Guide: Route Azure SQL Audit Logs to Multiple Log Analytics Workspaces

Scenario: Many organizations need to route audit logs from Azure SQL Database to more than one Log Analytics workspace. For example, your security team may use Microsoft Sentinel in one workspace, while your application team analyzes logs in another. Azure now makes this possible—here’s how to set it up, and what to watch out for.

## Why Send Audit Logs to Multiple Workspaces?

- Separation of Duties: Security and application teams can access the logs they need, independently.
- Integration with Different Tools: Sentinel may use one workspace for SIEM, while app teams use another for analytics.
- Compliance and Regional Needs: Some organizations must store logs in different regions or workspaces for regulatory reasons.

## Step-by-Step Guide

1. Enable Auditing to a Log Analytics Workspace
   - Go to your Azure SQL server in the Azure portal.
   - Under Security, select Auditing.
   - Set the audit destination to your primary Log Analytics workspace, then click Save.
   - Tip: Enabling auditing here automatically creates a diagnostic setting for the selected workspace.

2. Add Diagnostic Settings for Additional Workspaces (a scripted sketch of this step follows below)
   - In the Azure portal, search for Diagnostic settings.
   - Select your subscription and the master database of the SQL server to create a diagnostic setting at the server level.
   - Click + Add diagnostic setting.
   - Name your setting (e.g., “AuditToAppWorkspace”).
   - Under Logs, select SQLSecurityAuditEvents (uncheck “DevOpsAudit” if not needed).
   - Choose an additional Log Analytics workspace as the destination.
   - Click Save.
   - Note: You can repeat this step to send audit logs to as many workspaces as needed.
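For teams that prefer scripting over the portal, step 2 can be expressed with Azure CLI. A sketch, with placeholder resource IDs (the setting name and workspace are examples, not requirements):

```bash
# Sketch: add a second diagnostic setting on the server's master database that
# routes SQL security audit events to another Log Analytics workspace.
az monitor diagnostic-settings create \
  --name "AuditToAppWorkspace" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<server>/databases/master" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-b>" \
  --logs '[{"category": "SQLSecurityAuditEvents", "enabled": true}]'
```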
## Example Use Case

A customer uses:

- Workspace A for Microsoft Sentinel (security monitoring)
- Workspace B for application analytics

By configuring multiple diagnostic settings, both teams receive the audit data they need—no manual exports required.

## Summary

Configuring multiple diagnostic settings allows you to send Azure SQL Database audit logs to several Log Analytics workspaces. This is essential for organizations with different teams or compliance needs. Remember:

- Enable auditing first
- Add diagnostic settings for each workspace
- Monitor for cost and avoid duplicate logs

References:
- https://learn.microsoft.com/en-us/azure/azure-sql/database/auditing
- https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/diagnostic-settings

# ABORT_QUERY_EXECUTION query hint - public preview

We are pleased to announce the public preview of a new query hint, ABORT_QUERY_EXECUTION. The hint is intended to be used as a Query Store hint to let administrators block future execution of known problematic queries, for example non-essential queries causing high resource consumption and affecting application workloads. The hint is now available in Azure SQL Database for all databases without restrictions. The hint will later be available in Azure SQL Managed Instance with the always-up-to-date update policy, as well as in a future version of SQL Server. For more information, see Block future execution of problematic queries in documentation.

Update 2025-10-06: The ABORT_QUERY_EXECUTION hint is now generally available.

## Frequently Asked Questions

**Is this supported by Microsoft Support during public preview?**
Yes, just like other query hints.

**How do I use this?**
Use Query Store catalog views or the Query Store UI in SSMS to find the query ID of the query you want to block and execute sys.sp_query_store_set_hints specifying that query ID as a parameter. For example:

```sql
EXEC sys.sp_query_store_set_hints
    @query_id = 17,
    @query_hints = N'OPTION (USE HINT (''ABORT_QUERY_EXECUTION''))';
```

**What happens when a query with this hint is executed?**
This hint is intended to be used as a Query Store hint but can be specified directly as well. In either case, the query fails immediately with error 8778, severity 16: "Query execution has been aborted because the ABORT_QUERY_EXECUTION hint was specified."

**How do I unblock a query?**
Remove the hint by executing sys.sp_query_store_clear_hints with the query ID value of the query you want to unblock passed via the @query_id parameter.

**Can I block a query that is not in Query Store?**
No. At least one execution of the query must be recorded in Query Store. That query execution does not have to be successful. This means that a query that started executing but was canceled or timed out can be blocked too.

**When I add the hint, does it abort any currently executing queries?**
No. The hint only aborts future query executions. You can use KILL to abort currently executing queries.

**What permissions are required to use this?**
As with all other Query Store hints, the ALTER permission on the database is required to set and clear the hint.

**Can I block all queries matching a query hash?**
Not directly. As with all other Query Store hints, you must use a query ID to set and clear a hint. However, you can create automation that will periodically find all new query IDs matching a given query hash and block them (see the sketch at the end of this post).

**Can I find all blocked queries in Query Store?**
Yes, by executing the following query:

```sql
SELECT qsh.query_id, q.query_hash, qt.query_sql_text
FROM sys.query_store_query_hints AS qsh
INNER JOIN sys.query_store_query AS q
    ON qsh.query_id = q.query_id
INNER JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
WHERE UPPER(qsh.query_hint_text) LIKE '%ABORT[_]QUERY[_]EXECUTION%';
```

**Where do I send feedback about this hint?**
The preferred feedback channel is via https://aka.ms/sqlfeedback. Feedback sent that way is public and can be voted and commented on by other SQL community members. You can also leave comments on this blog post or email us at intelligentqp@microsoft.com.
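As referenced from the query-hash FAQ above, here is a minimal sketch of that automation. The hash value is a made-up placeholder; substitute the query_hash of the query you want to block.

```sql
-- Sketch: block every Query Store query matching a given query hash
-- (run periodically to catch newly recorded query IDs).
DECLARE @target_hash binary(8) = 0x8FD0A1B2C3D4E5F6;  -- placeholder hash
DECLARE @qid bigint;

DECLARE hash_queries CURSOR LOCAL FAST_FORWARD FOR
    SELECT q.query_id
    FROM sys.query_store_query AS q
    WHERE q.query_hash = @target_hash
      AND NOT EXISTS (SELECT 1 FROM sys.query_store_query_hints AS h
                      WHERE h.query_id = q.query_id);  -- skip already-hinted queries
OPEN hash_queries;
FETCH NEXT FROM hash_queries INTO @qid;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sys.sp_query_store_set_hints
        @query_id = @qid,
        @query_hints = N'OPTION (USE HINT (''ABORT_QUERY_EXECUTION''))';
    FETCH NEXT FROM hash_queries INTO @qid;
END;
CLOSE hash_queries;
DEALLOCATE hash_queries;
```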
# Stream data in near real time from SQL to Azure Event Hubs - Public preview

If near-real time integration is something you are looking to implement and you were looking for a simpler way to get the data out of SQL, keep reading. SQL is making it easier to integrate, and Change Event Streaming is a feature continuing this trend.

Modern applications and analytics platforms increasingly rely on event-driven architectures and real-time data pipelines. As businesses speed up, real time decisioning is becoming especially important. Traditionally, capturing changes from a relational database requires complex ETL jobs, periodic polling, or third-party tools. These approaches often consume significant cycles of the data source, introduce operational overhead, and pose challenges with scalability, especially if you need one data source to feed into multiple destinations. In this context, we are happy to release the Change Event Streaming ("CES") feature into public preview for Azure SQL Database. This feature enables you to stream row-level changes - inserts, updates, and deletes - from your database directly to Azure Event Hubs in near real time.

Change Event Streaming addresses the above challenges by:

- Reducing latency: Changes are streamed (pushed by SQL) as they happen. This is in contrast with traditional CDC (change data capture) or CT (change tracking) based approaches, where an external component needs to poll SQL at regular intervals. Traditional approaches allow you to increase polling frequency, but it gets difficult to find a sweet spot between minimal latency and minimal overhead due to too frequent polls.
- Simplifying architecture: No need for Change Data Capture (CDC), Change Tracking, custom polling or external connectors - SQL streams directly to the configured destination. This means a simpler security profile (fewer authentication points), fewer failure points, easier monitoring, and a lower skill bar to deploy and run the service. No need to worry about cleanup jobs, etc. SQL keeps track of which changes are successfully received by the destination, handles the retry logic, and releases the log truncation point. Finally, with CES you have fewer components to procure and get approved for production use.
- Decoupling: The integration is done at the database level. This eliminates the problem of dual writes - the changes are streamed at transaction boundaries, once your source of truth (the database) has saved the changes. You do not need to modify your app workloads to get the data streamed - you tap right onto the data layer. This is useful if your apps are dated and do not possess real-time integration capabilities. In the case of some third-party apps, you may not even have an option to do anything other than database level integration, and CES makes it simpler. Also, the publishing database does not concern itself with the final destination for the data - stream the data once to the common message bus, and you can consume it from multiple downstream systems, irrespective of their number or capacity. The number of consumers does not affect publishing load on the SQL side. Serving consumers is handled by the message bus, Azure Event Hubs, which is purpose built for high throughput data transfers.

[Figure: Conceptual data flow from SQL Server to Azure Event Hubs, fanning out to multiple final destinations.]

## Key Scenarios for CES

- Event-driven microservices: They need to exchange data, typically through a common message bus. With CES, you can have automated data publishing from each of the microservices. This allows you to trigger business processes immediately when data changes.
- Real-time analytics: Stream operational data into platforms like Fabric Real Time Intelligence or Azure Stream Analytics for quick insights.
- Breaking down the monoliths: Typical monolithic systems with complex schemas, sitting on top of a single database, can be broken down one piece at a time: create a new component (typically a microservice), set up streaming from the relevant tables on the monolith database, and tap into the stream from the new components. You can then test run the components, validate the results against the original monolith, and cut over when you build the confidence that the new component is stable.
- Cache and search index updates: Keep distributed caches and search indexes in sync without custom triggers.
- Data lake ingestion: Capture changes continuously into storage for incremental processing.
- Data availability: This is not a scenario per se, but the amount of data you can tap into for business process mining or intelligence in general goes up whenever you plug another database into the message bus. For example, you plug your eCommerce system into the message bus to integrate with shipping providers, and consequently the same data stream is immediately available for any other systems to tap into.

## How It Works

CES uses transaction log-based capture to stream changes with minimal impact on your workload. Events are published in a structured JSON format following the CloudEvents standard, including operation type, primary key, and before/after values. You can configure CES to target Azure Event Hubs via AMQP or Kafka protocols. For details on configuration, message format, and FAQs, see the official documentation:

- Feature Overview
- CES: Frequently Asked Questions
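To give a feel for the message shape, here is an illustrative CloudEvents-style envelope for an update on a hypothetical dbo.Orders table. The envelope attributes (specversion, id, source, type, time, data) come from the CloudEvents standard referenced above; the event type and the field names inside data (operation, key, before/after) are illustrative only — consult the linked documentation for the authoritative schema.

```json
{
  "specversion": "1.0",
  "id": "c6a7b9a0-6d0e-4f77-9d8e-2f58a9a1e001",
  "source": "/myserver/mydatabase/dbo.Orders",
  "type": "example.sql.change.update",
  "time": "2025-11-20T10:15:04Z",
  "datacontenttype": "application/json",
  "data": {
    "operation": "UPDATE",
    "key": { "OrderId": 42 },
    "before": { "Status": "Pending" },
    "after": { "Status": "Shipped" }
  }
}
```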
## Get Started

Public preview: CES is available today in public preview for Azure SQL Database and as a preview feature in SQL Server 2025.

Private preview: CES is also available as a private preview for Azure SQL Managed Instance and Fabric SQL database; you can request to join the private preview by signing up here: https://aka.ms/sql-ces-signup

We encourage you to try the feature out and start building real-time integrations on top of your existing data. We welcome your feedback—please share your experience through the Azure Feedback portal or support channels. The comments below on this blog post will also be monitored, if you want to engage with us. Finally, the CES team can be reached via email: sqlcesfeedback [at] microsoft [dot] com.

Useful resources:
- Free Azure SQL Database
- Free Azure SQL Managed Instance

# Azure SQL Database LTR Backup Immutability is now Generally Available

Azure SQL Database is a fully managed, always‑up‑to‑date relational database service built for mission‑critical apps. It delivers built‑in high availability, automated backups, and elastic scale, with strong security and compliance capabilities. Today, I am very excited to announce the General Availability of immutability for Azure SQL DB LTR backups! Azure SQL Database now supports immutable long‑term retention (LTR) backups, stored in write‑once, read‑many (WORM) state for a fixed (customer configured) period. That means your LTR backups cannot be modified or deleted during the lock window—even by highly privileged identities—helping you preserve clean restore points after a cyberattack and strengthen your compliance posture.

## Why this matters: ransomware targets backups

Modern ransomware playbooks don’t stop at encrypting production data—they also attempt to alter or delete backups to block recovery. With backup immutability, Azure SQL Database LTR backups are written to immutable storage and locked for the duration you specify, providing a resilient, tamper‑proof recovery layer so you can restore from a known‑good copy when it matters most.

## What we’re announcing

General Availability of Backup Immutability for long‑term retention (LTR) backups in Azure SQL Database. This GA applies to Azure SQL Database LTR backups.

## What immutability does (and doesn’t) do

- Prevents changes and deletion of LTR backup artifacts for a defined, locked period (WORM). This protection applies even to highly privileged identities, reducing the risk from compromised admin accounts or insider misuse.
- Helps address regulatory WORM expectations, supporting customers who must retain non‑erasable, non‑rewritable records (for example, requirements under SEC Rule 17a‑4(f), FINRA Rule 4511(c), and CFTC Rule 1.31(c)–(d)). Always consult your legal/compliance team for your specific obligations.
- Complements a defense‑in‑depth strategy—it’s not a replacement for identity hygiene, network controls, threat detection, and recovery drills. See Microsoft’s broader ransomware guidance for Azure.

## How it works (at a glance)

When you enable immutability on an LTR policy, Azure SQL Database stores those LTR backups on Azure immutable storage in a WORM state. During the lock window, the backup cannot be modified or deleted; after the lock expires, normal retention/deletion applies per your policy.

## Key benefits

- Ransomware‑resilient recovery: Preserve clean restore points that attackers can’t tamper with during the lock period.
- Compliance‑ready retention: Use WORM‑style retention to help meet industry and regulatory expectations for non‑erasable, non‑rewritable storage.
- Operational simplicity: Manage immutability alongside your existing Azure SQL Database long‑term retention policies.

## Get started

1. Choose databases that require immutable LTR backups.
2. Enable immutability on the LTR backup policy and set the retention/lock period aligned to your regulatory and risk requirements.
3. Validate recovery by restoring from an immutable LTR backup.

Documentation: Learn more about backup immutability for LTR backups in Azure SQL Database in Microsoft Learn.

## Tell us what you think

We’d love your feedback on scenarios, guidance, and tooling that would make immutable backups even easier to adopt. Share your experiences and suggestions in the Azure SQL community forums and let us know how immutability is helping your organization raise its cyber‑resilience.
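For the validation step (step 3 above), a hedged PowerShell sketch using the standard LTR cmdlets follows. Names and the region are placeholders; note that configuring the immutability lock itself is done on the LTR policy per the documentation, and this only shows finding the newest LTR backup and restoring it to a new database.

```powershell
# Sketch: validate recovery by restoring the most recent LTR backup to a new database.
$ltr = Get-AzSqlDatabaseLongTermRetentionBackup -Location 'westeurope' `
        -ServerName 'contososerver' -DatabaseName 'WideWorldImporters' |
    Sort-Object -Property BackupTime -Descending | Select-Object -First 1

Restore-AzSqlDatabase -FromLongTermRetentionBackup -ResourceId $ltr.ResourceId `
    -ResourceGroupName 'ResourceGroup01' -ServerName 'contososerver' `
    -TargetDatabaseName 'WideWorldImporters_ltr_validate'
```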
# Convert geo-replicated databases to Hyperscale

Update: On 22 October 2025 we announced the General Availability of this improvement.

We’re excited to introduce the next improvement in Hyperscale conversion: a new feature that allows customers to convert Azure SQL databases to Hyperscale while keeping active geo-replication or failover group configurations intact. This builds on our earlier improvements and directly addresses one of the most requested capabilities from customers. With this improvement, customers can now modernize their database architecture with Hyperscale while maintaining business continuity.

## Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers complained about the complex steps they needed to perform to convert a database to Hyperscale when the database is geo-replicated by active geo-replication or failover groups. Previously, converting to Hyperscale required tearing down geo-replication links and recreating them after the conversion. Now, that’s no longer necessary. This improvement allows customers to preserve their cross-region disaster recovery or read scale-out configurations and still convert to Hyperscale, which helps minimize downtime and operational complexity.

This feature is especially valuable for applications that rely on failover group endpoints for connectivity. Before this improvement, if the application needed to remain available during conversion, the connection string needed modifications as a part of conversion because the failover group and its endpoints had to be removed. With this new improvement, the conversion process is optimized for minimal disruption, with telemetry showing the majority of cutover times under one minute. Even with a geo-replication configuration in place, you can still choose between automatic and manual cutover modes, offering flexibility in scheduling the transition. Progress tracking is now more granular, giving customers better visibility into each stage of the conversion, including the conversion of the geo-secondary to Hyperscale.

## Customer feedback

Throughout the preview phase, we have received overwhelmingly positive feedback from several customers about this improvement. Viktoriia Kuznetcova, Senior Automation Test Engineer from Nintex, says:

> We needed a low-downtime way to move our databases from the Premium tier to Azure SQL Database Hyperscale, and this new feature delivered perfectly; allowing us to complete the migration in our test environments safely and smoothly, even while the service remained under continuous load, without any issues and without needing to break the failover group. We're looking forward to the public release so we can use it in production, where Hyperscale’s ability to scale storage both up and down will help us manage peak loads without overpaying for unused capacity.

## Get started

The good news is that there are no changes needed to the conversion process. The workflow automatically detects that a geo-secondary is present and converts it to Hyperscale. There are no new parameters, and the method remains the same as the existing conversion process that works for non-geo-replicated databases. All you need is to make sure that:

- You have only one geo-secondary replica, because Hyperscale doesn't support more than one geo-secondary replica.
- If a chained geo-replication configuration exists, it must be removed before starting the conversion to Hyperscale. Creating a geo-replica of a geo-replica (also known as "geo-replica chaining") isn't supported in Hyperscale.

A quick T-SQL check of the current geo-replication topology is sketched below.
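As a minimal sketch of verifying the two requirements above (this query is a general Azure SQL Database DMV, not part of the conversion feature itself):

```sql
-- Run in the primary database: one row per geo-replication link.
-- Expect exactly one row, with this database reported as PRIMARY.
SELECT partner_server,
       partner_database,
       role_desc,               -- PRIMARY when run on the geo-primary
       replication_state_desc
FROM sys.dm_geo_replication_link_status;
```

Chaining can't be confirmed from the primary alone: run the same query on the geo-secondary and verify it reports no further downstream links of its own.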
Once the above requirements are satisfied, you can use any of the following methods to initiate the conversion process. Conversion to Hyperscale must be initiated from the primary geo-replica. The following table provides sample commands to convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore Hyperscale database with the manual cutover option.

| Method | Command |
| --- | --- |
| T-SQL | `ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8') WITH MANUAL_CUTOVER;` |
| PowerShell | `Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_Gen5_8" -ManualCutover` |
| Azure CLI | `az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_Gen5_8 --manual-cutover` |

Here are some notable details of this improvement:

- The geo-secondary database is automatically converted to Hyperscale with the same service level objective as the primary.
- All database configurations, such as maintenance window, zone-resiliency, backup redundancy, etc., remain the same as earlier (i.e., both geo-primary and geo-secondary inherit from their own earlier configuration).
- A planned failover isn't possible while the conversion to Hyperscale is in progress. A forced failover is possible. However, depending on the state of the conversion when the forced failover occurs, the new geo-primary after failover might use either the Hyperscale service tier or its original service tier.
- If the geo-secondary database is in an elastic pool before conversion, it is taken out of the pool and might need to be added back to a Hyperscale elastic pool separately after the conversion.

This feature has been fully deployed across all Azure regions. In case you see the error `Update to service objective '<SLO name>' with source DB geo-replicated is not supported for entity '<Database Name>'` while converting the primary to Hyperscale, we would like to hear from you; do send us an email to the email ID given in the next section. If you don’t want to use this capability, make sure to remove any geo-replication configuration before converting your databases to Hyperscale.

## Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering simple steps, less downtime, and keeping the geo-secondary available during the conversion process. We encourage you to try this capability and provide your valuable feedback to help us refine this feature. You can contact us by commenting on this blog post and we’ll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
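A practical footnote on the more granular progress tracking mentioned above: management operations such as this conversion surface their progress through the sys.dm_operation_status DMV. A minimal sketch, run in the master database of the logical server (the database name is the example used in the table above):

```sql
-- Sketch: monitor conversion progress from the logical server's master database.
SELECT operation,
       state_desc,
       percent_complete,
       start_time,
       last_modify_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'WideWorldImporters'
ORDER BY start_time DESC;
```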