## Azure SQL Database LTR Backup Immutability is now Generally Available
Azure SQL Database is a fully managed, always-up-to-date relational database service built for mission-critical apps. It delivers built-in high availability, automated backups, and elastic scale, with strong security and compliance capabilities.

Today, I am very excited to announce the General Availability of immutability for Azure SQL Database LTR backups! Azure SQL Database now supports immutable long-term retention (LTR) backups, stored in write-once, read-many (WORM) state for a fixed, customer-configured period. That means your LTR backups cannot be modified or deleted during the lock window, even by highly privileged identities, helping you preserve clean restore points after a cyberattack and strengthen your compliance posture.

### Why this matters: ransomware targets backups

Modern ransomware playbooks don't stop at encrypting production data: they also attempt to alter or delete backups to block recovery. With backup immutability, Azure SQL Database LTR backups are written to immutable storage and locked for the duration you specify, providing a resilient, tamper-proof recovery layer so you can restore from a known-good copy when it matters most.

### What we're announcing

General Availability of Backup Immutability for long-term retention (LTR) backups in Azure SQL Database. This GA applies to Azure SQL Database LTR backups.

### What immutability does (and doesn't) do

- Prevents changes and deletion of LTR backup artifacts for a defined, locked period (WORM). This protection applies even to highly privileged identities, reducing the risk from compromised admin accounts or insider misuse.
- Helps address regulatory WORM expectations, supporting customers who must retain non-erasable, non-rewritable records (for example, requirements under SEC Rule 17a-4(f), FINRA Rule 4511(c), and CFTC Rule 1.31(c)-(d)). Always consult your legal/compliance team for your specific obligations.
- Complements a defense-in-depth strategy; it's not a replacement for identity hygiene, network controls, threat detection, and recovery drills. See Microsoft's broader ransomware guidance for Azure.

### How it works (at a glance)

When you enable immutability on an LTR policy, Azure SQL Database stores those LTR backups on Azure immutable storage in a WORM state. During the lock window, the backup cannot be modified or deleted; after the lock expires, normal retention and deletion apply per your policy.

### Key benefits

- Ransomware-resilient recovery: preserve clean restore points that attackers can't tamper with during the lock period.
- Compliance-ready retention: use WORM-style retention to help meet industry and regulatory expectations for non-erasable, non-rewritable storage.
- Operational simplicity: manage immutability alongside your existing Azure SQL Database long-term retention policies.

### Get started

1. Choose databases that require immutable LTR backups.
2. Enable immutability on the LTR backup policy and set the retention/lock period aligned to your regulatory and risk requirements.
3. Validate recovery by restoring from an immutable LTR backup (see the sketch at the end of this post).

Documentation: learn more about backup immutability for LTR backups in Azure SQL Database on Microsoft Learn.

### Tell us what you think

We'd love your feedback on scenarios, guidance, and tooling that would make immutable backups even easier to adopt.
Share your experiences and suggestions in the Azure SQL community forums and let us know how immutability is helping your organization raise its cyber-resilience.
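To make the "validate recovery" step concrete, here is a minimal PowerShell sketch using the existing Az.Sql long-term retention cmdlets. The location, server, database, and resource group names are placeholders, and the immutability lock itself is configured on the LTR policy as described above; this sketch only exercises the restore path.

```powershell
# List the LTR backups available for the database (requires the Az.Sql module).
$backups = Get-AzSqlDatabaseLongTermRetentionBackup `
    -Location "westeurope" `
    -ServerName "contososerver" `
    -DatabaseName "WideWorldImporters"

# Restore the most recent LTR backup to a new database to prove recoverability.
$latest = $backups | Sort-Object -Property BackupTime -Descending | Select-Object -First 1
Restore-AzSqlDatabase -FromLongTermRetentionBackup `
    -ResourceId $latest.ResourceId `
    -ResourceGroupName "ResourceGroup01" `
    -ServerName "contososerver" `
    -TargetDatabaseName "WideWorldImporters_LTRCheck"
```

Restoring to a side-by-side database, rather than over the original, lets you run the drill on a schedule without touching production.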
## General Availability announcement of Backup/Restore capabilities in SQL Server 2025

Over the past several months, we've heard from countless customers who are eager for more robust options to protect, compress, and safeguard their SQL Server data. Since introducing these features in public preview, organizations of all sizes have validated their value in real-world workloads and provided invaluable feedback. Today, we are thrilled to announce the General Availability (GA) of three powerful SQL Server 2025 features: Backups on Secondary for Always On Availability Groups, ZSTD Compression, and Immutable Backups for Ransomware Protection. These advancements are now ready for production use, built around the needs and requests of SQL Server customers.

### Backups on Secondary for SQL Server Always On Availability Groups

Previously in preview, this feature now reaches GA, allowing you to offload backup operations to secondary replicas in Always On Availability Groups. This enhancement optimizes resource utilization and minimizes overhead on primary replicas, ensuring better performance for mission-critical workloads.

What's new in GA: improved reliability and support for production environments.

Key benefits:

- Comprehensive support: full, differential, and transaction log backups are now fully supported on secondary replicas, not just COPY_ONLY and transaction logs.
- Reduced impact on primary replica performance.
- Simplified high-availability strategies.

Learn more from the original announcement: Introducing Backups on Secondary for SQL Server Always On Availability Groups.

### ZSTD Compression in SQL Server 2025

The GA of ZSTD compression brings modern, efficient data compression to SQL Server, with industry-leading performance and efficiency that lets you save storage and speed up workloads.

What's new in GA: full production support for ZSTD across key workloads.

Key benefits:

- Faster compression and decompression compared to legacy algorithms.
- Lower storage footprint without sacrificing performance.
- Choose your algorithm: ZSTD is now a standard option right alongside MS_XPRESS for row, page, and backup compression.
- Tunable compression levels: administrators can select from LOW, MEDIUM, or HIGH to balance resource use and savings.
- Ideal for large-scale data environments.

Explore the preview details: ZSTD Compression in SQL Server 2025.

### Backups to immutable storage: A Powerful Shield Against Ransomware

Your backups are now safer than ever. Thanks to native support for immutability with Azure Blob Storage, backup files can be rendered tamper-proof, protecting them from ransomware or even accidental deletion.

Key benefits:

- Strong defense against ransomware and malicious tampering.
- Compliance with regulatory requirements for data integrity.
- Peace of mind for critical backup strategies.

Read the detailed use-case and how-to: Immutability: A Powerful Shield Against Ransomware in SQL Environments.

These features collectively empower organizations to:

- Optimize performance and resource utilization.
- Reduce operational costs through efficient compression.
- Strengthen security posture against evolving threats.

### Get Started Today

These features are available to all SQL Server 2025 customers. Ready to elevate your data protection, efficiency, and compliance posture?

- Access the official SQL Server 2025 documentation for step-by-step guides via Microsoft Learn.
- Review upgrade guidance and best practices.
- Explore real-world configurations and FAQs.
Upgrade now and unlock the next level of resilience, efficiency, and security for your SQL Server workloads!
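As a minimal T-SQL sketch of the backup features above: the database name and path are placeholders, and the compression LEVEL keyword shown here should be verified against the SQL Server 2025 BACKUP documentation.

```sql
-- Full backup compressed with ZSTD; the LEVEL option (LOW | MEDIUM | HIGH)
-- trades CPU for compression ratio. Verify the exact option spelling against
-- the SQL Server 2025 BACKUP documentation.
BACKUP DATABASE WideWorldImporters
TO DISK = N'D:\Backups\WideWorldImporters_zstd.bak'
WITH COMPRESSION (ALGORITHM = ZSTD, LEVEL = HIGH), CHECKSUM, STATS = 10;

-- With backups on secondary now GA, the same statement can run on a secondary
-- replica of an Always On Availability Group; full and differential backups
-- there no longer need to be COPY_ONLY.
```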
## Stream data in near real time from SQL to Azure Event Hubs - Public preview

If near-real-time integration is something you are looking to implement and you are looking for a simpler way to get the data out of SQL, keep reading. SQL is making it easier to integrate, and Change Event Streaming is a feature continuing this trend.

Modern applications and analytics platforms increasingly rely on event-driven architectures and real-time data pipelines. As businesses speed up, real-time decisioning is becoming especially important. Traditionally, capturing changes from a relational database requires complex ETL jobs, periodic polling, or third-party tools. These approaches often consume significant cycles on the data source, introduce operational overhead, and pose challenges with scalability, especially if you need one data source to feed into multiple destinations.

In this context, we are happy to release the Change Event Streaming ("CES") feature into public preview for Azure SQL Database. This feature enables you to stream row-level changes (inserts, updates, and deletes) from your database directly to Azure Event Hubs in near real time.

Change Event Streaming addresses the above challenges by:

- Reducing latency: changes are streamed (pushed by SQL) as they happen. This is in contrast with traditional CDC (change data capture) or CT (change tracking) based approaches, where an external component needs to poll SQL at regular intervals. Traditional approaches allow you to increase polling frequency, but it gets difficult to find a sweet spot between minimal latency and minimal overhead from too-frequent polls.
- Simplifying architecture: no need for Change Data Capture (CDC), Change Tracking, custom polling, or external connectors; SQL streams directly to the configured destination. This means a simpler security profile (fewer authentication points), fewer failure points, easier monitoring, and a lower skill bar to deploy and run the service. There is no need to worry about cleanup jobs: SQL keeps track of which changes are successfully received by the destination, handles the retry logic, and releases the log truncation point. Finally, with CES you have fewer components to procure and get approved for production use.
- Decoupling: the integration is done at the database level. This eliminates the problem of dual writes, because changes are streamed at transaction boundaries, once your source of truth (the database) has saved them. You do not need to modify your app workloads to get the data streamed; you tap right into the data layer. This is useful if your apps are dated and do not possess real-time integration capabilities, and for some third-party apps you may not even have an option other than database-level integration. Also, the publishing database does not concern itself with the final destination for the data: stream the data once to the common message bus, and it can be consumed by multiple downstream systems, irrespective of their number or capacity. The number of consumers does not affect the publishing load on the SQL side; serving consumers is handled by the message bus, Azure Event Hubs, which is purpose-built for high-throughput data transfers.

[Figure: conceptual data flow from SQL Server, with an arrow toward Azure Event Hubs, from which multiple arrows point to different final destinations.]

### Key Scenarios for CES

- Event-driven microservices: they need to exchange data, typically through a common message bus. With CES, you get automated data publishing from each of the microservices.
  This allows you to trigger business processes immediately when data changes.
- Real-time analytics: stream operational data into platforms like Fabric Real-Time Intelligence or Azure Stream Analytics for quick insights.
- Breaking down the monoliths: typical monolithic systems with complex schemas, sitting on top of a single database, can be broken down one piece at a time. Create a new component (typically a microservice), set up streaming from the relevant tables of the monolith database, and tap into the stream from the new component. You can then test-run the components, validate the results against the original monolith, and cut over once you have built confidence that the new component is stable.
- Cache and search index updates: keep distributed caches and search indexes in sync without custom triggers.
- Data lake ingestion: capture changes continuously into storage for incremental processing.
- Data availability: this is not a scenario per se, but the amount of data you can tap into for business process mining, or intelligence in general, goes up whenever you plug another database into the message bus. For example, you plug your eCommerce system into the message bus to integrate with shipping providers, and consequently the same data stream is immediately available for any other system to tap into.

### How It Works

CES uses transaction log-based capture to stream changes with minimal impact on your workload. Events are published in a structured JSON format following the CloudEvents standard, including operation type, primary key, and before/after values. You can configure CES to target Azure Event Hubs via the AMQP or Kafka protocols. For details on configuration, message format, and FAQs, see the official documentation:

- Feature Overview
- CES: Frequently Asked Questions

### Get Started

Public preview: CES is available today in public preview for Azure SQL Database and as a preview feature in SQL Server 2025.

Private preview: CES is also available as a private preview for Azure SQL Managed Instance and Fabric SQL database; you can request to join the private preview by signing up here: https://aka.ms/sql-ces-signup

We encourage you to try the feature out and start building real-time integrations on top of your existing data. We welcome your feedback: please share your experience through the Azure Feedback portal or support channels. The comments below on this blog post will also be monitored if you want to engage with us. Finally, the CES team can be reached via email: sqlcesfeedback [at] microsoft [dot] com.

### Useful resources

- Free Azure SQL Database
- Free Azure SQL Managed Instance
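For orientation, here is a hypothetical T-SQL sketch of what CES configuration looks like. The procedure names, parameter names, and destination type value below are assumptions drawn from the CES documentation linked above; verify them there before use, and note that the credential and Event Hubs names are placeholders.

```sql
-- HYPOTHETICAL SKETCH: verify procedure and parameter names against the CES
-- feature overview before running. Assumes a database scoped credential named
-- EventHubsCredential already exists for the Event Hubs namespace.
-- 1) Create a stream group that targets an Event Hubs destination over AMQP.
EXEC sys.sp_create_event_stream_group
    @stream_group_name = N'orders-stream',
    @destination_type = N'AzureEventHubsAmqp',
    @destination_location = N'contoso-ns.servicebus.windows.net/orders-hub',
    @destination_credential = N'EventHubsCredential';

-- 2) Add each table whose row-level changes should be streamed.
EXEC sys.sp_add_object_to_event_stream_group
    @stream_group_name = N'orders-stream',
    @object_name = N'dbo.Orders';
```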
## Convert geo-replicated databases to Hyperscale

Update: on 22 October 2025 we announced the General Availability of this improvement.

We're excited to introduce the next improvement in Hyperscale conversion: a new feature that allows customers to convert Azure SQL databases to Hyperscale while keeping active geo-replication or failover group configurations intact. This builds on our earlier improvements and directly addresses one of the most requested capabilities from customers. With this improvement, customers can now modernize their database architecture with Hyperscale while maintaining business continuity.

### Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers told us about the complex steps they needed to perform to convert a database to Hyperscale when the database is geo-replicated by active geo-replication or failover groups. Previously, converting to Hyperscale required tearing down geo-replication links and recreating them after the conversion. Now, that's no longer necessary. This improvement lets customers preserve their cross-region disaster recovery or read scale-out configurations while still converting to Hyperscale, which minimizes downtime and operational complexity.

This feature is especially valuable for applications that rely on failover group endpoints for connectivity. Before this improvement, if the application needed to stay available during conversion, the connection string had to be modified as part of the conversion, because the failover group and its endpoints had to be removed. With this new improvement, the conversion process is optimized for minimal disruption, with telemetry showing that the majority of cutovers complete in under one minute. Even with a geo-replication configuration in place, you can still choose between automatic and manual cutover modes, offering flexibility in scheduling the transition. Progress tracking is now more granular, giving customers better visibility into each stage of the conversion, including the conversion of the geo-secondary to Hyperscale.

### Customer feedback

Throughout the preview phase, we received overwhelmingly positive feedback from several customers about this improvement. Viktoriia Kuznetcova, Senior Automation Test Engineer at Nintex, says:

> We needed a low-downtime way to move our databases from the Premium tier to Azure SQL Database Hyperscale, and this new feature delivered perfectly, allowing us to complete the migration in our test environments safely and smoothly, even while the service remained under continuous load, without any issues and without needing to break the failover group. We're looking forward to the public release so we can use it in production, where Hyperscale's ability to scale storage both up and down will help us manage peak loads without overpaying for unused capacity.

### Get started

The good news is that there are no changes needed to the conversion process. The workflow automatically detects that a geo-secondary is present and converts it to Hyperscale. There are no new parameters, and the method remains the same as the existing conversion process for non-geo-replicated databases. All you need is to make sure that:

- You have only one geo-secondary replica, because Hyperscale doesn't support more than one geo-secondary replica.
- If a chained geo-replication configuration exists, it must be removed before starting the conversion to Hyperscale.
  Creating a geo-replica of a geo-replica (also known as "geo-replica chaining") isn't supported in Hyperscale.

Once the above requirements are satisfied, you can use any of the following methods to initiate the conversion process. Conversion to Hyperscale must be initiated on the primary geo-replica. The following table provides sample commands to convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore Hyperscale database with the manual cutover option.

| Method | Command |
| --- | --- |
| T-SQL | `ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8') WITH MANUAL_CUTOVER;` |
| PowerShell | `Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_Gen5_8" -ManualCutover` |
| Azure CLI | `az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_Gen5_8 --manual-cutover` |

Here are some notable details of this improvement:

- The geo-secondary database is automatically converted to Hyperscale with the same service level objective as the primary.
- All database configurations, such as maintenance window, zone resiliency, and backup redundancy, remain the same as before (that is, both geo-primary and geo-secondary inherit their own earlier configuration).
- A planned failover isn't possible while the conversion to Hyperscale is in progress. A forced failover is possible; however, depending on the state of the conversion when the forced failover occurs, the new geo-primary after failover might use either the Hyperscale service tier or its original service tier.
- If the geo-secondary database is in an elastic pool before conversion, it is taken out of the pool and might need to be added to a Hyperscale elastic pool separately after the conversion.

This feature has been fully deployed across all Azure regions. If you see the error "Update to service objective '<SLO name>' with source DB geo-replicated is not supported for entity '<Database Name>'" while converting a primary to Hyperscale, we would like to hear from you: send us an email at the address given in the next section. If you don't want to use this capability, make sure to remove any geo-replication configuration before converting your databases to Hyperscale.

### Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering simpler steps, less downtime, and a geo-secondary that stays available during the conversion. We encourage you to try this capability, provide your valuable feedback, and help us refine this feature. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
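If you start the conversion with the manual cutover option shown above, you can watch progress and then complete the move on your own schedule. A minimal T-SQL sketch follows; the monitoring query runs in the master database of the logical server, and the database name matches the sample above.

```sql
-- Track conversion progress (run while connected to the master database).
SELECT operation, state_desc, percent_complete, start_time, last_modify_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'WideWorldImporters'
ORDER BY start_time DESC;

-- Once the conversion reports it is ready for cutover, finish it when convenient.
ALTER DATABASE WideWorldImporters PERFORM_CUTOVER;
```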
## Announcing Public Preview: Auditing for Fabric SQL Database

We're excited to announce the public preview of Auditing for Fabric SQL Database, a powerful feature designed to help organizations strengthen security, ensure compliance, and gain deep operational insights into their data environments.

### Why Auditing Matters

Auditing is a cornerstone of data governance. With Fabric SQL Database auditing, you can now easily track and log database activities, answering critical questions like who accessed what data, when, and how. This supports compliance requirements (such as HIPAA and SOX), enables robust threat detection, and provides a foundation for forensic investigations.

### Key Highlights

- Flexible configuration: choose from the default "audit everything," preconfigured scenarios (like permission changes, login attempts, data reads/writes, schema changes), or define custom action groups and predicate filters for advanced needs.
- Seamless access: audit logs are stored in OneLake, making them easily accessible via T-SQL or OneLake Explorer.
- Role-based access control: configuration and log access are governed by both Fabric workspace roles and SQL-level permissions, ensuring only authorized users can view or manage audit data.
- Retention settings: customize how long audit logs are retained to meet your organization's policy.

### How It Works

Audit logs are written to a secure, read-only folder in OneLake and can be queried using the sys.fn_get_audit_file_v2 T-SQL function. Workspace and artifact IDs are used as identifiers, ensuring logs remain consistent even if databases move across logical servers. Access controls at both the workspace and SQL database level ensure only the right people can configure or view audit logs.

### Example Use Cases

- Compliance monitoring: validate a full audit trail for regulatory requirements.
- Security investigations: track specific events like permission changes or failed login attempts.
- Operational insights: focus on specific operations (e.g., DML only) or test retention policies.
- Role-based access: verify audit visibility across different user roles.

### Getting Started

You can configure auditing directly from the Manage SQL Auditing blade in the Fabric Portal. Choose your preferred scenario, set retention, and (optionally) define custom filters, all through a simple, intuitive interface.

Learn more about auditing for Fabric SQL database here. A Data Exposed session with a demo is available here.
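As a rough T-SQL sketch of reading those logs: the OneLake file pattern below is illustrative (substitute your workspace and artifact IDs as described above), while sys.fn_get_audit_file_v2 and its time-range arguments follow the documented function shape.

```sql
-- Read audit records for a time window from the OneLake audit folder.
-- The file pattern is illustrative; plug in your workspace and artifact IDs.
SELECT event_time,
       action_id,
       server_principal_name,
       database_principal_name,
       statement
FROM sys.fn_get_audit_file_v2(
        'https://onelake.dfs.fabric.microsoft.com/<workspace-id>/<artifact-id>/Files/*.xel',
        DEFAULT,                -- initial_file_name
        DEFAULT,                -- audit_record_offset
        '2025-11-01T00:00:00',  -- start_time
        '2025-11-30T23:59:59')  -- end_time
ORDER BY event_time DESC;
```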
## Generally Available: Azure SQL Managed Instance Next-gen General Purpose

### Overview

Next-gen General Purpose is the evolution of the General Purpose service tier that brings significantly improved performance and scalability to power up your existing Azure SQL Managed Instance fleet and helps you bring more mission-critical SQL workloads to Azure. We are happy to announce that Next-gen General Purpose is now Generally Available (GA), delivering even more scalability, flexibility, and value for organizations looking to modernize their data platform in a cost-effective way.

The new #SQLMINextGen General Purpose tier delivers a built-in performance upgrade available to all customers at no extra cost. If you are an existing SQL MI General Purpose user, you get faster I/O, higher database density, and expanded storage, automatically.

### Summary Table: Key Improvements

| Capability | Current GP | Next-gen GP | Improvement |
| --- | --- | --- | --- |
| Average I/O latency | 5-10 ms | 3-4 ms | 2x lower |
| Max data IOPS | 30-50k | 80k | 60% better |
| Max storage | 16 TB | 32 TB | 2x better |
| Max databases/instance | 100 | 500 | 5x better |
| Max vCores | 80 | 128 | 40% better |

But that's just the beginning. The new configuration sliders for additional IOPS and memory provide enhanced flexibility to tailor performance to your requirements. Whether you require more resources for your application or seek to optimize resource utilization, you can adjust your instance settings to maximize efficiency and output. This release isn't just about speed: it's about giving you improved performance where it matters, and mechanisms to go further when you need them.

Customer story: a recent customer case highlights how Hexure reduced processing time by up to 97.2% using Azure SQL Managed Instance on Next-gen General Purpose.

### What's new in Next-gen General Purpose (Nov 2025)?

#### 1. Improved baseline performance with the latest storage tech

The SQL Managed Instance General Purpose tier is designed with full separation of the compute and storage layers. The classic GP version uses premium page blobs for the storage layer, while the next-generation GP tier has transitioned to Azure's latest storage solution, Elastic SAN. Azure Elastic SAN is a cloud-native storage service that offers high performance and excellent scalability, making it a perfect fit for the storage layer of a data-intensive PaaS service like Azure SQL Managed Instance.

Simplified performance management: with ESAN as the storage layer, the performance quotas for the Next-gen General Purpose tier are no longer enforced per database file. The entire performance quota for the instance is shared across all the database files, making performance management much easier (one fewer thing to worry about). This adjustment brings the General Purpose tier into alignment with the Business Critical service tier experience.

#### 2. Resource flexibility and cost optimization

The GA of Next-gen General Purpose comes together with the GA of a transformative memory slider, enabling up to 49 memory configurations per instance. This lets you right-size workloads for both performance and cost. Memory is billed only for the additional amount beyond the default allocation. Users can independently configure vCores, memory, and IOPS for optimal efficiency.

To learn more about the new option for configuring additional memory, check the article Unlocking More Power with Flexible Memory in Azure SQL Managed Instance.
#### 3. Enhanced resource elasticity through decoupled compute and storage scaling operations

With Next-gen GP, both storage and IOPS can be resized independently of the compute infrastructure, and these changes now typically finish within five minutes, a process known as an in-place upgrade. There are three distinct types of storage upgrade experience, depending on the kind of storage upgrade performed and whether failover occurs:

- In-place update: same storage (no data copy), same compute (no failover)
- Storage re-attach: same storage (no data copy), changed compute (with failover)
- Data copy: changed storage (data copy), changed compute (with failover)

The following matrix describes the user experience for each management operation:

| Operation | Data copying | Failover | Storage upgrade type |
| --- | --- | --- | --- |
| IOPS scaling | No | No | In-place |
| Storage scaling | No (*) | No | In-place |
| vCores scaling | No | Yes (**) | Re-attach |
| Memory scaling | No | Yes (**) | Re-attach |
| Maintenance window change | No | Yes (**) | Re-attach |
| Hardware change | No | Yes (**) | Re-attach |
| Update policy change | Yes | Yes | Data copy |

(*) If the scale-down is greater than 5.5 TB, seeding is performed.

(**) For update operations that don't require seeding and aren't completed in place (examples are scaling vCores, scaling memory, or changing the hardware or maintenance window), the failover duration of databases on the Next-gen General Purpose service tier scales with the number of databases, up to 10 minutes. While the instance becomes available after 2 minutes, some databases might become available after a delay. Failover duration is measured from the moment the first database goes offline until the moment the last database comes online.

Furthermore, resizing vCores and memory is now 50% faster following the introduction of the Faster scaling operations release. Whether you have end-of-month peak periods or ups and downs of usage between weekdays and the weekend, fast and reliable management operations let you run multiple configurations on your instance and respond to peak usage periods in a cost-effective way.

#### 4. Reserved instance (RI) pricing

With Azure Reservations, you can commit to using Azure SQL resources for either one or three years, which lets you benefit from substantial discounts on compute costs. When purchasing a reservation, you choose the Azure region, deployment type, performance tier, and reservation term. Reservations are only available for products that have reached general availability (GA), and with this update, next-generation GP instances now qualify as well. Even better, classic and next-gen GP share the same SKU, just with different remote storage types. This means any reservations you've purchased automatically apply to Next-gen GP, whether you're upgrading an existing classic GP instance or creating a new one.

### What's Next?

The product group has received considerable positive feedback and welcomes continued input. The initial release does not include zonal redundancy; however, efforts are underway to address this limitation. Next-generation General Purpose represents the future of the service tier, and all existing classic GP instances will be upgraded accordingly. Once upgrade plans are finalized, we will communicate the timeline promptly.
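To make the in-place scaling in section 3 concrete, here is a minimal Azure CLI sketch. The resource names are placeholders; `--storage` is the long-standing size parameter on `az sql mi update`, while the companion option for IOPS is deliberately not shown here, so check the Next-gen GP documentation for its exact name.

```bash
# Resize remote storage on a Next-gen General Purpose instance.
# On Next-gen GP this is an in-place operation: no data copy, no failover.
az sql mi update \
  --resource-group ResourceGroup01 \
  --name contoso-mi \
  --storage 4096GB
```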
### Conclusion

Now in GA, Next-gen General Purpose sets a new standard for cloud database performance and flexibility. Whether you're modernizing legacy applications, consolidating workloads, or building for the future, these enhancements put more power, scalability, and control in your hands, without breaking the bank.

If you haven't already, try out the Next-gen General Purpose capabilities with the Azure SQL Managed Instance free offer. For users operating SQL Managed Instance on the General Purpose tier, we recommend upgrading existing instances to take advantage of the next-gen upgrade, for free. Welcome to #SQLMINextGen. Boosted by default. Tuned by you.

### Learn more

- What is Azure SQL Managed Instance
- Try Azure SQL Managed Instance for free
- Next-gen General Purpose: official documentation
- Analyzing the Economic Benefits of Microsoft Azure SQL Managed Instance
- How 3 customers are driving change with migration to Azure SQL
- Accelerate SQL Server Migration to Azure with Azure Arc
## Introducing Azure SQL Managed Instance Next-gen GP

The next generation of the General Purpose service tier for Azure SQL Managed Instance is a major upgrade that considerably improves the storage performance of your instances while keeping the same price as the current General Purpose tier. Key improvements in the next generation of General Purpose storage include support for 32 TB of storage, support for 500 databases, lower storage latency, improved storage performance, and the ability to configure the amount of IOPS (I/O operations per second).
## Unlocking More Power with Flexible Memory in Azure SQL Managed Instance

Service update (Nov 18th, 2025): additional memory in the Next-gen General Purpose service tier is now generally available!

As data workloads grow in complexity and scale, so does the need for more adaptable and performant database infrastructure. That's why we're excited to introduce a new capability in Azure SQL Managed Instance: Flexible Memory, now generally available.

### What Is Flexible Memory?

Flexible Memory allows you to customize the memory-to-vCore ratio in your SQL Managed Instance, giving you the ability to fine-tune performance and cost based on your workload needs. This feature is part of the next-generation General Purpose tier, and it introduces a memory slider that lets you scale memory independently within defined limits. The memory slider is enabled only for premium-series hardware.

### Why It Matters

Traditionally, memory allocation in SQL Managed Instance was fixed per vCore. With Flexible Memory, you can now:

- Increase memory beyond the default allocation
- Optimize for memory-intensive workloads without overprovisioning compute
- Pay only for what you use: additional memory is billed per GB/hour

This flexibility is especially valuable for scenarios like analytics, caching, or workloads with large buffer pool requirements.

### How It Works

Memory scales based on the number of vCores and the selected hardware tier:

| Hardware tier | Memory per vCore (GB) |
| --- | --- |
| Standard-series | 5.1 |
| Premium-series | 7-12 |
| Premium-series (memory-optimized) | Up to 13.6 |

You can select from predefined memory ratios (e.g., 7, 8, 10, or 12 GB per vCore) depending on your configuration. For example, a 10-vCore instance can be configured with 70 GB to 120 GB of memory.

One of the most powerful aspects of the Flexible Memory feature is the ability to select from a range of memory-to-vCore ratios. These "click stops" allow you to tailor memory allocation precisely to your workload's needs, whether you're optimizing for performance, cost, or both. The table below outlines the available configurations for premium-series hardware, showing how memory scales across 16 vCore sizes:

| vCores | Available ratios | Total memory options (GB) |
| --- | --- | --- |
| 4 | 7, 8, 10, 12 | 28, 32, 40, 48 |
| 6 | 7, 8, 10, 12 | 42, 48, 60, 72 |
| 8 | 7, 8, 10, 12 | 56, 64, 80, 96 |
| 10 | 7, 8, 10, 12 | 70, 80, 100, 120 |
| 12 | 7, 8, 10, 12 | 84, 96, 120, 144 |
| 16 | 7, 8, 10, 12 | 112, 128, 160, 192 |
| 20 | 7, 8, 10, 12 | 140, 160, 200, 240 |
| 24 | 7, 8, 10, 12 | 168, 192, 240, 288 |
| 32 | 7, 8, 10, 12 | 224, 256, 320, 384 |
| 40 | 7, 8, 10, 12 | 280, 320, 400, 480 |
| 48 | 7, 8, 10 | 336, 384, 480 |
| 56 | 7, 8 | 392, 448 |
| 64 | 7 | 448 |
| 80 | 7 | 560 |
| 96 | 5.83 | 560 |
| 128 | 4.38 | 560 |

### Pricing model

Flexible Memory introduces a usage-based pricing model that ensures you only pay for the memory you actually consume beyond the default allocation. This model is designed to give you the flexibility to scale memory without overcommitting on compute resources, and without paying for unused capacity.

How it works:

- Default memory is calculated based on the minimum memory-to-vCore ratio.
- Billable memory is the difference between your configured memory and the default allocation.
- Billing is per GB/hour, so you're charged only for the additional memory used over time.

Let's take the example of a SQL Managed Instance running on premium-series hardware with 4 vCores and 40 GB of memory:
| Configuration | Value |
| --- | --- |
| vCores | 4 |
| Configured memory | 40 GB |
| Default memory (4 × 7 GB) | 28 GB |
| Billable memory | 12 GB |
| Billing unit | Per GB/hour |
| Charged for | 12 GB of additional memory |

### Management Experience

Changing memory behaves just like changing vCores:

- Seamless updates via the Azure portal, PowerShell, SDK, or API
- Failover group guidance remains the same: upgrade the secondary first, and configurations between primary and secondary should match

Adjusting memory is a fully online operation, with a short failover at the very end. The operation goes through the process of allocating new compute with the specified configuration, which takes approximately 60 minutes with the new, faster management operations.

### API Support

Flexible Memory is fully supported via the API (the minimum API version that can be used is 2024-08-01) and the Azure portal. Here's a sample request body snippet to configure memory:

```json
{
  "properties": {
    "memorySizeInGB": 96
  }
}
```

### Portal support

[Screenshot: configuring additional memory in the Azure portal]

### Summary

The new Flexible Memory capability in Azure SQL Managed Instance empowers you to scale memory independently of compute, offering greater control over performance and cost. With customizable memory-to-vCore ratios, a transparent pricing model, and seamless integration into existing management workflows, this feature is ideal for memory-intensive workloads and dynamic scaling scenarios. Whether you're optimizing for analytics, caching, or simply want more headroom without overprovisioning vCores, Flexible Memory gives you the tools to do it, efficiently and affordably.

### Next Steps

- Review the documentation: explore detailed configuration options, supported tiers, and API usage.
  - Additional memory
  - Management operations overview
  - Management operations duration
- Test your workloads: use the memory slider in the Azure portal, PowerShell, SDK, or API to experiment with different configurations.

### Learn more

- What is Azure SQL Managed Instance
- Try Azure SQL Managed Instance for free
- Next-gen General Purpose: official documentation
- Analyzing the Economic Benefits of Microsoft Azure SQL Managed Instance
- How 3 customers are driving change with migration to Azure SQL
- Accelerate SQL Server Migration to Azure with Azure Arc
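For context, the request body shown in the API section above expands to a call like the following minimal sketch, assuming the standard managed instance resource path and the 2024-08-01 API version noted there; the subscription, resource group, and instance names are placeholders.

```http
PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Sql/managedInstances/{instance-name}?api-version=2024-08-01
Content-Type: application/json
Authorization: Bearer {token}

{
  "properties": {
    "memorySizeInGB": 96
  }
}
```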
## General Availability Announcement: Regex Support in SQL Server 2025 & Azure SQL

We're excited to announce the General Availability (GA) of native Regex support in SQL Server 2025 and Azure SQL, a long-awaited capability that brings powerful pattern matching directly into T-SQL. This release marks a significant milestone in modernizing string operations and enabling advanced text processing scenarios natively within the database engine.

### What is Regex?

The other day, while building LEGO with my 3-year-old (an activity that's equal parts joy and chaos) I spent minutes digging for one tiny piece and thought, "If only Regex worked on LEGO." That moment of playful frustration turned into a perfect metaphor.

Think of your LEGO box as a pile of data: a colorful jumble of tiny pieces. Now imagine trying to find every little brick from a specific LEGO set your kid mixed into the pile. That's tricky; you'd have to sift through each piece one by one. But what if you had a smart filter that instantly found exactly those pieces? That's what Regex (short for Regular Expressions) does for your data. It's a powerful pattern-matching tool that helps you search, extract, and transform text with precision. With Regex now natively supported in SQL Server 2025 and Azure SQL, this capability is built directly into T-SQL: no external languages or workarounds required.

### What can Regex help you do?

Regex can help you tackle a wide range of data challenges, including:

- Enhancing data quality and accuracy by validating and correcting formats like phone numbers, email addresses, zip codes, and more.
- Extracting valuable insights by identifying and grouping specific text patterns such as keywords, hashtags, or mentions.
- Transforming and standardizing data by replacing, splitting, or joining text patterns, useful for handling abbreviations, acronyms, or synonyms.
- Cleaning and optimizing data by removing unwanted patterns like extra whitespace, punctuation, or duplicates.

### Meet the new Regex functions in T-SQL

SQL Server 2025 introduces seven new T-SQL Regex functions, grouped into two categories: scalar functions (return a value per row) and table-valued functions (TVFs; return a set of rows). Here's a quick overview:

| Function | Type | Description |
| --- | --- | --- |
| REGEXP_LIKE | Scalar | Returns TRUE if the input string matches the Regex pattern |
| REGEXP_COUNT | Scalar | Counts the number of times a pattern occurs in a string |
| REGEXP_INSTR | Scalar | Returns the position of a pattern match within a string |
| REGEXP_REPLACE | Scalar | Replaces substrings that match a pattern with a replacement string |
| REGEXP_SUBSTR | Scalar | Extracts a substring that matches a pattern |
| REGEXP_MATCHES | TVF | Returns a table of all matches, including substrings and their positions |
| REGEXP_SPLIT_TO_TABLE | TVF | Splits a string into rows using a Regex delimiter |

These functions follow the POSIX standard and support most of the PCRE/PCRE2 flavor of regular expression syntax, making them compatible with most modern Regex engines and tools. They support common features like:

- Character classes (\d, \w, etc.)
- Quantifiers (+, *, {n})
- Alternation (|)
- Capture groups ((...))

You can also use Regex flags to modify behavior:

- 'i': case-insensitive matching
- 'm': multi-line mode (^ and $ match line boundaries)
- 's': dot matches newline
- 'c': case-sensitive matching (default)

### Examples: Regex in Action

Let's explore how these functions solve tricky real-world data tasks that were hard to do in earlier SQL versions.

#### REGEXP_LIKE: Data Validation — keeping data in shape

Validating formats like email addresses or phone numbers used to require multiple functions or external tools.
With REGEXP_LIKE, it's now a concise query. For example, you can check whether an email contains valid characters before and after the @, followed by a domain with at least two letters like .com, .org, or .co.in.

```sql
SELECT [Name],
       Email,
       CASE WHEN REGEXP_LIKE(Email, '^[A-Za-z0-9._+]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')
            THEN 'Valid Email'
            ELSE 'Invalid Email'
       END AS IsValidEmail
FROM (VALUES ('John Doe', 'john@contoso.com'),
             ('Alice Smith', 'alice@fabrikam.com'),
             ('Bob Johnson', 'bob@fabrikam.net'),
             ('Charlie Brown', 'charlie@contoso.co.in'),
             ('Eve Jones', 'eve@@contoso.com')) AS e(Name, Email);
```

We can further use REGEXP_LIKE in CHECK constraints to enforce these rules at the column level (so no invalid format ever gets into the table). For instance:

```sql
CREATE TABLE Employees (
    ...,
    Email VARCHAR(320) CHECK (REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')),
    Phone VARCHAR(20) CHECK (REGEXP_LIKE(Phone, '^(\d{3})-(\d{3})-(\d{4})$'))
);
```

This level of enforcement significantly enhances data integrity by ensuring that only correctly formatted values are accepted into the database.

#### REGEXP_COUNT: Count JSON object keys

Count how many top-level keys exist in a JSON string, no JSON parser needed!

```sql
SELECT JsonData,
       REGEXP_COUNT(JsonData, '"[^"]+"\s*:', 1, 'i') AS NumKeys
FROM (VALUES ('{"name":"Abhiman","role":"PM","location":"Bengaluru"}'),
             ('{"skills":["SQL","T-SQL","Regex"],"level":"Advanced"}'),
             ('{"project":{"name":"Regex GA","status":"Live"},"team":["Tejas","UC"]}'),
             ('{"empty":{}}'),
             ('{}')) AS t(JsonData);
```

#### REGEXP_INSTR: Locate patterns in logs

Find the position of the first error code (ERR-XXXX) in log messages, even when the pattern appears multiple times or in varying locations.

```sql
SELECT LogMessage,
       REGEXP_INSTR(LogMessage, 'ERR-\d{4}', 1, 1, 0, 'i') AS ErrorCodePosition
FROM (VALUES ('System initialized. ERR-1001 occurred during startup.'),
             ('Warning: Disk space low. ERR-2048. Retry failed. ERR-2049.'),
             ('No errors found.'),
             ('ERR-0001: Critical failure. ERR-0002: Recovery started.'),
             ('Startup complete. Monitoring active.')) AS t(LogMessage);
```

#### REGEXP_REPLACE: Redact sensitive data

Mask SSNs and credit card numbers in logs or exports, all with a single, secure query.

```sql
SELECT sensitive_info,
       REGEXP_REPLACE(sensitive_info, '(\d{3}-\d{2}-\d{4}|\d{4}-\d{4}-\d{4}-\d{4})', '***-**-****') AS redacted_info
FROM (VALUES ('John Doe SSN: 123-45-6789'),
             ('Credit Card: 9876-5432-1098-7654'),
             ('SSN: 000-00-0000 and Card: 1111-2222-3333-4444'),
             ('No sensitive info here'),
             ('Multiple SSNs: 111-22-3333, 222-33-4444'),
             ('Card: 1234-5678-9012-3456, SSN: 999-88-7777')) AS t(sensitive_info);
```

#### REGEXP_SUBSTR: Extract and count email domains

Extract domains from email addresses and group users by domain.

```sql
SELECT REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1) AS Domain,
       COUNT(*) AS NumUsers
FROM (VALUES ('Alice', 'alice@contoso.com'),
             ('Bob', 'bob@fabrikam.co.in'),
             ('Charlie', 'charlie@example.com'),
             ('Diana', 'diana@college.edu'),
             ('Eve', 'eve@contoso.com'),
             ('Frank', 'frank@fabrikam.co.in'),
             ('Grace', 'grace@example.net')) AS e(Name, Email)
WHERE REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$')
GROUP BY REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1);
```

#### REGEXP_MATCHES: Extract multiple emails from text

Extract all email addresses from free-form text like comments or logs, returning each match as a separate row for easy parsing or analysis.
```sql
SELECT *
FROM REGEXP_MATCHES('Contact us at support@example.com or sales@example.com',
                    '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}');
```

This query identifies and returns both email addresses found in the string; no need for loops, manual parsing, or external scripting.

#### REGEXP_SPLIT_TO_TABLE: Break down structured text

Split a string into rows using a Regex delimiter, ideal for parsing logs, config entries, or form data.

```sql
SELECT *
FROM REGEXP_SPLIT_TO_TABLE('Name: John Doe; Email: john.doe@example.com; Phone: 123-456-7890', '; ');
```

This query breaks the input string into rows for each field, making it easier to parse and process the data, especially when dealing with inconsistent or custom delimiters.

To explore more examples, syntax options, and usage details, head over to the regular expressions functions documentation: https://learn.microsoft.com/en-us/sql/t-sql/functions/regular-expressions-functions-transact-sql?view=sql-server-ver17

### Conclusion

The addition of Regex functionality in SQL Server 2025 and Azure SQL is a major leap forward for developers and DBAs. It eliminates the need for external libraries, CLR integration, or complex workarounds for text processing. With Regex now built into T-SQL, you can:

- Validate and enforce data formats
- Sanitize and transform sensitive data
- Search logs for complex patterns
- Extract and split structured content

And this is just the beginning. Regex opens the door to a whole new level of data quality, text analytics, and developer productivity, all within the database engine. So go ahead and Regex away! Your feedback and partnership continue to drive innovation in Azure SQL and SQL Server: thank you for being part of it.
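One pattern worth adding: because REGEXP_MATCHES and REGEXP_SPLIT_TO_TABLE are table-valued, they compose naturally with CROSS APPLY when the input comes from table rows rather than a literal. A small sketch, where the output column name match_value follows the documentation linked above:

```sql
-- Extract every email address from each row of free-form comment text.
SELECT c.CommentId,
       m.match_value AS Email
FROM (VALUES (1, 'Ping support@example.com or sales@example.com for help'),
             (2, 'No contact details in this row'),
             (3, 'Escalate to admin@contoso.com')) AS c(CommentId, CommentText)
CROSS APPLY REGEXP_MATCHES(c.CommentText,
        '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}') AS m;
```

Each matching row fans out into one row per email found, which makes it easy to feed the results into joins, aggregations, or de-duplication downstream.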