Convert geo-replicated databases to Hyperscale
Update: On 22 October 2025 we announced the general availability of this improvement.

We’re excited to introduce the next improvement in Hyperscale conversion: customers can now convert Azure SQL databases to Hyperscale while keeping active geo-replication or failover group configurations intact. This builds on our earlier improvements and directly addresses one of the most requested capabilities from customers. With this improvement, customers can modernize their database architecture with Hyperscale while maintaining business continuity.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers told us about the complex steps they needed to perform to convert a database to Hyperscale when the database was geo-replicated through active geo-replication or failover groups. Previously, converting to Hyperscale required tearing down geo-replication links and recreating them after the conversion. Now, that’s no longer necessary. This improvement preserves your cross-region disaster recovery or read scale-out configuration during the conversion to Hyperscale, minimizing downtime and operational complexity.

This feature is especially valuable for applications that rely on failover group endpoints for connectivity. Previously, if the application needed to remain available during conversion, its connection strings had to be modified as part of the conversion because the failover group and its endpoints had to be removed. With this improvement, the conversion process is optimized for minimal disruption, with telemetry showing that the majority of cutovers complete in under one minute. Even with a geo-replication configuration in place, you can still choose between automatic and manual cutover modes, offering flexibility in scheduling the transition.
Progress tracking is now more granular, giving customers better visibility into each stage of the conversion, including the conversion of the geo-secondary to Hyperscale.

Customer feedback

Throughout the preview phase, we received overwhelmingly positive feedback from customers about this improvement. Viktoriia Kuznetcova, Senior Automation Test Engineer at Nintex, says:

"We needed a low-downtime way to move our databases from the Premium tier to Azure SQL Database Hyperscale, and this new feature delivered perfectly, allowing us to complete the migration in our test environments safely and smoothly, even while the service remained under continuous load, without any issues and without needing to break the failover group. We're looking forward to the public release so we can use it in production, where Hyperscale’s ability to scale storage both up and down will help us manage peak loads without overpaying for unused capacity."

Get started

The good news is that there are no changes needed to the conversion process. The workflow automatically detects that a geo-secondary is present and converts it to Hyperscale. There are no new parameters, and the method is the same as the existing conversion process for non-geo-replicated databases. All you need to do is make sure that:

- You have only one geo-secondary replica, because Hyperscale doesn't support more than one geo-secondary replica.
- No chained geo-replication configuration exists. Creating a geo-replica of a geo-replica (also known as "geo-replica chaining") isn't supported in Hyperscale, so any such configuration must be removed before starting the conversion.

Once these requirements are satisfied, you can use any of the following methods to initiate the conversion. The conversion to Hyperscale must be initiated from the primary geo-replica.
The following commands convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore Hyperscale database with the manual cutover option.

T-SQL:

```sql
ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8') WITH MANUAL_CUTOVER;
```

PowerShell:

```powershell
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_Gen5_8" -ManualCutover
```

Azure CLI:

```shell
az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_Gen5_8 --manual-cutover
```

Here are some notable details of this improvement:

- The geo-secondary database is automatically converted to Hyperscale with the same service level objective as the primary.
- All database configurations such as the maintenance window, zone redundancy, and backup storage redundancy remain as they were; both geo-primary and geo-secondary inherit their own earlier configurations.
- A planned failover isn't possible while the conversion to Hyperscale is in progress. A forced failover is possible; however, depending on the state of the conversion when the forced failover occurs, the new geo-primary after failover might use either the Hyperscale service tier or its original service tier.
- If the geo-secondary database is in an elastic pool before conversion, it is taken out of the pool and might need to be added back to a Hyperscale elastic pool separately after the conversion.
- This feature has been fully deployed across all Azure regions. If you see the error "Update to service objective '<SLO name>' with source DB geo-replicated is not supported for entity '<Database Name>'" while converting the primary to Hyperscale, we would like to hear from you. Send us an email at the address given in the next section.
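While the conversion runs, you can track its progress, and with the manual cutover option you decide when the final transition happens. A hedged sketch, reusing the WideWorldImporters name from the example above; the DMV query is run in the master database:

```sql
-- Track conversion progress (run in the master database).
-- sys.dm_operation_status reports one row per management operation,
-- including the Hyperscale conversion and its percent_complete.
SELECT operation, state_desc, percent_complete, start_time, last_modify_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'WideWorldImporters'
ORDER BY start_time DESC;

-- When a MANUAL_CUTOVER conversion reports it is ready for cutover,
-- complete it at a time of your choosing:
ALTER DATABASE WideWorldImporters PERFORM_CUTOVER;
```

Until the cutover is performed, the database keeps serving its workload on the source service tier.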
If you don’t want to use this capability, make sure to remove any geo-replication configuration before converting your databases to Hyperscale.

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering simpler steps, less downtime, and a geo-secondary that remains available during the conversion. We encourage you to try this capability and provide your valuable feedback to help us refine this feature. You can contact us by commenting on this blog post and we’ll be happy to get back to you. Alternatively, you can email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

General Availability Announcement: Regex Support in SQL Server 2025 & Azure SQL
We’re excited to announce the General Availability (GA) of native Regex support in SQL Server 2025 and Azure SQL — a long-awaited capability that brings powerful pattern matching directly into T-SQL. This release marks a significant milestone in modernizing string operations and enabling advanced text processing scenarios natively within the database engine.

What is Regex?

The other day, while building LEGO with my 3-year-old — an activity that’s equal parts joy and chaos — I spent minutes digging for one tiny piece and thought, “If only Regex worked on LEGO.” That moment of playful frustration turned into a perfect metaphor. Think of your LEGO box as a pile of data — a colorful jumble of tiny pieces. Now imagine trying to find every little brick from a specific LEGO set your kid mixed into the pile. That’s tricky — you’d have to sift through each piece one by one. But what if you had a smart filter that instantly found exactly those pieces? That’s what Regex (short for Regular Expressions) does for your data. It’s a powerful pattern-matching tool that helps you search, extract, and transform text with precision. With Regex now natively supported in SQL Server 2025 and Azure SQL, this capability is built directly into T-SQL — no external languages or workarounds required.

What can Regex help you do?

Regex can help you tackle a wide range of data challenges, including:

- Enhancing data quality and accuracy by validating and correcting formats like phone numbers, email addresses, zip codes, and more.
- Extracting valuable insights by identifying and grouping specific text patterns such as keywords, hashtags, or mentions.
- Transforming and standardizing data by replacing, splitting, or joining text patterns — useful for handling abbreviations, acronyms, or synonyms.
- Cleaning and optimizing data by removing unwanted patterns like extra whitespace, punctuation, or duplicates.
Meet the new Regex functions in T-SQL

SQL Server 2025 introduces seven new T-SQL Regex functions, grouped into two categories: scalar functions (return a value per row) and table-valued functions (TVFs, return a set of rows). Here’s a quick overview:

| Function | Type | Description |
|---|---|---|
| REGEXP_LIKE | Scalar | Returns TRUE if the input string matches the Regex pattern |
| REGEXP_COUNT | Scalar | Counts the number of times a pattern occurs in a string |
| REGEXP_INSTR | Scalar | Returns the position of a pattern match within a string |
| REGEXP_REPLACE | Scalar | Replaces substrings that match a pattern with a replacement string |
| REGEXP_SUBSTR | Scalar | Extracts a substring that matches a pattern |
| REGEXP_MATCHES | TVF | Returns a table of all matches, including substrings and their positions |
| REGEXP_SPLIT_TO_TABLE | TVF | Splits a string into rows using a Regex delimiter |

These functions follow the POSIX standard and support most of the PCRE/PCRE2 flavor of regular expression syntax, making them compatible with most modern Regex engines and tools. They support common features like:

- Character classes (\d, \w, etc.)
- Quantifiers (+, *, {n})
- Alternation (|)
- Capture groups ((...))

You can also use Regex flags to modify behavior:

- 'i' – Case-insensitive matching
- 'm' – Multi-line mode (^ and $ match line boundaries)
- 's' – Dot matches newline
- 'c' – Case-sensitive matching (default)

Examples: Regex in Action

Let’s explore how these functions solve tricky real-world data tasks that were hard to do in earlier SQL versions.

REGEXP_LIKE: Data Validation — Keeping data in shape

Validating formats like email addresses or phone numbers used to require multiple functions or external tools. With REGEXP_LIKE, it’s now a concise query. For example, you can check whether an email contains valid characters before and after the @, followed by a domain with at least two letters like .com, .org, or .co.in.
```sql
SELECT [Name],
       Email,
       CASE WHEN REGEXP_LIKE(Email, '^[A-Za-z0-9._+]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')
            THEN 'Valid Email'
            ELSE 'Invalid Email'
       END AS IsValidEmail
FROM (VALUES
    ('John Doe', 'john@contoso.com'),
    ('Alice Smith', 'alice@fabrikam.com'),
    ('Bob Johnson', 'bob@fabrikam.net'),
    ('Charlie Brown', 'charlie@contoso.co.in'),
    ('Eve Jones', 'eve@@contoso.com')
) AS e(Name, Email);
```

We can further use REGEXP_LIKE in CHECK constraints to enforce these rules at the column level, so no invalid format ever gets into the table. For instance:

```sql
CREATE TABLE Employees (
    ...,
    Email VARCHAR(320) CHECK (REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')),
    Phone VARCHAR(20) CHECK (REGEXP_LIKE(Phone, '^(\d{3})-(\d{3})-(\d{4})$'))
);
```

This level of enforcement significantly enhances data integrity by ensuring that only correctly formatted values are accepted into the database.

REGEXP_COUNT: Count JSON object keys

Count how many top-level keys exist in a JSON string — no JSON parser needed!

```sql
SELECT JsonData,
       REGEXP_COUNT(JsonData, '"[^"]+"\s*:', 1, 'i') AS NumKeys
FROM (VALUES
    ('{"name":"Abhiman","role":"PM","location":"Bengaluru"}'),
    ('{"skills":["SQL","T-SQL","Regex"],"level":"Advanced"}'),
    ('{"project":{"name":"Regex GA","status":"Live"},"team":["Tejas","UC"]}'),
    ('{"empty":{}}'),
    ('{}')
) AS t(JsonData);
```

REGEXP_INSTR: Locate patterns in logs

Find the position of the first error code (ERR-XXXX) in log messages — even when the pattern appears multiple times or in varying locations.

```sql
SELECT LogMessage,
       REGEXP_INSTR(LogMessage, 'ERR-\d{4}', 1, 1, 0, 'i') AS ErrorCodePosition
FROM (VALUES
    ('System initialized. ERR-1001 occurred during startup.'),
    ('Warning: Disk space low. ERR-2048. Retry failed. ERR-2049.'),
    ('No errors found.'),
    ('ERR-0001: Critical failure. ERR-0002: Recovery started.'),
    ('Startup complete. Monitoring active.')
) AS t(LogMessage);
```

REGEXP_REPLACE: Redact sensitive data

Mask SSNs and credit card numbers in logs or exports — all with a single, secure query.

```sql
SELECT sensitive_info,
       REGEXP_REPLACE(sensitive_info, '(\d{3}-\d{2}-\d{4}|\d{4}-\d{4}-\d{4}-\d{4})', '***-**-****') AS redacted_info
FROM (VALUES
    ('John Doe SSN: 123-45-6789'),
    ('Credit Card: 9876-5432-1098-7654'),
    ('SSN: 000-00-0000 and Card: 1111-2222-3333-4444'),
    ('No sensitive info here'),
    ('Multiple SSNs: 111-22-3333, 222-33-4444'),
    ('Card: 1234-5678-9012-3456, SSN: 999-88-7777')
) AS t(sensitive_info);
```

REGEXP_SUBSTR: Extract and count email domains

Extract domains from email addresses and group users by domain.

```sql
SELECT REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1) AS Domain,
       COUNT(*) AS NumUsers
FROM (VALUES
    ('Alice', 'alice@contoso.com'),
    ('Bob', 'bob@fabrikam.co.in'),
    ('Charlie', 'charlie@example.com'),
    ('Diana', 'diana@college.edu'),
    ('Eve', 'eve@contoso.com'),
    ('Frank', 'frank@fabrikam.co.in'),
    ('Grace', 'grace@example.net')
) AS e(Name, Email)
WHERE REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$')
GROUP BY REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1);
```

REGEXP_MATCHES: Extract multiple emails from text

Extract all email addresses from free-form text like comments or logs — returning each match as a separate row for easy parsing or analysis.

```sql
SELECT *
FROM REGEXP_MATCHES(
    'Contact us at support@example.com or sales@example.com',
    '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}'
);
```

This query identifies and returns both email addresses found in the string — no need for loops, manual parsing, or external scripting.

REGEXP_SPLIT_TO_TABLE: Break down structured text

Split a string into rows using a Regex delimiter — ideal for parsing logs, config entries, or form data.
```sql
SELECT *
FROM REGEXP_SPLIT_TO_TABLE(
    'Name: John Doe; Email: john.doe@example.com; Phone: 123-456-7890',
    '; '
);
```

This query breaks the input string into rows for each field, making it easier to parse and process the data — especially when dealing with inconsistent or custom delimiters. To explore more examples, syntax options, and usage details, head over to the regular expressions functions documentation: https://learn.microsoft.com/en-us/sql/t-sql/functions/regular-expressions-functions-transact-sql?view=sql-server-ver17

Conclusion

The addition of Regex functionality in SQL Server 2025 and Azure SQL is a major leap forward for developers and DBAs. It eliminates the need for external libraries, CLR integration, or complex workarounds for text processing. With Regex now built into T-SQL, you can:

- Validate and enforce data formats
- Sanitize and transform sensitive data
- Search logs for complex patterns
- Extract and split structured content

And this is just the beginning. Regex opens the door to a whole new level of data quality, text analytics, and developer productivity — all within the database engine. So go ahead and Regex away! Your feedback and partnership continue to drive innovation in Azure SQL and SQL Server — thank you for being part of it.

🔐 Public Preview: Backup Immutability for Azure SQL Database LTR Backups
The Ransomware Threat Landscape

Ransomware attacks have become one of the most disruptive cybersecurity threats in recent years. These attacks typically follow a destructive pattern:

- Attackers gain unauthorized access to systems.
- They encrypt or delete critical data.
- They demand ransom in exchange for restoring access.

Organizations without secure, tamper-proof backups are often left with no choice but to pay the ransom or suffer significant data loss. This is where immutable backups play a critical role in defense.

🛡️ What Is Backup Immutability?

Backup immutability ensures that once a backup is created, it cannot be modified or deleted for a specified period. This guarantees:

- Protection against accidental or malicious deletion.
- Assurance that backups remain intact and trustworthy.
- Compliance with regulatory requirements for data retention and integrity.

🚀 Azure SQL Database LTR Backup Immutability (Public Preview)

Microsoft has introduced backup immutability for Long-Term Retention (LTR) backups in Azure SQL Database, now available in public preview. This feature allows organizations to apply Write Once, Read Many (WORM) policies to LTR backups stored in Azure Blob Storage.

Key features:

- Time-based immutability: locks backups for a defined duration (e.g., 30 days).
- Legal hold immutability: retains backups indefinitely until a legal hold is explicitly removed.
- Tamper-proof storage: backups cannot be deleted or altered, even by administrators.

This ensures that LTR backups remain secure and recoverable, even in the event of a ransomware attack.

📜 Regulatory Requirements for Backup Immutability

Many global regulations mandate immutable storage to ensure data integrity and auditability. Here are some key examples:

| Region | Regulation | Requirement |
|---|---|---|
| USA | SEC Rule 17a-4(f) | Requires broker-dealers to store records in WORM-compliant systems. |
| USA | FINRA | Mandates financial records be preserved in a non-rewriteable, non-erasable format. |
| USA | HIPAA | Requires healthcare organizations to ensure the integrity and availability of electronic health records. |
| EU | GDPR | Emphasizes data integrity and the ability to demonstrate compliance through audit trails. |
| Global | ISO 27001, PCI-DSS | Require secure, tamper-proof data retention for audit and compliance purposes. |

Azure’s immutable storage capabilities help organizations meet these requirements by ensuring that backup data remains unchanged and verifiable.

🕒 Time-Based vs. Legal Hold Immutability

⏱️ Time-based immutability:

- Locks data for a predefined period (e.g., 30 days).
- Ideal for routine compliance and operational recovery.
- Automatically expires after the retention period.

📌 Legal hold immutability:

- Retains data indefinitely until the hold is explicitly removed.
- Used in legal investigations, audits, or regulatory inquiries.
- Overrides time-based policies to ensure data preservation.

Both types can be applied to Azure SQL LTR backups, offering flexibility and compliance across different scenarios.

🧩 How Immutability Protects Against Ransomware

Immutable backups are a critical component of a layered defense strategy:

- Tamper-proof: even if attackers gain access, they cannot delete or encrypt immutable backups.
- Reliable recovery: organizations can restore clean data from immutable backups without paying ransom.
- Compliance-ready: meets regulatory requirements for data retention and integrity.

By enabling immutability for Azure SQL LTR backups, organizations can significantly reduce the risk of data loss and ensure business continuity.

✅ Final Thoughts

The public preview of backup immutability for Azure SQL Database LTR backups is a major step forward in ransomware resilience and regulatory compliance. With support for both time-based and legal hold immutability, Azure empowers organizations to:

- Protect critical data from tampering or deletion.
- Meet global compliance standards.
- Recover quickly and confidently from cyberattacks.
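The immutability policies on LTR backups are applied by the Azure SQL service itself, but the same WORM building blocks exist on any blob container. As an illustration only, with placeholder storage account and container names, this is roughly how a time-based policy and a legal hold look in the Azure CLI:

```shell
# Illustration (placeholder names): a time-based immutability policy
# locks a container's contents for 30 days (WORM).
az storage container immutability-policy create \
    --resource-group myrg \
    --account-name mystorageacct \
    --container-name ltr-backups \
    --period 30

# A legal hold retains the data indefinitely until the tag is cleared.
az storage container legal-hold set \
    --account-name mystorageacct \
    --container-name ltr-backups \
    --tags audit2025
```

For Azure SQL LTR backups you do not manage the container yourself; these commands only illustrate the time-based versus legal-hold semantics described above.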
Immutability is not just a feature—it’s a foundational pillar of modern data protection. Documentation is available at Backup Immutability for Long-Term Retention Backups - Azure SQL Database | Microsoft Learn.

Multiple geo-replicas for Azure SQL Hyperscale is now in public preview
Active geo-replication for Azure SQL is a business continuity solution that lets you perform quick disaster recovery of individual databases if there is a regional disaster or a large-scale outage. Until now, up to four geo-secondaries could be created for an Azure SQL database in every tier except Hyperscale. We’re excited to announce that Azure SQL Database Hyperscale support for up to four geo-replicas is now available in public preview. This enhancement gives you greater flexibility for disaster recovery, regional scale-out, and migration scenarios.

What’s New?

- Create up to four read-only geo-replicas for each Hyperscale database, in the same or different Azure regions.
- Use the additional geo-replicas to add read scale-out capabilities to additional regions.
- More flexibility to facilitate database migrations and zone redundancy changes.

How to Get Started

Getting started with multiple geo-replicas in Azure SQL Hyperscale is straightforward. In the Azure portal, you can add multiple geo-replicas using the same process that you currently use to add a geo-replica:

1. Go to your Hyperscale database in the Azure portal.
2. Open the “Replicas” blade under “Data management”.
3. Click “Create replica”. TIP: If you want the geo-replica enabled for zone redundancy, remember to click “Configure database” in the “Compute + storage” section during this step. The zone redundancy setting is not automatically inherited from the primary database when creating the geo-replica.
4. Select the target server.
5. Review and create.

Creating a geo-replica can also be done with command-line tools like PowerShell. Example:

```powershell
New-AzSqlDatabaseSecondary -DatabaseName mydb1 -ServerName sqlserver1 -ResourceGroupName myrg -PartnerResourceGroupName myrg -PartnerServerName sqlserver2 -AllowConnections "All"
```

Performing a Failover

Performing a failover in the portal is the same process with multiple geo-replicas.
Click the ellipses in the row of the replica to which you want to fail over, and choose Failover or Forced failover. For PowerShell, use the Set-AzSqlDatabaseSecondary cmdlet. Example:

```powershell
Set-AzSqlDatabaseSecondary -DatabaseName mydb1 -PartnerResourceGroupName myrg -ServerName sqlserver2 -ResourceGroupName myrg -Failover
```

Limitations & Notes

- You can create up to four geo-replicas per database.
- Geo-replicas must be created on different logical servers.
- Chaining (creating a geo-replica of a geo-replica) is not supported.
- If you would like zone redundancy enabled for the geo-replica, you must configure that setting during the geo-replica creation process, since the setting is not automatically inherited from the primary database.

Learn More:

- Hyperscale secondary replicas - Azure SQL Database | Microsoft Learn
- Active Geo-Replication - Azure SQL Database | Microsoft Learn

Introducing the Azure SQL hub: A simpler, guided entry into Azure SQL
Choosing the right Azure SQL service can be challenging. To make this easier, we built the Azure SQL hub, a new home for everything related to Azure SQL in the Azure portal. Whether you’re new to Azure SQL or an experienced user, the hub helps you find the right service quickly and decide with confidence, without disrupting your existing workflows.

For existing users: your current workflows remain unchanged. The only visible update is a streamlined navigation pane where you access Azure SQL resources.

For new users: start from the Azure SQL hub home page. Get personalized recommendations by answering a few quick questions or chatting with Azure portal Copilot. Or compare services side by side and explore key resources, all without leaving the portal.

Searching for "azure sql" in the main search box or the marketplace is also an efficient way to get to the Azure SQL hub. Answer a few questions to get our recommendation, and use Copilot to refine your requirements. Get a detailed side-by-side comparison without leaving the hub. Still deciding? Explore a selection of Azure SQL services for free; this option takes you straight to the resource creation page with a pre-applied free offer.

Try the Azure SQL hub today in the Azure portal, and share your feedback in the comments!

Public Preview - Data Virtualization for Azure SQL Database
Data virtualization, now in public preview in Azure SQL Database, enables you to leverage the full power of Transact-SQL (T-SQL) to seamlessly query external data from Azure Data Lake Storage Gen2 or Azure Blob Storage, eliminating the need for data duplication or ETL processes and allowing for faster analysis and insights. Integrate external data, such as CSV, Parquet, or Delta files, with your relational database while maintaining the original data format and avoiding unnecessary data movement. Present integrated data to applications and reports as a standard SQL object or through a normal SELECT command.

Data Virtualization for Azure SQL Database will introduce and expand support for:

- Database scoped credentials.
- External data sources.
- External file formats, with support for Parquet, CSV, and Delta.
- External tables.
- OPENROWSET.
- Metadata functions and JSON functions.

For enhanced security and flexibility, Data Virtualization for Azure SQL Database supports three authentication methods:

- Shared access signature (SAS) tokens.
- Managed identity (system-assigned and user-assigned).
- User identity.

Key Benefits

Just like in SQL Server 2022 and Azure SQL Managed Instance, the key benefits of Data Virtualization for Azure SQL Database are:

- Seamless data access: query external CSV, Parquet, and Delta Lake tables using T-SQL as if they were native tables within Azure SQL Database, allowing you to off-load cold data while keeping it easily accessible.
- Enhanced productivity: reduce the time and effort required to integrate and analyze data from multiple sources.
- Cost efficiency: minimize the need for data replication and the storage costs associated with traditional data integration methods.
- Real-time insights: enable real-time data querying and insights without delays caused by data movement or synchronization.
- Security: leverage SQL Server security features for granular permissions, credential management, and control.

Example

Data Virtualization for Azure SQL Database is based on the same core principles as SQL Server’s PolyBase feature, with support for Azure Data Lake Storage Gen2 (prefix adls://) and Azure Blob Storage (prefix abs://). The following example uses Azure Open Datasets, specifically the NYC yellow taxi trip records open data set, which allows public access. For private data sources, customers can leverage authentication methods such as SAS tokens, managed identity, and user identity.

```sql
-- Create an Azure Blob Storage (abs://) external data source
CREATE EXTERNAL DATA SOURCE NYCTaxiExternalDataSource
WITH (LOCATION = 'abs://nyctlc@azureopendatastorage.blob.core.windows.net');

-- Use OPENROWSET to read Parquet files from the external data source
SELECT TOP 10 *
FROM OPENROWSET(
    BULK '/yellow/puYear=*/puMonth=*/*.parquet',
    DATA_SOURCE = 'NYCTaxiExternalDataSource',
    FORMAT = 'parquet'
) AS filerows;

-- Or use external tables
CREATE EXTERNAL FILE FORMAT DemoFileFormat
WITH (FORMAT_TYPE = PARQUET);

-- Create the external table
CREATE EXTERNAL TABLE tbl_TaxiRides (
    vendorID VARCHAR(100) COLLATE Latin1_General_BIN2,
    tpepPickupDateTime DATETIME2,
    tpepDropoffDateTime DATETIME2,
    passengerCount INT,
    tripDistance FLOAT,
    puLocationId VARCHAR(8000),
    doLocationId VARCHAR(8000),
    startLon FLOAT,
    startLat FLOAT,
    endLon FLOAT,
    endLat FLOAT,
    rateCodeId SMALLINT,
    storeAndFwdFlag VARCHAR(8000),
    paymentType VARCHAR(8000),
    fareAmount FLOAT,
    extra FLOAT,
    mtaTax FLOAT,
    improvementSurcharge VARCHAR(8000),
    tipAmount FLOAT,
    tollsAmount FLOAT,
    totalAmount FLOAT
)
WITH (
    LOCATION = 'yellow/puYear=*/puMonth=*/*.parquet',
    DATA_SOURCE = NYCTaxiExternalDataSource,
    FILE_FORMAT = DemoFileFormat
);

SELECT TOP 10 * FROM tbl_TaxiRides;
```

You can also use these capabilities in combination with other metadata functions like sp_describe_first_result_set, filename(), and filepath().
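As a sketch of what that combination can look like, the wildcards in the BULK path can be surfaced per row. This is an untested illustration assuming the filepath()/filename() syntax documented for OPENROWSET, reusing the data source from the example above:

```sql
-- Sketch: filepath(n) returns the value matched by the n-th wildcard
-- in the BULK path, so the partition folders (puYear, puMonth) and the
-- source file name can be recovered for each row.
SELECT TOP 10
    filerows.filepath(1) AS puYear,
    filerows.filepath(2) AS puMonth,
    filerows.filename()  AS sourceFile,
    *
FROM OPENROWSET(
    BULK '/yellow/puYear=*/puMonth=*/*.parquet',
    DATA_SOURCE = 'NYCTaxiExternalDataSource',
    FORMAT = 'parquet'
) AS filerows;
```

This is handy for partition pruning and for auditing which file each row came from.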
Getting started

Data Virtualization for Azure SQL Database is currently available in select regions, with broader availability coming soon across all Azure regions. To learn more and get started, see the Data Virtualization for Azure SQL Database documentation.

Azure SQL Database Hyperscale - Enhanced Performance Features Are Now Generally Available!
Enhanced Performance Features Now GA

Increased Transaction Log Generation Rate to 150 MiB/s

The transaction log generation rate for Azure SQL Database Hyperscale, now at 150 MiB/s, delivers faster data processing and better handling of write-intensive workloads. Customers who participated in the public preview reported remarkable results, including significantly faster data loads into the database. One key area of impact was the ability to transition smoothly to Hyperscale from other databases, enabling a more efficient transition while maintaining performance standards.

Continuous Priming for Optimal Performance

Continuous priming optimizes performance during failovers by ensuring that secondary replicas are primed with the “hot pages” of the primary replica. This innovation has drastically reduced read latencies for secondary replicas. For example, the internal customer Azure DevOps has enabled continuous priming and observed substantial benefits in their RBPEX utilization, leading to enhanced operational efficiency.

Customer Success Stories

Throughout the public preview, customers reported faster bulk data imports, real-time data ingestion, and smoother transitions to Hyperscale, thanks to the increased transaction log generation rate. Similarly, continuous priming decreased read latencies for secondary replicas, benefiting read-heavy workloads and failovers. Azure DevOps serves as a prime example of success, leveraging continuous priming to optimize RBPEX usage effectively.

What does this mean for you

These features, now available to all customers, mark a significant step forward in database technology. Whether you’re considering transitioning to Hyperscale or looking to optimize your current setup, these enhancements offer powerful tools to meet your growing data and performance needs. The feedback from our customers during the public preview has been invaluable, and we are excited to see how these features will continue to drive success across diverse use cases.
Next Steps

We invite you to explore these new capabilities and fully leverage what Azure SQL Database Hyperscale has to offer. Please note that these features are being rolled out region by region and will be available in all regions by the end of June. As always, we are here to support you every step of the way. Share your feedback, questions, or success stories by emailing us at sqlhsfeedback AT microsoft DOT com. Together, we can continue to evolve and innovate in the world of data management.
Simplified & lower pricing for Azure SQL Database and Azure SQL Managed Instance backup storage
Today, as you deploy your Azure SQL database or Azure SQL managed instance, one of the important decisions to make is the choice of your backup storage redundancy (BSR). It's an important choice because the availability of your database depends on the availability of your backups. Here’s why: consider a scenario where your database has high availability configured via zone redundancy, while your backups are configured as non-zone redundant. In the event of a failure in the zone, your database fails over to another zone within the region; however, your backups won't, because of their storage setting. In the new zone, the backup service attempts to back up your database but cannot reach the backups in the zone where the failure happened, causing the transaction log to fill up and eventually impacting the availability of the database itself.

As you create the Azure SQL database, the choices for backup storage redundancy are:

- Locally redundant storage (LRS)
- Zone-redundant storage (ZRS)
- Geo-redundant storage (GRS)
- Geo-zone-redundant storage (GZRS)

Each of these storage types provides a different level of durability, resiliency, and availability for your databases and database backups. Not surprisingly, each storage type also has a different price, and the price increases significantly as the protection level increases, with GZRS costing almost 4-5x LRS. Choosing between resilience and cost optimization was an extremely difficult decision that the database owner had to make.

We are thrilled to announce that, starting from November 1, 2024, backup storage pricing is now streamlined and simplified across Azure SQL Database and Azure SQL Managed Instance.
Bonus – we even reduced the prices 😊

The price changes apply to the backup storage redundancy configuration for both point-in-time and long-term retention backups, across the following tiers of Azure SQL Database and Azure SQL Managed Instance:

| Product | Service tier |
|---|---|
| Azure SQL Database | General Purpose, Business Critical, Hyperscale |
| Azure SQL Managed Instance | General Purpose, Business Critical, Next Generation General Purpose (preview) |

As we made the changes, these were the principles we adhered to:

- No price increase
- BSR pricing for ZRS is reduced to match the BSR pricing for LRS
- BSR pricing for GZRS is reduced to match the BSR pricing of GRS
- BSR pricing for GRS/GZRS will be 2x that of LRS/ZRS

| Type of backups | What is changing |
|---|---|
| PITR | BSR pricing for ZRS is reduced by 20% to match pricing for LRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance, except for the Azure SQL Database Hyperscale service tier. BSR pricing for GZRS is reduced by 41% to match pricing for GRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance. |
| LTR | BSR pricing for ZRS is reduced by 20% to match pricing for LRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance. BSR pricing for GZRS is reduced by 41% to match pricing for GRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance. |
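The pricing principles above can be verified with a little arithmetic against the East US point-in-time prices quoted in this post (prices assumed per GB/month, General Purpose/Business Critical tiers):

```python
# East US point-in-time backup storage prices quoted in this post
# (assumed per GB/month), General Purpose / Business Critical tiers.
old_price = {"LRS": 0.10, "ZRS": 0.125, "GRS": 0.20, "GZRS": 0.34}
new_price = {"LRS": 0.10, "ZRS": 0.10, "GRS": 0.20, "GZRS": 0.20}

# Principles: ZRS now matches LRS, GZRS now matches GRS, geo = 2x non-geo.
assert new_price["ZRS"] == new_price["LRS"]
assert new_price["GZRS"] == new_price["GRS"]
assert new_price["GRS"] == 2 * new_price["LRS"]

# The advertised reductions fall out of the old prices:
zrs_cut = (old_price["ZRS"] - new_price["ZRS"]) / old_price["ZRS"]
gzrs_cut = (old_price["GZRS"] - new_price["GZRS"]) / old_price["GZRS"]
print(f"ZRS reduction:  {zrs_cut:.0%}")   # 20%
print(f"GZRS reduction: {gzrs_cut:.0%}")  # 41%

# What it means for a bill: 500 GB of PITR backup storage on GZRS.
gb = 500
print(f"500 GB GZRS/month: ${gb * old_price['GZRS']:.2f} -> ${gb * new_price['GZRS']:.2f}")
```

Running this shows the 20% and 41% figures are exactly the ZRS-to-LRS and GZRS-to-GRS gaps, and that a 500 GB GZRS footprint drops from $170 to $100 per month.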
As an example, let's take East US as the region and look at the pricing for backup storage redundancy for point-in-time storage before and after the changes.

For the General Purpose/Business Critical service tiers, the pricing would now be:

| Backup storage redundancy | Current price | New price | Price change |
|---|---|---|---|
| LRS | $0.10 | $0.10 | None |
| ZRS | $0.125 | $0.10 | 20% less |
| GRS | $0.20 | $0.20 | None |
| GZRS | $0.34 | $0.20 | 41% less |

For the Hyperscale service tier, the new pricing would be:

| Backup storage redundancy | Current price | New price | Price change |
|---|---|---|---|
| LRS | $0.08 | $0.08 | None |
| ZRS | $0.10 | $0.10 | None |
| GRS | $0.20 | $0.20 | None |
| GZRS | $0.34 | $0.20 | 41% less |

Similarly, backup storage redundancy prices for long-term retention backups in East US would be as follows:

| Backup storage redundancy | Current price | New price | Price change |
|---|---|---|---|
| LRS | $0.025 | $0.025 | None |
| ZRS | $0.0313 | $0.025 | 20% less |
| GRS | $0.05 | $0.05 | None |
| GZRS | $0.0845 | $0.05 | 41% less |

As a customer, the decision now becomes much easier:

- If you need regional resiliency, choose Zone Redundant Storage (ZRS).
- If you need regional and/or geo resiliency, choose Geo Zone Redundant Storage (GZRS).
- If the Azure region does not support availability zones, choose Locally Redundant Storage for regional resiliency and Geo Redundant Storage for geo resiliency, respectively.

Please note: The Azure pricing page and Azure pricing calculator will be updated with these new prices soon. The actual pricing meters have already been updated. Additionally, the LTR pricing change for Hyperscale will be in effect from January 1, 2025.

Conversion to Hyperscale: Now generally available with enhanced efficiency
We are excited to announce the general availability (GA) of the latest improvements in the Azure SQL Database conversion process to Hyperscale. These improvements bring shorter downtime, better control, and more visibility to the Hyperscale conversion process, making it easier and more efficient for our customers to switch to Hyperscale.

Key enhancements

We received feedback from customers about longer-than-expected downtime, lack of visibility, and unpredictable cutover time during database conversion to Hyperscale. In response, we have made key improvements in this area.

1. Shorter cutover time

Prior to this improvement, the cutover time depended on the database size and workload. With the improvement, we have significantly reduced the average cutover time (the effective unavailability of the database to the application) from about six minutes, sometimes extending to thirty minutes, to less than one minute.

2. Higher log generation rate

By improving the synchronization mechanism between source and destination while the conversion is in progress, we now support a higher log generation rate on the source database, ensuring that the conversion can complete successfully with write-intensive workloads. This enhancement ensures a smoother and faster migration experience, even for high-transaction-rate environments. We now support up to 50 MiB/s log generation rate on the source database during conversion. Once converted to Hyperscale, the supported log generation rate is 100 MiB/s, with a higher rate of 150 MiB/s in preview.

3. Manual cutover

One of the most significant improvements is the introduction of a customer-controlled cutover mode called manual cutover. This gives customers more control over the conversion process, enabling them to schedule and manage the cutover at a time of their choice. You can perform the cutover within 24 hours once the conversion process reaches the "Ready to cutover" state.
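The manual cutover workflow described above amounts to a polling loop: wait until the conversion reports "Ready to cutover", then trigger the cutover within the 24-hour window. Here is a minimal sketch of that control flow; `get_conversion_state` and `perform_cutover` are hypothetical stand-ins for whichever interface you actually use (T-SQL, PowerShell, Azure CLI, or REST), not real SDK calls.

```python
import time

READY = "Ready to cutover"
CUTOVER_WINDOW_SECONDS = 24 * 60 * 60  # cutover must happen within 24 hours

def wait_and_cutover(get_conversion_state, perform_cutover,
                     poll_interval=60, clock=time.monotonic, sleep=time.sleep):
    """Poll until the conversion is ready, then trigger the manual cutover.

    get_conversion_state() -> str and perform_cutover() are hypothetical
    stand-ins for the real T-SQL / PowerShell / CLI / REST calls.
    """
    while get_conversion_state() != READY:
        sleep(poll_interval)
    ready_at = clock()
    # In practice you would cut over during a maintenance window; here we
    # just enforce the documented 24-hour limit before triggering it.
    if clock() - ready_at > CUTOVER_WINDOW_SECONDS:
        raise TimeoutError("24-hour manual cutover window has expired")
    perform_cutover()
    return "cutover initiated"

# Example with stubbed states standing in for real progress reporting:
states = iter(["In progress", "In progress", READY])
result = wait_and_cutover(lambda: next(states), lambda: None, sleep=lambda s: None)
print(result)
```

The injected `clock` and `sleep` parameters simply make the sketch testable without waiting on a real conversion.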
4. Enhanced progress reporting

Improved progress reporting capabilities provide detailed insights into the conversion process. Customers can now monitor the migration status in real time, with clear visibility into each step of the process. Progress reporting is available via T-SQL, REST API, PowerShell, Azure CLI, or the Azure portal. Detailed progress information about conversion phases provides greater transparency and control over the process.

How to use it?

All the improvements are applied automatically. One exception is the manual cutover mode, where you need to use a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API when initiating the conversion process. The Azure portal also provides a new option to select manual cutover, as shown in the image below. Granular progress reporting is available irrespective of the cutover mode.

One of our customers said: "The migration to Hyperscale using the improvements was much easier than expected. The customer-controlled cutover and detailed progress reporting made the process seamless and efficient."

For more information, see our documentation: Convert a Database to Hyperscale - Azure SQL Database | Microsoft Learn

Conclusion

We are thrilled to bring these enhancements to our customers and look forward to seeing how they will transform their Hyperscale conversion experience. This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover time, enhanced control with a manual cutover option, and improved progress visibility. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
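When monitoring via T-SQL, the status rows you get back (for example from the `sys.dm_operation_status` DMV) can be reduced to a one-line status for dashboards or scripts. The sketch below uses a simplified row shape as an assumption, modeled loosely on the kind of fields that DMV exposes (operation, state, percent complete); consult the documentation for the real schema.

```python
# Minimal sketch: summarize conversion progress from status rows.
# The dict fields below are a simplified assumption, not the full
# sys.dm_operation_status schema.

def summarize_progress(rows):
    """Return a short status line for the most recent ALTER DATABASE operation."""
    conversions = [r for r in rows if r["operation"] == "ALTER DATABASE"]
    if not conversions:
        return "no conversion operation found"
    latest = max(conversions, key=lambda r: r["start_time"])
    return f"{latest['state_desc']}: {latest['percent_complete']}% complete"

rows = [
    {"operation": "CREATE DATABASE", "state_desc": "COMPLETED",
     "percent_complete": 100, "start_time": "2025-01-01T00:00:00"},
    {"operation": "ALTER DATABASE", "state_desc": "IN_PROGRESS",
     "percent_complete": 62, "start_time": "2025-01-02T09:30:00"},
]
print(summarize_progress(rows))  # IN_PROGRESS: 62% complete
```

The same reduction works regardless of whether the rows come from T-SQL, the REST API, or the CLI, since all surface the same underlying progress information.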