hyperscale
Introducing the Azure SQL hub: A simpler, guided entry into Azure SQL
Choosing the right Azure SQL service can be challenging. To make this easier, we built the Azure SQL hub, a new home for everything related to Azure SQL in the Azure portal. Whether you're new to Azure SQL or an experienced user, the hub helps you find the right service and make a decision quickly, without disrupting your existing workflows.

For existing users: Your current workflows remain unchanged. The only visible update is a streamlined navigation pane where you access Azure SQL resources.

For new users: Start from the Azure SQL hub home page. Get personalized recommendations by answering a few quick questions or by chatting with Azure portal Copilot. Or compare services side by side and explore key resources, all without leaving the portal.

There is more than one way to find it: searching for "azure sql" in the main search box or in the marketplace is an efficient way to get to the Azure SQL hub.

Answer a few questions to get our recommendation and use Copilot to refine your requirements. Get a detailed side-by-side comparison without leaving the hub.

Still deciding? Explore a selection of Azure SQL services for free. This option takes you straight to the resource creation page with a pre-applied free offer.

Try the Azure SQL hub today in the Azure portal, and share your feedback in the comments!
Public Preview - Data Virtualization for Azure SQL Database

Data virtualization, now in public preview in Azure SQL Database, lets you use the full power of Transact-SQL (T-SQL) to seamlessly query external data from Azure Data Lake Storage Gen2 or Azure Blob Storage, eliminating the need for data duplication or ETL processes and enabling faster analysis and insights. Integrate external data, such as CSV, Parquet, or Delta files, with your relational database while maintaining the original data format and avoiding unnecessary data movement. Present integrated data to applications and reports as a standard SQL object or through a normal SELECT command. Data Virtualization for Azure SQL Database supports SAS tokens, managed identity, and user identity for secure access.

Data Virtualization for Azure SQL Database introduces and expands support for:

- Database scoped credentials.
- External data sources.
- External file formats, with support for Parquet, CSV, and Delta.
- External tables.
- OPENROWSET.
- Metadata functions and JSON functions.

For enhanced security and flexibility, Data Virtualization for Azure SQL Database supports three authentication methods:

- Shared access signature.
- Managed identity (system-assigned and user-assigned).
- User identity.

Key Benefits

Just like in SQL Server 2022 and Azure SQL Managed Instance, the key benefits of Data Virtualization for Azure SQL Database are:

- Seamless Data Access: Query external CSV, Parquet, and Delta Lake tables using T-SQL as if they were native tables within Azure SQL Database, allowing you to off-load cold data while keeping it easily accessible.
- Enhanced Productivity: Reduce the time and effort required to integrate and analyze data from multiple sources.
- Cost Efficiency: Minimize the need for data replication and the storage costs associated with traditional data integration methods.
- Real-Time Insights: Enable real-time data querying and insights without delays caused by data movement or synchronization.
- Security: Leverage SQL Server security features for granular permissions, credential management, and control.

Example

Data Virtualization for Azure SQL Database is based on the same core principles as SQL Server's PolyBase feature, with support for Azure Data Lake Storage Gen2 (using the adls:// prefix) and Azure Blob Storage (using the abs:// prefix). The following example uses Azure Open Datasets, specifically the NYC yellow taxi trip records open dataset, which allows public access. For private data sources, customers can use any of the authentication methods above, such as SAS tokens, managed identity, and user identity; a hedged sketch of a managed identity configuration follows.
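As a minimal sketch of the private-source scenario, the following T-SQL assumes a hypothetical storage account "contosodata" and container "sales", and assumes the database's managed identity has already been granted access to the storage account (for example, the Storage Blob Data Reader role). The exact credential syntax should be confirmed against the documentation for your environment.

-- Hypothetical private storage account "contosodata" and container "sales".
-- A database master key is required before creating a database scoped credential.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL ManagedIdentityCredential
WITH IDENTITY = 'Managed Identity';

CREATE EXTERNAL DATA SOURCE PrivateSalesData
WITH (
    LOCATION = 'abs://sales@contosodata.blob.core.windows.net',
    CREDENTIAL = ManagedIdentityCredential
);

-- Query Parquet files in the private container.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK '/2024/*.parquet',
    DATA_SOURCE = 'PrivateSalesData',
    FORMAT = 'parquet'
) AS filerows;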
-- Create an Azure Blob Storage (ABS) data source.
CREATE EXTERNAL DATA SOURCE NYCTaxiExternalDataSource
WITH (LOCATION = 'abs://nyctlc@azureopendatastorage.blob.core.windows.net');

-- Use OPENROWSET to read Parquet files from the external data source.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK '/yellow/puYear=*/puMonth=*/*.parquet',
    DATA_SOURCE = 'NYCTaxiExternalDataSource',
    FORMAT = 'parquet'
) AS filerows;

-- Or use external tables.
CREATE EXTERNAL FILE FORMAT DemoFileFormat
WITH (FORMAT_TYPE = PARQUET);

-- Create the external table.
CREATE EXTERNAL TABLE tbl_TaxiRides (
    vendorID VARCHAR(100) COLLATE Latin1_General_BIN2,
    tpepPickupDateTime DATETIME2,
    tpepDropoffDateTime DATETIME2,
    passengerCount INT,
    tripDistance FLOAT,
    puLocationId VARCHAR(8000),
    doLocationId VARCHAR(8000),
    startLon FLOAT,
    startLat FLOAT,
    endLon FLOAT,
    endLat FLOAT,
    rateCodeId SMALLINT,
    storeAndFwdFlag VARCHAR(8000),
    paymentType VARCHAR(8000),
    fareAmount FLOAT,
    extra FLOAT,
    mtaTax FLOAT,
    improvementSurcharge VARCHAR(8000),
    tipAmount FLOAT,
    tollsAmount FLOAT,
    totalAmount FLOAT
)
WITH (
    LOCATION = 'yellow/puYear=*/puMonth=*/*.parquet',
    DATA_SOURCE = NYCTaxiExternalDataSource,
    FILE_FORMAT = DemoFileFormat
);

SELECT TOP 10 * FROM tbl_TaxiRides;

You can also use these capabilities in combination with other metadata functions such as sp_describe_first_result_set, filename(), and filepath(); a short sketch follows below.

Getting started

Data Virtualization for Azure SQL Database is currently available in select regions, with broader availability coming soon across all Azure regions. To learn more and get started with Data Virtualization for Azure SQL Database, refer to the documentation.
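As an illustration of the metadata functions mentioned above, here is a hedged sketch that uses filepath() and filename() to surface the partition values matched by the wildcards in the path (here puYear and puMonth); the exact behavior should be verified against the current documentation.

-- filepath(1) and filepath(2) return the values matched by the first and second
-- wildcards in the BULK path; filename() returns the name of the source file.
SELECT TOP 10
    filerows.filepath(1) AS puYear,
    filerows.filepath(2) AS puMonth,
    filerows.filename()  AS sourceFile,
    filerows.tpepPickupDateTime,
    filerows.totalAmount
FROM OPENROWSET(
    BULK '/yellow/puYear=*/puMonth=*/*.parquet',
    DATA_SOURCE = 'NYCTaxiExternalDataSource',
    FORMAT = 'parquet'
) AS filerows;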
Improving the conversion to Hyperscale with greater efficiency

Update: On 09 April 2025 we announced the general availability of this improvement. For more details, please read the GA announcement.

We are thrilled to announce the latest improvement in the Azure SQL Database conversion process to Hyperscale. This update, now available in public preview, streamlines the database conversion process, reducing downtime and offering greater control and visibility to our customers. Let's dive into the enhancements and what they mean for you.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers complained about longer-than-expected downtime during conversion, no insight into the current state of the conversion, and unpredictable cutover times. We acted on the feedback and made several key improvements in this release.

1. Shorter cutover time

One of the most significant enhancements in this improvement is the reduction in cutover time when converting a database to Hyperscale. This ensures that the cutover process is faster and more efficient, minimizing downtime and connectivity disruptions. Based on our telemetry, the 99th percentile of cutover time has been reduced from roughly 6 minutes to less than 1 minute.

2. Support for a higher log generation rate

During the migration, the new process supports a higher log generation rate on the source database, ensuring that the conversion can handle more write-intensive workloads and complete successfully. In the previous version, database conversions were not able to finish when log generation on the source exceeded 8 MiB/s continuously throughout the conversion process. With this improvement, we can now support up to 50 MiB/s of log generation on the source and still succeed. This was achieved by improving the synchronization mechanisms between source and destination while the conversion is in progress.

3. Manual cutover

A new option introduced with this improvement is manual cutover. This allows customers to initiate the conversion process and pause it when the database is ready to cut over to Hyperscale, giving them up to 72 hours to perform the cutover at a time that best suits their operational needs. If the manual cutover is not completed within the given timeframe, the process is automatically canceled and the database remains on the original service tier, without any data loss. If the new parameter is not passed, the cutover experience is the same as before, that is, automatic cutover as soon as the Hyperscale database is ready.

4. Granular progress reporting

Another improvement in the conversion process is that customers can now monitor the entire conversion at a granular level. Whether using T-SQL, REST API, PowerShell, Azure CLI, or the Azure portal, detailed progress information about the conversion phases is available, providing greater transparency and control over the process.

Customer feedback

Throughout the private preview phase, we have received overwhelmingly positive feedback from several customers about this improvement. John Nafa, Cloud Architect, Asurgent, says: "The new database conversion experience from Microsoft has been incredibly seamless and efficient, making the conversion process to Azure SQL Database Hyperscale smooth and straightforward. The progress reporting and manual cutover features were especially valuable, providing real-time insights and ensuring a smooth transition. It's been a pleasure working with this improvement, and I'm excited to see it become available to a wider audience."
Get started

Out of the four key improvements mentioned above, most are applied automatically. To use the manual cutover option, you need to pass a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API when initiating the conversion process. The Azure portal also provides a new option to select manual cutover. Granular progress reporting is available irrespective of the cutover mode.

Use manual cutover

Let us go through the new options available in the various interfaces with this improvement.

Azure portal

To use manual cutover, a new Cutover mode option is provided in the Azure portal as part of the steps to convert a database to Hyperscale. If you have not seen this option in the Azure portal yet, don't worry. We are enabling this capability now and you can expect it to be available within days.

Commands

A new parameter has been introduced in each interface to initiate the conversion process with the manual cutover option. The following commands convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore serverless Hyperscale database.

T-SQL:

ALTER DATABASE WideWorldImporters
    MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_S_Gen5_8') WITH MANUAL_CUTOVER;

PowerShell:

Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_S_Gen5_8" -ManualCutover

Azure CLI:

az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_S_Gen5_8 --manual-cutover

Note: Additional parameters such as backup storage redundancy or zone redundancy can also be added. Refer to the documentation: Set-AzSqlDatabase (Az.Sql) | Microsoft Learn (for PowerShell) and az sql db | Microsoft Learn (for Azure CLI). The REST API also has new properties, manualCutover and performCutover. Refer to Databases - Create Or Update - REST API (Azure SQL Database) | Microsoft Learn for more details.

Monitor the conversion progress

The progress of the conversion can be monitored using various interfaces. Detailed phase information is available in the dynamic management view (DMV) sys.dm_operation_status for those using T-SQL. Similar command options are available for PowerShell and Azure CLI users.

Azure portal

Progress of the conversion process can be seen in the portal by clicking the Details hyperlink for the ongoing operation.

Monitoring programmatically

As part of this improvement, we have introduced new columns in the sys.dm_operation_status DMV, called phase_code, phase_desc, and phase_info, which are populated during the conversion process. Refer to the documentation for more details. Similarly, new output columns, phase and phaseInformation, are available under OperationPhaseDetails when using PowerShell and under operationPhaseDetails when using Azure CLI. Here is a quick reference for the commands in the various interfaces.
T-SQL:

SELECT state_desc, phase_code, phase_desc,
       JSON_VALUE(phase_info, '$.currentStep') AS currentStep,
       JSON_VALUE(phase_info, '$.totalSteps') AS totalSteps,
       phase_info, start_time, error_code, error_desc, last_modify_time
FROM sys.dm_operation_status
WHERE resource_type = 0
      AND operation = 'ALTER DATABASE'
      AND major_resource_id = 'WideWorldImporters';

PowerShell:

(Get-AzSqlDatabaseActivity -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters").OperationPhaseDetails

Azure CLI:

(az sql db op list --resource-group ResourceGroup01 --server contososerver --database WideWorldImporters | ConvertFrom-Json).operationPhaseDetails.phase

Perform manual cutover

The manual cutover, if specified, can be performed using the Azure portal or programmatically, ensuring a smooth transition to the Hyperscale tier.

Azure portal

The cutover can be initiated from the same screen where the progress of the conversion is reported.

Commands

Here is a quick reference of the commands to perform the cutover.

T-SQL:

ALTER DATABASE WideWorldImporters PERFORM_CUTOVER;

PowerShell:

Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -PerformCutover

Azure CLI:

az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --perform-cutover

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover times, enhanced control with a manual cutover option, and improved progress visibility. We encourage you to try these features and provide your valuable feedback to help us refine this feature for general availability. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
Shrink for Azure SQL Database Hyperscale is now generally available

Today we are thrilled to announce the general availability (GA) of database shrink in Azure SQL Database Hyperscale. This milestone marks another significant improvement in our Hyperscale service, providing our customers with more flexibility and efficiency in managing their database storage.

Overview

Database shrink in Azure SQL Database allows customers to reclaim unused space within their databases to optimize storage costs, and this capability is now available in the Hyperscale service tier too. This feature has been highly anticipated by many customers, and we are excited to deliver it with robust capabilities and a seamless user experience. There is nothing new to learn: the same DBCC SHRINKDATABASE and DBCC SHRINKFILE commands are used.

Database shrink for Hyperscale was first announced in public preview last year. During the preview phase, we received invaluable feedback from our customers, which helped us refine and enhance this capability. We are grateful for the active participation and insights shared by customers, which played a crucial role in shaping the feature.

Key features

- Storage optimization: Database shrink effectively reclaims unused space, reducing the allocated storage footprint of your database.
- Cost efficiency: By optimizing storage usage, customers can potentially lower their storage costs.
- Ease of use: The feature uses the same syntax as other service tiers and SQL Server, so customers can reuse existing scripts, minimizing disruption during adoption.

How to use

To help you get started with the shrink functionality, we have provided comprehensive documentation including example scripts. Here are the basic steps to shrink your Hyperscale database (a short T-SQL sketch follows at the end of this post):

1. Connect to your Hyperscale database through your preferred management tool, such as SQL Server Management Studio or Azure Data Studio.
2. Evaluate the free space available in the database.
3. Execute the shrink command using the provided T-SQL (DBCC SHRINKDATABASE or DBCC SHRINKFILE).
4. Optionally, monitor the shrink process through a DMV to ensure successful completion.
5. Review the reclaimed space and rerun with adjusted parameters, if necessary.

Conclusion

The release of database shrink in Hyperscale is a testament to our commitment to continuous improvement in the Hyperscale service tier. The general availability of database shrink in Azure SQL Database Hyperscale is a major milestone, and we are excited to see the positive impact it will have on your database management. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
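To make the steps above concrete, here is a minimal, hedged T-SQL sketch. The free-space target is illustrative, the database name is the example used elsewhere in this blog, and the monitoring filter is an assumption; check the shrink documentation for the exact commands and monitoring guidance for your environment.

-- Step 2: evaluate allocated vs. used space for the data files (sizes are in 8 KB pages).
SELECT file_id, name,
       CAST(size AS bigint) * 8 / 1024                            AS allocated_mb,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';

-- Step 3: shrink the database, leaving about 10 percent free space (illustrative target).
DBCC SHRINKDATABASE (N'WideWorldImporters', 10);

-- Step 4 (optional): monitor progress; the command-name filter below is an assumption.
SELECT session_id, command, percent_complete, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'Dbcc%';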
🔐 Public Preview: Backup Immutability for Azure SQL Database LTR Backups

The Ransomware Threat Landscape

Ransomware attacks have become one of the most disruptive cybersecurity threats in recent years. These attacks typically follow a destructive pattern:

- Attackers gain unauthorized access to systems.
- They encrypt or delete critical data.
- They demand ransom in exchange for restoring access.

Organizations without secure, tamper-proof backups are often left with no choice but to pay the ransom or suffer significant data loss. This is where immutable backups play a critical role in defense.

🛡️ What Is Backup Immutability?

Backup immutability ensures that once a backup is created, it cannot be modified or deleted for a specified period. This guarantees:

- Protection against accidental or malicious deletion.
- Assurance that backups remain intact and trustworthy.
- Compliance with regulatory requirements for data retention and integrity.

🚀 Azure SQL Database LTR Backup Immutability (Public Preview)

Microsoft has introduced backup immutability for long-term retention (LTR) backups in Azure SQL Database, now available in public preview. This feature allows organizations to apply write once, read many (WORM) policies to LTR backups stored in Azure Blob Storage.

Key features:

- Time-based immutability: Locks backups for a defined duration (for example, 30 days).
- Legal hold immutability: Retains backups indefinitely until a legal hold is explicitly removed.
- Tamper-proof storage: Backups cannot be deleted or altered, even by administrators.

This ensures that LTR backups remain secure and recoverable, even in the event of a ransomware attack.

📜 Regulatory Requirements for Backup Immutability

Many global regulations mandate immutable storage to ensure data integrity and auditability. Here are some key examples:

- USA, SEC Rule 17a-4(f): Requires broker-dealers to store records in WORM-compliant systems.
- USA, FINRA: Mandates that financial records be preserved in a non-rewriteable, non-erasable format.
- USA, HIPAA: Requires healthcare organizations to ensure the integrity and availability of electronic health records.
- EU, GDPR: Emphasizes data integrity and the ability to demonstrate compliance through audit trails.
- Global, ISO 27001 and PCI-DSS: Require secure, tamper-proof data retention for audit and compliance purposes.

Azure's immutable storage capabilities help organizations meet these requirements by ensuring that backup data remains unchanged and verifiable.

🕒 Time-Based vs. Legal Hold Immutability

⏱️ Time-Based Immutability

- Locks data for a predefined period (for example, 30 days).
- Ideal for routine compliance and operational recovery.
- Automatically expires after the retention period.

📌 Legal Hold Immutability

- Retains data indefinitely until the hold is explicitly removed.
- Used in legal investigations, audits, or regulatory inquiries.
- Overrides time-based policies to ensure data preservation.

Both types can be applied to Azure SQL LTR backups, offering flexibility and compliance across different scenarios.

🧩 How Immutability Protects Against Ransomware

Immutable backups are a critical component of a layered defense strategy:

- Tamper-proof: Even if attackers gain access, they cannot delete or encrypt immutable backups.
- Reliable recovery: Organizations can restore clean data from immutable backups without paying ransom.
- Compliance-ready: Meets regulatory requirements for data retention and integrity.

By enabling immutability for Azure SQL LTR backups, organizations can significantly reduce the risk of data loss and ensure business continuity.
✅ Final Thoughts

The public preview of backup immutability for Azure SQL Database LTR backups is a major step forward in ransomware resilience and regulatory compliance. With support for both time-based and legal hold immutability, Azure empowers organizations to:

- Protect critical data from tampering or deletion.
- Meet global compliance standards.
- Recover quickly and confidently from cyberattacks.

Immutability is not just a feature; it is a foundational pillar of modern data protection. Documentation is available at Backup Immutability for Long-Term Retention Backups - Azure SQL Database | Microsoft Learn.
Convert geo-replicated databases to Hyperscale

We're excited to introduce the next improvement in Hyperscale conversion: a new preview capability that allows customers to convert Azure SQL databases to Hyperscale while keeping active geo-replication or failover group configurations intact. This builds on our earlier improvements and directly addresses one of the most requested capabilities from customers. With this improvement, customers can now modernize their database architecture with Hyperscale while maintaining business continuity.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers complained about the complex steps they needed to perform to convert a database to Hyperscale when the database is geo-replicated through active geo-replication or failover groups. Previously, converting to Hyperscale required tearing down geo-replication links and recreating them after the conversion. Now, that's no longer necessary. This improvement preserves cross-region disaster recovery and read scale-out configurations during the conversion to Hyperscale, minimizing downtime and operational complexity.

This capability is especially valuable for applications that rely on failover group endpoints for connectivity. Previously, if the application needed to remain available during the conversion, the connection string had to be modified as part of the conversion, because the failover group and its endpoints had to be removed. With this new improvement, the conversion process is optimized for minimal disruption, with telemetry showing the majority of cutover times under one minute. Even with a geo-replication configuration in place, you can still choose between automatic and manual cutover modes, offering flexibility in scheduling the transition. Progress tracking is now more granular, giving customers better visibility into each stage of the conversion, including the conversion of the geo-secondary to Hyperscale.

Customer feedback

Throughout the private preview phase, we have received overwhelmingly positive feedback from several customers about this improvement. Viktoriia Kuznetcova, Senior Automation Test Engineer from Nintex, says: "We needed a low-downtime way to move our databases from the Premium tier to Azure SQL Database Hyperscale, and this new feature delivered perfectly, allowing us to complete the migration in our test environments safely and smoothly, even while the service remained under continuous load, without any issues and without needing to break the failover group. We're looking forward to the public release so we can use it in production, where Hyperscale's ability to scale storage both up and down will help us manage peak loads without overpaying for unused capacity."

Get started

The good news is that there are no changes needed to the conversion process. The workflow automatically detects that a geo-secondary is present and converts it to Hyperscale. There are no new parameters, and the method remains the same as the existing conversion process for non-geo-replicated databases. All you need is to make sure that:

- You have only one geo-secondary replica, because Hyperscale doesn't support more than one geo-secondary replica.
- If a chained geo-replication configuration exists, it must be removed before starting the conversion to Hyperscale. Creating a geo-replica of a geo-replica (also known as "geo-replica chaining") isn't supported in Hyperscale.
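To verify the current topology before starting, here is a minimal, hedged sketch using the geo-replication link DMV. The single-link expectation reflects the requirements above; how the roles appear for your configuration should be confirmed against the documentation.

-- Run on the geo-primary: expect exactly one row with role_desc = 'PRIMARY',
-- meaning a single geo-secondary and no additional links.
SELECT partner_server, partner_database, replication_state_desc, role_desc
FROM sys.dm_geo_replication_link_status;

-- Run the same query on the geo-secondary: an outgoing link there (role_desc = 'PRIMARY')
-- would indicate geo-replica chaining, which must be removed before the conversion.
SELECT partner_server, partner_database, replication_state_desc, role_desc
FROM sys.dm_geo_replication_link_status;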
Once the above requirements are satisfied, you can use any of the following methods to initiate the conversion process. Conversion to Hyperscale must be initiated from the geo-primary. The following commands convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore Hyperscale database with the manual cutover option.

T-SQL:

ALTER DATABASE WideWorldImporters
    MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8') WITH MANUAL_CUTOVER;

PowerShell:

Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_Gen5_8" -ManualCutover

Azure CLI:

az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_Gen5_8 --manual-cutover

Here are some notable details of this improvement:

- The geo-secondary database is automatically converted to Hyperscale with the same service level objective as the primary.
- All database configurations, such as the maintenance window, zone resiliency, and backup redundancy, remain the same as before (both geo-primary and geo-secondary inherit their own earlier configuration).
- A planned failover isn't possible while the conversion to Hyperscale is in progress. A forced failover is possible; however, depending on the state of the conversion when the forced failover occurs, the new geo-primary after failover might use either the Hyperscale service tier or its original service tier.
- If the geo-secondary database is in an elastic pool before the conversion, it is taken out of the pool and might need to be added back to a Hyperscale elastic pool separately after the conversion.

The deployment of this feature is in progress and is expected to be completed in all Azure regions within a few weeks. If you see the error "Update to service objective '<SLO name>' with source DB geo-replicated is not supported for entity '<Database Name>'" while converting a primary to Hyperscale, wait for this new improvement to become available in your region. If you don't want to use this preview capability yet, make sure to remove any geo-replication configuration before converting your databases to Hyperscale.

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering simpler steps, less downtime, and a geo-secondary that remains available during the conversion. We encourage you to try this new capability and provide your valuable feedback to help us refine this feature for general availability. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
Conversion to Hyperscale: Now generally available with enhanced efficiency

We are excited to announce the general availability (GA) of the latest improvements in the Azure SQL Database conversion process to Hyperscale. These improvements bring shorter downtime, better control, and more visibility to the Hyperscale conversion process, making it easier and more efficient for our customers to switch to Hyperscale.

Key enhancements

We received feedback from customers about longer-than-expected downtime, lack of visibility, and unpredictable cutover times during database conversion to Hyperscale. In response, we have made key improvements in this area.

1. Shorter cutover time

Prior to this improvement, the cutover time depended on the database size and workload. With the improvement, we have significantly reduced the average cutover time (the effective unavailability of the database to the application) from about six minutes, sometimes extending to thirty minutes, to less than one minute.

2. Higher log generation rate

By improving the synchronization mechanisms between source and destination while the conversion is in progress, we now support a higher log generation rate on the source database, ensuring that the conversion can complete successfully with write-intensive workloads. This enhancement ensures a smoother and faster migration experience, even for high-transaction-rate environments. We now support up to 50 MiB/s of log generation on the source database during conversion. Once converted to Hyperscale, the supported log generation rate is 100 MiB/s, with a higher rate of 150 MiB/s in preview.

3. Manual cutover

One of the most significant improvements is the introduction of a customer-controlled cutover mode called manual cutover. This gives customers more control over the conversion process, enabling them to schedule and manage the cutover at the time of their choice. You can perform the cutover within 24 hours after the conversion process reaches the "Ready to cutover" state.

4. Enhanced progress reporting

Improved progress reporting capabilities provide detailed insights into the conversion process. Customers can now monitor the migration status in real time, with clear visibility into each step of the process. Progress reporting is available via T-SQL, REST API, PowerShell, Azure CLI, or the Azure portal. Detailed progress information about the conversion phases provides greater transparency and control over the process.

How to use it?

All the improvements are applied automatically. The one exception is the manual cutover mode, where you need to pass a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API when initiating the conversion process. The Azure portal also provides a new option to select manual cutover. Granular progress reporting is available irrespective of the cutover mode. A brief T-SQL recap of this flow is included at the end of this post.

One of our customers said: "The migration to Hyperscale using the improvements was much easier than expected. The customer-controlled cutover and detailed progress reporting made the process seamless and efficient."

For more information, see our documentation: Convert a Database to Hyperscale - Azure SQL Database | Microsoft Learn

Conclusion

We are thrilled to bring these enhancements to our customers and look forward to seeing how they will transform their Hyperscale conversion experience. This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover times, enhanced control with a manual cutover option, and improved progress visibility.
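As a quick reference, the flow described above (initiate the conversion with manual cutover, monitor its phases, then perform the cutover) looks like this in T-SQL; WideWorldImporters and the service objective are the example values used earlier in this blog.

-- 1. Initiate the conversion to Hyperscale with manual cutover.
ALTER DATABASE WideWorldImporters
    MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8') WITH MANUAL_CUTOVER;

-- 2. Monitor progress until the operation reports that it is ready for cutover.
SELECT state_desc, phase_desc, phase_info, last_modify_time
FROM sys.dm_operation_status
WHERE resource_type = 0
      AND operation = 'ALTER DATABASE'
      AND major_resource_id = 'WideWorldImporters';

-- 3. Perform the cutover at a convenient time, within the allowed window.
ALTER DATABASE WideWorldImporters PERFORM_CUTOVER;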
You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
Stream data changes from Azure SQL Database - a call for Private Preview of Change Event Streaming

In today's rapidly evolving digital landscape, modern, flexible, and real-time data integration from various sources has become more important than ever. Organizations are increasingly relying on data-driven, real-time insights to make informed decisions, optimize operations, and drive innovation. In this context, we are excited to announce a private preview for Change Event Streaming (CES), which enables you to stream data changes from your SQL database directly into Azure Event Hubs.

Starting today, you can apply for the private preview program. Participants can test the functionality on Azure SQL Database and Azure Event Hubs, and support for more sources and destinations is planned. Participating in the private preview gives you an opportunity to work with the product team, test the functionality, provide your feedback, and influence the final release. Note: CES can be tested on Azure SQL Database, and that includes its free offer.

Typical use cases for Change Event Streaming are:

- Building event-driven systems on top of your relational databases, with minimal overhead and easy data integration.
- Data synchronization across systems, and more specifically, syncing data between microservices or keeping distributed systems in sync.
- Implementing real-time analytics on top of your relational data.
- Auditing and monitoring that requires tracking changes to sensitive data or logging specific events.

The main advantages of using a message broker such as Azure Event Hubs together with Change Event Streaming are:

- Scalability, as message brokers are designed to handle high throughput and can scale independently from the database.
- Decoupling, as systems downstream from the database and message broker are loosely coupled, enabling greater flexibility and easier maintenance.
- Multi-consumer support, as Azure Event Hubs allows multiple consumers to process the same data stream, enabling varied use cases from a single source.
- Real-time integration, which enables seamless integration between OLTP systems and downstream systems for real-time data flow.

If these use cases and advantages resonate with the requirements of your architectures, systems, and solutions, then Change Event Streaming is the right choice for you. To apply for the private preview, please send an email to sqlcesfeedback [at] microsoft [dot] com, and we'll get back to you with more details!

Useful resources

- Free Azure SQL Database.
- Free Azure SQL Managed Instance.
- Azure SQL – Year 2024 in review.
- Azure SQL YouTube channel.