Simplified & lower pricing for Azure SQL Database and Azure SQL Managed Instance backup storage
Today, as you deploy your Azure SQL database or Azure SQL managed instance, one of the important decisions to make is the choice of backup storage redundancy (BSR). It is an important choice because the availability of your database depends on the availability of your backups. Here's why. Consider a scenario where your database has high availability configured via zone redundancy, but your backups are configured as non-zone redundant. In the event of a failure in the zone, your database fails over to another zone within the region, but your backups do not, because of their storage setting. In the new zone, the backup service attempts to back up your database but cannot reach the backups in the zone where the failure happened, causing the transaction log to fill up and eventually impacting the availability of the database itself.

As you create an Azure SQL database, the choices for backup storage redundancy are:

- Locally Redundant Storage (LRS)
- Zone Redundant Storage (ZRS)
- Geo Redundant Storage (GRS)
- Geo Zone Redundant Storage (GZRS)

Each of these storage types provides a different level of durability, resiliency, and availability for your databases and database backups. Not surprisingly, each storage type also has a different price, and the price increases significantly as the protection level increases, with GZRS costing almost 4-5x as much as LRS. Choosing between resilience and cost optimization is an extremely difficult decision for the database owner.

We are thrilled to announce that, starting from November 1, 2024, backup storage pricing is streamlined and simplified across Azure SQL Database and Azure SQL Managed Instance. Bonus – we even reduced the prices 😊

The price changes apply to the backup storage redundancy configuration for both point-in-time restore (PITR) and long-term retention (LTR) backups, across the following tiers:

| Product | Service tier |
|---|---|
| Azure SQL Database | General Purpose, Business Critical, Hyperscale |
| Azure SQL Managed Instance | General Purpose, Business Critical, Next Generation General Purpose (preview) |

As we made the changes, we adhered to the following principles:

- No price increases.
- BSR pricing for ZRS is reduced to match the BSR pricing for LRS.
- BSR pricing for GZRS is reduced to match the BSR pricing for GRS.
- BSR pricing for GRS/GZRS is 2x that of LRS/ZRS.

| Type of backups | What is changing |
|---|---|
| PITR | BSR pricing for ZRS is reduced by 20% to match pricing for LRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance, except the Azure SQL Database Hyperscale service tier. BSR pricing for GZRS is reduced by 41% to match pricing for GRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance. |
| LTR | BSR pricing for ZRS is reduced by 20% to match pricing for LRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance. BSR pricing for GZRS is reduced by 41% to match pricing for GRS for all service tiers in Azure SQL Database and Azure SQL Managed Instance. |
As an example, let's take East US as the region and look at the pricing (per GB/month) for point-in-time backup storage redundancy before and after the change. For the General Purpose and Business Critical service tiers, the pricing is now:

| Backup storage redundancy | Current price | New price | Price change |
|---|---|---|---|
| LRS | $0.10 | $0.10 | None |
| ZRS | $0.125 | $0.10 | 20% less |
| GRS | $0.20 | $0.20 | None |
| GZRS | $0.34 | $0.20 | 41% less |

For the Hyperscale service tier, the new pricing is:

| Backup storage redundancy | Current price | New price | Price change |
|---|---|---|---|
| LRS | $0.08 | $0.08 | None |
| ZRS | $0.10 | $0.10 | None |
| GRS | $0.20 | $0.20 | None |
| GZRS | $0.34 | $0.20 | 41% less |

Similarly, backup storage redundancy prices for long-term retention backups in East US are now:

| Backup storage redundancy | Current price | New price | Price change |
|---|---|---|---|
| LRS | $0.025 | $0.025 | None |
| ZRS | $0.0313 | $0.025 | 20% less |
| GRS | $0.05 | $0.05 | None |
| GZRS | $0.0845 | $0.05 | 41% less |

As a customer, the decision now becomes much easier for you:

- If you need regional resiliency: choose Zone Redundant Storage (ZRS).
- If you need regional and/or geo resiliency: choose Geo Zone Redundant Storage (GZRS).
- If the Azure region does not support availability zones, choose Locally Redundant Storage for regional resiliency and Geo Redundant Storage for geo resiliency, respectively.
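To put this guidance into practice, backup storage redundancy can be chosen when a database is created and changed later. Here is a minimal Azure CLI sketch; the resource group, server, and database names are placeholders:

```azurecli
# Create a database with zone-redundant backup storage.
# --backup-storage-redundancy accepts Local, Zone, Geo, or GeoZone.
az sql db create \
    --resource-group ResourceGroup01 \
    --server contososerver \
    --name WideWorldImporters \
    --backup-storage-redundancy Zone

# Change an existing database to geo-zone-redundant backup storage.
az sql db update \
    --resource-group ResourceGroup01 \
    --server contososerver \
    --name WideWorldImporters \
    --backup-storage-redundancy GeoZone
```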
Please note: the Azure pricing page and Azure pricing calculator will be updated with these new prices soon; the actual pricing meters have already been updated. Additionally, the LTR pricing change for Hyperscale takes effect on January 1, 2025.

Conversion to Hyperscale: Now generally available with enhanced efficiency

We are excited to announce the general availability (GA) of the latest improvements in the Azure SQL Database conversion process to Hyperscale. These improvements bring shorter downtime, better control, and more visibility to the Hyperscale conversion process, making it easier and more efficient for our customers to switch to Hyperscale.

Key enhancements

We received feedback from customers about longer-than-expected downtime, lack of visibility, and unpredictable cutover time during database conversion to Hyperscale. In response, we have made key improvements in this area.

1. Shorter cutover time

Prior to this improvement, the cutover time depended on the database size and workload. With the improvement, we have significantly reduced the average cutover time (the effective unavailability of the database to the application) from about six minutes, sometimes extending to thirty minutes, to less than one minute.

2. Higher log generation rate

By improving the synchronization mechanisms between source and destination while the conversion is in progress, we now support a higher log generation rate on the source database, ensuring that the conversion can complete successfully with write-intensive workloads. This enhancement ensures a smoother and faster migration experience, even for high-transaction-rate environments. We now support up to a 50 MiB/s log generation rate on the source database during conversion. Once converted to Hyperscale, the supported log generation rate is 100 MiB/s, with a higher rate of 150 MiB/s in preview.

3. Manual cutover

One of the most significant improvements is the introduction of a customer-controlled cutover mode called manual cutover. This gives customers more control over the conversion process, enabling them to schedule and manage the cutover at a time of their choice. You can perform the cutover within 24 hours once the conversion process reaches the "Ready to cutover" state.

4. Enhanced progress reporting

Improved progress reporting capabilities provide detailed insights into the conversion process. Customers can now monitor the migration status in real time, with clear visibility into each step of the process. Progress reporting is available via T-SQL, REST API, PowerShell, Azure CLI, or the Azure portal. Detailed progress information about the conversion phases provides greater transparency and control over the process.

How to use it?

All the improvements are applied automatically. One exception is the manual cutover mode, where you need to use a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API while initiating the conversion process. The Azure portal also provides a new option to select manual cutover. Granular progress reporting is available irrespective of the cutover mode.

One of our customers said: "The migration to Hyperscale using the improvements was much easier than expected. The customer-controlled cutover and detailed progress reporting made the process seamless and efficient."

For more information, see our documentation: Convert a Database to Hyperscale - Azure SQL Database | Microsoft Learn

Conclusion

We are thrilled to bring these enhancements to our customers and look forward to seeing how they will transform their Hyperscale conversion experience. This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover time, enhanced control with a manual cutover option, and improved progress visibility.
You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Improving the conversion to Hyperscale with greater efficiency
Update: On 09 April 2025 we announced the general availability of this improvement. For more details, please read the GA announcement.

We are thrilled to announce the latest improvement in the Azure SQL Database conversion process to Hyperscale. This update, now available in public preview, streamlines the database conversion process, reducing downtime and offering greater control and visibility to our customers. Let's dive into the enhancements and what they mean for you.

Overview

We have heard feedback from customers about possible improvements we could make while converting their databases to Hyperscale. Customers reported longer-than-expected downtime during conversion, no insight into the current state of the conversion, and unpredictable cutover time. We acted on the feedback and made several key improvements in this release.

1. Shorter cutover time

One of the most significant enhancements is the reduction in cutover time when converting a database to Hyperscale. This improvement ensures that the cutover process is faster and more efficient, minimizing downtime and connectivity disruptions. Based on our telemetry, the 99th percentile cutover time has been reduced from ~6 minutes to less than ~1 minute.

2. Support for higher log generation rate

During the migration, the new process supports a higher log generation rate on the source database, ensuring that the conversion can handle more write-intensive workloads and complete successfully. In the previous version, database conversions were not able to finish when log generation on the source was more than 8 MiB/s continuously throughout the conversion process. With this improvement we can now support up to 50 MiB/s log generation on the source and still succeed. This was achieved by improving the synchronization mechanisms between source and destination while the conversion is in progress.

3. Manual cutover

A new option introduced with this improvement is manual cutover. This allows customers to initiate the conversion process and pause it when the database is ready to cut over to Hyperscale, giving them up to 72 hours to perform the cutover at a time that best suits their operational needs. If the manual cutover is not completed within the given timeframe, the process is automatically canceled and the database remains on the original service tier, without any data loss. If the new parameter is not passed, the cutover experience is the same as before, i.e. automatic cutover as soon as the Hyperscale database is ready.

4. Granular progress reporting

Customers can now monitor the entire conversion process at a granular level. Whether using T-SQL, REST API, PowerShell, Azure CLI, or the Azure portal, detailed progress information about the conversion phases is available, providing greater transparency and control over the process.

Customer feedback

Throughout the private preview phase, we have received overwhelmingly positive feedback from several customers about this improvement. John Nafa, Cloud Architect, Asurgent, says: "The new database conversion experience from Microsoft has been incredibly seamless and efficient, making the conversion process to Azure SQL Database Hyperscale smooth and straightforward. The progress reporting and manual cutover features were especially valuable, providing real-time insights and ensuring a smooth transition. It's been a pleasure working with this improvement, and I'm excited to see it become available to a wider audience."
Get started

Of the four key improvements mentioned above, most are applied automatically. To utilize the manual cutover option, you need to use a new optional parameter in T-SQL, PowerShell, Azure CLI, or REST API while initiating the conversion process. The Azure portal also provides a new option to select manual cutover. Granular progress reporting is available irrespective of the cutover mode.

Use manual cutover

Let us go through the new options available in the various interfaces with this improvement.

Azure portal

To use manual cutover, a new Cutover mode option is provided in the Azure portal as part of the conversion flow. If you have not seen this option in the Azure portal yet, don't worry. We are enabling this capability now and you can expect it to be available within days.

Commands

A new parameter has been introduced in each interface to initiate the conversion process with the manual cutover option. The following samples convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore serverless Hyperscale database.

T-SQL:

```sql
ALTER DATABASE WideWorldImporters
MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_S_Gen5_8')
WITH MANUAL_CUTOVER;
```

PowerShell:

```powershell
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" `
    -DatabaseName "WideWorldImporters" -Edition "Hyperscale" `
    -RequestedServiceObjectiveName "HS_S_Gen5_8" -ManualCutover
```

Azure CLI:

```azurecli
az sql db update --resource-group ResourceGroup01 --server contososerver \
    --name WideWorldImporters --edition Hyperscale \
    --service-objective HS_S_Gen5_8 --manual-cutover
```

Note: additional parameters such as backup storage redundancy or zone redundancy can also be added. Refer to the documentation for Set-AzSqlDatabase (Az.Sql) | Microsoft Learn (for PowerShell) and az sql db | Microsoft Learn (for Azure CLI). The REST API also has new properties, manualCutover and performCutover. Refer to Databases - Create Or Update - REST API (Azure SQL Database) | Microsoft Learn for more details.

Monitor the conversion progress

The progress of the conversion can be monitored using various interfaces. Detailed phase information is available in the dynamic management view (DMV) sys.dm_operation_status for those using T-SQL. Similar command options are available for PowerShell and Azure CLI users.

Azure portal

Progress of the conversion process can be seen in the portal by clicking the Details hyperlink for the ongoing operation.

Monitoring programmatically

As part of this improvement, we have introduced new columns in the sys.dm_operation_status DMV called phase_code, phase_desc, and phase_info, which are populated during the conversion process. Refer to the documentation for more details. Similarly, new output columns, phase and phaseInformation, are available under OperationPhaseDetails if using PowerShell, and under operationPhaseDetails if using Azure CLI. Here is the quick reference for commands via the various interfaces.
T-SQL:

```sql
SELECT state_desc, phase_code, phase_desc,
       JSON_VALUE(phase_info, '$.currentStep') AS currentStep,
       JSON_VALUE(phase_info, '$.totalSteps') AS totalSteps,
       phase_info, start_time, error_code, error_desc, last_modify_time
FROM sys.dm_operation_status
WHERE resource_type = 0
  AND operation = 'ALTER DATABASE'
  AND major_resource_id = 'WideWorldImporters';
```

PowerShell:

```powershell
(Get-AzSqlDatabaseActivity -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" `
    -DatabaseName "WideWorldImporters").OperationPhaseDetails
```

Azure CLI (output parsed here with PowerShell's ConvertFrom-Json):

```powershell
(az sql db op list --resource-group ResourceGroup01 --server contososerver `
    --database WideWorldImporters | ConvertFrom-Json).operationPhaseDetails.phase
```

Perform manual cutover

The manual cutover, if specified, can be performed using the Azure portal or programmatically, ensuring a smooth transition to the Hyperscale tier.

Azure portal

The cutover can be initiated from the same screen where the progress of the conversion is reported.

Commands

Here is the quick reference of the commands to perform the cutover.

T-SQL:

```sql
ALTER DATABASE WideWorldImporters PERFORM_CUTOVER;
```

PowerShell:

```powershell
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" `
    -DatabaseName "WideWorldImporters" -PerformCutover
```

Azure CLI:

```azurecli
az sql db update --resource-group ResourceGroup01 --server contososerver \
    --name WideWorldImporters --perform-cutover
```

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering faster cutover time, enhanced control with a manual cutover option, and improved progress visibility. We encourage you to try these features, provide your valuable feedback, and help us refine this feature for general availability. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Azure SQL Database Hyperscale – lower, simplified pricing!
Azure is a cloud platform designed to simplify building powerful and economical modern applications. Azure SQL Database Hyperscale is a leading relational database service offering for cloud-born applications. In addition to a rock-solid relational database foundation, Hyperscale offers several exciting modern developer features such as REST and GraphQL endpoints, JSON data support, and external API invocation. Hyperscale was built on core cloud capabilities and offers auto-scaling, multi-tiered high-performance storage, independently scalable compute, read scale-out, predictable and quick operations like database copy, and much more!

We want to ensure that all customers use Hyperscale for their applications – no matter what size. Today, we're excited to announce changes to the way Hyperscale is priced. In most cases, you will see significantly lower costs – allowing you to invest the resulting savings in the resources you need to build AI-ready applications, increase the resiliency of your databases, and many other benefits unique to Hyperscale. Let's take a deeper look at this exciting announcement!

What is changing?

We are reducing the price of compute by $0.10 USD per vCore per hour (some exceptions are listed later in this post), which in many cases can be up to 35% less than the pre-announcement ("current rate") compute cost. The storage cost for Hyperscale has also been aligned with the market for developer databases and the pricing of other Azure Database offerings, while still not charging for I/O operations. The new pricing will take effect and be displayed on the Azure SQL Database pricing page and Azure pricing calculator on December 15th.

Examples of the pricing change

Here are some examples to illustrate how the new pricing works compared to the existing pricing. Note that all costs are estimated and assume a 730-hour month.

Case 1: Hyperscale single database with 6-vCore provisioned compute, 0 HA replicas, and 50 GB of allocated storage, East US

| | Existing pricing | New pricing |
|---|---|---|
| Compute cost | USD 1,237.85 | USD 800.05 |
| Storage cost | USD 5.00 | USD 12.50 |
| Total cost | USD 1,242.90 | USD 812.55, saving 35% |

Case 2: Hyperscale single database with 12-vCore provisioned compute, 1 HA replica, and 200 GB of allocated storage, East US

| | Existing pricing | New pricing |
|---|---|---|
| Compute cost | USD 4,075.90 | USD 3,200.20 |
| Storage cost | USD 20.00 | USD 50.00 |
| Total cost | USD 4,095.90 | USD 3,250.20, saving 21% |

Case 3: Hyperscale single database with 32-vCore provisioned compute, 1 HA replica, and 18 TB of allocated storage, East US

| | Existing pricing | New pricing |
|---|---|---|
| Compute cost | USD 10,869.08 | USD 8,533.88 |
| Storage cost | USD 1,843.20 | USD 4,608.00 |
| Total cost | USD 12,712.28 | USD 13,141.88, 3% higher |
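As a quick sanity check, the case totals decompose into per-unit rates. The per-vCore-hour and per-GB-month figures below are back-calculated from the Case 1 totals above, not official list prices:

$$
\begin{aligned}
\text{compute (new)} &\approx 6 \text{ vCores} \times 730 \text{ h} \times \$0.1827/\text{vCore-h} \approx \$800.05 \\
\text{storage (new)} &= 50 \text{ GB} \times \$0.25/\text{GB-month} = \$12.50 \\
\text{total (new)} &\approx \$800.05 + \$12.50 = \$812.55
\end{aligned}
$$

The existing-pricing compute figure implies roughly \$0.2826 per vCore-hour, consistent with the \$0.10 per vCore per hour reduction announced above.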
Conclusion

In conclusion, these pricing changes for Hyperscale are aligned with our mission to provide the best features, with the highest performance and scalability, at a great price for all our customers. Our team is here to assist with any questions you may have about these changes. Please leave a comment on this blog and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Frequently Asked Questions

When does the change take effect, and what does it impact?

The pricing changes take effect on December 15th, 2023 at 00:00 hours UTC. The changes will apply to the following resources created on or after December 15th, 2023:

- Any newly created Hyperscale single provisioned compute databases.
- Any (existing or new) Hyperscale single serverless databases (currently in preview).
- Any (existing or new) Hyperscale elastic pools (currently in preview).
- Any newly created Hyperscale elastic pooled databases (currently in preview).

What happens to my existing Hyperscale resources?

To start with, nothing changes until December 15th, 2023. Here's what will happen starting December 15th, 2023:

- To provide a seamless experience without any impact on existing workloads, all existing Hyperscale single databases with provisioned compute created before December 15th, 2023, will continue to be billed at the existing rates for a period of up to 3 years (ending December 14th, 2026). Customers will receive further notice of the pricing change to their Hyperscale databases in advance of December 14th, 2026.
- All existing Hyperscale single databases with serverless compute (currently in preview) will automatically switch to the new pricing starting December 15th, 2023.
- All existing Hyperscale elastic pools (currently in preview) will automatically switch to the new pricing starting December 15th, 2023.

What if I update / change a database to a different deployment option?

Hyperscale allows seamless, rapid scaling of the database compute. You can also scale a Hyperscale database to move from provisioned compute to serverless compute (or the other way around). You can add an existing Hyperscale database to an elastic pool, or move an elastic pooled database out of the elastic pool to a single database. Here's how your costs are impacted if you perform any of these changes on or after December 15th, 2023:

| Change | Impact |
|---|---|
| Hyperscale (serverless single, or elastic pooled) database is changed to a Hyperscale single database with provisioned compute. | The final cost of the database is based on when the database was created. If the database was created prior to December 15th, 2023, it is billed at the existing pricing. If the database was created on or after December 15th, 2023, it is billed at the new pricing. |
| Hyperscale database is changed to a Hyperscale single database with serverless compute. | The database is billed at the new pricing. |
| Hyperscale database is added to an elastic pool on or after December 15th, 2023. | The database's storage is charged at the "new" storage pricing. There is no separate compute cost for a database in an elastic pool. |
| Hyperscale single database with provisioned compute is scaled up or down, has high-availability replicas added or removed, or has its hardware family changed. | The pricing model remains as it was before the scaling operation. The actual costs for compute resources will change based on the scaling operation (for example, they will increase if the database is scaled up or replicas are added). Note that if you are using reserved capacity ("reservations") and are changing the hardware family, you will need to exchange those reservations to align to the new hardware family. The costs of storage resources associated with the single database will not change due to the scaling operation itself. |
| Any copy of a Hyperscale database created as a Hyperscale single database on or after December 15th, 2023. | The database copy uses the new pricing, regardless of when the original database was created. |
| Any new single database created via restore or geo-replication operations on or after December 15th, 2023. | The new database uses the new pricing, regardless of when the original database was created. |
| Any non-Hyperscale database updated to Hyperscale on or after December 15th, 2023. | The new database uses the new pricing, regardless of when it was originally created. |

See the summarized tables below for a quick reference.
Single databases

Hyperscale single databases with provisioned compute:

| Scenario | Compute | Storage |
|---|---|---|
| Before December 15th, 2023 | Existing provisioned compute price | Existing storage prices |
| On or after December 15th, 2023; database created or migrated to Hyperscale before December 15th, 2023 | Existing provisioned compute price | Existing storage prices |
| On or after December 15th, 2023; database created or migrated to Hyperscale on or after December 15th, 2023 | New provisioned compute price | New storage prices |

Hyperscale single databases with serverless compute:

| Scenario | Compute | Storage |
|---|---|---|
| Before December 15th, 2023 | Existing serverless compute price | Existing storage prices |
| On or after December 15th, 2023; database created or migrated to Hyperscale before December 15th, 2023 | New serverless compute price | New storage prices |
| On or after December 15th, 2023; database created or migrated to Hyperscale on or after December 15th, 2023 | New serverless compute price | New storage prices |

Elastic pools and pooled databases

Hyperscale elastic pools (compute is charged per pool; storage is charged per database):

| Scenario | Compute |
|---|---|
| Before December 15th, 2023 | Existing provisioned compute price |
| On or after December 15th, 2023; pool created before December 15th, 2023 | New provisioned compute price |
| On or after December 15th, 2023; pool created on or after December 15th, 2023 | New provisioned compute price |

Hyperscale elastic pooled databases (no separate compute charge; storage is charged per database):

| Scenario | Storage |
|---|---|
| Before December 15th, 2023 | Existing storage prices |
| On or after December 15th, 2023; database created or migrated to Hyperscale before December 15th, 2023 | New storage prices |
| On or after December 15th, 2023; database created or migrated to Hyperscale on or after December 15th, 2023 | New storage prices |

Can I continue to use reservations for Hyperscale?

With reservations, you make a commitment to use SQL Database for a period of one or three years to get a significant discount on the compute (vCores) costs. There are no changes to how reservations apply to compute (vCores) pricing, and you can continue to use reserved capacity ("reservations") for Hyperscale single databases with provisioned compute and Hyperscale elastic pools.

How can I move my existing Hyperscale databases to the new pricing?

Currently, there is no built-in support to switch pricing for existing Hyperscale databases. However, you can consider one of the redeployment methods (database copy, point-in-time restore, or geo-replication) to create a new "copy" of the existing Hyperscale database. The newly created "copy" of the database will be billed at the new pricing. If you do decide to go down this path, consider creating the new database with zone redundancy, as described here.
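Tying back to the redeployment question above, here is a minimal T-SQL sketch of the database copy approach; the database names are placeholders:

```sql
-- Run while connected to the master database of the logical server.
-- The copy is a transactionally consistent database and, being newly
-- created, falls under the new pricing described in this post.
CREATE DATABASE WideWorldImporters_copy AS COPY OF WideWorldImporters;

-- Monitor the copy operation from master.
SELECT d.name, d.state_desc, c.percent_complete
FROM sys.dm_database_copies AS c
JOIN sys.databases AS d
    ON c.database_id = d.database_id;
```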
Do you have any projections of likely costs when converting non-Hyperscale DBs to Hyperscale?

We recommend you use the Azure pricing calculator to compare the base cost for compute and storage. However, the cost of backups can vary depending on the nature of the workload and the configured backup retention settings. Databases in the (DTU-based) Basic, Standard, and Premium service tiers include backup in the base cost. When converting such databases to Hyperscale, keep in mind that backups in Hyperscale can be a significant factor in the overall cost. It is only possible to determine this after sufficient testing with realistic workloads on Hyperscale, and we strongly recommend you do such testing before converting DTU service tier databases to Hyperscale.

Does the reduction in compute price apply to all subscription offer types?

In the case of dev/test subscriptions and related offer types, including Enterprise Dev/Test, where you were already not paying for license costs, there will not be a further reduction in the price of compute. For such subscriptions, the storage costs for Hyperscale resources will still be based on the guidelines in the "When does the change take effect, and what does it impact?" and "What happens to my existing Hyperscale resources?" sections in this blog post.

Can I still use Azure Hybrid Benefit for Hyperscale?

The change in price per vCore was achieved by eliminating the software license fee for Hyperscale resources. Hence, Azure Hybrid Benefit no longer applies to the Hyperscale tier, except for Hyperscale single databases with provisioned compute which were created prior to December 15th, 2023. Even for those older databases, Azure Hybrid Benefit can only be used until December 14th, 2026. Note that specifying the values BasePrice or LicenseIncluded for the LicenseType parameter in APIs / SDKs / PowerShell / CLI is only relevant for Hyperscale single databases with provisioned compute which were created prior to December 15th, 2023. These values are effectively ignored for all other types of Hyperscale resources.

Current limitations and known issues

- Consider a database which was created originally as a non-Hyperscale database prior to December 15th, 2023. If this database is then migrated to Hyperscale on or after December 15th, 2023, the Cost Summary section of the Azure portal will incorrectly show the "old" pricing for this database. The Cost Summary section in the Azure portal is only intended to be an estimate. We recommend you rely on the Cost Management section in the Azure portal to review actual costs.
- The Azure portal cost summary view in the Greater China regions does not show the updated pricing information accurately. This is only a display issue and does not impact billing in any way. Please refer to the pricing calculator or the pricing page for accurate pricing information.

Shrink for Azure SQL Database Hyperscale is now generally available
Today we are thrilled to announce the general availability (GA) of database shrink in Azure SQL Database Hyperscale. This milestone marks another significant improvement in our Hyperscale service, providing our customers with more flexibility and efficiency in managing their database storage.

Overview

Database shrink in Azure SQL Database allows customers to reclaim unused space within their databases to optimize storage costs. This is now available in the Hyperscale service tier too. The feature has been highly anticipated by many customers, and we are excited to deliver it with robust capabilities and a seamless user experience. The improvement requires no new learning, as the same DBCC SHRINKDATABASE and DBCC SHRINKFILE commands are used.

Database shrink was first announced for Hyperscale in public preview last year. During the preview phase, we received invaluable feedback from our customers, which helped us refine and enhance this capability. We are grateful for the active participation and insights shared by customers, which played a crucial role in shaping the feature.

Key features

- Storage optimization: database shrink effectively reclaims unused space, reducing the allocated storage footprint of your database.
- Cost efficiency: by optimizing storage usage, customers can potentially lower their storage costs.
- Ease of use: the feature uses the same syntax as other service tiers and SQL Server, so customers can seamlessly reuse existing scripts, minimizing disruption during adoption.

How to use

To help you get started with the shrink functionality, we have provided comprehensive documentation including example scripts. Here are the basic steps to use database shrink in your Hyperscale database (a minimal T-SQL sketch follows the steps):

1. Connect to your Hyperscale database through your preferred management tool, such as SQL Server Management Studio or Azure Data Studio.
2. Evaluate the free space available in the database.
3. Execute the shrink command using the provided T-SQL (DBCC SHRINKDATABASE and DBCC SHRINKFILE).
4. Optionally, monitor the shrink process through the DMV to ensure successful completion.
5. Review the reclaimed space and rerun with adjusted parameters, if necessary.
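Here is the minimal T-SQL sketch referenced above; the database name is a placeholder, and the 10% free-space target is just an example to tune for your workload:

```sql
-- 1. Evaluate allocated vs. used space per data file (sizes are in 8 KB pages).
SELECT file_id,
       name,
       CAST(size AS bigint) * 8 / 1024 AS allocated_mb,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';

-- 2. Shrink the database, targeting 10% free space remaining.
DBCC SHRINKDATABASE (N'WideWorldImporters', 10);

-- 3. Monitor shrink progress while it runs.
SELECT session_id, command, percent_complete
FROM sys.dm_exec_requests
WHERE command IN (N'DbccFilesCompact', N'DbccSpaceReclaim', N'DbccLOBCompact');
```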
Conclusion

The release of database shrink in Hyperscale is a testament to our commitment to continuous improvement of the Hyperscale service tier. The general availability of database shrink in Azure SQL Database Hyperscale is a major milestone, and we are excited to see the positive impact it will have on your database management. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Troubleshoot and optimize Hyperscale named replicas performance

In the Azure SQL Database Hyperscale service tier, managing named replicas effectively is crucial to maintaining optimal performance. By monitoring log rates and understanding the impact of undersized replicas, you can prevent potential issues such as log rate reduction on the primary replica. Ensuring that each named replica is appropriately sized for its workload helps maintain smooth operations and avoid performance bottlenecks. Remember, the flexibility of Hyperscale allows you to tailor each replica to specific needs, whether through different SLOs or serverless compute options. By staying vigilant and proactive, you can leverage the full potential of Azure SQL Database Hyperscale to meet your workload demands efficiently.
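As a sketch of that sizing flexibility, a named replica can be created with its own service level objective. This Azure CLI example uses placeholder names, and HS_Gen5_8 simply stands in for an SLO sized to the replica's read workload:

```azurecli
# Create a named replica on the same logical server as the primary,
# with its own (here, larger) service level objective.
az sql db replica create \
    --resource-group ResourceGroup01 \
    --server contososerver \
    --name WideWorldImporters \
    --partner-server contososerver \
    --partner-database WideWorldImporters_replica1 \
    --secondary-type Named \
    --service-objective HS_Gen5_8
```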
Stream data changes from Azure SQL Database - a call for Private Preview of Change Event Streaming

In today's rapidly evolving digital landscape, modern, flexible, real-time data integration from various sources has become more important than ever. Organizations increasingly rely on data-driven, real-time insights to make informed decisions, optimize operations, and drive innovation. In this context, we are excited to announce a private preview for Change Event Streaming (CES), which enables you to stream data changes from your SQL database directly into Azure Event Hubs.

Starting today, you can apply for the private preview program. Participants can test the functionality on Azure SQL Database and Azure Event Hubs (a minimal Event Hubs setup sketch appears after the resource list below), and support for more sources and destinations is planned. Participating in the private preview gives you an opportunity to work with the product team, test the functionality, provide your feedback, and influence the final release. Note: CES can be tested on Azure SQL Database, and that includes its free offer.

Typical use cases for Change Event Streaming are:

- Building event-driven systems on top of your relational databases, with minimal overhead and easy data integration.
- Data synchronization across systems – more specifically, syncing data between microservices or keeping distributed systems in sync.
- Implementing real-time analytics on top of your relational data.
- Auditing and monitoring that requires tracking changes to sensitive data or logging specific events.

The main advantages of using a message broker such as Azure Event Hubs with Change Event Streaming are:

- Scalability: message brokers are designed for high throughput and can scale independently of the database.
- Decoupling: systems downstream from the database and message broker are loosely coupled, enabling greater flexibility and easier maintenance.
- Multi-consumer support: Azure Event Hubs allows multiple consumers to process the same data stream, enabling varied use cases from a single source.
- Real-time integration: enables seamless integration between OLTP systems and downstream systems for real-time data flow.

If these use cases and advantages resonate with the requirements of your architectures, systems, and solutions, then Change Event Streaming is the right choice for you. To apply for the private preview, please send an email to sqlcesfeedback [at] microsoft [dot] com, and we'll get back to you with more details!

Useful resources

- Free Azure SQL Database.
- Free Azure SQL Managed Instance.
- Azure SQL – Year 2024 in review.
- Azure SQL YouTube channel.
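If you plan to try the preview, you will need an Azure Event Hubs destination for the change stream. A minimal Azure CLI sketch with placeholder names follows; the CES-specific configuration itself is shared with participants as part of the preview program:

```azurecli
# Create an Event Hubs namespace and an event hub to receive change events.
az eventhubs namespace create \
    --resource-group ResourceGroup01 \
    --name ces-demo-namespace \
    --location eastus

az eventhubs eventhub create \
    --resource-group ResourceGroup01 \
    --namespace-name ces-demo-namespace \
    --name sql-change-events
```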
Announcing enhancements to Azure SQL Database Hyperscale

Increased maximum database size

We are pleased to announce that the maximum database size in Azure SQL Database Hyperscale has increased from 100 TiB to 128 TiB. This enhancement is now generally available (GA) for single Hyperscale databases and will be released later for Hyperscale elastic pools. The expansion allows even greater flexibility and capacity for managing large datasets, accommodating the needs of businesses with substantial data storage requirements.

Higher limits for transaction log generation rate

In our continuous effort to improve performance, the log generation rate in Azure SQL Database Hyperscale has been increased from 100 MiB/s to 150 MiB/s. The increased transaction log generation rate limit is currently in limited public preview, and you can sign up using this form: link. A higher log generation rate means faster data processing and better handling of write-intensive workloads. This ensures that your applications run smoothly and efficiently, even during peak usage. Whether you're dealing with bulk data inserts, high-volume transaction processing, real-time data ingestion, or rebuilding large indexes, the enhanced log generation rate provides the performance boost needed to keep your systems responsive and reliable. (A short T-SQL sketch for monitoring log rate utilization appears at the end of this post.)

Continuous priming

Another new feature we are introducing is continuous priming. This feature is designed to optimize performance during failovers by priming secondary compute replicas. Here's how it works:

- Continuous priming collects information about the most frequently accessed pages on all Hyperscale compute replicas, both primary and secondary.
- This information is aggregated at the Hyperscale storage layer (page servers).
- All Hyperscale compute replicas then use this list of most frequently accessed pages, which represents the typical customer workload, to "prime" both the buffer pool (BP) and the resilient buffer pool extension (RBPEX) with any missing pages.
- The priming process runs continuously to keep up with changes in the customer working set.

With continuous priming, local HA replicas prime themselves with the pages being used on the primary replica. This ensures that performance remains consistent and optimized, even during failovers. Please note that continuous priming is not applicable to Hyperscale databases with serverless compute, or to named replicas. If you're interested in enrolling in this preview, please sign up using the provided link.

Conclusion

We are confident that these new features and enhancements will significantly benefit your operations, providing more scalability, faster data processing, and greater reliability. Stay tuned for more updates as we continue to innovate and improve Azure SQL Database Hyperscale to meet your needs. Please share your feedback and questions by leaving a comment; you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
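Finally, here is the monitoring sketch referenced in the log generation rate section above. sys.dm_db_resource_stats reports log write utilization as a percentage of your database's current log rate cap, so values persistently near 100 indicate the workload is being throttled by the log rate governor:

```sql
-- Recent log write utilization, sampled every 15 seconds.
-- avg_log_write_percent is relative to the service objective's log
-- generation rate limit (for example, 100 MiB/s on Hyperscale today).
SELECT TOP (40)
       end_time,
       avg_log_write_percent,
       avg_cpu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```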