Azure SQL v/s Azure Synapse SQL (Dedicated Pool, Serverless Pool) - Part 2
Published Nov 23 2023

This article is intended for readers who are considering options to move their data into Azure and prefer T-SQL to query that data.

 

Overview

This is a follow-up blog to Azure SQL v/s Azure Synapse SQL (Dedicated Pool, Serverless Pool) - Part 1. By the time you get to this article, you will have narrowed down your choice to either an OLTP use case or a data warehouse use case.

If the choice is leaning towards Azure SQL, you need to understand the following key decision factors:

 

A) Files & File Groups

Every SQL Server database has two types of operating system files, namely data files and log files.

- Data files contain actual data in objects such as tables, indexes, stored procedures, and views.

- Log files contain the information that is required to recover all transactions in the database.

Transaction log files are never part of any filegroup. In SQL Server the default file location is configured during installation, whereas in Azure SQL Database and Azure SQL Managed Instance file paths are assigned automatically and the file location can't be specified. Under the hood, both Azure services still work with data stored in these file formats.

[Image: SQLFamily.png]
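For a quick look at these files, you can query the sys.database_files catalog view from the context of a database; a minimal sketch (in Azure SQL the physical paths are managed by the service):

```sql
-- List the data and log files behind the current database.
SELECT name,
       type_desc,                    -- ROWS (data file) or LOG (transaction log file)
       physical_name,                -- path is assigned and managed by the service in Azure SQL
       size * 8 / 1024 AS size_mb    -- size is reported in 8-KB pages
FROM sys.database_files;
```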

B) T-SQL Differences

If you are migrating SQL workloads to one of the Azure SQL flavors, most SQL features will be compatible, but you can always refer to the Feature Comparison List here to validate this. If you are also considering SQL Server on a VM as one of your options, you might want to refer to the exhaustive list here.
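One commonly flagged example of such a difference: the USE statement and three-part cross-database names work on SQL Server and Azure SQL Managed Instance, but are not supported within a single Azure SQL Database (the database and table names below are hypothetical):

```sql
-- Works on SQL Server and Azure SQL Managed Instance,
-- but fails against a single Azure SQL Database.
USE SalesDB;                        -- switching database context is not supported in Azure SQL Database
GO

SELECT TOP (10) *
FROM InventoryDB.dbo.StockLevels;   -- cross-database query; needs elastic query in Azure SQL Database
```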

 

This list is often overlooked or missed during decision making. The total cost of an engine shift (from on-premises or anywhere else to Azure) comprises two aspects: the direct cost benefits and the cost of the refactoring effort. The general rule of thumb here is that if the cost of refactoring is greater than the cost benefits you gain from choosing a particular service (with no actual current use case for the additional features offered by the new engine), don't change the engine.

 

C) Purchasing Models

There are two purchasing models: the DTU-based purchasing model and the vCore-based purchasing model.

The following table compares and contrasts the vCore-based and DTU-based purchasing models:

DTU-based
- Description: This model is based on a bundled measure of compute, storage, and I/O resources. Compute sizes are expressed in DTUs for single databases and in elastic database transaction units (eDTUs) for elastic pools. For more information about DTUs and eDTUs, see What are DTUs and eDTUs?
- Best for: Customers who want simple, preconfigured resource options.

vCore-based
- Description: This model allows you to independently choose compute and storage resources. The vCore-based purchasing model also allows you to use Azure Hybrid Benefit for SQL Server to save costs.
- Best for: Customers who value flexibility, control, and transparency.
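The purchasing model surfaces in T-SQL through the service objective assigned to a database. As an illustration (database names and sizes below are placeholders), run against the master database of a logical server:

```sql
-- DTU-based: Standard tier, 50 DTUs
CREATE DATABASE SalesDb_Dtu
    ( EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2' );

-- vCore-based: General Purpose, Gen5 hardware, 2 vCores
CREATE DATABASE SalesDb_Vcore
    ( EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2' );

-- An existing database can be moved between models with ALTER DATABASE
ALTER DATABASE SalesDb_Dtu
    MODIFY ( EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2' );
```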

 

In our SQL Migration Repo we have provided options for how to go about choosing the service or SQL PaaS (Platform as a Service) instance type. Benchmarking, if not done right, can result in a negative impact on your solution's performance. The repo has detailed guidance on the set of tools that help you assess or migrate your existing workloads. Always refer to the Azure SQL pricing calculator for the latest service tiers and offerings.

 

D) Licensing

If you are considering a Platform as a Service model, the SQL Server licensing cost is taken care of by the service provider. There are additional discounted models for cost savings, namely Azure Hybrid Benefit and reserved capacity.

As of the date this article was published, Azure SQL Database offers pause and resume functionality in the serverless model, whereas the Stop and Start functionality for a SQL MI instance is in preview. Implementing these techniques after carefully weighing the pros and cons is another key decision factor.

 

E) Backup and Restore

Azure SQL Database creates:

- Full backups every week
- Differential backups every 12 to 24 hours
- Frequent transaction log backups (approximately every 10 minutes)

Azure SQL Managed Instance creates:

- Full backups every week
- Differential backups every 12 to 24 hours
- Frequent transaction log backups (approximately every 10 minutes)

The exact frequency of transaction log backups is based on the compute size and the amount of database activity. Accelerated Database Recovery (ADR), introduced with SQL Server 2019, improves database availability and is enabled by default for Azure SQL Database and Azure SQL Managed Instance. There are also options to configure long-term retention (LTR) of full backups for up to 10 years in Azure Blob Storage. By default, new databases in Azure SQL Database store backups in geo-redundant storage blobs that are replicated to a paired region.
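A quick way to confirm ADR on a given server is to check the database options; a minimal sketch:

```sql
-- Check whether Accelerated Database Recovery is enabled per database.
SELECT name,
       is_accelerated_database_recovery_on
FROM sys.databases;
```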

 

When you restore a database, the service determines which full, differential, and transaction log backups need to be restored.

Both options also provide the Restore an Existing Database to a Point in Time, Restore a Deleted Database to a Point in Time, and Restore a Database to Another Region features.

 

F) High Availability

Azure SQL Database has the options of Active Geo-Replication and Auto-Failover Groups, which can be configured to run secondary copies at additional cost. Azure SQL MI has the option of Auto-Failover Groups, which can likewise be configured to run secondary copies at additional cost. Some customers prefer redirecting reporting queries to the secondary so that the primary handles the live transactional workload alone. The copies are maintained asynchronously, and the replication lag can be monitored through SQL queries.
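For example, on the primary database of a geo-replication link you can check the replication state and lag with the built-in DMV; a minimal sketch:

```sql
-- Run on the primary database of a geo-replication / failover group link.
SELECT partner_server,
       partner_database,
       replication_state_desc,      -- e.g. CATCH_UP once the secondary is in sync
       replication_lag_sec,         -- approximate number of seconds the secondary is behind
       last_replication
FROM sys.dm_geo_replication_link_status;
```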

 

The choice here is determined by the following aspects:

1) Does the business require a hot standby (active secondary databases) or a cold standby (databases that can be created on the fly from backups)?

2) For a hot standby, you need to consider the cost of running the secondary and the time lag between primary and secondary.

3) For a cold standby, you need to consider the cost of backup storage and the time needed to create the secondary.

Perform recovery drills to arrive at a reasonable estimate of the Recovery Time Objective (RTO) and Recovery Point Objective (RPO). The decision to be made here is between the need for a hot standby or a cold standby; it's the balancing act between cost and time that helps you arrive at the choice. As of when this article was published, the following table compares the RTO and RPO of each recovery option. Always refer to the Azure SQL Business Continuity docs for the latest updates on these intervals.

Recovery method                              RTO     RPO
Geo-restore from geo-replicated backups      12 h    1 h
Auto-failover groups                         1 h     5 s
Manual database failover                     30 s    5 s

 

G) Performance Tuning

Azure SQL Database and Azure SQL Managed Instance automatic tuning might be one of the most impactful features for consideration. When enabled, the SQL Server Database Engine monitors the queries that are executed on the database and automatically improves the performance of the workload. Using built-in intelligence, it monitors the workload, identifies critical Transact-SQL queries, and identifies indexes that should be added to improve performance, or indexes that are rarely used and could be removed. Automatic tuning learns horizontally from all databases on Azure through AI (service managed; it cannot be modified or viewed) and dynamically improves its tuning actions. The longer a database runs with automatic tuning on, the better it performs.

Depending on query execution frequency, the validation process can take from 30 minutes to 72 hours, taking longer for less frequently executed queries. If at any point during validation a regression is detected, changes are reverted immediately. Tuning operations applied to databases are fully safe for the performance of your most intense workloads. The system has been designed with care not to interfere with user workloads: automated tuning recommendations are applied only at times of low utilization of CPU, Data IO, and Log IO.
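Automatic tuning can also be enabled directly with T-SQL; a minimal sketch, run in the context of the target database:

```sql
-- Inherit the Azure defaults for automatic tuning on the current database.
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING = AUTO;

-- Or enable an individual option explicitly, e.g. automatic plan correction.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON );
```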

 

The automatic tuning options available in Azure SQL Database and Azure SQL Managed Instance are:

CREATE INDEX
- Description: Identifies indexes that may improve performance of your workload, creates indexes, and automatically verifies that performance of queries has improved. When recommending a new index, the system considers space available in the database. If index addition is estimated to increase space utilization to over 90% toward maximum data size, the index recommendation is not generated. Once the system identifies a period of low utilization and starts to create an index, it will not pause or cancel this operation even if resource utilization unexpectedly increases. If index creation fails, it will be retried during a future period of low utilization. Index recommendations are not provided for tables where the clustered index or heap is larger than 10 GB.
- SQL Database support: Yes
- SQL Managed Instance support: No

DROP INDEX
- Description: Drops unused (over the last 90 days) and duplicate indexes. Unique indexes, including indexes supporting primary key and unique constraints, are never dropped. This option may be automatically disabled when queries with index hints are present in the workload, or when the workload performs partition switching. On Premium and Business Critical service tiers, this option will never drop unused indexes, but will drop duplicate indexes, if any.
- SQL Database support: Yes
- SQL Managed Instance support: No

FORCE LAST GOOD PLAN (automatic plan correction)
- Description: Identifies Azure SQL queries using an execution plan that is slower than the previous good plan, and forces queries to use the last known good plan instead of the regressed plan.
- SQL Database support: Yes
- SQL Managed Instance support: Yes
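The recommendations generated by automatic tuning, and their current state, can be inspected per database; a minimal sketch:

```sql
-- List automatic tuning recommendations for the current database.
SELECT name,
       type,
       reason,
       score,                                           -- estimated impact of the recommendation
       JSON_VALUE(state, '$.currentValue') AS current_state
FROM sys.dm_db_tuning_recommendations;
```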
 
In the next articles we will dive deeper into the Azure Synapse SQL key decision factors.

Good Reads:

References:

  1. Azure SQL
  2. Azure SQL Documentation
  3. Azure SQL MI Documentation
  4. Azure SQL Elastic Pool
  5. Azure SQL Serverless
  6. Azure SQL Hyperscale

Don't forget to share a like if this helps!

Credit: Thanks Ben Harding for review and guidance

FastTrack for Azure:  Move to Azure efficiently with customized guidance from Azure engineering. 

 

 
