Introduction
Migrating from Db2 for z/OS to Azure SQL Database Hyperscale is a major step toward modernizing your mainframe relational data infrastructure on Azure's managed relational database offering. But what happens to all those daily and periodic Db2 database maintenance tasks you used to perform on the mainframe?
To make migration easier, this post provides a recommended mapping between Db2 z/OS maintenance tasks and Azure SQL DB Hyperscale maintenance tasks, showing:
- Tasks you must still schedule or trigger yourself
- Tasks that are only needed after specific events (like large data loads)
- Tasks that are now fully handled by Azure SQL Database PaaS or are simply not applicable anymore
- Best practices that, while not mandatory, are strongly recommended
Mainframe Db2 vs Azure SQL Database Hyperscale Database Maintenance
For each maintenance activity, you will find actionable guidance on how to perform it in Azure SQL Database Hyperscale, helping you streamline operations and focus on what matters most.
For each task below, the mapping lists the Db2 z/OS task or concept, its purpose on Db2, the Azure SQL DB Hyperscale equivalent, the responsibility (Azure platform, you, or both), the recommended frequency post-migration, and remarks.
Full Image COPY (Backup)
- Purpose on Db2: Recoverability. A full image copy is taken for data objects such as table spaces and index spaces.
- Hyperscale equivalent: Azure SQL DB Hyperscale uses storage snapshot technology to capture a complete copy of the database's data files. Transaction logs generated since the last snapshot are retained as-is for the configured retention period to support point-in-time recovery.
- Responsibility: Azure
- Remark: In Azure SQL DB Hyperscale, the default short-term backup retention period is 7 days, configurable up to 35 days. You can retain backups for up to 10 years by configuring a Long-Term Retention policy through the Azure portal, Azure CLI, or PowerShell.
Incremental / Delta COPY
- Purpose on Db2: Reduce the backup window.
- Hyperscale equivalent: No incremental backup in Hyperscale.
- Responsibility: Azure
- Recommended frequency: N/A
Log Archive / Dual Logging
- Purpose on Db2: Point-in-time recovery.
- Hyperscale equivalent: Transaction log backups are taken automatically.
- Responsibility: Azure
- Recommended frequency: Continuous
RUNSTATS (Table/Index)
- Purpose on Db2: Optimizer statistics.
- Hyperscale equivalent: Auto Create / Auto Update Statistics; schedule statistics updates for specific cases.
- Responsibility: Platform + You
- Recommended frequency: Continuous
- Remark: DBCC SHOW_STATISTICS displays the current query optimization statistics for a table or indexed view. When AUTO_UPDATE_STATISTICS or AUTO_UPDATE_STATISTICS_ASYNC is enabled, the Query Optimizer determines when statistics might be out of date and updates them when they are needed for a query plan. ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY allows asynchronous statistics updates to wait for the schema modification lock on a low-priority queue, which improves concurrency for workloads with frequent query plan (re)compilations. Leave asynchronous auto-update OFF (the default) for most OLTP/reporting hybrids; if you see blocking on STATMAN operations, consider enabling it:
  ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS_ASYNC ON;
  For partitioned tables, the option INCREMENTAL = { ON | OFF } allows statistics to be created per partition. If only some partitions have changed recently, update statistics for just those partitions (StatName is a placeholder for the statistics object):
  UPDATE STATISTICS dbo.Fact (StatName) WITH RESAMPLE ON PARTITIONS (n);
  Updating statistics manually:
  UPDATE STATISTICS schema.table WITH FULLSCAN;  -- or SAMPLE n ROWS / SAMPLE n PERCENT
  Update statistics after:
  - A bulk load (millions of rows inserted) into an existing large table
  - A large data purge (delete/archival)
  - A partition switch-in / switch-out
  - Repeated parameter sniffing with large estimated-vs-actual row mismatches
  To identify which statistics need attention, rank them by modification ratio and pick candidates using thresholds tuned to your specific workload and performance expectations. A simple DMV-driven maintenance script can then update only statistics with, for example, more than 15% modification and more than 100K row changes.
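The DMV-driven approach described above can be sketched as follows. This is an illustrative sketch, not a definitive script: the 15% / 100K thresholds are placeholders to tune, and it assumes `sys.dm_db_stats_properties` is available (it is in Azure SQL Database).

```sql
-- Sketch: update only statistics with >15% modification ratio
-- and >100K modified rows. Thresholds are illustrative.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + N'UPDATE STATISTICS '
    + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
    + N' ' + QUOTENAME(s.name) + N';' + CHAR(13)
FROM sys.stats AS s
JOIN sys.objects AS o
    ON o.object_id = s.object_id
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE o.is_ms_shipped = 0
  AND sp.modification_counter > 100000                  -- >100K row changes
  AND sp.modification_counter > 0.15 * NULLIF(sp.rows, 0);  -- >15% of rows

EXEC sp_executesql @sql;
```

Run it during an off-peak window; it only touches statistics objects that cross both thresholds, so it is cheap when the database is quiet.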
REORG TABLE (Tablespace)
- Purpose on Db2: Eliminate overflow / reclaim space.
- Hyperscale equivalent: Azure SQL has no REORG TABLE command; a similar result can be achieved by rebuilding the clustered index.
- When needed: Not usually required; consider it when index fragmentation is large (> 30%; this figure is guidance only).
- Recommended frequency: N/A
- Remark: Db2 has a table-level REORG option; Azure SQL DB Hyperscale has index-level REORGANIZE/REBUILD options.
REORG INDEX / REBUILD INDEX
- Purpose on Db2: Defragment indexes.
- Hyperscale equivalent: ALTER INDEX REBUILD / REORGANIZE.
- Responsibility: You (needed / conditional)
- Recommended frequency: Weekly, or when fragmentation exceeds your thresholds.
- Remark: Check current fragmentation:
  SELECT avg_fragmentation_in_percent FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED');
  Reorganize (5–30% fragmentation):
  ALTER INDEX ix ON tbl REORGANIZE;
  Rebuild (> 30% fragmentation):
  ALTER INDEX ix ON tbl REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAXDOP = 4);  -- MAXDOP of 1, 4, or 8 as appropriate
  These numbers (5–30% REORGANIZE, > 30% REBUILD) are only guidelines: with modern storage technologies you may not need a REBUILD until fragmentation exceeds 60–70%, and you may not need REORGANIZE at all. Statistics updates, however, must be done diligently for query performance. Run before/after comparisons of execution plans in a test environment to draw your own conclusions.
  Special handling for large index rebuilds:
  ALTER INDEX ix ON tbl REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60, MAXDOP = 4);
  -- Pause after the current batch
  ALTER INDEX ix ON tbl PAUSE;
  -- Resume with a lower degree of parallelism
  ALTER INDEX ix ON tbl RESUME WITH (MAXDOP = 2, MAX_DURATION = 60);
  -- Abort and roll back if required
  ALTER INDEX ix ON tbl ABORT;
  Monitor the progress of an index rebuild:
  SELECT r.session_id, r.command, r.percent_complete, r.start_time, r.estimated_completion_time FROM sys.dm_exec_requests r WHERE r.command LIKE '%INDEX%';
  Example approaches for automating index maintenance: (a) Ola Hallengren maintenance scripts: link; (b) Adaptive Index Defragmentation: link; (c) How to maintain Azure SQL indexes and statistics: link. Automating Azure SQL DB index and statistics maintenance using Azure Automation: link.
COPY / QUIESCE utilities
- Purpose on Db2: Establish a consistent copy state.
- Hyperscale equivalent: Not needed (snapshots are transactionally consistent).
- Responsibility: Not applicable
- Recommended frequency: N/A
CHECK DATA / CHECK INDEX
- Purpose on Db2: Structural consistency.
- Hyperscale equivalent: DBCC CHECKDB.
- Responsibility: You (best practice)
- Recommended frequency: Monthly or weekly (off-peak).
- Remark: Azure SQL Database automatically runs internal consistency checks, but you can still run one manually:
  DBCC CHECKDB (DatabaseName);
  For a large database, run it on a geo-secondary or a named replica and review the results.
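For reference, a minimal sketch of a full check and the lighter-weight PHYSICAL_ONLY variant mentioned in the cadence section; both run against the current database, and the choice of options is guidance rather than a mandate:

```sql
-- Full logical and physical consistency check (run off-peak):
DBCC CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Lighter-weight alternative for very large databases:
-- checks physical page structures only, skipping logical checks.
DBCC CHECKDB WITH PHYSICAL_ONLY, NO_INFOMSGS;
```

PHYSICAL_ONLY substantially shortens runtime on multi-terabyte databases at the cost of skipping logical consistency checks, so schedule an occasional full CHECKDB as well.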
RUN QUERY EXPLAIN SNAPSHOT
- Purpose on Db2: Capture access paths.
- Hyperscale equivalent: Query Store captures execution plans.
- Responsibility: Platform + You (analysis)
- Recommended frequency: Ongoing
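As a starting point for Query Store analysis, the sketch below lists the most expensive recent queries by average duration; the 24-hour window and TOP (20) are arbitrary choices to adjust for your environment.

```sql
-- Sketch: top queries by average duration over the last 24 hours.
SELECT TOP (20)
    q.query_id,
    qt.query_sql_text,
    rs.avg_duration,        -- microseconds
    rs.count_executions,
    rs.last_execution_time
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
WHERE rs.last_execution_time > DATEADD(HOUR, -24, SYSUTCDATETIME())
ORDER BY rs.avg_duration DESC;
```

The same views back the SSMS Query Store reports, so this is useful mainly for scripted reviews or alerting.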
REBIND Packages
- Purpose on Db2: Refresh access plans.
- Hyperscale equivalent: Not needed (no static package binding).
- Responsibility: Not applicable
- Recommended frequency: N/A
Catalog Statistics Maintenance
- Purpose on Db2: Optimizer health.
- Hyperscale equivalent: System metadata is maintained automatically.
- Responsibility: Platform
- Recommended frequency: N/A
Buffer Pool Sizing (BP0/BP32K etc.)
- Purpose on Db2: Memory tuning.
- Hyperscale equivalent: Managed by the service tier.
- Responsibility: Platform
- Recommended frequency: N/A
Storage Space Preallocation
- Purpose on Db2: Avoid space issues.
- Hyperscale equivalent: Storage is allocated dynamically (auto-grow page servers).
- Responsibility: Platform
- Recommended frequency: N/A
- Remark: Monitor size via sys.database_files; optionally purge/archive data.
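A quick sketch of the size check mentioned above; sys.database_files reports size in 8 KB pages, hence the division by 128 to get megabytes.

```sql
-- Current file sizes for the database; size is in 8 KB pages.
SELECT name,
       type_desc,
       size / 128 AS size_mb,
       max_size
FROM sys.database_files;
```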
Archive Log Space Monitoring
- Purpose on Db2: Prevent the log from filling.
- Hyperscale equivalent: Log scaling.
- Responsibility: Platform (capacity) + You (workload)
- Recommended frequency: Monitor during bursts.
- Remark: Use sys.dm_db_resource_stats and watch log_write_percent.
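A sketch of that check: sys.dm_db_resource_stats retains roughly one hour of history with one row about every 15 seconds, so it is well suited to spotting log-rate throttling during write bursts.

```sql
-- Recent resource usage; watch avg_log_write_percent during bursts.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

Sustained values near 100 in avg_log_write_percent indicate the workload is hitting the log-rate governor and may need smoothing or a higher service objective.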
Partition Maintenance (ROLL-IN/OUT)
- Purpose on Db2: Data lifecycle management.
- Hyperscale equivalent: Table partitioning (if used).
- Responsibility: You (conditional)
- Recommended frequency: During data lifecycle events.
- Remark: Switch a partition out to a staging table:
  ALTER TABLE [PartitionedTable] SWITCH PARTITION N TO [StagingTable];
  Instead of rebuilding the entire table, you can rebuild the index on a single partition, which is faster and less blocking:
  ALTER INDEX [IX_YourIndex] ON [YourPartitionedTable] REBUILD PARTITION = N WITH (ONLINE = ON);
Compression (ROW/PAGE) Management
- Purpose on Db2: Space and I/O.
- Hyperscale equivalent: ROW / PAGE / columnstore compression (if enabled).
- Responsibility: You (optional)
- Recommended frequency: At design time and in periodic reviews.
- Remark: Row compression stores fixed-length data as variable-length, ideal for CHAR, INT, etc. Page compression adds prefix and dictionary compression for repeated values in large tables. Columnstore compression stores data column-wise, suited to huge analytical tables with millions of rows.
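Before enabling compression, you can estimate the savings; the sketch below uses the built-in estimation procedure (dbo.BigTable is a placeholder table name).

```sql
-- Estimate space savings for PAGE compression on a candidate table.
EXEC sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'BigTable',
     @index_id         = NULL,   -- all indexes
     @partition_number = NULL,   -- all partitions
     @data_compression = N'PAGE';

-- Apply after validating the estimate and CPU trade-off:
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Compression trades CPU for I/O, so validate against a representative workload before applying it broadly.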
Security: RACF/External Auth Integration
- Purpose on Db2: Access control.
- Hyperscale equivalent: Microsoft Entra ID authentication, managed identities, Microsoft Entra service principals.
- Responsibility: You (needed)
- Recommended frequency: At onboarding, plus periodic review.
- Remark: Example of creating a Microsoft Entra ID user. Note that CREATE USER ... FROM EXTERNAL PROVIDER takes the principal name directly; it cannot be parameterized with a T-SQL variable:
  CREATE USER [john@contoso.com] FROM EXTERNAL PROVIDER;
  ALTER ROLE db_datareader ADD MEMBER [john@contoso.com];
  ALTER ROLE db_datawriter ADD MEMBER [john@contoso.com];
Encryption (Dataset / Log)
- Purpose on Db2: Compliance.
- Hyperscale equivalent: TDE is enabled automatically.
- Responsibility: Platform
- Recommended frequency: N/A
- Remark: Optionally manage customer-managed keys (CMK) via Azure Key Vault.
Auditing / SMF / IFI
- Purpose on Db2: Compliance logging.
- Hyperscale equivalent: Azure SQL Auditing / Defender.
- Responsibility: You (needed)
- Recommended frequency: Enable once; review monthly.
- Remark: Azure SQL Database / Managed Instance supports auditing at the server or database level. Audit logs can be sent to Log Analytics, a storage account, or Event Hubs.
Performance Trace (IFCID)
- Purpose on Db2: Problem diagnostics.
- Hyperscale equivalent: Query Store + Extended Events, Database Watcher, DMVs, Azure portal monitoring.
- Responsibility: You (conditional)
- Recommended frequency: When diagnosing issues.
- Remark: Use SSMS to browse Query Store views, the Azure portal to monitor performance, or DMVs for detailed troubleshooting.
Job Scheduling (JCL)
- Purpose on Db2: Orchestrate utilities.
- Hyperscale equivalent: Azure Automation / Elastic Jobs in Azure SQL DB / ADF pipelines / Azure Logic Apps / Azure Functions.
- Responsibility: You (as needed)
- Recommended frequency: Per task (daily/weekly).
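As one option among those listed, Elastic Jobs can replace JCL-style scheduling with T-SQL. The sketch below is run in the Elastic Jobs *job database*; all names, including the maintenance procedure, are placeholders you would substitute with your own.

```sql
-- Sketch: define a target group, a job, and a T-SQL job step
-- (run in the Elastic Jobs job database; names are placeholders).
EXEC jobs.sp_add_target_group
     @target_group_name = N'HyperscaleDbs';

EXEC jobs.sp_add_target_group_member
     @target_group_name = N'HyperscaleDbs',
     @target_type       = N'SqlDatabase',
     @server_name       = N'myserver.database.windows.net',
     @database_name     = N'MyHyperscaleDb';

EXEC jobs.sp_add_job
     @job_name    = N'WeeklyMaintenance',
     @description = N'Targeted index and statistics maintenance';

EXEC jobs.sp_add_jobstep
     @job_name          = N'WeeklyMaintenance',
     @command           = N'EXEC dbo.MyMaintenanceProc;',  -- placeholder proc
     @target_group_name = N'HyperscaleDbs';
```

A schedule can then be attached to the job so it runs weekly without any external orchestrator.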
Index Design Advisor
- Purpose on Db2: Workload tuning.
- Hyperscale equivalent: Automatic Tuning (create/drop indexes).
- Responsibility: Platform (optional)
- Recommended frequency: Continuous
- Remark: ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = ON);
Plan Regression Detection
- Purpose on Db2: Stability.
- Hyperscale equivalent: Automatic Plan Correction.
- Responsibility: Platform (if enabled)
- Recommended frequency: Continuous
- Remark: ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
HA / DR (Dual Site, GDPS)
- Purpose on Db2: Availability.
- Hyperscale equivalent: Built-in HA plus geo-replication / failover groups.
- Responsibility: Platform + You (DR configuration)
- Recommended frequency: Configure once; run drills semi-annually.
Backup Retention / Offsite Vault
- Purpose on Db2: Long-term retention.
- Hyperscale equivalent: LTR (Long-Term Retention).
- Responsibility: You (optional)
- Recommended frequency: Configure at migration; review annually.
- Remark: Set the LTR policy via the Azure portal, Azure CLI, or PowerShell.
Monitoring (OMEGAMON, Tivoli)
- Purpose on Db2: Health and capacity.
- Hyperscale equivalent: Azure Monitor / Log Analytics / Database Watcher.
- Responsibility: You (needed)
- Recommended frequency: Daily dashboard review; react to alerts.
Deadlock / Lock Escalation Review
- Purpose on Db2: Concurrency tuning.
- Hyperscale equivalent: Extended Events + DMVs.
- Responsibility: You (conditional)
- Recommended frequency: Investigate alerts.
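For the DMV side of this review, a sketch that surfaces current blocking chains (which sessions are blocked, by whom, and on what statement); deadlock details themselves are best captured via an Extended Events session.

```sql
-- Sketch: current blocking chains via DMVs.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,        -- milliseconds
       t.text AS blocked_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```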
Batch Window Management
- Purpose on Db2: Avoid contention.
- Hyperscale equivalent: Scale compute / smooth the workload.
- Responsibility: You (best practice)
- Recommended frequency: Before large ETL runs.
- Remark: If required, scale up database resources before batch execution to shorten the batch window:
  ALTER DATABASE ... MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_8');
  Scale database resources back down after the batch completes.
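To confirm a scaling operation has taken effect, you can query the current service objective from within the database:

```sql
-- Verify the current service objective before and after scaling.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS current_service_objective;
```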
ETL Load
- Purpose on Db2: Faster bulk load.
- Hyperscale equivalent: Minimal logging depends on the recovery model, and Hyperscale is always FULL, so minimally logged bulk operations are not available.
- Responsibility: You (conditional)
- Recommended frequency: During large loads.
- Remark: For a large data load into a given table (> 100 GB), monitor any resumable index operations in progress:
  SELECT object_name(object_id) AS TableName, index_id, percent_complete, state_desc FROM sys.index_resumable_operations WHERE state_desc = 'IN_PROGRESS';
Object Ownership / Schema Sync
- Purpose on Db2: Governance.
- Hyperscale equivalent: VS Code extension.
- Responsibility: You (needed)
- Recommended frequency: On deployment.
- Remark: Schema compare between Db2 and SQL: an ADF-based solution for Db2-to-SQL schema comparison: link. Schema compare between Azure SQL databases: the schema comparison tooling enables you to compare two database definitions: link.
Database Comparison
- Purpose on Db2: Governance.
- Hyperscale equivalent: Database Compare Utility.
- Responsibility: You
- Recommended frequency: On demand.
- Remark: To compare data between Db2 and Azure SQL DB Hyperscale, use the Database Compare Utility, available for download at: link.
Minimal Practical Post-Migration Maintenance Set (Typical Cadence)
Daily:
- Review alerts (performance, error rates, failed logins)
- Optional lightweight index fragmentation check if the workload is highly write-intensive
- Monitor Query Store for top regressions
After Bulk Loads:
- UPDATE STATISTICS table WITH FULLSCAN (only for large changes)
- Consider index rebuild if fragmentation spiked
Weekly:
- Targeted index maintenance (only fragmented ones)
- Targeted statistics maintenance (only for stale stats)
- Query Store review: force a known-good plan if automatic plan correction is off
Monthly (or quarterly for very large DB):
- DBCC CHECKDB (prefer off-peak or on a geo-secondary). If database size is large, perform this operation on business-critical tables or use specific options like PHYSICAL_ONLY.
- Security/audit review
- Cost/compute tier right-sizing
Annual / Semi-annual:
- DR failover test (Failover Group)
- LTR retention policy review
- Compression strategy review
- Automatic Tuning effectiveness assessment
Migration Mindset Tips
- Don’t lift-and-shift Db2 utility cadence; you might over-maintain and waste resources.
- Replace "daily job list" with "monitor + exception-based actions."
- For very large Hyperscale DBs, consider named replicas for offloading DBCC CHECKDB or reporting.
Disclaimer: The guidance, recommendations, and examples provided in this blog are based on our experience with migrating Db2 from Mainframe to Azure SQL Database Hyperscale and may not be universally applicable to all environments. Every customer’s workload, configuration, and performance characteristics are unique. You should thoroughly test and validate these recommendations in your own development or staging environment before applying them in production. We do not assume any responsibility or liability for any issues, downtime, or impacts resulting from the use of the information in this blog.
Feedback and suggestions
If you have feedback or suggestions for improving this data migration asset, please send an email to Database Platform Engineering Team.