Modernization Best Practices and Reusable Assets Blog
10 MIN READ

Redefining Database Maintenance after Migrating from Db2 on Mainframe to Azure SQL DB Hyperscale

Sandip Khandelwal
Mar 25, 2026

Introduction

Migrating from Db2 for z/OS to Azure SQL Database Hyperscale is a major step toward modernizing your mainframe relational data infrastructure on Microsoft's managed relational database service. But what happens to all those daily and periodic Db2 database maintenance tasks you used to perform on the mainframe?

To make migration easier, this post provides a recommended mapping between Db2 z/OS maintenance tasks and their Azure SQL DB Hyperscale counterparts, showing:

  • Tasks you must still schedule or trigger yourself
  • Tasks that are only needed after specific events (like large data loads)
  • Tasks that are now fully handled by Azure SQL Database PaaS or are simply not applicable anymore
  • Best practices that, while not mandatory, are strongly recommended

Mainframe Db2 vs Azure SQL Database Hyperscale Database Maintenance

For each maintenance activity, you will find actionable guidance on how to perform it in Azure SQL Database Hyperscale, helping you streamline operations and focus on what matters most.

Each entry below follows the same structure: Db2 z/OS Task / Concept, Purpose on Db2, Azure SQL DB Hyperscale Equivalent, Responsibility, Recommended Frequency (Post-Migration), and Remark.

Full Image COPY (Backup)

Recoverability

Note: On Db2, full image copies are taken at the level of data objects such as table spaces and index spaces.

Azure SQL DB Hyperscale uses storage snapshot technology to capture a full, complete copy of the database's data files. The transaction logs generated since the last snapshot are retained as-is for the configured retention period to enable point-in-time recovery.

Azure

Continuous

In Azure SQL DB Hyperscale, the default short-term backup retention period is 7 days and can be configured up to 35 days. You can retain backups for up to 10 years by configuring a Long-Term Retention (LTR) policy through the Azure Portal, Azure CLI, or PowerShell.

Incremental / Delta COPY

Reduce backup window

No incremental backup concept in Hyperscale; snapshot-based backups plus continuous log capture replace it

Azure

N/A

 

Log Archive / Dual Logging

Point-in-time recovery

Transaction log backups automatic

Azure

Continuous

 

RUNSTATS (Table/Index)

Optimizer statistics

Auto Create / Auto Update Statistics; for specific cases, schedule a statistics update

Platform + You

Continuous

DBCC SHOW_STATISTICS command displays current query optimization statistics for a table or indexed view.
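For example (table and index names are illustrative):

```sql
-- Show header, density vector, and histogram for one statistics object
DBCC SHOW_STATISTICS ('dbo.Fact', 'IX_Fact_Date');
```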

When AUTO_UPDATE_STATISTICS or AUTO_UPDATE_STATISTICS_ASYNC is enabled, the Query Optimizer determines when statistics might be out of date and updates them when they are needed for a query plan.

ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY lets asynchronous statistics updates wait for the schema modification lock on a low-priority queue. This improves concurrency for workloads with frequent query plan (re)compilations.
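ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY is a database-scoped configuration and only has an effect when asynchronous statistics update is enabled; a minimal sketch:

```sql
-- Only takes effect when AUTO_UPDATE_STATISTICS_ASYNC is ON for the database
ALTER DATABASE SCOPED CONFIGURATION
    SET ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY = ON;
```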

Asynchronous Auto Update Statistics

Leave it OFF (default) for most OLTP/reporting hybrids unless you see blocking on STATMAN operations. Consider enabling async if:

  • You have latency-sensitive OLTP queries that frequently block on synchronous stat refresh.
  • Trade-off: First executions after threshold may use stale stats until async update finishes.

ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS_ASYNC ON;

Statistics updates for partitioned tables:

The option INCREMENTAL = { ON | OFF } allows statistics to be created per partition of the table.

If only some partitions have changed recently, refresh statistics for just those partitions (requires incremental statistics; the statistics name is illustrative):

UPDATE STATISTICS dbo.Fact (StatName) WITH RESAMPLE ON PARTITIONS (n);

Updating Statistics manually:

UPDATE STATISTICS schema.table WITH FULLSCAN / SAMPLE N ROWS / SAMPLE N PERCENT

Update statistics after:

  • Bulk load (inserting millions of rows) into an existing large table
  • Large data purge (delete/archival)
  • Partition switch-in / switch-out
  • Repeated parameter sniffing with large estimated-vs-actual row mismatches

Identifying Which Stats Need Attention

Query to rank stats by modification ratio:
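A sketch of such a ranking query using sys.dm_db_stats_properties (column aliases are illustrative):

```sql
SELECT
    OBJECT_SCHEMA_NAME(s.object_id) AS schema_name,
    OBJECT_NAME(s.object_id)        AS table_name,
    s.name                          AS stat_name,
    sp.rows,
    sp.modification_counter,
    CAST(100.0 * sp.modification_counter
         / NULLIF(sp.rows, 0) AS decimal(10,1)) AS modification_pct
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE OBJECTPROPERTY(s.object_id, 'IsMsShipped') = 0
ORDER BY modification_pct DESC;
```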

Pick those with (general guidance; tune according to your specific workload and performance expectations):

  • modification_pct > 15% on large fact tables
  • or absolute modification_counter very high (e.g., > 100K changes) even if pct smaller (extreme scale tables)

Simple Maintenance Script (DMV-Driven)

This updates only stats with >15% modification and >100K row changes:

DECLARE @sql nvarchar(max) = N'';
WITH c AS (
  SELECT
    s.object_id, s.stats_id,
    QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.' + QUOTENAME(OBJECT_NAME(s.object_id)) AS full_table_name,
    QUOTENAME(s.name) AS stat_name,
    sp.rows, sp.modification_counter,
    1.0 * sp.modification_counter / NULLIF(sp.rows,0) AS mod_ratio
  FROM sys.stats s
  CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
  WHERE sp.modification_counter IS NOT NULL
    AND OBJECTPROPERTY(s.object_id,'IsMsShipped') = 0
    AND sp.rows >= 100000
    AND sp.modification_counter >= 100000
    AND (1.0 * sp.modification_counter / NULLIF(sp.rows,0)) >= 0.15
)
SELECT @sql = STRING_AGG(
   CAST(N'UPDATE STATISTICS ' + full_table_name + N' ' + stat_name + N' WITH SAMPLE 30 PERCENT;' AS nvarchar(max)), N' ')
FROM c;
PRINT @sql;  -- Review first
EXEC sp_executesql @sql;

 

Decision Matrix

Question: Should I keep AUTO_CREATE / AUTO_UPDATE STATISTICS on?

Answer: Often yes. For big tables (millions of rows, 100 GB+), if auto create/update or asynchronous statistics updates cause a performance impact, you can disable them and schedule a daily/weekly process to run STATS UPDATE.

Question: Do I need a nightly job?

Answer: Often no; maybe a lightweight sp_updatestats for very volatile workloads.

Question: After heavy ETL?

Answer: Yes, a targeted stats refresh (especially on the dimensions/facts touched).

Question: After an index rebuild?

Answer: No extra stats update is needed on that index; other column stats, unaffected by the rebuild, may still need an update.

Question: Use FULLSCAN often?

Answer: Only for a handful of skewed, performance-critical tables with proven cardinality issues.

REORG TABLE (Tablespace)

Eliminate overflow / reclaim space

- Azure SQL does not have a REORG TABLE command

- A similar result can be achieved by rebuilding the clustered index

Not usually required. Consider it only when index fragmentation is large (>30%).

Note: 30% is just guidance.

N/A

Db2 has a table-level REORG option; Azure SQL DB Hyperscale has index-level REORGANIZE/REBUILD options.

REORG INDEX / REBUILD INDEX

Defragment indexes

REBUILD / REORGANIZE index

You (Needed / Conditional)

Weekly or when fragmentation > thresholds

Check current fragmentation: SELECT avg_fragmentation_in_percent FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED');

Reorganize (5–30%): ALTER INDEX ix ON tbl REORGANIZE

Rebuild (>30%): ALTER INDEX ix ON tbl REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAXDOP = 4); -- choose MAXDOP 1, 4, or 8 per workload

The numbers above (5-30% REORGANIZE, >30% REBUILD) are only guidelines. With modern storage technology you may not need a rebuild at all in many cases, or you may only need one beyond 60-70% fragmentation; you may not need REORGANIZE either. Statistics updates, however, must be done diligently for query performance. Run thorough tests and compare before-and-after execution plans to draw your own conclusions.

Large Index Rebuild special handling:

ALTER INDEX ix ON tbl REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60, MAXDOP = 4);

-- Pause after current batch

ALTER INDEX ix ON tbl PAUSE;

-- Resume with lower DOP

ALTER INDEX ix ON tbl RESUME WITH (MAXDOP = 2, MAX_DURATION = 60);

-- Abort and roll back if required

ALTER INDEX ix ON tbl ABORT;

Monitor the progress of the index rebuild operation using the query below:

SELECT r.session_id, r.command, r.percent_complete, r.start_time, r.estimated_completion_time

FROM sys.dm_exec_requests r

WHERE r.command LIKE '%INDEX%';

Perform index rebuilds using, for example, one of the approaches below:

a) Ola Hallengren maintenance scripts: link

b) Adaptive Index Defragmentation: link

c) How to maintain Azure SQL indexes and statistics: link

Automating Azure SQL DB index and statistics maintenance using Azure Automation: link

COPY / QUIESCE utilities

Consistent copy state

Not needed (transactionally consistent snapshots)

Not Applicable

N/A

N/A

CHECK DATA / CHECK INDEX

Structural consistency

DBCC CHECKDB

You (Best Practice)

Monthly or Weekly (off-peak)

Azure SQL Database automatically runs internal consistency checks, but you may still run DBCC CHECKDB manually if you want:

T-SQL Query: DBCC CHECKDB(DatabaseName);

For large databases: run it on a geo-secondary or named replica, then review the results
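For very large databases, a lighter-weight physical-only check reduces runtime considerably (database name is illustrative):

```sql
-- PHYSICAL_ONLY skips the logical checks, cutting runtime and resource use
DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;
```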

RUN QUERY EXPLAIN SNAPSHOT

Capture access paths

Query Store captures execution plans

Platform + You (Analysis)

Ongoing

  • Ensure Query Store is ON with the default operation mode READ_WRITE (collects and persists query stats).
  • Analyze Query Store details easily using the built-in SSMS reports.
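A sketch of verifying the current Query Store state and enabling read-write mode if needed:

```sql
-- Check the actual vs. desired Query Store state for this database
SELECT actual_state_desc, desired_state_desc
FROM sys.database_query_store_options;

-- Enable (or reset to) read-write collection
ALTER DATABASE CURRENT SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);
```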

REBIND Packages

Refresh access plans

Not needed (no static package binding)

Not Applicable

N/A

N/A

Catalog Statistics Maintenance

Optimizer health

System metadata auto maintained

Platform

N/A

N/A

Buffer Pool Sizing (BP0/BP32K etc.)

Memory tuning

Managed by service tier

Platform

N/A

N/A

Storage Space Preallocation

Avoid space issues

Storage allocated dynamically (i.e. auto-grow page servers)

Platform

N/A

Monitor size: sys.database_files; optionally purge/archive data

Archive Log Space Monitoring

Prevent log fill

Log scaling

Platform (capacity) + You (workload)

Monitor during bursts

sys.dm_db_resource_stats; watch log_write_percent
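For example, to watch recent log throughput (each row in this DMV covers roughly a 15-second interval):

```sql
-- Most recent ~10 minutes of resource usage, newest first
SELECT TOP (40) end_time, avg_cpu_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```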

Partition Maintenance (ROLL-IN/OUT)

Lifecycle management

Table partitioning (if used)

You (Conditional)

During data lifecycle events

  • Switching partitions (fast load/unload): move data in/out of a partitioned table without large DELETE/INSERT operations:

ALTER TABLE [PartitionedTable] SWITCH PARTITION N TO [StagingTable];

  • Rebuilding/Defragmenting Indexes per Partition

Instead of rebuilding the entire table, you can rebuild indexes on a single partition, which is faster and less blocking:

ALTER INDEX [IX_YourIndex] ON [YourPartitionedTable] REBUILD PARTITION = N

WITH (ONLINE = ON);

Compression (ROW/PAGE) MGMT

Space and IO

ROW / PAGE / ColumnStore compression (if enabled)

You (Optional)

At design / periodic review

Row compression stores fixed-length data as variable-length, ideal for CHAR, INT, etc.

Page compression adds prefix and dictionary compression for repeated values in large tables.

Columnstore compression stores data column-wise for huge analytical tables with millions of rows.
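A sketch of estimating savings before applying compression, then enabling it (object names are illustrative):

```sql
-- Estimate space savings before committing to a compression type
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo', @object_name = 'BigTable',
     @index_id = NULL, @partition_number = NULL,
     @data_compression = 'PAGE';

-- Apply page compression (rebuilds the heap or clustered index)
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
```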

Security: RACF/External Auth Integration

Access control

Microsoft Entra ID Auth, Managed Identities, Microsoft Entra Service Principal 

You (Needed)

At onboarding + periodic review

Example of granting access to a Microsoft Entra ID user (the user name is illustrative; note that CREATE USER takes a literal identifier, not a T-SQL variable):

CREATE USER [john@contoso.com] FROM EXTERNAL PROVIDER;

ALTER ROLE db_datareader ADD MEMBER [john@contoso.com];

ALTER ROLE db_datawriter ADD MEMBER [john@contoso.com];

Encryption (Dataset / Log)

Compliance

TDE auto-enabled

Platform

N/A

Optionally manage customer-managed keys (CMK) via Key Vault

Auditing / SMF / IFI

Compliance logging

Azure SQL Auditing / Defender

You (Needed)

Enable once; review monthly

Azure SQL Database / Managed Instance supports auditing at the server or database level. Audits can be sent to Log Analytics, Storage Account, or Event Hub.

Performance Trace (IFCID)

Problem diagnostics

Query Store + Extended Events, Database Watcher, DMVs, Azure Portal Monitoring

You (Conditional)

When diagnosing issues

Example: Use SSMS to browse Query Store view; Use Azure Portal to monitor performance; or use DMVs for detailed troubleshooting.

 

Job Scheduling (JCL)

Orchestrate utilities

Azure Automation /

Elastic Jobs in Azure SQL DB/

ADF Pipelines /

Azure Logic Apps / Azure Functions

You (As per need)

Per task (daily/weekly)

  • Azure Automation – Serverless runbooks or scripts run in the cloud using managed identities or credentials.
  • Elastic Job in Azure SQL DB - You can create and schedule elastic jobs that could be periodically executed against one or many Azure SQL databases to run Transact-SQL (T-SQL) queries and perform maintenance tasks.
  • ADF / Fabric Pipelines – Orchestrate queries and maintenance as scheduled or triggered pipeline activities.
  • Logic Apps / Functions – Event-driven query execution in response to timers or Azure events.
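As an illustration, an Elastic Jobs maintenance job is defined in the job database using stored procedures in the jobs schema (server, database, and job names are illustrative; credential setup depends on your auth mode):

```sql
-- Run in the Elastic Jobs 'job database'
EXEC jobs.sp_add_target_group @target_group_name = 'MaintenanceTargets';

EXEC jobs.sp_add_target_group_member
     @target_group_name = 'MaintenanceTargets',
     @target_type   = 'SqlDatabase',
     @server_name   = 'yourserver.database.windows.net',
     @database_name = 'YourDb';

EXEC jobs.sp_add_job
     @job_name    = 'NightlyStats',
     @description = 'Refresh statistics off-peak';

EXEC jobs.sp_add_jobstep
     @job_name          = 'NightlyStats',
     @command           = N'EXEC sp_updatestats;',
     @target_group_name = 'MaintenanceTargets';
```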

Index Design Advisor

Workload tuning

Automatic Tuning (create/drop indexes)

Platform (optional)

Continuous

ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX=ON, DROP_INDEX=ON)

Plan Regression Detection

Stability

Automatic Plan Correction

Platform (if enabled)

Continuous

ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN=ON)

HA / DR (Dual site, GDPS)

Availability

Built-in HA + Geo-Replication / Failover Groups

Platform + You (DR config)

Configure once; drills semi-annual

  • You can create up to 4 HA replicas. An HA replica uses the same page servers as the primary replica, so no data copy is required to add one. See the High Availability replica documentation: link
  • A named replica, like an HA replica, uses the same page servers as the primary. Named replicas can have their own SLO, and you can create up to 30 of them for read scale-out. Create a Hyperscale named replica by following the documentation: link
  • For a DR replica, configure geo-replication / a failover group for Azure SQL Database as explained here: link

Backup Retention / Offsite Vault

Long-term retention

LTR (Long-Term Retention)

You (Optional)

Configure at migration; review annually

Set LTR via

  • Azure Portal, or
  • Azure CLI: az sql db ltr-policy set, or
  • Azure PowerShell: Set-AzSqlDatabaseBackupLongTermRetentionPolicy

Monitoring (OMEGAMON, Tivoli)

Health & capacity

Azure Monitor / Log Analytics / Database Watcher

You (Needed)

Daily dashboard; alert reaction

  • Using Azure Monitor create Alerts on CPU %, Memory %, Transaction Log throughput %, Storage %, blocking, failed logins etc.
  • Database Watcher collects in-depth workload monitoring data to give you a detailed view of database performance, configuration, and health.

Deadlock / Lock Escalation Review

Concurrency tuning

Extended Events + DMVs

You (Conditional)

Investigate alerts

  • sys.dm_tran_locks: Current (transient) lock inventory. Useful for spotting patterns leading to deadlocks (e.g., two sessions holding incompatible locks in different order) and for capturing blocking chains before a deadlock forms. It does NOT retain history; deadlocks often resolve in milliseconds, so you rarely see the actual deadlock moment here.
  • system_health Extended Event: Always running; captures xml_deadlock_report events with a full deadlock graph (processes, resource nodes, lock modes, victim). Gives you post‑mortem detail even if you missed it live.
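In Azure SQL Database the server-level system_health session is not directly queryable, but you can capture the same deadlock graphs with a database-scoped Extended Events session of your own; a sketch (session name is illustrative):

```sql
-- One-time setup: database-scoped session capturing deadlock graphs
CREATE EVENT SESSION deadlock_capture ON DATABASE
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.ring_buffer;
ALTER EVENT SESSION deadlock_capture ON DATABASE STATE = START;

-- Later: read captured deadlock graphs from the ring buffer
SELECT xed.event_data.query('.') AS deadlock_graph
FROM (
    SELECT CAST(t.target_data AS xml) AS target_data
    FROM sys.dm_xe_database_sessions s
    JOIN sys.dm_xe_database_session_targets t
      ON s.address = t.event_session_address
    WHERE s.name = 'deadlock_capture'
) AS src
CROSS APPLY src.target_data.nodes('//RingBufferTarget/event') AS xed(event_data);
```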

Batch Window Management

Avoid contention

Scale compute / workload smoothing

You (Best Practice)

Before large ETL

If required, scale up database resources before batch execution to shorten the batch window:

ALTER DATABASE ... MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_8');

Scale database resources back down after the batch completes.

ETL Load

Faster bulk load

Minimal logging is not available; Azure SQL Database always uses the FULL recovery model

You (Conditional)

During large loads

On large data load to a given table (> 100 GB)

Preferably:

  1.  Keep clustered columnstore indexes on the table as they are. Use a batch size of at least 102,400 rows so inserts land directly in compressed rowgroups.
  2.  If possible, drop clustered rowstore and nonclustered indexes before the load and recreate them afterwards.
  3.  When recreating indexes on large tables, use these options: ONLINE = ON, RESUMABLE = ON, MAXDOP = 4 or 8
  4.  Monitor the progress of index creation using:

SELECT object_name(object_id) AS TableName, index_id,  percent_complete,  state_desc FROM sys.index_resumable_operations

WHERE state_desc = 'IN_PROGRESS';

Object Ownership / Schema Sync

Governance

VS Code extension

You (Needed)

On deployment

Schema compare between Db2 and SQL:

ADF-based solution for Db2 and SQL schema comparison: link

Schema compare between Azure SQL databases:

The schema comparison tooling enables you to compare two database definitions: link

Database Comparison

Governance

Database Compare Utility

You

On-demand

If you want to compare data between Db2 and Azure SQL DB Hyperscale, you can use the Database Compare Utility, available for download here: link

Minimal Practical Post-Migration Maintenance Set (Typical Cadence)

Daily:

  • Review alerts (performance, error rates, failed logins)
  • Optional lightweight index fragmentation check if workload highly write-intensive
  • Monitor Query Store for top regressions

After Bulk Loads:

  • UPDATE STATISTICS table WITH FULLSCAN (only for large changes)
  • Consider index rebuild if fragmentation spiked

Weekly:

  • Targeted index maintenance (only fragmented ones)
  • Targeted statistics maintenance (only for stale stats)
  • Query Store review: force known good plan if auto correction off

Monthly (or quarterly for very large DB):

  • DBCC CHECKDB (prefer off-peak or on a geo-secondary). If database size is large, perform this operation on business-critical tables or use specific options like PHYSICAL_ONLY.
  • Security/audit review
  • Cost/compute tier right-sizing

Annual / Semi-annual:

  • DR failover test (Failover Group)
  • LTR retention policy review
  • Compression strategy review
  • Automatic Tuning effectiveness assessment

Migration Mindset Tips

  • Don’t lift-and-shift Db2 utility cadence; you might over-maintain and waste resources.
  • Replace "daily job list" with "monitor + exception-based actions."
  • For very large Hyperscale DBs, consider named replicas for offloading DBCC CHECKDB or reporting.

Disclaimer: The guidance, recommendations, and examples in this blog are based on our experience migrating Db2 from the mainframe to Azure SQL Database Hyperscale and may not apply universally. Every customer's workload, configuration, and performance characteristics are unique. Thoroughly test and validate these recommendations in your own development or staging environment before applying them in production. We do not assume any responsibility or liability for issues, downtime, or impacts resulting from the use of the information in this blog.

Feedback and suggestions

If you have feedback or suggestions for improving this data migration asset, please send an email to Database Platform Engineering Team.

Updated Nov 28, 2025
Version 1.0