We often work with customers who need to export 2 TB of data from Azure SQL Database using SqlPackage. Exporting this amount of data can take a long time, so we would like to share some best practices for this specific scenario.
- If you’re exporting from a General Purpose Managed Instance (which uses remote storage), you can increase the size of the remote storage database files to improve IO performance and speed up the export.
- Temporarily increase your compute size.
- Limit usage of the database during the export (for example, when transactional consistency is required, consider performing the export against a dedicated copy of the database).
- Run the export from an Azure Virtual Machine with Accelerated Networking enabled, located in the same region as the database.
- Use a destination folder and temporary-file location on an SSD with enough free capacity; this improves write performance both for the exported file and for the multiple temporary files created during the export.
- Consider using a clustered index with non-null values on all large tables. With a clustered index, the export can be parallelized and is therefore much more efficient. Without clustered indexes, the export service must perform a full table scan on each table in order to export it, which can lead to time-outs after 6-12 hours for very large tables.
- Review the following articles:
- Besides the diagnostics parameter in SqlPackage, you can see how the queries are running by enabling SQL Auditing on the database or by using the SQL Server Profiler extension in Azure Data Studio.
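To act on the clustered-index recommendation above, you first need to know which large tables are heaps. The following query is one way to list them using the `sys.tables`, `sys.indexes`, and `sys.partitions` catalog views (an `index_id` of 0 indicates a heap); run it in the source database before the export:

```sql
-- List tables without a clustered index (heaps), largest first,
-- so you can decide where to add a clustered index before exporting.
SELECT s.name       AS schema_name,
       t.name       AS table_name,
       SUM(p.rows)  AS row_count
FROM sys.tables     AS t
JOIN sys.schemas    AS s ON s.schema_id = t.schema_id
JOIN sys.indexes    AS i ON i.object_id = t.object_id
                        AND i.index_id  = 0          -- 0 = heap
JOIN sys.partitions AS p ON p.object_id = i.object_id
                        AND p.index_id  = i.index_id
GROUP BY s.name, t.name
ORDER BY row_count DESC;
```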
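Putting the tips above together, an export invocation might look like the following sketch. All server names, credentials, and paths are placeholders you must replace; the key points are directing `TempDirectoryForTableData` at an SSD volume with plenty of free space and writing a diagnostics log for troubleshooting:

```shell
# All names and paths below are placeholders - substitute your own values.
# TempDirectoryForTableData should point at a fast SSD volume with enough
# free space for the temporary files SqlPackage creates during the export.
sqlpackage /Action:Export \
  /SourceServerName:"yourserver.database.windows.net" \
  /SourceDatabaseName:"YourDatabase" \
  /SourceUser:"youruser" \
  /SourcePassword:"<password>" \
  /TargetFile:"/mnt/ssd/export/YourDatabase.bacpac" \
  /p:TempDirectoryForTableData="/mnt/ssd/sqlpackage-temp" \
  /DiagnosticsFile:"/mnt/ssd/export/sqlpackage-diag.log"
```

Running this from an Accelerated Networking VM in the same region as the database, as suggested above, keeps network latency out of the critical path.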