First published on MSDN on Jan 20, 2017
Hello again,
I worked on another very interesting service request where our customer received the message "MODIFY FILE failed. Size is greater than MAXSIZE. Please query sys.database_files and use DBCC SHRINKFILE to reduce the file size first" while trying to scale down the Azure SQL Database tier.
This message appears when the database file size is greater than the maximum size allowed for the target database tier. This growth of the data file could be caused by a huge insert, delete, or update operation. You can find more information in this post.
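To confirm the current and maximum sizes of the files, you can start by querying sys.database_files, as the error message suggests. A minimal sketch (size and max_size are reported in 8 KB pages, so they are converted to MB here):
-- Current size and maximum size of each database file, converted from 8 KB pages to MB
SELECT name,
       type_desc,
       size * 8 / 1024 AS CurrentSizeMB,
       max_size * 8 / 1024 AS MaxSizeMB -- -1 means no limit defined at the file level
FROM sys.database_files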
To be able to scale down the database tier, we need to reduce the size of the data file by running DBCC SHRINKFILE down to the limit of the target database tier. There is no need to reduce the size of the transaction log, because it is managed by the Azure SQL Database engine.
But sometimes, when we run DBCC SHRINKFILE(1, <desired size in MB>), we could get a timeout, the process could take hours, or, after the process finished, we still were not able to reduce the data file size.
These problems can happen for several reasons, for example:
- Our customer has the database compressed and the resources of the database tier are not enough to complete the operation, because CPU, Data IO, and Transaction Log usage are at 100% all the time.
- Our customer deleted a huge amount of data. In this case, I would like to explain that when data is deleted, it is not physically removed; the space is only marked as reusable. You can find more information in this URL for basic knowledge.
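A quick way to see how much of the allocated space is actually unused after such a deletion is sp_spaceused; a minimal sketch, run inside the user database (the table name is hypothetical):
-- Database-level view: reserved vs. unallocated space after large deletes
EXEC sp_spaceused;
-- Per-table view for one specific table (hypothetical table name)
EXEC sp_spaceused N'dbo.MyLargeTable';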
As a solution to reduce the data file size, I suggested the following:
- First, rebuild the index(es) of the table(s) affected by the huge insert/delete/update operation. When the index is rebuilt, all rows marked as deleted will be physically removed.
- If you want to rebuild all the indexes of a table, you can run ALTER INDEX ALL ON [TableName] REBUILD (a minimal rebuild sketch follows the script below).
- If you don't know how to obtain the tables, the row counts, and the space used by each one, I would like to share a modified script (below), obtained from this URL, where you can find this information.
-- Space used per table: row count plus total, used, and unused space in KB
SELECT
t.NAME AS TableName,
s.Name AS SchemaName,
p.rows AS RowCounts,
SUM(a.total_pages) * 8 AS TotalSpaceKB,
SUM(a.used_pages) * 8 AS UsedSpaceKB,
(SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM sys.tables t
INNER JOIN sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE t.is_ms_shipped = 0 AND i.OBJECT_ID > 255
GROUP BY t.Name, s.Name, p.Rows
ORDER BY t.Name
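Once the script above has identified the tables with the largest unused space, you can rebuild their indexes. A minimal sketch, assuming a hypothetical table name; ONLINE = ON is used so the table stays available during the rebuild:
-- Rebuild all indexes of a hypothetical table identified with the script above
ALTER INDEX ALL ON [dbo].[MyLargeTable] REBUILD WITH (ONLINE = ON)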
- Second, instead of executing DBCC SHRINKFILE once for the final size required by your target database tier, reduce the size by running DBCC SHRINKFILE in chunks, for example of 50 GB each. For instance, if your database file has 825 GB and you deleted 400 GB, instead of executing DBCC SHRINKFILE(1,425000) in one go, execute the command several times, 50 GB at a time: run DBCC SHRINKFILE(1,775000), when it finishes run DBCC SHRINKFILE(1,725000), and so on (see the sketch below).
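This is a minimal sketch of that chunked shrink, assuming file_id 1 and the 425000 MB final target from the example above, shrinking in 50000 MB steps:
-- Shrink the data file in roughly 50 GB chunks instead of one large shrink
DECLARE @TargetSizeMB int = 425000 -- final desired size in MB (hypothetical, from the example above)
DECLARE @ChunkMB int = 50000 -- shrink step, roughly 50 GB
DECLARE @CurrentMB int, @cmd nvarchar(200)
SELECT @CurrentMB = size * 8 / 1024 -- size is stored in 8 KB pages
FROM sys.database_files
WHERE file_id = 1
WHILE @CurrentMB > @TargetSizeMB
BEGIN
    SET @CurrentMB = CASE WHEN @CurrentMB - @ChunkMB < @TargetSizeMB THEN @TargetSizeMB ELSE @CurrentMB - @ChunkMB END
    SET @cmd = N'DBCC SHRINKFILE (1, ' + CAST(@CurrentMB AS nvarchar(20)) + N')'
    EXEC (@cmd)
END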
Enjoy!
Updated Mar 14, 2019 (Version 2.0) by Jose_Manuel_Jurado