And another question around SQL Server Database Compression answered
First published on MSDN on Mar 16, 2010

In the earlier post, I mentioned a customer who reduced their SAP BW data volume from 2.3TB to 600GB. Talking about this in some customer meetings, I was asked how one usually reclaims the free space that results from database compression, or in other words, how this rather large free space can be given back to the file system. This is the point where you really save money.

We had a few customer cases where the customer decided to do nothing more than leave the increased free space within the database, knowing well that the growth rate they experience would eat up that free space again within 12-18 months. Since SQL Server backups don't back up free space, but only the actually allocated pages/extents, this approach doesn't affect backup volumes in a negative manner.
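If you want to see where that free space actually sits, a minimal sketch like the following lists allocated versus free space per data file (both size and FILEPROPERTY are reported in 8KB pages, so dividing by 128 yields MB):

    -- Minimal sketch: allocated versus free space per data file.
    -- Only the used_mb portion ends up in a backup.
    SELECT name                                           AS logical_file,
           size / 128                                     AS file_size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
    FROM   sys.database_files
    WHERE  type_desc = 'ROWS';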

However, in other cases where the free space is as large as in the case above and the growth rate is moderate, you definitely want to reclaim the disk space. Despite the negative statements on dbcc shrinkfile that I expressed in this document: http://www.sdn.sap.com/irj/sdn/sqlserver?rid=/library/uuid/4ab89e84-0d01-0010-cda2-82ddc3548c65...

in most cases it is the only reasonable method to reclaim the disk space freed up by database compression. Why did we discourage customers from using dbcc shrinkfile or dbcc shrinkdatabase in the first instance:

· The process is rather slow and ideally needs to be done file by file, serially.

· More severe for some SAN infrastructures in the past: the process of shrinking could confuse the read-ahead logic of SAN systems. The reason lies in the way shrinkfile works. It works its way from the end of a file toward the mark where it should stop, and it is very likely to encounter some extents in the range it should cut off. Shrinkfile moves such extents on a page-by-page basis: the algorithm allocates a new extent in the range of the file that remains and then copies the last page of the extent to be moved as the first page of the new extent. When a perfectly de-fragmented extent is moved (where the next logical page equals the next physical page), this process turns the logical order of the pages within the extent opposite to the physical order. In other words, the last logical and physical page of the source extent ends up as the first physical page of the new extent, yet it still remains the last logical page. Years back this could confuse some SAN read-ahead algorithms to a degree that highly expensive SAN devices showed far slower performance than a few internal disks.

Today most of the algorithms SAN vendors use are resilient to this side effect of shrinkfile. The effect is also only pronounced when the table is mostly de-fragmented; for a highly fragmented table, it really doesn't matter. Therefore you can use dbcc shrinkfile to reclaim the disk space. If you detect that disk I/O performance is negatively impacted as a result of the shrinkfile, a rebuild of the clustered indexes will straighten out the logical and physical order again. Nevertheless, shrinking SAP database files only to expand them piecewise again a few weeks later shouldn't become a standing strategy or process.
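To sketch what this could look like in practice, here is a minimal example of the reclaim sequence; the file name, target size, and table name below are purely hypothetical placeholders for your own SAP database:

    -- Shrink one data file at a time; the target size is given in MB.
    -- File name and target size are hypothetical.
    DBCC SHRINKFILE (N'SAPDB_Data1', 150000);

    -- Optionally check how fragmented a large table ended up after the shrink.
    SELECT avg_fragmentation_in_percent
    FROM   sys.dm_db_index_physical_stats(DB_ID(),
               OBJECT_ID(N'dbo.SomeLargeTable'), NULL, NULL, 'LIMITED');

    -- If read-ahead performance suffered, rebuilding the indexes restores
    -- the logical/physical page order (table name is illustrative).
    ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;

Shrinking file by file keeps the impact contained and makes it easy to stop between steps.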

Regards, Juergen

