I decided to write about an issue I've recently dug a little deeper into: how and why log files (and their growth strategy) affect recovery times, mirroring, replication, and large data changes (batches), lead to disk fragmentation, and ultimately add stress to SQL Server I/O.
First of all, to lay down the basics, know that SQL Server allocates Log space in logical sections called Virtual Log Files (VLFs), and these are created with each file growth.
These VLFs, both active and inactive, make up the Log file of every Database.
According to Kimberly Tripp, the growth factor is directly connected to the number of created VLFs in the following ratio:
Growth of less than 64MB creates 4 VLFs.
Growth of 64MB up to 1GB creates 8 VLFs.
Growth of 1GB and larger creates 16 VLFs.
Also, as a rule of thumb, these shouldn't exceed 50 on a given TLog, with an average of 20 or 30. Even 100 can be an acceptable number, and I encourage you to test and find an optimum value for your DBs.
The overhead of reading and writing to VLFs by the LogReader and LogWriter processes grows with their sheer quantity and size, and can affect the server's performance in a number of operations, such as discovering VLFs when starting up a database, or scanning every VLF for transactions marked for replication.
That is why these operations can take a long time when there is a large TLog made up of a big number of VLFs.
To find how many VLFs you have, use the following undocumented command, and look at the rowcount:
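The command referred to here is almost certainly DBCC LOGINFO, the well-known undocumented command for this purpose; it returns one row per VLF in the current database:

```sql
-- Run in the context of the database you want to inspect.
-- Each row in the result set represents one VLF, so the
-- rowcount is the number of VLFs in the transaction log.
DBCC LOGINFO;
```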
For example, let's imagine a 15GB log file grown in 100MB increments (not an uncommon setup). That is 150 growths of 8 VLFs each, so you'll have 1200 VLFs of about 12.5MB apiece.
On highly I/O intensive setups, you may even find a smaller growth factor, of 10MB for instance, so you can imagine the potential number of VLFs spawned.
One can speculate that this stems from the idea that growing in very small increments keeps the I/O stress of each file growth to a minimum, which is true, but it brings the other side effects on performance listed above.
With the 15GB log in the example above, and following Kimberly Tripp's advice, we should instead have roughly 30 VLFs of 512MB each.
On DBs that have been in production for some time, the transaction log shouldn't be growing much anymore, and eventually it should reach a stable size. That is when you can intervene and keep the VLFs under control: shrink the TLog to a minimum (via DBCC SHRINKFILE; note that ALTER DATABASE (…) MODIFY FILE can only grow a file), and then grow it manually, based on the VLF creation figures above, until you reach the stable size found for that specific TLog and database.
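A sketch of the shrink-then-regrow procedure (the database and file names are hypothetical; adjust the target sizes to the stable size of your own log):

```sql
USE MyDB;

-- Shrink the log file to a small baseline (target size in MB):
DBCC SHRINKFILE (MyDB_log, 1024);

-- Regrow in large steps. Each growth of 1GB or more creates 16 VLFs,
-- so two ~7GB steps yield about 32 VLFs of roughly 450MB each:
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8192MB);
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 15360MB);
```

Growing in two or three large steps, rather than one, keeps each individual VLF from becoming excessively large while still keeping the total count low.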
Bottom line: plan your log size and don't rely on autogrow. Autogrow should be in place for emergency growth only, preferably in fixed MB chunks instead of a percentage.
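As a sketch (again with hypothetical names), setting a fixed-size autogrow increment as the emergency fallback looks like this:

```sql
-- Emergency-only autogrow in a fixed MB chunk, not a percentage:
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);
```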
There is a known bug in SQL Server 2005 and SQL Server 2008 (already fixed in the next SQL Server release): when you grow a TLog in 4GB increments, the first growth is only a fraction of the issued size (a few kilobytes), and reissuing the growth command will then work as intended. The workaround is to use a 3GB or a 5GB growth increment.
Until next time!
I hope that the information on these pages is valuable to you. Your use of the information contained in these pages, however, is at your sole risk. All information on these pages is provided "as-is", without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by Ezequiel. Further, Ezequiel shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if it has been advised of the possibility of such damages.