Exchange Team Blog

Do we need to file-level defragment Exchange database drives?

Oct 25, 2004

Every so often there is a question: "Should we run file-level defragmentation software on Exchange servers?"

Usually this comes from the assumption that file-system defragmentation actually helps Exchange because - well, Exchange databases get fragmented too.

Exchange database defragmentation is a completely different story, though - it deals with the "white space", or empty database pages, within the Exchange database. There are two types of defragmentation of Exchange databases:

ONLINE defragmentation - this is what happens as part of online maintenance, which by default runs on a nightly basis. Here we rearrange the data (database pages, really) within the database to create more contiguous white space. You will typically want to make sure that your backup schedule does NOT interfere with the online maintenance schedule, as the start of an online backup stops online defragmentation.

OFFLINE defragmentation - this is what happens when you run the ESEUTIL utility with the /d switch - so you need to take the database offline to do it. This is typically done only when there is a specific reason to do it - such as reclaiming huge amounts of hard drive space, being instructed to do so by Support Services when troubleshooting a specific problem, or following a database hard repair (which is another thing that we should never do).
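For illustration, an offline defrag run might look like the following. The paths and file names here are hypothetical; /d (defragment) and /t (temporary database location) are documented Eseutil switches, and the temporary location needs free space of roughly 110% of the size of the database being defragmented:

```
rem Dismount the database first (in Exchange System Manager), then run:
rem (d:\... and e:\... paths below are examples only)
eseutil /d "d:\exchsrvr\mdbdata\priv1.edb" /te:\tempdfrg.edb
```

After the defrag completes, take a new full backup - offline defragmentation creates a new database with a new signature, so the existing log files can no longer be replayed against it.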

So - that being said - what about file system defragmentation?

I would never do it on running production server databases. The reason is actually simple - a file-system defrag is a very I/O-intensive operation, so the disk will be very busy. I have seen cases here in Support Services where our database engine actually started logging warnings that a write to the disk was successful but took "unusually long" to complete, suggesting that the hardware might be at fault. Sure enough - a disk defrag had kicked off just before this started happening, as witnessed by the Application log. That right there is enough reason for me not to do it in real life.

The bottom line really is - you do not HAVE to file-level defrag the Exchange database drives. Exchange reads and writes to its databases in a very random fashion. Large sequential reads and writes will see much more improvement from a file-system defrag than Exchange databases will. But if you really WANT to do it - I would do it the old-fashioned way: move the databases off to some other volume, file-system defrag the drive, and then move the databases back... Or at least make sure you have a good backup, dismount the databases, and file-system defrag them.
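As a rough sketch of that second option (the drive letter is hypothetical; dismounting is done in Exchange System Manager, and defrag.exe ships with Windows):

```
rem 1. Dismount the stores on the volume (Exchange System Manager)
rem 2. Defragment the volume while the databases are offline:
defrag d: -v
rem 3. Remount the stores once the pass completes
```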

A few related things to read:

328804 How to Defragment Exchange Databases
http://support.microsoft.com/?id=328804

192185 XADM: How to Defragment with the Eseutil Utility (Eseutil.exe)
http://support.microsoft.com/?id=192185

256352 Online Defragmentation Does Not Reduce Size of .edb Files
http://support.microsoft.com/?id=256352

- Nino Bilic

Updated Jul 01, 2019
Version 2.0
  • Interesting observation.

In my own experience I have seen quite a noticeable increase in performance after running a regular file-level defrag. This is done off peak hours, of course, and only after the backup has run its due course.

Furthermore, I have been in disaster recovery situations where Eseutil took 7-8 hours to complete even one pass at a 9 GB badly corrupted store. The reason, we found out, was 30,000+ fragments at the file level. After we ran defrag for a few rounds, the hours spent with Eseutil decreased to a "mere" 4-5. Obviously the store was in deep trouble regardless.

Since Exchange accesses the stores in a random fashion, it makes sense that defrag should not make a huge difference, but my personal experience is that with a regular file-level defrag you will not really notice the I/O hit. For a system that is continuously busy 24/7, I agree that this might be a problem.

I think the thing to remember is that the best way to prevent fragmentation of the store is to first defragment the server's disk drive before creating the store, and keep other file I/O off that drive. If possible, NTFS should append your growing store to the next logical block.

    Also, if you defragment a second drive, and move your store from one drive to another, you should reduce the fragmentation as well.

    I agree defragmenting a RUNNING server is a disaster waiting to happen.
As a reply to TJ, I do agree that - at first - it does help to defrag your system before creating a store.

However, in our case the data stores were moved/created on a completely different volume which was completely empty - just formatted.

After migrating all our data from GroupWise, and a month's worth of production, our database has grown to 81 GB... when using the analyze feature of defrag I see a huge amount of red and next to no blue lines...

Perhaps this is due to the nature of the Jet database engine. Personally I like the Oracle approach better, where the database has a particular file size to start with, so there simply won't be any fragmentation (provided, of course, that the drive was unfragmented to start with).
  • Some more observations:

    - Microsoft's defrag APIs fully and safely support defragmentation of Exchange datastores without first shutting down Exchange services.

    - While I agree that kicking off a defrag pass on a drive with a heavy I/O load can result in a slowdown in drive performance, it does NOT mean that it is unsafe to do - regardless of application (file serving, Exchange, SQL, etc...)

    - I have searched for several years now for information from Microsoft indicating that it is unsafe to run a file-level defragmenter on drives where Exchange datastores/files are located and have yet to find any whatsoever.

While many Exchange people strongly believe that it is unsafe, they are never able to point to a definitive source (Microsoft) to justify it. Usually it is "somebody told me it was unsafe" or "I heard it was unsafe".

    Greg/Microsoft MVP - Windows File Systems