How to Determine the Minimum Staging Area DFSR Needs for a Replicated Folder
Published Apr 04 2019 06:03 PM by Microsoft
First published on TechNet on Jul 13, 2011

Warren here again. This is a quick reference guide on how to calculate the minimum staging area needed for DFSR to function properly. Values lower than these may cause replication to slow down or stop altogether. Keep in mind these are minimums only. When considering staging area size, remember this: the bigger the staging area the better, up to the size of the Replicated Folder. See the section “How to determine if you have a staging area problem” and the blog posts linked at the end of this article for more details on why it is important to have a properly sized staging area.


Update: Warren is very persuasive! We now have a hotfix to help you with calculating staging sizes: http://support.microsoft.com/kb/2607047


Rules of thumb


Windows Server 2003 R2 – The staging area quota must be at least as large as the combined size of the 9 largest files in the Replicated Folder


Windows Server 2008 and 2008 R2 – The staging area quota must be at least as large as the combined size of the 32 largest files in the Replicated Folder


Initial Replication makes much heavier use of the staging area than day-to-day replication. Setting the staging area quota higher than the minimum during initial replication is strongly encouraged if you have the drive space available


Where do I get PowerShell?


PowerShell is included with Windows Server 2008 and higher. You must install PowerShell on Windows Server 2003; you can download PowerShell for Windows Server 2003 here.


How do you find these X largest files?


Use a PowerShell script to find the 32 or 9 largest files and determine how many gigabytes they add up to (thanks to Ned Pyle for the PowerShell commands). I am actually going to present you with three PowerShell scripts. Each is useful on its own; however, number 3 is the most useful.



1. Run:


Get-ChildItem c:\temp -recurse | Sort-Object length -descending | select-object -first 32 | ft name,length -wrap -auto


This command will return the file names and the size of the files in bytes. Useful if you want to know which 32 files are the largest in the Replicated Folder so you can “visit” their owners.


2. Run:


Get-ChildItem c:\temp -recurse | Sort-Object length -descending | select-object -first 32 | measure-object -property length -sum


This command will return the total number of bytes of the 32 largest files in the folder without listing the file names.


3. Run:


$big32 = Get-ChildItem c:\temp -recurse | Sort-Object length -descending | select-object -first 32 | measure-object -property length -sum


$big32.sum /1gb


This command will get the total number of bytes of the 32 largest files in the folder and do the math to convert bytes to gigabytes for you. This command is two separate lines; you can paste both of them into the PowerShell prompt at once or run them back to back.
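As a sketch only, the pieces above can be combined into one short script that picks the file count for your operating system and prints the minimum staging quota in whole gigabytes. The $replicatedFolder path and $fileCount value are placeholders you would change for your environment:

```powershell
# Path to the Replicated Folder and the number of files to measure:
# use 32 for Windows Server 2008 / 2008 R2, 9 for Windows Server 2003 R2
$replicatedFolder = "D:\docs"
$fileCount = 32

# Sum the sizes of the N largest files under the Replicated Folder
$largest = Get-ChildItem $replicatedFolder -recurse |
    Sort-Object length -descending |
    Select-Object -first $fileCount |
    Measure-Object -property length -sum

# Convert bytes to gigabytes and round up to the next whole gigabyte
$minQuotaGB = [math]::Ceiling($largest.sum / 1GB)
Write-Host "Minimum staging area quota for $replicatedFolder is $minQuotaGB GB"
```

The rounding up with [math]::Ceiling saves you the manual spreadsheet step described in the walkthrough below.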



Manual Walkthrough


To demonstrate the process and hopefully increase understanding of what we are doing, I am going to manually step through each part.


Running command 1 will return results similar to the output below. This example only uses 16 files for brevity. Always use 32 for Windows Server 2008 and later operating systems and 9 for Windows Server 2003 R2.


Example data returned by PowerShell:

Name            Length
----            ------
File5.zip       10286089216
archive.zip      6029853696
BACKUP.zip       5751522304
file9.zip        5472683008
MENTOS.zip       5241586688
File7.zip        4321264640
file2.zip        4176765952
frd2.zip         4176765952
BACKUP.zip       4078994432
File44.zip       4058424320
file11.zip       3858056192
Backup2.zip      3815138304
BACKUP3.zip      3815138304
Current.zip      3576931328
Backup8.zip      3307488256
File999.zip      3274982400

How to use this data to determine the minimum staging area size:




  • Name = name of the file

  • Length = file size in bytes

  • One gigabyte = 1073741824 bytes


First, sum the total number of bytes. Next, divide the total by 1073741824 (the number of bytes in one gigabyte). I suggest using Excel or your spreadsheet of choice to do the math.


Example


From the example above, the total number of bytes is 75241684992. To get the minimum staging area quota needed, I divide 75241684992 by 1073741824.



75241684992 / 1073741824 = 70.07 GB



Based on this data I would set my staging area to 71 GB if I round up to the nearest whole number.
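If you would rather let PowerShell do this arithmetic, the same calculation fits on one line. PowerShell understands the 1GB size literal, and [math]::Ceiling handles the rounding up; the byte total below is just the figure from the example above:

```powershell
# Total size of the largest files, in bytes (from the example above)
$totalBytes = 75241684992

# Divide by 1GB (1073741824 bytes) and round up to the next whole gigabyte
[math]::Ceiling($totalBytes / 1GB)
```

This prints 71, matching the rounded-up value from the manual walkthrough.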


Real World Scenario:


While a manual walkthrough is interesting, it is likely not the best use of your time to do the math yourself. To automate the process, use command 3 from the examples above; its output is a single number of gigabytes. Using command 3 against d:\docs, without any extra effort except for rounding up to the nearest whole number, I can determine that I need a 6 GB staging area quota for d:\docs.


Do I Need to Reboot or Restart the Service for the Changes to be Picked Up?


Changes to the staging area quota do not require a reboot or a service restart to take effect. You will need to wait for AD replication and DFSR’s AD polling cycle for the changes to be applied.
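If you do not want to wait for the next scheduled poll, you can trigger one immediately with the dfsrdiag tool, which ships with the DFSR role on Windows Server 2008 and later. The server name below is a placeholder:

```powershell
# Force DFSR to poll Active Directory immediately instead of waiting
# for the next scheduled polling cycle (member name is an example)
dfsrdiag PollAD /Member:FILESERVER01
```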


How to determine if you have a staging area problem


You detect staging area problems by monitoring for specific event IDs on your DFSR servers. The list of events is 4202, 4204, 4206, 4208 and 4212. The text of these events is listed below. It is important to distinguish between 4202 and 4204 and the other events. It is possible to log a high number of 4202 and 4204 events under normal operating conditions. Think of 4202 and 4204 events as being analogous to taking your pulse, whereas 4206, 4208 and 4212 are like chest pains. I explain how to interpret your 4202 and 4204 events below.


Staging Area Events



Event ID: 4202
Severity: Warning


The DFS Replication service has detected that the staging space in use for the replicated folder at local path (path) is above the high watermark. The service will attempt to delete the oldest staging files. Performance may be affected.


Event ID: 4204
Severity: Informational


The DFS Replication service has successfully deleted old staging files for the replicated folder at local path (path). The staging space is now below the high watermark.


Event ID: 4206
Severity: Warning


The DFS Replication service failed to clean up old staging files for the replicated folder at local path (path). The service might fail to replicate some large files and the replicated folder might get out of sync. The service will automatically retry staging space cleanup in (x) minutes. The service may start cleanup earlier if it detects some staging files have been unlocked.


Event ID: 4208
Severity: Warning


The DFS Replication service detected that the staging space usage is above the staging quota for the replicated folder at local path (path). The service might fail to replicate some large files and the replicated folder might get out of sync. The service will attempt to clean up staging space automatically.


Event ID: 4212
Severity: Error


The DFS Replication service could not replicate the replicated folder at local path (path) because the staging path is invalid or inaccessible.



What is the difference between 4202 and 4208?


Events 4202 and 4208 have similar text; i.e., DFSR detected that staging area usage exceeds the high watermark. The difference is that 4208 is logged after staging area cleanup has run and the staging quota is still exceeded. 4202 is a normal and expected event, whereas 4208 is abnormal and requires intervention.


How many 4202, 4204 events are too many?


There is no single answer to this question. Unlike 4206, 4208 or 4212 events, which are always bad and indicate action is needed, 4202 and 4204 events occur under normal operating conditions. Seeing many 4202 and 4204 events may indicate a problem. Things to consider:



  1. Is the Replicated Folder (RF) logging 4202 performing initial replication? If so, it is normal to log 4202 and 4204 events. You will want to keep these down to as few as possible during Initial Replication by providing as much staging area as possible

  2. Simply checking the total number of 4202 events is not sufficient. You have to know how many were logged per RF. If you log twenty 4202 events for one RF in a 24-hour period, that is high. However, if you have 20 Replicated Folders and there is one event per folder, you are doing well.

  3. You should examine several days of data to establish trends.


I usually counsel customers to allow no more than one 4202 event per Replicated Folder per day under normal operating conditions (“normal” meaning no Initial Replication is occurring). I base this on the following reasoning:



  1. Time spent cleaning up the staging area is time spent not replicating files. Replication is paused while the staging area is cleared.

  2. DFSR benefits from a full staging area, using it for RDC and cross-file RDC and for replicating the same files to other members

  3. The more 4202 and 4204 events you log the greater the odds you will run into the condition where DFSR cannot clean up the staging area or will have to prematurely purge files from the staging area.

  4. 4206, 4208 and 4212 events are, in my experience, always preceded and followed by a high number of 4202 and 4204 events.


While allowing for only one 4202 event per RF per day is conservative, it greatly decreases your odds of running into staging area problems and better utilizes your DFSR server’s resources for their intended purpose: replicating files.
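To get a feel for your per-day event volume, you can pull the staging events out of the DFS Replication event log with PowerShell on Windows Server 2008 and later. This is a sketch only; the 7-day window is an example, and remember that raw totals still need to be broken down per Replicated Folder as discussed above:

```powershell
# Count staging-related DFSR events logged over the last 7 days.
# 4202/4204 are normal housekeeping; 4206/4208/4212 indicate problems.
Get-WinEvent -FilterHashtable @{
    LogName   = 'DFS Replication'
    Id        = 4202, 4204, 4206, 4208, 4212
    StartTime = (Get-Date).AddDays(-7)
} | Group-Object Id | Sort-Object Name |
    Format-Table Name, Count -auto
```

Note that Get-WinEvent reports an error if no matching events are found, which in this case is good news.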


More Information


http://blogs.technet.com/b/askds/archive/2010/03/31/tuning-replication-performance-in-dfsr-espe...


http://blogs.technet.com/b/askds/archive/2007/10/05/top-10-common-causes-of-slow-replication-wi...


Warren “way over my Oud quota” Williams
