First published on TECHNET on Apr 10, 2006
A blog reader recently contacted us about his DFS Replication deployment and asked why the health reports show surprisingly large bandwidth savings on a relatively small replicated folder. The customer writes, “I have noticed a great savings in bandwidth, but I have also noticed that every day the service claims to have saved around 75 GB (transferring about 2 GB a day instead of the 78 GB it says it would have transferred) of transfers. I find it hard to believe because the replication group only has about 6 GB of data in it. I would love to diagnose this problem but don't know where to start.”
One of our DFS Replication gurus, Rob Post, walked the customer through the troubleshooting process with this advice:
"Although entirely possible, it does seem a little exaggerated. One general explanation is some types of files have high volatility (ie their contents change frequently.) This is even more pronounced for files that frequently change by very little. In these cases RDC will essentially only transfer the delta between version n and version n+1. This results in the bandwidth savings because the bulk of the file was not replicated over the wire, only the delta was.
Here is one sample scenario that could account for 500 MB of data to be replicated (without RDC). Consider a graphic designer working on a 25-MB image in the replicated folder for half a day. Assuming the user saves his progress (or there is some sort of autosave feature) throughout the day, the file may be saved 20 times over the course of 4 hours. In that time period, our 25-MB file has been replicated 20 times; that is 500 MB of data to be replicated. DFSR and productivity suites (like MS Office) work together to avoid this type of scenario by using temp files and other optimizations.
Another example is users’ .pst files. Every time a message is added to the .pst file, replication will be triggered.
If you’re on a LAN, this shouldn’t be that noticeable; over tighter links it could be a concern. You can use the DFSR perf counters to get a better idea of what DFSR is doing. You can also inspect the DFSR log files to determine which files are being transferred and how effective RDC is for each individual file. This will tell you what is being excessively replicated.
You could use connection schedules to reduce replication if the issue can be diagnosed to one of the above two causes or something similar. If you turn off replication for part of the working day then, obviously, nothing will be replicated during that window, saving you from all the intermediate updates. However, the con is that you won’t always have an up-to-date view of users’ data."
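For the perf-counter route Rob mentions, the built-in typeperf tool can sample those counters from the command line. The commands below are a minimal sketch: the counter path in the second command is an assumption from memory, so list what your server actually exposes first and substitute accordingly.

rem List the DFSR-related performance counters on this server
typeperf -q | findstr /I "DFS"

rem Sample RDC activity for all replicated folders every 15 seconds,
rem 20 samples total (counter path is an assumption -- verify with -q)
typeperf "\DFS Replicated Folders(*)\RDC Bytes Received" -si 15 -sc 20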
The customer understood the general explanation of the semantics that could cause frequent replication, but the specifics didn’t apply: there were no .pst files in the replicated folder, and no application that paralleled the image-editing example came to mind. The customer decided to inspect the log files using this advice from Rob:
"The logs are located in c:windowsdebug and are prefaced with DFSR. If you unzip and grep (or findstr) the logs for “+ name” you’ll get an idea of what files are being transferred the most. Hopefully there will be an abundance of one file, type of files, or files in a directory... "
The customer examined his log files and instantly saw the culprit—a long series of files with names similar to the following:
+ nameConflict 0
+ name CC-032006.log
+ nameConflict 0
+ name CC-032006.log
+ nameConflict 0
+ name CC-032006.log
+ nameConflict 0
+ name CC-032006.log
The customer recognized these files as output from a PBX application and decided to try redirecting them to a non-replicated folder. Rob also suggested file filters:
"Another option is available if these files have a predictable filename then you could filter them from being replicated. For example, you could add a file filter for CC-*.log, that would omit all such files from being replicated using the DFS mgmt UI. The con is that if someone created a legitimate file called “CC-Want to Replicate.log” it wouldn’t be replicated because it matches the filter, but that seems like a remote possibility."
Rob and the customer discussed another option: using a subfolder filter to exclude the folder that contained these files. The customer asked how to specify the filter: should it be Application Data\PBX\Logs, PBX\Logs, or just Logs? Rob responded:
"You can only specify the last directory in an absolute path for a subfolder filter. You can’t do anything like “dir1dir2dir3”. So it could only be “logs” (which I would not recommend because any directory in the replicated folder with logs would not be replicated). If there wasn’t anything important in the “PBX” folder then you could filter that whole thing. This seems like a reasonably safe name to filter as long as there isn’t anything in that directory of interest. I think the best solution is still to use the file filter of CC-*.log."
--Rob
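To make Rob's caution about a "Logs" subfolder filter concrete, consider this hypothetical layout (the paths are invented for illustration); a subfolder filter matches any directory with that name anywhere under the replicated folder:

D:\ReplicatedFolder\Application Data\PBX\Logs    <- filtered, as intended
D:\ReplicatedFolder\Projects\Logs                <- also filtered, unintentionally

That is why filtering on the PBX folder, or better yet using the CC-*.log file filter, is the safer choice.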