DFSR Reparse Point Support (or: Avoiding Schrödinger's File)
First published on TECHNET on Feb 14, 2013
Hi folks, Ned Pyle here again. We are occasionally asked which reparse points DFS Replication can handle, and whether we can add more. Today I explain DFSR behaviors and why simply adding reparse point support isn’t as cut and dried as it sounds.

Background


A reparse point is user-defined data understood by an application. Reparse points are stored with files and folders as tags; when the file system opens a tagged file, the OS attempts to find the associated file system filter. If found, the filter processes the file as directed by the reparse data.

You may already be familiar with one reparse point type, called a junction. Domain controllers have used a few junction points in the SYSVOL folder since Windows 2000. Any guesses why? Let me know in the Comments section.

Another common junction is the DfsrPrivate folder. Since Windows Server 2008, the DfsrPrivate folder has used a reparse point back into the \System Volume Information\DFSR\<some RF GUID> folder.

You can see these using DIR with the /A:L option (the attribute L shows reparse points):
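For instance, on a domain controller the default SYSVOL location (yours may differ) has junctions you can list this way; the last command assumes a replicated folder path of your own:

rem List the reparse points under the default SYSVOL location
dir /a:l "C:\Windows\SYSVOL\sysvol"
dir /a:l "C:\Windows\SYSVOL\staging areas"

rem The same switch shows the DfsrPrivate junction inside a replicated folder
dir /a:l "<path to a replicated folder>"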

Or FSUTIL, if you’re interested in tag details for some reason:
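Something like this, run against the DfsrPrivate junction mentioned above (substitute your own replicated folder path), dumps the tag and its reparse data:

rem Show the reparse tag and data buffer for the DfsrPrivate junction
fsutil reparsepoint query "<path to a replicated folder>\DfsrPrivate"

rem Junctions (mount points) report tag 0xa0000003; symbolic links report 0xa000000c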

Enter DFSR


DFSR deliberately blocks most reparse points from replicating, for the excellent reason that tags can point to data that exists outside the replicated folder, or to folder paths that don’t align between DFSR servers. For example, if I am replicating c:\rf2, you can see how these reparse point targets will be a problem:

Mklink is the tool of choice for playing with reparse points
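To make that concrete, here is the sort of thing I mean - the paths are invented, but both links live inside c:\rf2 while their targets do not:

rem Directory junction inside the replicated folder, target outside of it
mklink /j c:\rf2\archive d:\archive

rem File symbolic link inside the replicated folder, target outside of it
mklink c:\rf2\report.txt c:\docs\report.txt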

We talk about support in the DFSR FAQ: http://technet.microsoft.com/en-us/library/cc773238(WS.10).aspx

Does DFS Replication replicate NTFS file permissions, alternate data streams, hard links, and reparse points?

  • Microsoft does not support creating NTFS hard links to or from files in a replicated folder – doing so can cause replication issues with the affected files. Hard link files are ignored by DFS Replication and are not replicated. Junction points also are not replicated, and DFS Replication logs event 4406 for each junction point it encounters.

  • The only reparse points replicated by DFS Replication are those that use the IO_REPARSE_TAG_SYMLINK tag; however, DFS Replication does not guarantee that the target of a symlink is also replicated. For more information, see the Ask the Directory Services Team blog.

  • Files with the IO_REPARSE_TAG_DEDUP, IO_REPARSE_TAG_SIS, or IO_REPARSE_TAG_HSM reparse tags are replicated as normal files. The reparse tag and reparse data buffers are not replicated to other servers because the reparse point only works on the local system. As such, DFS Replication can replicate folders on volumes that use Data Deduplication in Windows Server 2012, or Single Instance Storage (SIS), however, data deduplication information is maintained separately by each server on which the role service is enabled.


And from the related question, Can I configure which file attributes are replicated? https://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_038

  • DFS Replication does not replicate reparse point attribute values unless the reparse tag is IO_REPARSE_TAG_SYMLINK. Files with the IO_REPARSE_TAG_DEDUP, IO_REPARSE_TAG_SIS or IO_REPARSE_TAG_HSM reparse tags are replicated as normal files. However, the reparse tag and reparse data buffers are not replicated to other servers because the reparse point only works on the local system.


Different reparse points give different results. For instance, you get a friendly event log error for junction points:

Well, as friendly as an error can be, I reckon
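If you want to go fishing for those 4406 events yourself, a query along these lines works (assuming the default DFS Replication event channel name; adjust the count to taste):

rem Show the five most recent junction point warnings (event 4406) as text
wevtutil qe "DFS Replication" /q:"*[System[(EventID=4406)]]" /c:5 /rd:true /f:text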

A hard-linked file uses NTFS magic to tie multiple instances of a file together (I’ve talked about this before in the context of USMT). We do not allow DFSR to deal with all those instances, as the file could then be both in and out of the replica set simultaneously. Moreover, hard links cannot survive a move between volumes - even if you were just copying the files between the C: and D: drives yourself.
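For the curious, creating one takes a single MKLINK command; the paths below are invented, with the link name picked to line up with the debug log excerpt that follows:

rem Add a second directory entry (hard link) for an existing file inside the
rem replicated folder - the very thing the FAQ above says not to do
mklink /h c:\rf2\hardlink4.txt c:\rf2\somefile.txt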

You probably don’t care about this, though; hard links are extremely uncommon and your users would have to be very familiar with MKLINK to create one. If by some chance someone did actually create one, you get a DFSR debug log entry instead of an event. For those that like reading such things:
20130122 17:27:24.956 1460 OUTC   591 OutConnection::OpenFile Received request for update:

+      present                         1

+      nameConflict                    0

+      attributes                      0x20

+      ghostedHeader                   0

+      data                            0

+      gvsn                            {85BFBD50-BC6D-4290-8341-14F8D64304CB}-v52 <-- here I modified a hard-linked file on the upstream DFSR server

+      uid                             {85BFBD50-BC6D-4290-8341-14F8D64304CB}-v51

+      parent                          {3B5B7E77-3865-4C42-8BBE-DD8A15F8BC1E}-v1

+      fence                           Default (3)

+      clockDecrementedInDirtyShutdown 0

+      clock                           20130122 22:27:21.805 GMT (0x1cdf8efa5515cb2)

+      createTime                      20130122 22:25:49.736 GMT

+      csId                            {3B5B7E77-3865-4C42-8BBE-DD8A15F8BC1E}

+      hash                            00000000-00000000-00000000-00000000

+      similarity                      00000000-00000000-00000000-00000000

+      name                            hardlink4.txt

+      rdcDesired:1 connId:{FA95B57E-8076-47F6-B08A-768E5747B39E} rgName:rg2


20130122 17:27:24.956 1460 OUTC  4403 OutConnectionContentSetContext::GetUpdatedRecord Database is too out of sync with updateUid:{85BFBD50-BC6D-4290-8341-14F8D64304CB}-v51 connId:{FA95B57E-8076-47F6-B08A-768E5747B39E} rgName:rg2


20130122 17:27:24.956 1460 SRTR  3011 [WARN] InitializeFileTransferAsyncState::ProcessIoCompletion Failed to initialize a file transfer. connId:{FA95B57E-8076-47F6-B08A-768E5747B39E} rdc:1 uid:{85BFBD50-BC6D-4290-8341-14F8D64304CB}-v51 gsvn:{85BFBD50-BC6D-4290-8341-14F8D64304CB}-v52 completion:0 ptr:0000008864676210 Error: <-- DFSR warns that it cannot begin the file transfer on the changed file; note the matching UID that tells us this is hardlink4.txt

+      [Error:9024(0x2340) UpstreamTransport::OpenFile upstreamtransport.cpp:1238 1460 C The file meta data is not synchronized with the file system]

+      [Error:9024(0x2340) OutConnection::OpenFile outconnection.cpp:689 1460 C The file meta data is not synchronized with the file system]

+      [Error:9024(0x2340) OutConnectionContentSetContext::OpenFile outconnection.cpp:2562 1460 C The file meta data is not synchronized with the file system]

+      [Error:9024(0x2340) OutConnectionContentSetContext::GetUpdatedRecord outconnection.cpp:4436 1460 C The file meta data is not synchronized with the file system]

+      [Error:9024(0x2340) OutConnectionContentSetContext::GetUpdatedRecord outconnection.cpp:4407 1460 C The file meta data is not synchronized with the file system] <-- DFSR states that the file meta data is not in sync with the file system. This is true! A hardlink makes a file exist in multiple places at once, and some of these places are not replicated.

With symbolic links (sometimes called soft links), DFSR does support replication of the reparse point tags. DFSR sends the reparse point - without modification - along with the file. There are some potential issues with this, though:

  1. Symbolic links can point to data that lies outside the replicated folder

  2. Symbolic links can point to data that lies within the replicated folder, but along a different relative path on each DFSR server

  3. Even though a Windows Server 2003 R2 DFSR server can replicate in a file carrying a symbolic link tag, it has no idea what that tag means!


I documented the first case a few years ago. The second case is more subtle - any guesses on why this is a problem? When you create a symbolic link, you usually store the entire absolute path in the tag. This means that if the downstream DFSR server keeps its replicated folder at a different local path, the tag will point to a non-existent location:
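Here is a sketch of that second case with invented paths; the upstream server keeps the replicated folder on the C: drive, while the downstream server keeps it at d:\rf2:

rem Upstream server - replicated folder is c:\rf2; the symlink stores the full path
mklink c:\rf2\docs\current.txt c:\rf2\docs\report-2013.txt

rem Downstream server - the same replicated folder lives at d:\rf2, so the
rem replicated symlink still points to c:\rf2\docs\report-2013.txt, which does
rem not exist there; the link itself replicates fine, but opening it fails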


Alive and dead at the same time… like the quantum cat

The third case is becoming less of a possibility all the time; Windows Server 2003 R2 DFSR is on its way out, and we have steps for migrating off.

For all these reasons - and much like steering a car with your feet - using symbolic links in a replicated folder is possible, but not a particularly good idea.

That leaves us with the special reparse points for Single Instance Storage and dedup. Since these tags are used merely to aid in the dehydration and rehydration of files for de-duplication purposes, DFSR is happy to replicate the files. However, since dedup works only on a per-volume, per-computer basis, DFSR sends the rehydrated file without the reparse tag to the other server. This means you will have to run dedup on all the replication partners in order to save that space. Dehydrating and rehydrating files on Windows Server 2012 does not cause replication.
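If you are curious whether a given file on a dedup-enabled volume is currently dehydrated, fsutil will show you the tag; the path below is invented:

rem Check a file on a Data Deduplication volume for a reparse tag
fsutil reparsepoint query d:\rf2\bigfile.vhd

rem A dehydrated file carries the dedup tag (IO_REPARSE_TAG_DEDUP, 0x80000013);
rem DFSR replicates the file contents but never that tag or its data buffer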

Why making changes here is harder than it looks


Even though reparse point usage is uncommon, there are cases where third-party software vendors use them. The example I usually hear is near-line archiving or tiered storage: when an application traverses the reparse point, a filter driver performs its magic. These vendors or their customers periodically ask us to add support for reparse point type X to DFSR.

This puts us in a rather difficult position. Consider the ramifications of this change:

  1. It introduces incompatibility between DFSR nodes, where older OSes will not understand or support a new data type, leading to replica sets that never converge. This divergence will not be easily discoverable until it’s too late. Data divergence scenarios caused by topology design are very hard to communicate - i.e., for every administrator who reads the TechNet article, KB, or blog post telling them why they cannot safely use certain topologies, many others will simply deploy and then later open support cases about DFSR being “broken”. Data fidelity is the utmost priority in a distributed file replication system. This already happens with unsupported reparse points - I found 66 MS Support cases from the past few years with a quick search of our support case database, and that was just looking for obvious signs of the problem.

  2. Even if we added the reparse point support and simply required customers to run the latest version of Windows Server and DFSR, customers would have to replace all nodes in any existing replica simultaneously. These can number in the hundreds. Even if it were only two nodes, they would have to remove and recreate the replication topology, and then re-replicate all the data. Otherwise, end users accessing data would find some nodes missing the data, because the new reparse points would be filtered out of replication on the previous operating systems. This kind of “random” problem is no fun for administrators to troubleshoot, and if you use DFS Namespaces to distribute load or raise availability, the problem grows.

  3. Since we are talking about third party reparse point tags, DFSR would need a mechanism for allowing customers to add the tag types – we can’t hard-code non-Microsoft tags into Windows, obviously. There is nowhere in the existing DFSR management tools to specify this kind of setting, and no attribute in Active Directory. This means customers would have to hand-maintain the custom reparse point rules on a per-server basis, probably using the registry, and remember to set them as new nodes were added or replaced over time. If the new junior admin didn’t know about this when sent off to add replicas, see #1 and #2.


Distributed file replication is one of the more complex computing scenarios, and it is a minefield of unintended consequences. Data that points to other data is an area where DFSR has to take great care, lest you create replication black holes. This goes for us as the developers of DFSR as well as you, the implementers of complex topologies. I hope this article sheds some light on the picture.

Moreover, the next time you ding your software vendor for not supporting DFSR, check with them about reparse points – that very well may be the reason. Heck, they may have sent you this blog post!

Until next time,

- Ned “Reductio ad absurdum” Pyle