
SMB Connections move on connect

Scale-Out File Server (SOFS) relies on DNS round robin for inbound connections sent to cluster nodes.  When using Storage Spaces on Windows Server 2016 and older, this behavior can be inefficient: if the connection is routed to a cluster node that does not own the Cluster Shared Volume (that is, the coordinator node), all data redirects over the network to another node before returning to the client. The SMB Witness service detects this lack of direct I/O and moves the connection to a coordinator.  This can lead to delays.

In Windows Server 2019, we are much more efficient.  The SMB Server service determines if direct I/O on the volume is possible.  If direct I/O is possible, it passes the connection on.  If it is redirected I/O, it will move the connection to the coordinator before I/O starts.  Synchronous client redirection required changes in the SMB client, so only Windows Server 2019 and Windows 10 Fall Creators Update (version 1709) and later clients can use this new functionality when talking to a Windows Server 2019 Failover Cluster.  SMB clients on older OS versions will continue relying upon the SMB Witness to move them to a more optimal server.
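On those older clients you can still observe, and if necessary drive, the witness-based move yourself. A minimal sketch, run on a cluster node; the client and node names are hypothetical:

```powershell
# List the clients the SMB Witness service is currently tracking
Get-SmbWitnessClient | Select-Object ClientName, FileServerNodeName, State

# Manually move an older client's connection to a better node
Move-SmbWitnessClient -ClientName "HVHost01" -DestinationNode "NodeA"
```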


As a note here, I wanted to point out when a move would and would not occur in a stretch scenario, as it depends on the storage you are using.  For my example, my Scale-Out File Server is running on NodeA in SiteA.  All nodes' IP addresses are registered in DNS, and round robin determines where a client connects.

 

If you have a stretch Failover Cluster and the storage presents itself as symmetric, meaning all nodes have access to the drives, the client connection will be moved to SiteA as described above.

 

But let's say the SAN storage is asymmetric, meaning each site has its own SAN storage and there is replication between them.  This is the process that will occur.

 

1. A client connection is sent to a node in SiteB.

2. The node in SiteB will retain that connection. 

3. All data requests will be redirected over the CSV network to SiteA.

4. Data is retrieved and sent back over the CSV network to the node in SiteB.

5. The node in SiteB then sends the data to the client.

6. Rinse, repeat for all other data requests.
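To see whether a given node would serve direct or redirected I/O for a CSV, and therefore whether a connection move would help, you can query the CSV state. A sketch with a hypothetical disk name:

```powershell
# StateInfo shows Direct, FileSystemRedirected, or BlockRedirected per node
Get-ClusterSharedVolumeState -Name "Cluster Disk 1" |
    Select-Object Name, Node, StateInfo, FileSystemRedirectedIOReason
```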


Infrastructure Scale-Out File Server

There is a new Scale-Out File Server role in Windows Server 2019 called Infrastructure File Server.  When you create an Infrastructure File Server, it automatically creates a single namespace share for the CSV drive (i.e. \\InfraSOFSName\Volume1, etc.).  In hyper-converged configurations, an Infrastructure SOFS allows an SMB client (the Hyper-V host) to communicate with guaranteed Continuous Availability (CA) to the Infrastructure SOFS SMB server.  There can be at most one Infrastructure SOFS cluster role on a Failover Cluster.

To create the Infrastructure SOFS, you would need to use PowerShell.  For example:
Add-ClusterScaleOutFileServerRole -Cluster MyCluster -Infrastructure -Name InfraSOFSName
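Once the role is created, you can verify it and the automatically created namespace share. A sketch using the example names from above, run on a cluster node:

```powershell
# Confirm the Scale-Out File Server role exists on the cluster
Get-ClusterGroup -Cluster MyCluster |
    Where-Object { $_.GroupType -eq "ScaleoutFileServer" }

# List the shares exposed under the Infrastructure SOFS name (e.g. Volume1)
Get-SmbShare | Where-Object ScopeName -eq "InfraSOFSName"
```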






SMB Loopback

An enhancement was made to Server Message Block (SMB) so that SMB local loopback to itself works properly, which was previously not supported.  This hyper-converged SMB loopback CA is achieved by virtual machines accessing their virtual disk (VHDX) files, where the owning VM's identity is forwarded between the client and server.



This is a role that Cluster Sets takes advantage of: the path to the VHD/VHDX is specified as \\InfraSOFSName\Volume1.  This \\InfraSOFSName\Volume1 path can then be utilized by the virtual machine whether it is local or remote.
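For example, a VM's disk can be created and attached through the namespace path instead of a local C:\ClusterStorage path. The VM and folder names here are placeholders; the SOFS name follows the example above:

```powershell
# Create the VHDX on the Infrastructure SOFS namespace share
New-VHD -Path "\\InfraSOFSName\Volume1\VM01\VM01.vhdx" -SizeBytes 60GB -Dynamic

# Point the new VM at the same UNC path; the path stays valid whether
# the VM runs local to the storage or on another cluster in the set
New-VM -Name "VM01" -MemoryStartupBytes 2GB `
    -VHDPath "\\InfraSOFSName\Volume1\VM01\VM01.vhdx"
```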

Identity Tunneling

In Windows Server 2016, if Hyper-V virtual machines are hosted on a SOFS share, you must grant the machine accounts of the Hyper-V compute nodes permission to access the VHD/VHDX files.  If the virtual machines and the VHD/VHDX files are running on the same cluster, the user must have rights as well.  This can make management difficult, as two sets of permissions are needed.
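For reference, the Windows Server 2016 approach looked roughly like this, with each Hyper-V host's machine account granted access at both the share level and the file-system level (the domain, host, share, and volume names are hypothetical):

```powershell
# Grant the compute node's machine account access at the share level
Grant-SmbShareAccess -Name "VMShare" -AccountName 'CONTOSO\HVHost01$' `
    -AccessRight Full -Force

# ...and again on the NTFS ACL (the second set of permissions)
$acl = Get-Acl "C:\ClusterStorage\Volume1"
$rule = [System.Security.AccessControl.FileSystemAccessRule]::new(
    'CONTOSO\HVHost01$', "FullControl",
    "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl "C:\ClusterStorage\Volume1" $acl
```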

In Windows Server 2019 when using SOFS, we now have “identity tunneling” on Infrastructure shares. When you access an Infrastructure share from the same cluster or Cluster Set, the application token is serialized and tunneled to the server, and VM disk access is done using that token. This works even if your identity is Local System, a service, or a virtual machine account.

Thanks,
John Marlin
Senior Program Manager
High Availability and Storage

Follow me on Twitter @JohnMarlin_MSFT

2 Comments
Occasional Visitor
@JohnMarlin: My understanding is that Infrastructure SOFS will support running a hyper-converged S2D cluster while also exposing volumes through SOFS for another cluster to use as storage - something which is possible but unsupported on 2016. This is brilliant. Will this concept also work and be supported if the client Hyper-V cluster is still Windows Server 2016? Thomas Israelsen
Regular Visitor

Is there any official documentation about this Infrastructure SOFS, such as planning, deploying, and managing? On https://docs.microsoft.com I can only find SOFS documentation (planning, deploying, managing) that applies to Windows Server 2012 R2 and Windows Server 2012: https://docs.microsoft.com/en-us/windows-server/failover-clustering/sofs-overview