The S2D Campus Cluster is a new configuration that is supported on Windows Server 2025 with the 2025-12 Security Update (KB5072033) (26100.7462) – increasing high availability for Hyper-V VMs, SQL Server FCI, File Servers, SAP, and other applications installed on bare metal between two rooms or buildings in the same campus. With this configuration, you can achieve rack-level resiliency with either two or four copies of data. This configuration can be deployed using PowerShell cmdlets.
Applies to: Windows Server 2025 Failover Clustering with the 2025-12 Security Update (KB5072033) (26100.7462) applied on each Failover Cluster node.
First, I need to THANK the Microsoft MVPs - external consultants that we work closely with – who proposed this configuration, advocated for it, and helped to validate it! MVPs, we listen to you! Thank You Very Much! Coffee is on me!
While this article applies to Windows Server 2025, please note that our friends and colleagues on the Azure Local product team call this configuration “Rack Aware Clustering” – see their articles here: Overview of Azure Local rack aware clustering (Preview) - Azure Local | Microsoft Learn
Note that the S2D Campus Cluster configuration supports two RACKs and is different from the S2D Stretch Cluster configuration, which supports two geographically distant SITES. The S2D Stretch Cluster relies on Storage Replica to replicate volumes between sites, while the S2D Campus Cluster does not use Storage Replica and instead uses S2D replication between the cluster nodes.
What is a Campus?
- Factories
- Business Parks / Office Parks
- Hospitals
- School Campuses, College Campuses
- Vessels / Cruise Ships
- Stadiums
- Any location with two strands of dark fibre cable between two rooms or buildings that can be defined as RACK fault domains in the failover cluster.
We think that there are many opportunities, especially in Europe, where two separate data rooms are required to meet NIS2 requirements.
Included in the 2025-12 Security Update (KB5072033) (26100.7462) for Windows Server 2025 is the Rack Level Nested Mirror (RLNM) for S2D, which improves resiliency for the S2D Campus Cluster by keeping the same number of copies in each rack: one copy in each rack for a two-copy volume, and two copies per rack for a four-copy volume. With a 2+2 S2D Campus Cluster (two failover cluster nodes in each rack), a new level of resiliency can be achieved with a four-copy volume: RLNM places a copy of the data on each node, increasing resiliency to “Rack + Node”, meaning that you can lose a rack AND a node and still have one copy of the data. We think that the 2+2 S2D Campus Cluster is a good tradeoff between cost and performance for many applications.
In the diagram below, note that seven servers are used: four servers form the S2D Campus Cluster (VMs, applications, storage), each rack contains an AD and DNS server, and there is a File Share Witness server.
In designing S2D Campus Cluster solutions, it is very important to find the correct balance between cost and hardware redundancy. Achieving low Recovery Time Objective (RTO) and Recovery Point Objective (RPO) can require investing in redundant hardware. The business cost of VM and application downtime needs to be clearly understood. Businesses like hotels and colleges will have different RTOs and RPOs than businesses like airports and oil refineries.
It’s also important to "practice failure" when validating S2D Campus Cluster solutions: test node failure, rack failure, combined rack and node failure, network switch failure, and disk failure. Is the failover behavior acceptable? Were data corruption and data loss avoided? A minimal drill is sketched below.
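As a minimal sketch of such a drill, assuming a node named Node3 (a placeholder) and volumes already created on the cluster, you might drain one node, confirm the volumes stay healthy, and then watch them resynchronize:
#Drain roles off one node to simulate a planned node outage (Node3 is a placeholder name):
Suspend-ClusterNode -Name Node3 -Drain -Wait
#Confirm that all volumes remain healthy while the node is down:
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
#Bring the node back and watch the repair/resync jobs complete:
Resume-ClusterNode -Name Node3 -Failback Immediate
Get-StorageJob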
What are the support requirements for the S2D Campus Cluster on Windows Server 2025?
- Install the Windows Server 2025 12B CU (KB5072033) on every node in the failover cluster.
- “Flat” S2D storage: all drives are capacity drives and all flash (SSD or NVMe); do not use HDDs.
- Define exactly two RACK cluster fault domains – and place the cluster nodes in these two racks.
- Follow hardware OEM guidelines.
- Cluster quorum resource (File Share Witness, Disk Witness, Cloud Witness, or USB Witness) should be placed in a third room, separate from the two rooms containing the racks (see the example after this list).
- Recommended: Each rack should have a separate network path to the cluster quorum resource.
- Recommended: Redundant TOR switches, CORE switches, and dedicated networks for S2D storage traffic to minimize Single Points Of Failure (SPOF) and maximize workload uptime and durability.
- Recommended: 1ms latency (or less) between racks.
- Recommended: RDMA NICs and switches, because RDMA can achieve 30% CPU savings.
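As an example of the witness recommendation above, a File Share Witness hosted on a server in the third room could be configured with the Set-ClusterQuorum cmdlet (the cluster name and UNC path below are placeholders):
#Point the cluster at a File Share Witness hosted outside of both racks (placeholder path):
Set-ClusterQuorum -Cluster TestCluster -FileShareWitness \\WitnessServer\WitnessShare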
Frequently Asked Questions (FAQs):
- How can I measure network latency?
Answer: We recommend downloading the PsPing utility: PsPing - Sysinternals | Microsoft Learn
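For example, a simple TCP latency test between a node in each rack might look like the following (the IP address and port are placeholders):
#On a node in Rack 1, start PsPing in server mode:
psping -s 10.0.0.1:5000
#On a node in Rack 2, run a TCP latency test against that node and review the average latency:
psping -l 1k -n 100 10.0.0.1:5000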
- With the Rack Level Nested Mirror, can I automatically convert an existing campus cluster with a three copy volume to a four copy volume?
Answer: No. VMs will need to be stopped, data will need to be copied to an external location, RACK fault domains must be defined, and a new storage pool must be created to use the Rack Level Nested Mirror (RLNM). After the new storage pool and its volumes are created, data and VMs can be copied onto the new volumes.
- Will you support 1+1, 2+2, 3+3, 4+4, and 5+5 S2D Campus Cluster configurations?
Answer: Yes. Note that Rack+Node resiliency is a special case with a 1+1 S2D Campus Cluster using a two-copy volume and a 2+2 S2D Campus Cluster using a four-copy volume.
- Can I create as many two-copy and four-copy volumes as I need, or are there limits?
Answer: Yes, you can create as many two-copy and four-copy volumes as you need, given the capacity of the storage pool. There is a trade-off between resiliency and capacity: a two-copy volume delivers 50% usable capacity, and a four-copy volume delivers 25% usable capacity. We recommend storing valuable VMs, applications, and data on four-copy volumes.
- For the S2D Campus Cluster on Windows Server 2025, are redundant core switches required? Are two TOR switches per RACK required?
Answer: Core switches are recommended but optional. A single TOR switch per RACK is acceptable, but we note that it’s a single point of failure. Network infrastructure investments (switches, NICs, cabling) should correspond to the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) requirements of the business.
If you would like to see additional scenarios supported, please let us know so that we can prioritize development and validation and address your business needs!
Please send your questions, comments, and feature requests about the S2D Campus Cluster to: wsfc_s2dcampuscluster@microsoft.com
PowerShell Script for Deploying S2D Campus Cluster:
#Create a test cluster but do not create storage:
New-Cluster -Name TestCluster -Node Node1, Node2, Node3, Node4 -NoStorage
#Define the fault domains for the cluster – two nodes are in “Room1” and two nodes are in “Room2”:
Set-ClusterFaultDomain -XML @"
<Topology>
  <Site Name="Redmond">
    <Rack Name="Room1"><Node Name="Node1"/><Node Name="Node2"/></Rack>
    <Rack Name="Room2"><Node Name="Node3"/><Node Name="Node4"/></Rack>
  </Site>
</Topology>
"@
#Alternatively, you can define the fault domains using the New-ClusterFaultDomain and Set-ClusterFaultDomain cmdlets:
#New-ClusterFaultDomain -Name Redmond -FaultDomainType Site
#New-ClusterFaultDomain -Name Room1 -FaultDomainType Rack
#New-ClusterFaultDomain -Name Room2 -FaultDomainType Rack
#Set-ClusterFaultDomain -Name Room1 -Parent Redmond
#Set-ClusterFaultDomain -Name Room2 -Parent Redmond
#Set-ClusterFaultDomain -Name Node1 -Parent Room1
#Set-ClusterFaultDomain -Name Node2 -Parent Room1
#Set-ClusterFaultDomain -Name Node3 -Parent Room2
#Set-ClusterFaultDomain -Name Node4 -Parent Room2
#Note that you can check your fault domains using the Get-ClusterFaultDomain cmdlet.
#Add Storage Spaces Direct (S2D) storage to the cluster (the Enable-ClusterS2D alias can also be used):
Enable-ClusterStorageSpacesDirect
#Update the Storage Pool
Get-StoragePool S2D* | Update-StoragePool
#Check that the StoragePool version is 29:
(Get-CimInstance -Namespace root/microsoft/windows/storage -ClassName MSFT_StoragePool -Filter 'IsPrimordial = false').CimInstanceProperties['Version'].Value
#Check that the Storage Pool's FaultDomainAwarenessDefault property is set to StorageRack:
Get-StoragePool -FriendlyName <S2DStoragePool> | Format-List FriendlyName, FaultDomainAwarenessDefault
#Note that resiliency is specified when the volume is created. Our idea with the S2D Campus Cluster is that IT Admins can create as many two-copy and four-copy volumes as needed; valuable workloads (VMs) and data should go on the four-copy volumes.
#Create a four-copy volume on the storage pool, fixed provisioned:
New-Volume -FriendlyName "FourCopyVolumeFixed" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 3 -ProvisioningType Fixed -NumberOfDataCopies 4 -NumberOfColumns 3
#Optional - Create a four-copy volume on the storage pool, thinly provisioned:
New-Volume -FriendlyName "FourCopyVolumeThin" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 3 -ProvisioningType Thin -NumberOfDataCopies 4 -NumberOfColumns 3
#Optional - Create a two-copy volume on the storage pool, fixed provisioned:
New-Volume -FriendlyName "TwoCopyVolumeFixed" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1 -ProvisioningType Fixed
#Optional - Create a two-copy volume on the storage pool, thinly provisioned:
New-Volume -FriendlyName "TwoCopyVolumeThin" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1 -ProvisioningType Thin
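#Optional verification (a sketch, assuming the volume names above): confirm that each new volume reports the expected number of data copies and StorageRack fault domain awareness:
Get-VirtualDisk | Format-List FriendlyName, NumberOfDataCopies, PhysicalDiskRedundancy, FaultDomainAwareness, HealthStatus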