
Failover Clustering

Understanding the state of your Cluster Shared Volumes

RobHind
Mar 15, 2019
First published on MSDN on Dec 05, 2013
Cluster Shared Volumes (CSV) is the clustered file system for the Microsoft private cloud, first introduced in Windows Server 2008 R2. In Windows Server 2012, we radically improved the CSV architecture and presented a deep dive of those improvements at TechEd 2012. Building on this new and improved architecture, Windows Server 2012 R2 introduces several new CSV features. In this blog, I am going to discuss one of them: the new Get-ClusterSharedVolumeState Windows Server Failover Clustering PowerShell® cmdlet. This cmdlet enables you to view the state of your CSV, which is useful both for troubleshooting failures and for optimizing the performance of your CSV. In the remainder of this blog, I will explain how to use this cmdlet and how to interpret the information it provides.

Get-ClusterSharedVolumeState Windows PowerShell® cmdlet


The Get-ClusterSharedVolumeState cmdlet allows you to view the state of your CSV on a node in the cluster. Note that the state of your CSV can vary between the nodes of a cluster. Therefore, it might be useful to determine the state of your CSV on multiple or all nodes of your cluster.

To use the Get-ClusterSharedVolumeState cmdlet, open a new Windows PowerShell console and run the following:

  • To view the state of all CSVs on all the nodes of your cluster


Get-ClusterSharedVolumeState

  • To view the state of all CSVs on a subset of the nodes in your cluster


Get-ClusterSharedVolumeState -Node clusternode1,clusternode2

  • To view the state of a subset of CSVs on all the nodes of your cluster


Get-ClusterSharedVolumeState -Name "Cluster Disk 2","Cluster Disk 3"

OR



Get-ClusterSharedVolume "Cluster Disk 2" | Get-ClusterSharedVolumeState

Understanding the state of your CSV


The Get-ClusterSharedVolumeState cmdlet output provides two important pieces of information for a particular CSV: the state of the CSV, and the reason why the CSV is in that particular state. A CSV can be in one of three states: Direct, File System Redirected, and Block Redirected. I will now examine the output of this cmdlet for each of these states.
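
For example, output for a CSV in Direct Mode looks roughly like this (the resource, volume, and node names here are illustrative):

Get-ClusterSharedVolumeState -Name "Cluster Disk 2"

BlockRedirectedIOReason      : NotBlockRedirected
FileSystemRedirectedIOReason : NotFileSystemRedirected
Name                         : Cluster Disk 2
Node                         : clusternode1
StateInfo                    : Direct
VolumeFriendlyName           : Volume1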

Direct Mode


In Direct Mode, I/O operations from the application on the cluster node can be sent directly to the storage, therefore bypassing the NTFS or ReFS volume stack.
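
To see at a glance which CSVs are in Direct Mode on which nodes, you can filter the cmdlet output. A minimal sketch:

# List every CSV/node combination currently in Direct Mode
Get-ClusterSharedVolumeState | Where-Object { $_.StateInfo -eq "Direct" } | Select-Object Name, Node, StateInfo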

File System Redirected Mode


In File System Redirected Mode, I/O on a cluster node is redirected at the top of the CSV pseudo-file system stack and sent over SMB to the coordinator node, where it is written to the disk through the NTFS or ReFS file system stack.

Note:

  • When a CSV is in File System Redirected Mode, I/O for the volume will not be cached in the CSV Block Cache.

  • Data deduplication occurs on a per-file basis. Therefore, when a file on a CSV volume is deduplicated, all I/O for that file occurs in File System Redirected Mode, and it is cached in the Deduplication Cache rather than the CSV Block Cache. I/O for the remaining non-deduplicated files continues in Direct Mode, and the state of the CSV is still reported as Direct.

  • Failover Cluster Manager shows a volume as being in Redirected Access only when it is in File System Redirected Mode and the FileSystemRedirectedIOReason is UserRequest.
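
You can place a CSV into File System Redirected Mode yourself, which is what produces the UserRequest reason. A minimal sketch, assuming "Cluster Disk 2" is your CSV resource name:

# Turn on redirected access for a CSV; its FileSystemRedirectedIOReason becomes UserRequest
Suspend-ClusterResource -Name "Cluster Disk 2" -RedirectedAccess -Force

# Verify the state, then turn redirected access back off
Get-ClusterSharedVolumeState -Name "Cluster Disk 2"
Resume-ClusterResource -Name "Cluster Disk 2"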

Block Redirected Mode


In Block Redirected Mode, I/O passes through the local CSVFS proxy file system stack and is then written directly to Disk.sys on the coordinator node. As a result, it avoids traversing the NTFS/ReFS file system stack twice.
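
To see whether any CSVs are currently block redirected, and why, you can again filter the cmdlet output. A minimal sketch:

# List CSV/node combinations in Block Redirected Mode, with the reason
Get-ClusterSharedVolumeState | Where-Object { $_.StateInfo -eq "BlockRedirected" } | Select-Object Name, Node, BlockRedirectedIOReason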

In conclusion, the Get-ClusterSharedVolumeState cmdlet is a powerful tool that enables you to understand the state of your Cluster Shared Volume and thus troubleshoot failures and optimize the performance of your private cloud storage infrastructure.

Thanks!
Subhasish Bhattacharya
Senior Program Manager
Clustering and High Availability
Microsoft
  • Hi RobHind, thanks for your article.

     

    There are a lot of pros and cons to using ReFS as a file system for Hyper-V machines. See also

    https://techcommunity.microsoft.com/t5/failover-clustering/cluster-shared-volume-a-systematic-approach-to-finding/bc-p/1483528#M307 + comments

     

    Why does a CSVFS volume preformatted with ReFS prevent Direct I/O? (As of June 2020, on Windows Server 2019 1809.)

    The other benefits of using ReFS with Hyper-V are clearly documented, and not only for Storage Spaces Direct. Here no Storage Spaces are involved at all, just direct-attached SAS front-end LUNs, meaning the storage is directly attached to the cluster nodes.

    Every CSVFS volume formatted with ReFS has a StateInfo of FileSystemRedirected; if I create a CSVFS volume with NTFS 64K, it goes Direct.

     

    Example:

    CSV-NTFS is a GPT LUN preformatted with NTFS and a 64K allocation unit size
    VM-DATA is a GPT LUN preformatted with ReFS 3.4 (Server 2019) and a 64K allocation unit size
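
    Comparing the two with the cmdlet from this article (a sketch using the volume names above) shows the differing states:

    # The ReFS volume reports FileSystemRedirected; the NTFS volume reports Direct
    Get-ClusterSharedVolumeState | Where-Object { $_.VolumeFriendlyName -in @("CSV-NTFS", "VM-DATA") } | Select-Object VolumeFriendlyName, Node, StateInfo, FileSystemRedirectedIOReason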

     

     

    Nowhere can I find a reason for this. A bug? It affects Server 2019.

     

    cc: EldenChristensen, https://techcommunity.microsoft.com/t5/failover-clustering/cluster-shared-volume-diagnostics/ba-p/371908

  • Hassel-z
    Copper Contributor

    RobHind 

    Hi Rob,

    I’m new to Windows clustering and recently came across your article—I’m very thankful for it! Currently, I’m setting up a two-node cluster, with each node having a physical connection to an NVMe SSD. I’ve configured Storage Spaces Direct (S2D) to create a storage pool and set up a two-way mirror CSV.

     

    In your article, I noticed that one CSV can achieve Direct I/O on both nodes, but I’m only seeing Direct I/O on the coordinator node. Could this be due to my node or disk configuration, or do I need a different SSD connection setup?

  • Hi Hassel-z, you might want to join azurestackhci.slack.com to receive answers. Feel welcome; it's a serious technical community for S2D and likewise Azure Stack HCI.

     

    Generally, S2D works with directly attached local storage (SAS, NVMe), unlike unclustered Storage Spaces, which can even support USB.

     

    ReFS is the file system of choice for S2D. 

    By design, only the CSV owner (coordinator node) uses Direct I/O.

     

    It's good practice to create one, or better two, CSVs per physical node and to align VMs with the node that owns their CSV, to get full performance in highly write-intensive (IOPS-heavy) workloads; see the sketch at the end of this comment.

     

    All other traffic is always redirected, but with a storage intent for SMB Multichannel, and at best RDMA with RoCEv2, the penalty for the redirection is minimal.

     

    See my comments above on why a ReFS CSV may not be used with a SAN/NAS in the same fashion.

     

    You can learn more about this on learn.microsoft.com under the topics ReFS, Storage Spaces Direct, and Azure Stack HCI networking requirements and concepts (which are similar to S2D's).

     

    Admittedly, the S2D docs might be cluttered or appear outdated, but the feature is in active development, with another round of major advancements coming in Windows Server 2025.

     

    You might want to learn more about the upcoming changes in the Windows Server 2025 summit recordings on techcommunity.microsoft.com.

    You can also already leverage Windows Server 2025 build 26100 for testing.

     

    You can also test and learn more about S2D and Azure Stack HCI with MSLab on GitHub.

    Note: the domain controller for MSLab still requires WS 2022 at the moment; the WS 2025 preview works for all other VM types.
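
    To check and align CSV ownership with your VMs, a minimal sketch (the resource and node names are hypothetical):

    # Show the current owner (coordinator) node of each CSV
    Get-ClusterSharedVolume | Select-Object Name, OwnerNode

    # Move a CSV to the node that runs its VMs
    Move-ClusterSharedVolume -Name "Cluster Disk 2" -Node clusternode1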

     

  • Hassel-z
    Copper Contributor

    Hi Karl-WE 

    Thank you for your detailed response and for recommending the community resources! I’ll make sure to review them carefully.

     

    To clarify my Direct I/O issue: I have two SOFS nodes, each connected to an NVMe SSD via PCIe. I used S2D to create a mirrored CSV on NTFS, but only one node shows Direct I/O, while the other is marked as "Storage Spaces Not Attached."

     

    I have two questions: 

    1. Do I need to use a SAN, with both nodes connected to the physical SSD, and switch to a simple CSV setup to achieve Direct I/O on both nodes simultaneously?

    2. If both nodes can achieve Direct I/O, can they work together to improve performance during read operations?

     

    Thanks again! 

     

     

  • Hi Hassel-z

    Please create a CSV formatted with ReFS, choosing either 4K or 64K sectors depending on the application and the average file size.

     

    What you are seeing with the Direct I/O state is expected; see my comment above.

     

    No, you do not need a SAN or NAS for SoFS. But if you were using one, you could not leverage S2D, and you would need to format the CSV with NTFS instead of ReFS.
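
    For the S2D setup you describe, the volume can be created and added as a CSV in one step. A minimal sketch, assuming an S2D storage pool and hypothetical names and sizes:

    # Create a two-way mirrored, ReFS-formatted CSV from the S2D pool with 64K allocation units
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VM-DATA" -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 1TB -AllocationUnitSize 65536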