Forum Discussion

BadgerMD974
Occasional Reader
Feb 26, 2026

CSV Auto-Pause on Windows Server 2025 Hyper-V Cluster

Hi everyone, I'm seeing very strange behavior on a newly created Hyper-V cluster running Windows Server 2025. One of the two nodes keeps triggering auto-pause on the CSV during the I/O peak. Has anyone experienced this?

 

Here are the details:

 

Environment

Cluster: 2-node Failover Cluster

Nodes: HV1 & HV2 (HPE ProLiant DL360 Gen11)

OS: Windows Server 2025 Datacenter, Build 26100.32370 (KB5075899 installed Feb 21, 2026)

Storage: HPE MSA 2070 full SSD, iSCSI point-to-point (4×25 Gbps per node, 4 MPIO paths)

CSV: Single volume "Cluster Disk 2" (~14 TB, NTFS, CSVFS_NTFS)

Quorum: Disk Witness (Node and Disk Majority)

Networking: 4×10 Gbps NIC teaming for management/cluster/VM traffic, dedicated iSCSI NICs

 

Problem Description

The cluster experiences CSV auto-pause events daily during a peak I/O period (~10:00-11:30), caused by database VMs generating ~600-800 MB/s (not that much). The auto-pause is triggered by HV2's CsvFs driver, even though HV2 hosts no VMs. All VMs run on HV1, which is the CSV coordinator/owner.
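As a quick sanity check that this workload should be well within the storage fabric's capacity, here is my own back-of-the-envelope arithmetic (not from the cluster logs):

```python
# Per-node iSCSI capacity: 4 point-to-point paths at 25 Gbps each
paths = 4
gbps_per_path = 25
total_gbps = paths * gbps_per_path       # 100 Gbps raw
capacity_mb_s = total_gbps * 1000 / 8    # 12500.0 MB/s theoretical

peak_mb_s = 800                          # observed peak from the database VMs
utilization = peak_mb_s / capacity_mb_s
print(f"capacity ~{capacity_mb_s:.0f} MB/s, peak uses {utilization:.1%}")
```

Even at the 800 MB/s peak, the paths sit around 6-7% of theoretical bandwidth, so raw throughput is very unlikely to be the bottleneck here.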

 

Comparative Testing (Feb 23-26, 2026)

| Date   | HV2 Status                 | Event 5120 | SMB Slowdowns (1054)     | Auto-pause Cycles         | VM Impact                 |
|--------|----------------------------|------------|--------------------------|---------------------------|---------------------------|
| Feb 23 | Active                     | 1          | 44                       | 1 cycle (237 ms recovery) | None                      |
| Feb 24 | Active                     | 0          | 8                        | 0                         | None                      |
| Feb 25 | Drained (still in cluster) | 4          | ~60 (86,400,000 ms max!) | 3 cascade cycles          | Severe - all VMs affected |
| Feb 26 | Powered off                | 0          | 0                        | 0                         | None                      |

 

Key finding: Draining HV2 does NOT prevent the issue. Only fully powering off HV2 eliminates all auto-pause events and SMB slowdowns during the I/O peak.

 

Root Cause Analysis

1. CsvFs Driver on HV2 Maintains Persistent SMB Sessions to CSV

 

SMB Client Connectivity log (Event 30833) on HV2 shows ~130 new SMB connections per hour to the CSV share, continuously since boot:

 

Share: \\xxxx::xxx:xxx:xxx:xxx\xxxxxxxx-...-xxxxxxx$ (HV1 cluster virtual adapter)

All connections from PID 4 (System/kernel) — CsvFs driver

5,649 connections in 43.6 hours = ~130/hour

Each connection has a different Session ID (not persistent)

This behavior continues even when HV2 is drained
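The ~130/hour figure above is simply the raw connection count divided by the observation window:

```python
connections = 5_649    # Event 30833 entries logged on HV2
window_hours = 43.6    # time since boot
rate = connections / window_hours
print(f"~{rate:.0f} new SMB connections/hour")
```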

 

2. HV2 Opens Handles on ALL VM Files

 

During the I/O peak on Feb 25, SMB Server Operational log (Event 1054) on HV1 showed HV2 blocking on files from every VM directory, including powered-off VMs and templates:

 

.vmgs, .VMRS, .vmcx, .xml — VM configuration and state files

.rct, .mrt — RCT/CBT tracking files

Affected VMs: almost all

Also affected: powered-off VMs

And templates: winsrv2025-template

 

3. Catastrophic Block Durations

 

On Feb 25 (HV2 drained but still in cluster):

 

Operations blocked for 86,400,000 ms (exactly 24 hours) — handles accumulated since previous day

These all expired simultaneously at 10:13:52, triggering cascade auto-pause

Post-autopause: severe VM freeze/lag for an additional 2,324 seconds (~39 minutes)

 

On Feb 24 (HV2 active):

 

Operations blocked for 1,150,968 ms (~19 minutes) on one VM file

Despite this extreme duration, no auto-pause was triggered that day
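For reference, the block durations above convert to human units as follows (plain unit arithmetic):

```python
# Feb 25 (HV2 drained): blocked handles expired after exactly one day
blocked_ms = 86_400_000
print(blocked_ms / 3_600_000, "hours")       # exactly 24.0 hours

# Post-autopause freeze/lag
freeze_s = 2_324
print(freeze_s / 60, "minutes")              # ~38.7 minutes

# Feb 24 (HV2 active): longest single block, yet no auto-pause
blocked_feb24_ms = 1_150_968
print(blocked_feb24_ms / 60_000, "minutes")  # ~19.2 minutes
```

That the maximum is exactly 24 hours hints at a fixed one-day expiry rather than organic I/O latency, which would be consistent with handles accumulating since the previous day.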

 

4. Auto-pause Trigger Mechanism

HV2 Diagnostic log at auto-pause time:

 

CsvFs Listener: CsvFsVolumeStateChangeFromIO->CsvFsVolumeStateDraining, status 0xc0000001
OnVolumeEventFromCsvFs: reported VolumeEventAutopause to node 1

 

Error status 0xc0000001 (STATUS_UNSUCCESSFUL) on I/O operation from HV2

CsvFsVolumeStateChangeFromIO = I/O failure triggered the auto-pause

HV2 has no VMs running — this is purely CsvFs metadata/redirected access

 

5. SMB Connection Loss During Auto-pause

SMB Client Connectivity on HV2 at auto-pause time:

 

Event 30807: Share connection lost - "The network name was deleted" (French system: "Le nom réseau a été supprimé")
Event 30808: Share connection re-established

 

What Has Been Done

KB5075899 installed (Feb 21) — may have slightly improved recovery (multi-cycle loop reduced to a single cycle), but did not prevent the auto-pause

Disabled ms_server binding on iSCSI NICs (both nodes)

Tuned MPIO: PathVerification Enabled, PDORemovePeriod 120, RetryCount 6, DiskTimeout 100

Drained HV2 — no effect

Powered off HV2 — completely eliminated the problem

 

This problem is driving me mad. I've deployed many Hyper-V clusters, and this is the first time I've seen such strange behavior. The only workaround I've found is to power off the second node entirely, so it cannot hold locks on the CSV files. The cluster only runs well with a single node turned on.

 

Why does the CsvFs driver on a non-coordinator node (HV2) maintain ~130 new SMB connections per hour to the CSV, even when it hosts no VMs and is drained?

Why do these connections block for up to 24 hours during I/O peaks on the coordinator node?

Why does draining the node not prevent CsvFs from accessing the CSV?

Is this a known issue with the CsvFs driver in Windows Server 2025 Build 26100.32370?

Are there any registry parameters to limit or disable CsvFs metadata scanning on non-coordinator nodes?

 

If someone sees something I'm missing, I would be so grateful!

Have a great day.
