hyper-v
CSV Auto-Pause on Windows Server 2025 Hyper-V Cluster
Hi everyone, I'm facing very strange behavior with a newly created Hyper-V cluster running on Windows Server 2025. One of the two nodes keeps triggering auto-pause on the CSV during the I/O peak. Has anyone experienced this? Here are the details:

Environment

Cluster: 2-node Failover Cluster
Nodes: HV1 & HV2 (HPE ProLiant DL360 Gen11)
OS: Windows Server 2025 Datacenter, Build 26100.32370 (KB5075899 installed Feb 21, 2026)
Storage: HPE MSA 2070 full SSD, iSCSI point-to-point (4×25 Gbps per node, 4 MPIO paths)
CSV: Single volume "Cluster Disk 2" (~14 TB, NTFS, CSVFS_NTFS)
Quorum: Disk Witness (Node and Disk Majority)
Networking: 4×10 Gbps NIC teaming for management/cluster/VM traffic, dedicated iSCSI NICs

Problem Description

The cluster experiences CSV auto-pause events daily during a peak I/O period (~10:00-11:30), caused by database VMs generating ~600-800 MB/s (not an extreme load). The auto-pause is triggered by HV2's CsvFs driver, even though HV2 hosts no VMs. All VMs run on HV1, which is the CSV coordinator/owner.

Comparative Testing (Feb 23-26, 2026)

Date   | HV2 Status                 | Event 5120 | SMB Slowdowns (1054)     | Auto-pause Cycles         | VM Impact
Feb 23 | Active                     | 1          | 44                       | 1 cycle (237 ms recovery) | None
Feb 24 | Active                     | 0          | 8                        | 0                         | None
Feb 25 | Drained (still in cluster) | 4          | ~60 (86,400,000 ms max!) | 3 cascade cycles          | Severe, all VMs affected
Feb 26 | Powered off                | 0          | 0                        | 0                         | None

Key finding: Draining HV2 does NOT prevent the issue. Only fully powering off HV2 eliminates all auto-pause events and SMB slowdowns during the I/O peak.

Root Cause Analysis

1. 
CsvFs Driver on HV2 Maintains Persistent SMB Sessions to the CSV

The SMB Client Connectivity log (Event 30833) on HV2 shows ~130 new SMB connections per hour to the CSV share, continuously, constant since boot:

Share: \\xxxx::xxx:xxx:xxx:xxx\xxxxxxxx-...-xxxxxxx$ (HV1 cluster virtual adapter)
All connections from PID 4 (System/kernel), i.e. the CsvFs driver
5,649 connections in 43.6 hours = ~130/hour
Each connection has a different Session ID (not persistent)
This behavior continues even when HV2 is drained

2. HV2 Opens Handles on ALL VM Files

During the I/O peak on Feb 25, the SMB Server Operational log (Event 1054) on HV1 showed HV2 blocking on files from every VM directory, including powered-off VMs and templates:

.vmgs, .VMRS, .vmcx, .xml (VM configuration and state files)
.rct, .mrt (RCT/CBT tracking files)
Affected VMs: almost all, including powered-off VMs and the template winsrv2025-template

3. Catastrophic Block Durations

On Feb 25 (HV2 drained but still in cluster):
Operations blocked for 86,400,000 ms (exactly 24 hours): handles had accumulated since the previous day
These all expired simultaneously at 10:13:52, triggering a cascade auto-pause
Post-auto-pause: severe VM freeze/lag for an additional 2,324 seconds (39 minutes)

On Feb 24 (HV2 active):
Operations blocked for 1,150,968 ms (19 minutes) on one of the VM files
Despite this extreme duration, no auto-pause was triggered that day

4. Auto-pause Trigger Mechanism

The HV2 Diagnostic log at auto-pause time shows:

CsvFs Listener: CsvFsVolumeStateChangeFromIO->CsvFsVolumeStateDraining, status 0xc0000001
OnVolumeEventFromCsvFs: reported VolumeEventAutopause to node 1

Error status 0xc0000001 (STATUS_UNSUCCESSFUL) on an I/O operation from HV2; CsvFsVolumeStateChangeFromIO means an I/O failure triggered the auto-pause. HV2 has no VMs running, so this is purely CsvFs metadata/redirected access.

5. 
SMB Connection Loss During Auto-pause

The SMB Client Connectivity log on HV2 at auto-pause time shows:

Event 30807: Share connection lost: "Le nom réseau a été supprimé" (The network name has been deleted)
Event 30808: Share connection re-established

What Has Been Done

KB5075899 installed (Feb 21): may have slightly improved recovery (from a multi-cycle loop to a single cycle), but did not prevent the auto-pause
Disabled the ms_server binding on the iSCSI NICs (both nodes)
Tuned MPIO: PathVerification Enabled, PDORemovePeriod 120, RetryCount 6, DiskTimeout 100
Drained HV2: no effect
Powered off HV2: completely eliminated the problem

I'm currently going mad over this problem. I've deployed a lot of Hyper-V clusters, and this is the first time I'm experiencing such strange behavior; the only workaround I have found is to take the second node offline to be sure it is not putting locks on the CSV files. The cluster only runs well with one node turned on.

Why does the CsvFs driver on a non-coordinator node (HV2) maintain ~130 new SMB connections per hour to the CSV, even when it hosts no VMs and is drained?
Why do these connections block for up to 24 hours during I/O peaks on the coordinator node?
Why does draining the node not prevent CsvFs from accessing the CSV?
Is this a known issue with the CsvFs driver in Windows Server 2025 Build 26100.32370?
Are there any registry parameters to limit or disable CsvFs metadata scanning on non-coordinator nodes?

If someone sees something that I am missing, I would be so grateful! Have a great day.

WMI Filter for non-Hyper-V Host
I have been struggling for several days trying to build a GPO WMI filter that would apply settings to any server, virtual or physical, as long as it is not a Hyper-V host. It should apply to any VM on VMware or Hyper-V hypervisors. I found many suggestions online, but none of them really work, like checking HypervisorPresent, which is also TRUE on VMs, so no help. I have many ways to find and apply to a Hyper-V host, but EXCLUDING Hyper-V hosts seems to be a tough one; WMI filters are designed to find something and apply if they find it, not the opposite. I have tried queries on the Win32_OptionalFeature class; again, that helps me find Hyper-V but not EXCLUDE it.

Does anyone have an idea about doing this? BTW, this is to apply a setting only to non-Hyper-V servers and skip it if the machine is a Hyper-V host. I am also trying to avoid blocking GPOs at a specific OU and re-linking all but one GPO from that level; I have to assume there is a way to target all servers except Hyper-V hosts. Hopefully someone has succeeded in doing the same. Thank you

Migrating from VMware to Hyper-V
Hi, I've recently deployed a new 3-node Hyper-V cluster running Windows Server 2025. I have an existing VMware cluster running ESXi 7.x. What tools or approaches have you used to migrate from VMware to Hyper-V? I can see there are many 3rd-party tools available, and now Windows Admin Center appears to support this as well. Having never done this before (VMware to Hyper-V), I'm not sure what the best method is. Does anyone here have experience and recommendations, please?
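For anyone weighing the options in the migration question above: besides Windows Admin Center, SCVMM, or third-party tools, a minimal manual path for a small number of VMs is offline disk conversion. The sketch below is an assumption-laden illustration, not a recommendation from the thread: it assumes the open-source qemu-img Windows build is installed under C:\qemu, and every path, VM name, and switch name is a placeholder for your environment. Uninstall VMware Tools in the guest before the final shutdown and export.

```powershell
# Convert the exported VMware disk to a dynamic VHDX.
# -p shows progress; -f/-O are the source/target formats.
& "C:\qemu\qemu-img.exe" convert -p -f vmdk -O vhdx `
    "D:\Export\app01.vmdk" "C:\ClusterStorage\Volume1\app01\app01.vhdx"

# Create a Hyper-V VM and attach the converted disk.
# Generation 2 assumes the guest booted UEFI on VMware;
# BIOS-booted guests need a Generation 1 VM instead.
New-VM -Name "app01" -MemoryStartupBytes 8GB -Generation 2 `
    -VHDPath "C:\ClusterStorage\Volume1\app01\app01.vhdx" `
    -SwitchName "vSwitch-Prod"

# Secure Boot templates differ between platforms; for the first boot it is
# often easiest to disable it, then re-enable once the guest is confirmed up.
Set-VMFirmware -VMName "app01" -EnableSecureBoot Off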
Getting Started with Windows Admin Center Virtualization Mode

Windows Admin Center (WAC) Virtualization Mode is a new, preview experience for managing large Hyper-V virtualization fabrics (compute, networking, and storage) from a single, web-based console. It's designed to scale from a handful of hosts up to thousands, centralizing configuration and day-to-day operations.

This post walks through:
• What Virtualization Mode is and its constraints
• How to install it on a Windows Server host
• How to add an existing Hyper-V host into a resource group

Prerequisites and Constraints

Before you begin, note the current preview limitations:
• The WAC Virtualization Mode server and the Hyper-V hosts it manages must be in the same Active Directory domain.
• You cannot install Virtualization Mode side by side with a traditional WAC deployment on the same server.
• Do not install Virtualization Mode directly on a Hyper-V host you plan to manage. You can install it on a VM running on that host.
• Plan for at least 8 GB RAM on the WAC Virtualization Mode server.

For TLS, the walkthrough assumes you have an Enterprise CA and are deploying domain-trusted certificates to servers, so browsers automatically trust the HTTPS endpoint. You can use a self-signed certificate, but you'll end up with all the fun that entails when you use WAC-V from a host on which the self-signed cert isn't installed. Given the domain requirements of WAC-V and the hosts it manages, the Enterprise CA method seemed the path of least resistance.

Step 1 – Install the C++ Redistributable

On the Windows Server 2025 host that will run WAC Virtualization Mode:

1. Open Windows Terminal or PowerShell.
2. Use winget to search for the VC++ redistributable:

```powershell
winget search "VC Redist"
```

3. Identify the package corresponding to "Microsoft Visual C++ 2015–2022 Redistributable" (or equivalent).
4. 
Install it with winget, for example:

```powershell
winget install --id Microsoft.VCRedist.2015+.x64
```

This fulfills the runtime dependency for the WAC Virtualization Mode installer.

Step 2 – Install Windows Admin Center Virtualization Mode

1. Download the installer
   Download the Windows Admin Center Virtualization Mode installer from the Windows Insider Preview location provided in the official documentation. Save it to a local folder on the WAC host.
2. Run the setup wizard
   Double-click the downloaded binary, approve the UAC prompt, and proceed through the Welcome page as with traditional WAC setup.
3. Accept the license and choose setup type
   Accept the license agreement, then choose Express setup (suitable for most lab and PoC deployments).
4. Select a TLS certificate
   When prompted, select a certificate issued by your Enterprise CA that matches the server name. Using CA-issued certs ensures all domain-joined clients will trust the site without manual certificate import.
5. Configure PostgreSQL for WAC
   Virtualization Mode uses PostgreSQL as its configuration and state database. When prompted, provide a strong password for the database account WAC will use, and record it securely if required by your org standards.
6. Configure update and diagnostic settings
   Choose how WAC should be updated (manual/automatic), and set diagnostic data preferences according to your policy.
7. Complete the installation
   Click Install to deploy the WAC Virtualization Mode web service and the PostgreSQL database instance. When installation completes, click Finish.

Step 3 – Sign In to Virtualization Mode

1. Open a browser on a domain-joined machine and browse to the WAC URL (for example, https://wac-vmode01.contoso.internal).
2. Sign in with domain credentials that have appropriate rights to manage Hyper-V hosts (for example, DOMAIN\adminuser).
3. 
You'll see the new Virtualization Mode UI, which differs significantly from traditional WAC and is optimized for fabric-wide management.

Step 4 – Create a Resource Group

Resource groups help you logically organize the Hyper-V servers you'll manage (for example, by site, function, or cluster membership).

1. In the Virtualization Mode UI, select Resource groups.
2. Click Create resource group.
3. Provide a name, such as Zava-Nested-Vert.
4. Save the resource group.

You now have a logical container ready for one or more Hyper-V hosts.

Step 5 – Prepare the Hyper-V Host

Before adding an existing Hyper-V host:

1. Ensure the host is running Hyper-V, reachable by FQDN (for example, zava-hvA.zavaops.internal), and in the same AD domain as the WAC Virtualization Mode server.
2. Temporarily open File and Printer Sharing from the Hyper-V host's firewall to the WAC Virtualization Mode server. This is required for initial onboarding; after onboarding, you can re-lock firewall rules according to your security baseline.

Step 6 – Add a Hyper-V Host to the Resource Group

1. In the WAC Virtualization Mode UI, go to your resource group.
2. Click the ellipsis (…) and choose Add resource.
3. On the Add resource page, select Compute (you're adding a Hyper-V server, not a storage fabric resource).
4. Enter the Hyper-V host's FQDN (for example, zava-hvA.zavaops.internal).
5. Confirm the host resolves correctly and proceed.

Configure Networking Template

1. On the Networking page, assign fabric roles to NICs using the network template model. Each NIC can be tagged for one or more roles: Compute, Management, Storage.
2. In a simple, single-NIC lab scenario, you may assign Compute, Management, and Storage all to Ethernet0.
3. All three roles must be fully assigned across the available adapters before you can proceed.

Configure Storage

1. On the Storage page, specify the storage model. For an existing host using local disks, choose Use existing storage.
2. 
In the future, you will be able to select SAN or file server storage when those options are available and configured in your environment.

Configure Compute Properties

1. On the Compute page, configure host-level defaults:
   • Enable or disable Enhanced Session Mode.
   • Set the maximum number of concurrent live migrations.
   • Confirm or update the default VM storage path.
2. Review the configuration, click Next, then Submit.
3. The Hyper-V host is registered into the resource group and becomes manageable via Virtualization Mode.

Step 7 – Verify Host and VM Management

With the host onboarded:

1. Open the resource group and select the Hyper-V host.
2. You'll see a streamlined view similar to traditional WAC, with nodes for Event logs, Files, Networks, Storage, Windows Update, and Virtual Machines.
3. To validate functionality, create a test VM:
   • Go to Virtual Machines → Add.
   • Provide a VM name (for example, WS25-temp).
   • Set vCPUs (for example, 2).
   • Optionally enable nested virtualization.
   • Select the appropriate virtual switch.
   • Click Create, then attach an ISO or existing VHDX and complete OS setup.

▶️ Public Preview: https://aka.ms/WACDownloadvMode
▶️ Documentation: https://aka.ms/WACvModeDocs

Encrypted vhdx moved to new host, boots without pin or recovery key
Hyper-V environment. Enabled vTPM on a guest running Server 2022 and encrypted the OS drive C: with BitLocker. The Server 2022 host has a physical TPM. Shut down the guest OS and copied the vhdx file to another Hyper-V host server that is completely off the network (also Server 2022 with a physical TPM). Created a new VM based on the "encrypted" vhdx. I was able to start the VM without needing a PIN or a recovery key. Doesn't this defeat the whole point of encrypting vhds? Searching says this should not be possible, but I replicated it twice on two different off-network Hyper-V host servers.

Another odd thing is that when the guest boots on the new host and you log in, the drive is NOT encrypted. So where's the security in that? Does anyone have any ideas on this, or am I missing something completely? Or have I just made Microsoft angry by pointing out this glaring flaw??

Windows Backup taking waaaaay too long
While I'm not a heavy user of these MS forums, I have had to resort to them from time to time over the last 15-20 years. Yet I still can't figure out the organizational structure, and it seems I can never find the right forum for my query. Almost every time, my post gets moved to the correct forum or message board, or someone gives me a link directly to it. I expect it to be no different this time, and I'm perfectly fine with that. So here we go.

I have Windows Server 2025 installed as a VM using MS's built-in Hyper-V on a Server 2025 computer. The VM is set up as a DC, and all that stuff functions exactly as it should. However, the backup has suddenly gone from taking anywhere from 2 hours up to a maximum that comes close to, but has never exceeded, 4 hours (obviously, it depends on how much there is to actually back up) to far longer. I've already gone through the troubleshooting tips, doing things like checking the VSS settings and a bit of other stuff I can't exactly recall at the moment.

I have an external physical 1 TB USB hard drive attached to the physical computer; it's attached as a drive to the Server 2025 VM and shows up in Computer Management/Disk Management as Disk 1, as it should. I have the VM set up to use this Disk 1 as the backup disk with the Windows Server Backup program.

Some things I note and add here in case it matters:
- The size of the VM disk for this Server 2025 VM is 500 GB, and the partition size of drive C shows as 498.91 GB, with the remainder shown as 100 MB for the EFI system partition and 1001 MB for the recovery partition.
- When backup starts, a new disk labeled Disk 2 appears in the Disk Management window on the VM, and I note it's the same size as drive C on the VM at 498.91 GB.

I'm wondering if this has anything to do with why my backups suddenly went from taking a max of 4 hours to as long as 20 hours to complete. Where is this virtual disk created?
I looked on the VM host machine in the C:\programdata\microsoft\windows\Virtual Hard Disks directory, and it's not there. It's not on the VM machine, because the virtual hard disk directory doesn't exist in that same location on the VM. The host machine itself has a 2 TB hard drive with 993 GB of free space.

Any advice or suggestions here? I have no idea why backups went from 2-4 hours to taking 20 hours or more to complete. Thanks for any help, advice or suggestions anyone can offer here.

-Carl

Enable Nested Virtualization on Windows Server 2025
Nested virtualization allows you to run Hyper-V inside a VM, opening up incredible flexibility for testing complex infrastructure setups, demos, or learning environments, all without extra hardware.

First, ensure you're running a Hyper-V host capable of nested virtualization and have the Windows Server 2025 VM you want to enable as a Hyper-V host ready. To get started, open a PowerShell window on your Hyper-V host and, with the VM powered off, execute:

```powershell
Set-VMProcessor -VMName "<Your-VM-Name>" -ExposeVirtualizationExtensions $true
```

Replace <Your-VM-Name> with the actual name of your VM. This command configures Hyper-V to allow nested virtualization on the target VM.

Boot up the Windows Server 2025 VM that you want to configure as a Hyper-V host. In the VM, open Server Manager and attempt to install the Hyper-V role via Add Roles and Features. Most of the time, this should work right away. However, in some cases you might hit an error stating: "Hyper-V cannot be installed because virtualization support is not enabled in the BIOS."

To resolve this error, open an elevated PowerShell session inside the VM on which you want to enable Hyper-V and run:

```powershell
bcdedit /set hypervisorlaunchtype auto
```

This command ensures the Hyper-V hypervisor starts up correctly the next time you boot. Restart the VM to apply the change. After the reboot, head back to Add Roles and Features and try installing Hyper-V again. This time, it should proceed smoothly without the BIOS virtualization error. Once Hyper-V is installed, perform a final reboot if prompted. Open Hyper-V Manager inside your VM, and you're now ready to run test VMs in your nested environment!

vNVMe on Hyper-V to unlock PCIe 5.0 NVMe performance
On hosts with PCIe 5.0 NVMe drives (E3.S/U.2), Hyper-V guests still use virtual SCSI and leave a lot of performance on the table. We are paying for top-tier storage, yet the software becomes the limiter. A virtual NVMe device that preserves checkpoints, Replica, and Live Migration would align guest performance with modern hardware without forcing DDA (Discrete Device Assignment) and its operational trade-offs.

Automating VMware to Hyper-V Migration with SCVMM
This blog post provides a PowerShell script for automating the migration of a virtual machine (VM) from most versions of VMware to Hyper-V using SCVMM 2022 UR2+ or 2025, retaining all settings (yes, even the static IP!). With 10 Gbps or faster Layer 2 networking and flash storage, expected downtime is approximately 5 minutes per 100 GB of disk.

NUMA Problems after In-Place Upgrade 2022 to 2025
We have upgraded several Hyper-V hosts with AMD Milan processors from Windows Server 2022 to Windows Server 2025 using the in-place upgrade method. We are encountering an issue where, after starting about half of the virtual machines, the remaining ones fail to start with a resource shortage error, even though about 70% of the host's RAM is free. We can only get them to start by enabling the "Allow Spanning" (NUMA spanning) configuration, but this reduces performance, and with so many free resources, this shouldn't be happening. Has anyone else experienced something similar? What changed in 2025 to cause this issue?

The error is:

Virtual machine 'R*****2' cannot be started on this server. The virtual machine NUMA topology requirements cannot be satisfied by the server NUMA topology. Try to use the server NUMA topology, or enable NUMA spanning. (Virtual machine ID CA*****3-ED0E-4***4-A****C-E01F*********C4).

Event ID: 10002
<EventRecordID>41</EventRecordID>
<Correlation />
<Execution ProcessID="5524" ThreadID="8744" />
<Channel>Microsoft-Windows-Hyper-V-Compute-Admin</Channel>
<Computer>HOST-JLL</Computer>
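For NUMA mismatches like the one above, a useful first step is comparing what the host actually exposes with what each VM is configured to require. A minimal sketch using the in-box Hyper-V PowerShell module; the VM name "SQL01" and the per-node processor count are placeholders, so read your host's real topology before setting anything:

```powershell
# What the host exposes: one object per physical NUMA node,
# including available memory and logical processors.
Get-VMHostNumaNode

# What each VM requires. An in-place OS upgrade preserves these per-VM
# limits, and they may no longer match the topology the new OS reports.
Get-VM | Get-VMProcessor |
    Format-Table VMName, Count, MaximumCountPerNumaNode, MaximumCountPerNumaSocket

# Workaround (costs performance): allow VMs to span NUMA nodes host-wide.
# Changing this requires restarting the Virtual Machine Management service.
Set-VMHost -NumaSpanningEnabled $true
Restart-Service vmms

# Usually better: reset the affected VM's NUMA limit to match the host,
# roughly what the "Use Hardware Topology" button in Hyper-V Manager does.
# The value 64 is a placeholder for your host's logical processors per node.
Set-VMProcessor -VMName "SQL01" -MaximumCountPerNumaNode 64
```

Comparing the two outputs shows whether the failing VMs carry a stale per-node limit from the 2022-era topology, which would explain why spanning "fixes" it despite the free RAM.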