Poor VM Storage Performance with Storage Spaces Direct

Hey Everyone,


I am building a 4-node cluster with Windows Server 2022, Hyper-V, and Storage Spaces Direct. Everything is going well except for guest VM storage performance, which is much slower than expected. In general, the Hyper-V hosts see ~10x the storage performance of a Hyper-V guest.


This environment is used for VDI and may host other production VMs, depending on how our VMware quote comes back later this spring.



Hyper-V Hosts:

  • Model: Dell 740XD2
  • RAM: 768 GB
  • CPU: 64 cores (2 sockets x 32 cores)
  • Storage: 2x 256GB SAS SSD (OS), 4x 1.6TB NVMe SSD, 16x 3.2TB NVMe SSD
  • Network: 2x Broadcom 25GbE, Cisco ACI


Performance Numbers:


Computer    ReadMiBSec   WriteMiBSec   ReadIOPS   WriteIOPS   Hardware   Storage
HVGuest21           60            18      7,654       2,297   HV VM      S2D-volume21

HVHost21           636           190     81,353      24,338   Dell       S2D-volume21
HVHost22           616           184     78,899      23,603   Dell       S2D-volume22
HVHost23           509           153     65,215      19,524   Dell       S2D-volume23
HVHost24           606           181     77,575      23,210   Dell       S2D-volume24



  • Performance data was generated with DiskSpd (diskspd.exe)
  • Command: diskspd.exe -b8k -d30 -o4 -t8 -h -r -w23 -L -Z1G -c20G C:\ClusterStorage\[hvhostxx]\DiskSpd.dat
  • Host storage is aligned with CSV ownership
    • There is one virtual disk (CSV) per HV host
    • Each virtual disk is owned by the corresponding host
    • Each host runs its test against the disk it owns
  • Guest VM storage is aligned the same way
  • Guest VM: W2022 Core, 8 vCPUs, 4 GB RAM
  • If host and storage are unaligned, throughput and IOPS drop to ~50% of the above for both the HV hosts and the HV guest
  • There is minimal load on the HV hosts, with only 2 idle test VMs per host.
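For completeness, the guest numbers were gathered with the same DiskSpd parameters as the host command above, run inside the VM against a local path along these lines (the exact target path shown here is illustrative, not from the original test):

```shell
REM Assumed guest-side equivalent of the host test (target path is
REM illustrative): 8 KiB blocks, 30 s, QD4 x 8 threads, 23% writes,
REM random I/O, software+hardware caching disabled.
diskspd.exe -b8k -d30 -o4 -t8 -h -r -w23 -L -Z1G -c20G C:\DiskSpd.dat
```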


For reference, in our VMware environment (vSAN or NetApp/NFS, identical servers and networking), guest VM throughput and IOPS are ~3x what we see with Hyper-V / S2D, and the VMware hosts are each running ~60 VMs. This is not apples-to-apples, just a point of comparison.




  • Is it typical to see this much difference in storage performance between the host and the guest?
  • Can I do something to improve the guest storage performance?
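In case it helps with diagnosis, the host/CSV alignment described above was verified along these lines. This is a sketch using the built-in FailoverClusters and Hyper-V PowerShell modules; the volume and node names follow this post's naming and may differ in other environments:

```shell
# List each CSV with its current owner node and mount path, to confirm
# S2D-volumeNN is owned by the matching HVHostNN before testing.
Get-ClusterSharedVolume |
    Select-Object Name, OwnerNode,
        @{ n = 'Path'; e = { $_.SharedVolumeInfo.FriendlyVolumeName } }

# List each VM's virtual disk path, to confirm the guest's VHDX sits
# on the CSV owned by the host the VM runs on.
Get-VM | Get-VMHardDiskDrive | Select-Object VMName, Path

# If a CSV has drifted to the wrong owner, it can be moved back, e.g.:
# Move-ClusterSharedVolume -Name "S2D-volume21" -Node "HVHost21"
```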


I hope I'm just missing something.  Any suggestions would be appreciated.




