Forum Discussion

uedgars
Copper Contributor
Nov 22, 2018

Server 2019: Poor write performance on S2D two node cluster

Hello!

 

I am trying to set up S2D on a two-node cluster for a hyper-converged infrastructure. Unfortunately, I observe a significant write performance drop when comparing the S2D storage with the performance of the slowest physical drive participating in the cluster.

 

What could cause this?

How to get better results?

 

My test environment

OS: Windows Server 2019 Datacenter Build 17723.rs5_release.180720-1452

Both nodes are connected directly using one 10 Gbps link for S2D

Each node has a 1 Gbps link for management

S2D two-node cluster configured with the cache disabled (see the sketch after the node specs below)

 

Node 1

System: Supermicro X9SRH-7F/7TF

CPU: Intel Xeon E5-2620 2.00 GHz (6CPUs)

RAM: 32 GB DDR3

Network: Intel X540-AT2 10 Gbps copper

System drive: Samsung SSD 840 PRO 512 GB

Storage drives: Samsung SSD 850 PRO 512 GB, Samsung SSD 840 PRO 512 GB

 

Node 2

System: Intel S2600WTT

CPU: Genuine Intel CPU 2.30 GHz (ES) (28 CPUs)

RAM: 64 GB DDR4

Network: Intel X540-AT2 10 Gbps copper

System drive: INTEL SSDSC2BB240G7 240 GB

Storage drives: Samsung SSD 850 PRO 512 GB, Samsung SSD 840 PRO 512 GB
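
For reference, a minimal sketch of how the cache can be left disabled when S2D is enabled (assuming the standard Enable-ClusterStorageSpacesDirect cmdlet; the pool name is an assumption, since the exact command used is not shown):

# Sketch: enable S2D with the built-in cache disabled (pool name assumed)
Enable-ClusterStorageSpacesDirect -CacheState Disabled -PoolFriendlyName "S2D Pool" -Confirm:$false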

 

Before enabling S2D, I turned off the write cache for each SSD individually and tested their write performance by copying a 30 GB VHD file. Results were around 130 - 160 MB/s for the Samsung SSD 840 PRO drives and around 60 - 70 MB/s for the Samsung SSD 850 PRO drives.

 

After enabling S2D, write performance drops to 40 - 44 MB/s (see attachment).

  • LaMerk
    Copper Contributor

    Hi,

    Your nodes don't comply with the S2D requirements. Additionally, I would not recommend measuring performance with Windows file copying; you'll find the arguments here:

    https://blogs.technet.microsoft.com/josebda/2014/08/18/using-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead/

    Better to use DiskSpd from Microsoft.
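
    For example, a write-heavy test could look something like this (a sketch; the target path, test file size, and queue depth are assumptions to adapt to your environment):
    # Sketch: 60-second, 100% write, 64 KiB sequential test; -Sh disables software caching and hardware write caching
    .\diskspd.exe -b64K -d60 -o8 -t4 -w100 -Sh -c10G -L C:\ClusterStorage\Volume1\testfile.dat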

     

    Regarding the storage solution, you can also look at virtual SAN vendors. I have had good experience using StarWind vSAN for a two-server cluster. The performance is better and there are no configuration problems. You can find a guide here:

    https://www.starwindsoftware.com/resource-library/starwind-virtual-san-hyperconverged-2-node-scenario-with-hyper-v-cluster-on-windows-server-2016

    • uedgars
      Copper Contributor
      Thanks! I tried DiskSpd; it works well and seems to simulate a workload quite close to the real world.
  • Thanks for evaluating S2D Clusters on Server 2019.

    This configuration does not meet the fundamental requirements of S2D:

    • The SSDs used are consumer drives without power-loss protection (non-PLP), and
    • The nodes are heterogeneous.

    Please go over this blog for more details: https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd
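
    A quick way to see whether the drives report power-loss protection is through the storage cmdlets' advanced properties (a sketch; the reported values depend on the drive firmware):
    # Sketch: list cache and power-protection reporting for every physical disk
    Get-PhysicalDisk | Get-StorageAdvancedProperty | Format-Table FriendlyName, SerialNumber, IsDeviceCacheEnabled, IsPowerProtected -AutoSize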

     

    Also, please refer to this article on evaluating storage performance:

    https://blogs.technet.microsoft.com/josebda/2014/08/18/using-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead/

     

    ~Girdhar Beriwal

    • uedgars
      Copper Contributor
      Hello!

      Thank you for your comment! I understand that my lab setup does not meet these requirements, but I still believe the fundamental things should work with such a setup too. The main point for me was to check whether this technology works before investing in new and quite expensive parts.

      Anyway, I have now rebuilt my setup using two Intel S2600WTF/Y boxes and Intel CPUs. Initially each of them had two 512 GB SSD drives for S2D. I configured S2D with automatic settings successfully. After running some performance tests I got much better results than earlier, actually really acceptable results (even up to a little more than 200 MB/s write speed).

      Next I moved some VMs to the S2D storage and enabled High Availability for them, roughly as sketched below. I ran some crash tests as well and they succeeded; everything worked great.
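      A minimal sketch of that step (the VM name and CSV path are assumptions):
      # Sketch: move a VM's storage onto the S2D cluster shared volume, then make it highly available
      Move-VMStorage -VMName "VM01" -DestinationStoragePath "C:\ClusterStorage\Volume1\VM01"
      Add-ClusterVirtualMachineRole -VMName "VM01"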

      BUT then I faced new problems. I wanted to add four new 1 TB SSD drives per node and extend my pool. I reset all these drives and connected them to the servers.
      1) The first strange thing was that they were automatically added to my S2D pool even though I had previously disabled autopooling (Get-StorageSubSystem Clu* | Set-StorageHealthSetting -Name "System.Storage.PhysicalDisk.AutoPool.Enabled" -Value False).
      2) Second and most important: my SSD tier statistics show only 670 GB of available space, but I connected 8 x 1 TB SSD drives, and with mirrored storage it should be able to allocate around 4 TB! I ran Optimize-StoragePool and it did not help (see also the checks sketched after the commands below).
      3) I connected another SSD drive for other purposes and it again got pooled automatically. I tried to remove it from the S2D pool, but this was also unsuccessful; the disk got stuck in the Primordial pool. Things I did to try to get the disk out of the pool:
      # Get the S2D pool and identify the stuck disk by its serial number
      $pool = Get-StoragePool S2D*
      $disk = Get-PhysicalDisk -SerialNumber "XXXXXXXXXXXXXXXXXXX"

      # Mark the disk as retired so no new data is placed on it
      $disk | Set-PhysicalDisk -Usage Retired

      # Repair the virtual disk(s) to move data off the retired disk, then watch the jobs
      $vdisk = Get-VirtualDisk
      Repair-VirtualDisk $vdisk.FriendlyName
      Get-StorageJob

      # Try to remove the disk from the pool, release the S2D claim on it and reset it
      Get-StoragePool S2D* | Remove-PhysicalDisk -PhysicalDisks $disk
      Set-ClusterS2DDisk -CanBeClaimed $true -PhysicalDiskGuid $disk.UniqueId
      $disk | Reset-PhysicalDisk
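
      For reference, a couple of checks that could help narrow down items 1) and 2) above (a sketch; the S2D* pool name pattern follows the commands above):
      # Sketch: confirm the autopool health setting actually took effect cluster-wide
      Get-StorageSubSystem Clu* | Get-StorageHealthSetting -Name "System.Storage.PhysicalDisk.AutoPool.Enabled"
      # Sketch: verify every new 1 TB disk joined the pool and reports its full size
      Get-StoragePool S2D* | Get-PhysicalDisk | Format-Table FriendlyName, SerialNumber, MediaType, Size, Usage, HealthStatus -AutoSize
      # Sketch: compare total vs. already-allocated capacity of the pool
      Get-StoragePool S2D* | Select-Object FriendlyName, Size, AllocatedSize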
      • Girdhar
        Microsoft

        Can you please send the output of the following cmdlets:

        1. Get-StoragePool

        2. Get-PhysicalDisk
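
        For example, the output could be captured to files like this (a sketch; the output paths are assumptions):
        # Sketch: dump the requested cmdlet output to text files for sharing
        Get-StoragePool | Format-List * | Out-File C:\Temp\StoragePool.txt
        Get-PhysicalDisk | Format-Table FriendlyName, SerialNumber, MediaType, CanPool, Usage, HealthStatus, Size -AutoSize | Out-File C:\Temp\PhysicalDisk.txt -Width 300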

         

        ~Girdhar
