Server 2019: Poor Storage Spaces Direct (S2D) Performance


Hello,

I am trying to set up a two-node S2D cluster for hyperconverged infrastructure. The performance is underwhelming compared with what the same drives deliver when they are not part of an S2D cluster, and even compared with some older hardware we have on hand.

I want to ensure I am not missing anything.

Environment

Node 1:

System: Dell PowerEdge R730

CPU: (2) Intel Xeon E5-2660

RAM: 256 GB

Node 2:

System: Dell PowerEdge R630

CPU: (2) Intel Xeon E5-2698 v4

RAM: 256 GB

OS: Windows Server 2019 Datacenter

Network:

10G Ubiquiti USW-Pro-Aggregation switch

Each node has (1) Intel Ethernet Converged Network Adapter X710 set as the management NIC

Each node has (4) QLogic 10GE QL41164HMCU CNA ports connected to the Ubiquiti switch

The four QLogic ports are combined into a Switch Embedded Teaming (SET) virtual switch

RDMA is enabled and set to iWARP. I have verified the RDMA connections (the checks I use are sketched below).
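For reference, the verification on each node looks roughly like this. It is only a minimal sketch: "SETswitch" is a placeholder for the actual vSwitch name, and the columns shown are just the ones I care about.

# Confirm the SET switch exists and which physical adapters it teams
Get-VMSwitch -Name "SETswitch" | Format-List Name, EmbeddedTeamingEnabled
Get-VMSwitchTeam -Name "SETswitch" | Format-List Name, NetAdapterInterfaceDescription, TeamingMode, LoadBalancingAlgorithm

# Confirm RDMA is enabled on the physical NICs and any host vNICs
Get-NetAdapterRdma | Format-Table Name, InterfaceDescription, Enabled

# Confirm SMB sees RDMA-capable interfaces and is actually using them between the nodes
Get-SmbClientNetworkInterface | Format-Table InterfaceIndex, RdmaCapable, FriendlyName
Get-SmbMultichannelConnection | Format-Table ServerName, Selected, ClientRdmaCapable

Get-SmbMultichannelConnection typically only shows entries while there is SMB traffic to the other node, so I check it during a file copy or a DISKSPD run against the CSV.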

 

Storage:

Each node contains the following drives (a check of how S2D claims them is sketched after the list):

(2) Intel Optane DC P4800X SSDPED1K375GA 375GB SSD PCIe NVMe

(4) Samsung MZILS3T8HMLH0D3 3.49TB SSD SAS

(2) Toshiba KPM5XRUG3T84 3.49TB SSD SAS

(5) Toshiba AL15SEB18EP 1.64TB HDD SAS
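Because the pool layout has a big effect here, this is roughly how I confirm what S2D did with these drives. It is only a sketch: nothing in it is specific to my cluster names, and the columns are just the ones I look at.

# Drive inventory: the Optane devices should show Usage = Journal (cache),
# while the SAS SSDs and HDDs should show Usage = Auto-Select (capacity)
Get-PhysicalDisk | Sort-Object MediaType | Format-Table FriendlyName, MediaType, BusType, Usage, Size -AutoSize

# Cluster-wide S2D state, including the cache
Get-ClusterStorageSpacesDirect

# Pool, volume, and tier settings: resiliency, number of columns, copies
Get-StoragePool -IsPrimordial $false | Format-List FriendlyName, Size, AllocatedSize
Get-VirtualDisk | Format-List FriendlyName, ResiliencySettingName, NumberOfColumns, NumberOfDataCopies, Size, FootprintOnPool
Get-StorageTier | Format-Table FriendlyName, MediaType, ResiliencySettingName, Size -AutoSize

On a two-node cluster the volumes are two-way mirrors (or nested resiliency on 2019), so every write is also committed on the partner node across the 10 GbE links before it is acknowledged, which is one reason a single local SSD can look faster than the pool.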

 

I have been testing with DISKSPD; the output below is from a run against the cluster volume that uses the entire storage pool. I would have expected higher MiB/s and I/O per s. If I reconfigure and test only SSDs, even a single one of the Samsung drives gives about 900 MB/s and 15K+ I/O per s. I also tried isolating one of the Toshiba HDDs and running against it alone, and the cluster's performance is about the same as that single HDD. I would have expected it to be much faster.

* This specific example tests write performance, but I am similarly underwhelmed by read performance. (A heavier test variant I am considering is sketched after the results below.)

 

diskspd -b60K -d60 -h -L -o32 -t1 -s -w100 -c1G C:\ClusterStorage\HV1\LogIO.dat

 

Total IO

thread |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev |  file

-----------------------------------------------------------------------------------------------------

    0 |     21557882880 |       350877 |     342.66 |    5847.99 |    5.471 |     1.226 | C:\..\LogIO.dat (1024MiB)

-----------------------------------------------------------------------------------------------------

total:       21557882880 |       350877 |     342.66 |    5847.99 |    5.471 |     1.226

 

Read IO

thread |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev |  file

-----------------------------------------------------------------------------------------------------

     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | C:\..\LogIO.dat (1024MiB)

-----------------------------------------------------------------------------------------------------

total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       N/A

 

Write IO

thread |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev |  file

-----------------------------------------------------------------------------------------------------

     0 |     21557882880 |       350877 |     342.66 |    5847.99 |    5.471 |     1.226 | C:\..\LogIO.dat (1024MiB)

-----------------------------------------------------------------------------------------------------

total:       21557882880 |       350877 |     342.66 |    5847.99 |    5.471 |     1.226
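For comparison, the heavier run I am considering looks like the lines below. This is only a sketch: the 120-second duration, thread counts, and 64 GB test file are placeholders, chosen mainly so the test file is much larger than 1 GiB and the run includes a warm-up.

# Same 60K sequential write pattern, but a longer run, a warm-up, and a much larger file
diskspd -b60K -d120 -W15 -Sh -L -o32 -t1 -s -w100 -c64G C:\ClusterStorage\HV1\LogIO.dat

# 4K random 70/30 read/write across 8 threads, closer to a VM-style workload
diskspd -b4K -d120 -W15 -Sh -L -o8 -t8 -r -w30 -c64G C:\ClusterStorage\HV1\test.dat

Which node owns C:\ClusterStorage\HV1 also matters, since I/O issued from the non-owner node can take a redirected path; Get-ClusterSharedVolume shows the current owner, so I note that when comparing numbers.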

 

I would appreciate any help. Thanks!

 

 

 

 
