I'm trying to troubleshoot some S2D performance issues (or so I assume they are issues). I'm seeing very low write speeds.
4x nodes, each with:
- 4x 2TB Samsung 850 Pro
- 8x 4TB HGST 7.2k RPM rust
- 64GB Memory
- E5-2670v2 processors
- LSI/Avago/Broadcom/Whoever they are today 9300-8i HBA running latest firmware
- 2x Mellanox ConnectX-3 flashed with 2.4.5030 firmware (read that the latest was buggy)
My initial configuration was 100% 3-way mirror on HDD with SSD as cache. I saw poor performance, so I yanked all the HDDs and created a 3-way mirror out of just the SSDs. Performance wasn't much better.
Read performance isn't horrible; however, when transferring a VHDX to S2D, I was only getting about 75-100MB/s. I realize a file transfer is not an optimal test, but at those speeds it will literally take ages for me to live migrate VMs onto this storage. I could also tell the VMs already running on S2D were impacted by the import of other VMs, so I'm fairly certain something is wrong.
All servers are reporting RDMA = True on all NICs.
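For reference, these are the checks I ran to confirm that (on each node; adapter/server names will obviously differ on other hardware, and I know RDMA = True on the NIC alone doesn't prove SMB traffic is actually going over RDMA, hence the third check during an active transfer):

```powershell
# 1. Per-NIC RDMA state - this is where I see RDMA = True everywhere
Get-NetAdapterRdma | Format-Table Name, Enabled

# 2. Does the SMB client see the interfaces as RDMA-capable?
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable

# 3. While a transfer is in flight, are the SMB connections actually RDMA on both ends?
Get-SmbMultichannelConnection | Format-Table ServerName, ClientRdmaCapable, ServerRdmaCapable
```

If anyone knows of a better way to confirm the traffic is genuinely RDMA and not falling back to TCP, I'm all ears.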
If I do a live migration from source storage to a different destination, transfer rates are 600-700MB/s, so I know the source is not the bottleneck.