First published on TECHNET on Mar 13, 2017
Hello, Claus here again. It has been a while since I last posted here, and a few things have changed since then. Windows Server has moved into the Windows and Devices Group, and we have moved to a new building with a better café but a worse view 😊. On a personal note, I can be seen waddling the hallways, as I have had foot surgery.

At Microsoft Ignite 2016 I did a demo, at the 28-minute mark, as part of the Meet Windows Server 2016 and System Center 2016 session (https://myignite.microsoft.com/videos/3199). I showed how Storage Spaces Direct can deliver massive amounts of IOPS to many virtual machines with various Storage QoS settings. I encourage you to watch it if you haven't already, or go watch it again 😊. In the demo, we used a 16-node cluster connected over iWARP with 40GbE Chelsio T580-CR iWARP adapters, showing more than 6 million read IOPS. Since then, Chelsio has released its 100GbE T6 adapter, and we wanted to take a peek at what kind of network throughput is possible with this new NIC.

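As an aside (this wasn't part of the original post), if you are building a similar iWARP setup, here is a minimal sketch of how one can confirm that the NICs expose RDMA and that SMB Direct connections are actually being used, using in-box cmdlets; run it on each cluster node, and treat the column selections as illustrative only:

    # Confirm the adapters expose RDMA (iWARP) and that it is enabled.
    Get-NetAdapterRdma | Format-Table Name, InterfaceDescription, Enabled

    # Confirm the SMB client sees the interfaces as RDMA-capable.
    Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable, LinkSpeed

    # Once storage traffic is flowing, verify SMB Direct (RDMA) connections are in use.
    Get-SmbMultichannelConnection | Format-Table ServerName, ClientRdmaCapable, ServerRdmaCapable, Selected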
We used the following hardware configuration:

  • 4x Dell R730xd nodes

    • 2x Intel Xeon E5-2660 v3, 2.6 GHz, 10 cores / 20 threads

    • 256 GiB DDR4-2133 (16x 16 GiB DIMMs)

    • 2x Chelsio T6 100 GbE NICs (PCIe 3.0 x16), one port connected per NIC, QSFP28 passive copper cabling

    • Performance Power Plan

    • Storage:

      • 4x 3.2 TB Samsung PM1725 NVMe (PCIe 3.0 x8)

      • 4x SSD + 12x HDD (not in use: all load from Samsung PM1725)



    • Windows Server 2016 + Storage Spaces Direct

      • Cache: Samsung PM1725

      • Capacity: SSD + HDD (not in use: all load from cache)

      • 4x 2 TB three-way mirrored virtual disks, one per cluster node (a volume-creation sketch follows this list)

      • 20 Azure A1-sized VMs (1 vCPU, 1.75 GiB RAM) per node

      • OS High Performance Power Plan



    • Load:

      • DISKSPD workload generator

      • VM Fleet workload orchestrator

      • 80 virtual machines, each with a 16 GiB test file in its VHDX

      • 512 KiB 100% random reads at a queue depth of 3 per VM (see the DISKSPD sketch just after this list)
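For reference, here is roughly what that workload looks like as a DISKSPD command line inside a guest. This is a sketch rather than the exact invocation we used; the file path, thread count, and duration are assumptions.

    # Inside each guest VM - an approximate DISKSPD equivalent of the workload above.
    # -b512K : 512 KiB I/O size           -r  : random access
    # -w0    : 0% writes (100% reads)     -o3 : 3 outstanding I/Os per thread (queue depth 3)
    # -t1    : 1 thread                   -Sh : disable software caching and hardware write caching
    # -d60   : run for 60 seconds         -L  : collect latency statistics
    # C:\run\testfile.dat is a placeholder for the 16 GiB test file inside the VHDX.
    .\diskspd.exe -b512K -r -w0 -o3 -t1 -Sh -d60 -L C:\run\testfile.dat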

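The list above shows the resulting volumes but not how they were created. As a minimal sketch, assuming a Storage Spaces Direct pool already exists, 2 TB three-way mirrored CSV volumes like these can be carved out with New-Volume; the pool filter and volume names below are placeholders, not the names we used.

    # Create one 2 TB three-way mirrored CSV volume per cluster node.
    # "S2D*" matches the default Storage Spaces Direct pool name; "VMVol1..4" are placeholder names.
    # PhysicalDiskRedundancy 2 means three data copies (three-way mirror).
    1..4 | ForEach-Object {
        New-Volume -StoragePoolFriendlyName "S2D*" `
                   -FriendlyName "VMVol$_" `
                   -FileSystem CSVFS_ReFS `
                   -ResiliencySettingName Mirror `
                   -PhysicalDiskRedundancy 2 `
                   -Size 2TB
    }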
We did not configure DCB (PFC) in our deployment, since it is not required for iWARP: unlike RoCE, iWARP runs RDMA over TCP and does not depend on a lossless, PFC-enabled fabric.

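If you want to sanity-check that on your own hosts, here is a small sketch using the in-box DCB/QoS cmdlets; it simply confirms that no PFC priorities or QoS policies are in play.

    # iWARP runs over TCP, so DCB/PFC is optional. All priorities should report Enabled = False:
    Get-NetQosFlowControl | Format-Table Priority, Enabled

    # Any DCB/QoS policies that had been configured (none, in this deployment) would be listed here:
    Get-NetQosPolicy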
Below is a screenshot from the VMFleet Watch-Cluster window, which reports IOPS, bandwidth, and latency.

[Screenshot: VMFleet Watch-Cluster output showing cluster-wide IOPS, bandwidth, and latency. Image: https://techcommunity.microsoft.com/t5/image/serverpage/image-id/107507i08BEA5DE3D1B0C20]

As you can see, the aggregate bandwidth exceeded 83 GB/s, which is very impressive: roughly 21 GB/s served per node. Each VM realized more than 1 GB/s of throughput (83 GB/s across 80 VMs is about 1.04 GB/s each), and the average read latency stayed below 1.5 ms.

Let me know what you think.

Until next time

@ClausJor