performance
CPU Context Switches
I've done a fair bit of research on this topic, but there are diverse opinions on what constitutes CPU contention as it relates to CPU context switches. I'm curious if anyone has specific thoughts on the topic. Do you measure contention based on a generic threshold value (say, 5000-10000 context switches per CPU)? I've read in some places that context switches should be evaluated on a per-server basis, with some servers having higher values than others while still not experiencing contention.
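As a point of reference, one way to sample the rate and express it per logical CPU, so that numbers are comparable between servers, is sketched below in PowerShell using the in-box \System\Context Switches/sec counter; the interval and sample count are arbitrary choices.

# Sample the system-wide context switch rate and normalize it against the number
# of logical CPUs, so a per-CPU rule of thumb can be applied consistently.
$logicalCpus = [Environment]::ProcessorCount
$samples = Get-Counter -Counter '\System\Context Switches/sec' -SampleInterval 5 -MaxSamples 12

$samples.CounterSamples | ForEach-Object {
    [pscustomobject]@{
        Time         = $_.Timestamp
        TotalPerSec  = [math]::Round($_.CookedValue)
        PerCpuPerSec = [math]::Round($_.CookedValue / $logicalCpus)
    }
} | Format-Table -AutoSize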
RAID Array / Disk Performance with Bitlocker - unexpected results

I've been doing some testing on a system to identify the impact that BitLocker encryption has on read/write performance, and the results have been interesting to say the least; I'm hoping someone can help explain my findings. I'm using an entry-level server with a quad-core Xeon E-2224 CPU and 16GB RAM running a totally fresh install of Windows Server 2016 Standard with all of the latest relevant drivers installed. My storage array is configured through the integrated controller on the motherboard across 4 x 10TB WD Enterprise disks, as follows:

RAID Level: RAID5
Legacy Disk Geometry (C/H/S): 65535/255/32
Strip Size: 64 KiB
Full Stripe Size: 192 KiB
Disk 0: WDC WD102KRYZ-0
Disk 1: WDC WD102KRYZ-0
Disk 2: WDC WD102KRYZ-0
Disk 3: WDC WD102KRYZ-0

I've provisioned a single large logical volume and created two partitions for my testing:

Drive  Capacity   File System  Allocation Unit Size
Y:     100 GB     NTFS         4096 bytes
Z:     25,600 GB  NTFS         8192 bytes

I ran an array of tests (results attached) on the volumes before encryption, then reformatted and encrypted the volumes and ran the same tests again. The results have been very interesting. I was expecting a small performance hit across all tests due to BitLocker, but some tests showed a drastic decrease in performance, in some cases up to 50%, and surprisingly some showed an increase, in one case of 60%. Can anyone help explain this? I'm thinking of running the tests again with another product to see if I get similar results, but the tool I'm using has been fairly robust and reliable in my experience. Findings are in the spreadsheet attached.
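One variable worth ruling out before re-running the comparison is whether both volumes had actually finished encrypting in the background and which cipher BitLocker selected, since a run taken mid-encryption would not be comparable to the pre-encryption baseline. A quick check, assuming the BitLocker PowerShell module is available (it ships with the BitLocker feature on Server 2016):

# Report encryption status and method for the two test volumes (Y: and Z: above).
# VolumeStatus should read FullyEncrypted and EncryptionPercentage should be 100
# before the post-encryption benchmark runs are treated as comparable.
Get-BitLockerVolume -MountPoint 'Y:', 'Z:' |
    Select-Object MountPoint, VolumeStatus, EncryptionPercentage, EncryptionMethod |
    Format-Table -AutoSize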
Server 2019: Poor Storage Spaces Direct (S2D) Performance

Hello, I am trying to set up a two-node cluster for S2D for hyperconverged infrastructure. I am underwhelmed by the performance compared with the storage performance of the same drives when they are not part of an S2D cluster, and even compared with some older hardware we have on hand. I want to ensure I am not missing anything.

Environment

Node 1:
  System: Dell PowerEdge R730
  CPU: (2) Intel Xeon E5-2660
  RAM: 256 GB

Node 2:
  System: Dell PowerEdge R630
  CPU: (2) Intel Xeon E5-2698 v4
  RAM: 256 GB

OS: Windows Server 2019 Datacenter

Network:
  10G Ubiquiti USW-Pro-Aggregation switch
  Each node has (1) Intel Ethernet Converged Network Adapter X710 set as the management NIC
  Each node has (4) QLogic 10GE QL41164HMCU CNA ports connected to the Ubiquiti switch
  The 4 NICs are put into a Switch Embedded Teaming (SET) switch
  RDMA is enabled and set to iWARP; I have verified RDMA connections

Storage (each node contains the following drives):
  (2) Intel Optane DC 4800X SSDPED1K375GA 375GB SSD PCIe
  (4) Samsung MZILS3T8HMLH0D3 3.49TB SSD SAS
  (2) Toshiba KPM5XRUG3T84 3.49TB SSD SAS
  (5) Toshiba AL15SEB18EP 1.64TB HDD SAS

I have been using DISKSPD, and the output below is for the cluster using the entire storage pool. I would have expected higher MiB/s and I/Os per second. If I change the configuration to use only SSDs (even just one of the Samsung drives) I get about 900 MB/s and 15K+ I/Os per second. The cluster's performance is about the same as isolating one of the Toshiba HDDs and running from there, which I also tried; I would have expected it to be much faster.

* This specific example tests write performance, but I am similarly underwhelmed by read performance as well.

diskspd -b60K -d60 -h -L -o32 -t1 -s -w100 -c1G C:\ClusterStorage\HV1\LogIO.dat

Total IO
thread |       bytes |   I/Os |  MiB/s | I/O per s | AvgLat | LatStdDev | file
------------------------------------------------------------------------------
     0 | 21557882880 | 350877 | 342.66 |   5847.99 |  5.471 |     1.226 | C:\..\LogIO.dat (1024MiB)
------------------------------------------------------------------------------
total:   21557882880 | 350877 | 342.66 |   5847.99 |  5.471 |     1.226

Read IO
thread |       bytes |   I/Os |  MiB/s | I/O per s | AvgLat | LatStdDev | file
------------------------------------------------------------------------------
     0 |           0 |      0 |   0.00 |      0.00 |  0.000 |       N/A | C:\..\LogIO.dat (1024MiB)
------------------------------------------------------------------------------
total:             0 |      0 |   0.00 |      0.00 |  0.000 |       N/A

Write IO
thread |       bytes |   I/Os |  MiB/s | I/O per s | AvgLat | LatStdDev | file
------------------------------------------------------------------------------
     0 | 21557882880 | 350877 | 342.66 |   5847.99 |  5.471 |     1.226 | C:\..\LogIO.dat (1024MiB)
------------------------------------------------------------------------------
total:   21557882880 | 350877 | 342.66 |   5847.99 |  5.471 |     1.226

I would appreciate any help. Thanks!
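Before digging further into the drive layout, it might be worth confirming that the storage traffic between the nodes is actually flowing over SMB Direct (RDMA) rather than falling back to plain TCP on the SET team, since a silent fallback alone could produce numbers in this range. A rough sketch of the checks, run from either node while a diskspd test is in flight (in-box cmdlets on Server 2019; counter instance names can vary by adapter):

# The physical storage NICs should report RDMA as enabled.
Get-NetAdapterRdma | Format-Table Name, Enabled

# SMB should see the storage interfaces as RDMA-capable, and the multichannel
# connections between the nodes should show RDMA on the selected interfaces.
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable, RssCapable, Speed
Get-SmbMultichannelConnection

# While diskspd is running, sustained non-zero byte rates here indicate traffic
# really is moving over RDMA rather than TCP.
$rdmaCounters = '\RDMA Activity(*)\RDMA Inbound Bytes/sec',
                '\RDMA Activity(*)\RDMA Outbound Bytes/sec'
Get-Counter -Counter $rdmaCounters -SampleInterval 5 -MaxSamples 6

It might also be worth repeating the diskspd run with more threads (for example -t8 instead of -t1) and a larger test file, since a single-threaded run against a 1 GiB file may not drive the pool hard enough to show its ceiling.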