Forum Discussion
Server 2019: Poor write performance on S2D two node cluster
- Dec 18, 2018
Hi,
Your nodes don't comply with S2D requirements. Additionally, I would not recommend measuring performance with Windows file copy; you'll find the arguments here:
Better to use DiskSpd from Microsoft.
Regarding the storage solution, you could look at virtual SAN vendors. I have had good experience with StarWind vSAN for a two-node cluster: performance is better and there were no configuration problems. You can find a guide here:
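For reference, a typical DiskSpd run for testing write performance might look like this (the file path, duration, thread count, and queue depth below are illustrative; adjust them to your hardware):

```powershell
# 60-second, 100% write, 4 KiB random I/O test against a 10 GiB test file,
# with software and hardware write caching disabled (-Sh) and latency stats (-L).
# The target path and all parameter values here are only examples.
.\diskspd.exe -b4K -d60 -t4 -o32 -w100 -r -Sh -L -c10G C:\ClusterStorage\Volume1\testfile.dat
```

This gives much more meaningful numbers than a file copy, which is single-threaded and buffered.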
Hello!
Remove-PhysicalDisk worked for me even with an asterisk (SSD*), and the SSDs were moved from the S2D pool to the Primordial pool, but the problem is with the next step. I also want to get the disks out of the Primordial pool so I can see them in Disk Management and use them as standalone disks in Windows. As I understand it, this requires Set-ClusterS2DDisk -CanBeClaimed $false, but I got an error. The command and error message are below.
Shortly: I had 4x 512 GB SSDs (2 in each of my two servers). Then I added 8 more SSDs of 1 TB each (4 per server). After that I was unable to use all of this space (the problem I described earlier). As I had no success extending my volume, I decided to remove the 512 GB SSDs from the pool and see what happens. I ran Set-PhysicalDisk -Usage Retired, Repair-VirtualDisk, and Remove-PhysicalDisk. So far everything worked. Finally I wanted to get these 512 GB disks out of the Primordial pool using Set-ClusterS2DDisk -CanBeClaimed $false, but that failed with the error below.
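For anyone following along, the removal sequence I ran was roughly the following (the friendly-name filter and pool name match my setup; substitute your own):

```powershell
# Select the 512 GB drives to remove (this filter is specific to my hardware)
$disks = Get-PhysicalDisk -FriendlyName "*Samsung SSD*" |
    Where-Object { $_.Size -eq 512110190592 }

# 1. Mark the disks as retired so no new allocations land on them
$disks | Set-PhysicalDisk -Usage Retired

# 2. Repair the virtual disk(s) so data moves off the retired disks
Get-VirtualDisk | Repair-VirtualDisk

# 3. Once repair completes, remove the disks from the S2D pool
Remove-PhysicalDisk -PhysicalDisks $disks -StoragePoolFriendlyName "S2D*"
```

Wait for Repair-VirtualDisk to finish completely before step 3, otherwise the removal can fail or degrade the volume.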
! Interestingly, after removing the 512 GB SSDs my storage tier's allowed maximum size changed and is now around 2.8 TB. Since I already have a 930 GB volume that I want to extend, the tier allows about 3.7 TB in total. That sounds much better, and I believe it is the maximum for 8x 1 TB drives in a mirror. But it is still strange that with 4x 512 GB + 8x 1 TB drives my tier max size was only around 1.5 TB.
$disk = Get-PhysicalDisk -FriendlyName "*Samsung SSD*" | ? { $_.Size -eq 512110190592 -and $_.DeviceId -ne 0 }
Set-ClusterS2DDisk -CanBeClaimed $false -PhysicalDisk $disk
Set-ClusterS2DDisk : Failed to set cache mode on disks connected to node 'h11'. Run cluster validation, including the Storage Spaces
Direct tests, to verify the configuration
At line:2 char:1
+ Set-ClusterS2DDisk -CanBeClaimed $false -PhysicalDisk $disk
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Set-ClusterStorageSpacesDirectDisk], CimException
+ FullyQualifiedErrorId : HRESULT 0x8007139f,Microsoft.Management.Infrastructure.CimCmdlets.InvokeCimMethodCommand,Set-ClusterStorageSpacesDirectDisk
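As the error message itself suggests, running cluster validation with the Storage Spaces Direct tests may show why the node 'h11' rejects the change. A sketch (the node names are mine):

```powershell
# Run only the S2D-related validation tests against both nodes;
# node names h11/h12 are specific to my cluster.
Test-Cluster -Node h11, h12 -Include "Storage Spaces Direct", "Inventory"
```

The resulting validation report usually points at the specific disk or configuration check that failed.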
The physical disks now look like this:
PS C:\Windows\system32> get-physicaldisk -FriendlyName "*Samsung SSD*" | ? {$_.size -eq "512110190592" -and $_.deviceid -ne 0}
DeviceId FriendlyName SerialNumber MediaType CanPool OperationalStatus HealthStatus Usage Size
-------- ------------ ------------ --------- ------- ----------------- ------------ ----- ----
1004 Samsung SSD 850 PRO 512GB S250NSAG432476E SSD True OK Healthy Auto-Select 476.94 GB
1003 Samsung SSD 840 PRO Series S1AXNSAD800683Y SSD True OK Healthy Auto-Select 476.94 GB
2016 Samsung SSD 850 PRO 512GB S250NSAG432479X SSD True OK Healthy Auto-Select 476.94 GB
2015 Samsung SSD 840 PRO Series S1AXNSAF111936H SSD True OK Healthy Auto-Select 476.94 GB
PS C:\Windows\system32> get-physicaldisk -FriendlyName "*Samsung SSD*" | ? {$_.size -eq "512110190592" -and $_.deviceid -ne 0} | Get-StoragePool
FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly Size AllocatedSize
------------ ----------------- ------------ ------------ ---------- ---- -------------
Primordial OK Healthy True False 11.53 TB 7.45 TB
Primordial OK Healthy True False 11.53 TB 7.45 TB
Primordial OK Healthy True False 11.53 TB 7.45 TB
Primordial OK Healthy True False 11.53 TB 7.45 TB
And there is actually one more important question before I start to extend my storage.
At the beginning I had 4x 512 GB SSDs. I enabled S2D without cache and it automatically created the pool and tiers. When I started to add drives and ran into the problems posted here, I discovered that tiers have a column-count parameter. For the default tier template this parameter is set to Auto, but for my tiered volume the column count is 2. Now that I have 8 SSDs (4 per server), it would be better for performance and for even drive wear to set the column count to 4, so that all 4 drives in each server form one stripe. Is that even possible? I was unable to find detailed specs about S2D operation at this level, and unfortunately some forums say the column count cannot be changed after the volume is created. Is that true? And if so, are there any technical recommendations on how to choose this value?
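In case it helps others, the current column count can be inspected with something like the following (the output names depend on how S2D created your tiers):

```powershell
# Column count and resiliency on the existing virtual disks
Get-VirtualDisk | Format-Table FriendlyName, NumberOfColumns, ResiliencySettingName

# Column count on the tier templates and on tiers bound to volumes
Get-StorageTier | Format-Table FriendlyName, NumberOfColumns, MediaType
```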
- uedgars, Jan 28, 2019 (Copper Contributor)
So, if I have column count 2, it means S2D takes two drives per server for a stripe, and only when those drives are full does it start writing to the remaining pair of drives. Right?
Then what happens if I leave the column count at 2 and create a second tiered volume, also with column count 2? Does S2D recognize the less-loaded drives and distribute this volume across the empty ones?
Or, from a performance perspective, is it better to set a column count of 4 for my setup? (I understand that if it is set to 4, I can only extend my tiered volume by adding a matching number of drives.)
- Girdhar, Jan 25, 2019 (Microsoft)
Well, as you figured out, updating the column count after volume creation is not possible.
Re-creating the volume should pick up the correct column count.
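To expand on that: when re-creating the volume you can request the column count explicitly instead of relying on Auto. A sketch (the pool name, volume name, and size below are illustrative, not the poster's exact setup):

```powershell
# Create a new mirrored CSV volume with 4 columns so that all 4 drives
# per server form one stripe. Names and size here are only examples.
New-Volume -StoragePoolFriendlyName "S2D on cluster1" `
           -FriendlyName "Volume2" `
           -FileSystem CSVFS_ReFS `
           -ResiliencySettingName Mirror `
           -NumberOfColumns 4 `
           -Size 1TB
```

Note that with 4 columns, future capacity extensions require adding drives in matching multiples, as discussed above.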