SOLVED
Home

Server 2019: Poor write performance on S2D two node cluster

%3CLINGO-SUB%20id%3D%22lingo-sub-290238%22%20slang%3D%22en-US%22%3EServer%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-290238%22%20slang%3D%22en-US%22%3E%3CP%3EHello!%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EI%20am%20trying%20to%20set%20up%20S2D%20on%20two%20node%20cluster%20for%20Hyper%20converged%20infrastructure.%20Unfortunately%20I%20observe%20significant%20write%20performance%20drop%20if%20we%20compare%20S2D%20storage%20with%20slowest%20physical%20hard%20drive%20performance%20participating%20in%20cluster.%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EWhat%20could%20cause%20this%3F%3C%2FP%3E%3CP%3EHow%20to%20get%20better%20results%3F%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EMy%20test%20environment%3C%2FP%3E%3CP%3EOS%3A%20Windows%20Server%202019%20Datacenter%20Build%2017723.rs5_release.180720-1452%3C%2FP%3E%3CP%3EBoth%20nodes%20are%20connected%20directly%20using%20one%2010%20Gbps%20link%20for%26nbsp%3BS2D%3C%2FP%3E%3CP%3EEach%20node%20have%201%20Gbps%20link%20for%20management%3C%2FP%3E%3CP%3ES2D%20two%20node%20cluster%20configured%20with%20Cache%20disabled%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3ENode%201%3C%2FP%3E%3CP%3ESystem%3A%20Supermicro%26nbsp%3BX9SRH-7F%2F7TF%3C%2FP%3E%3CP%3ECPU%3A%20Intel%20Xeon%20E5-2620%202.00%20GHz%20(6CPUs)%3C%2FP%3E%3CP%3ERAM%3A%2032%20GB%20DDR3%3C%2FP%3E%3CP%3ENetwork%3A%20Intel%20X540-AT2%2010%20Gbps%20copper%3C%2FP%3E%3CP%3ESystem%20drive%3A%26nbsp%3BSamsung%20SSD%20840%20PRO%20512%20GB%3C%2FP%3E%3CP%3EStorage%20drives%3A%26nbsp%3BSamsung%20SSD%20850%20PRO%20512%20GB%2C%26nbsp%3BSamsung%20SSD%20840%20PRO%20512%20GB%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3ENode%202%3C%2FP%3E%3CP%3ESystem%3A%20Intel%26nbsp%3BS2600WTT%3C%2FP%3E%3CP%3ECPU%3A%26nbsp%3BGenuine%20Intel%20CPU%202.30%20GHz%20(ES)%20(28%20CPUs)%3C%2FP%3E%3CP%3ERAM%3A%2064%20GB%20DDR4%3C%2FP%3E%3CP%3E%3CSPAN%3ENetwork%3A%20Intel%20X540-AT2%2010%20Gbps%20copper%3C%2FSPAN%3E%3C%2FP%3E%3CP%3ESystem%20drive%3A%26nbsp%3BINTEL%20SSDSC2BB240G7%20240%20GB%3C%2FP%3E%3CP%3E%3CSPAN%3EStorage%20drives%3A%26nbsp%3B%3C%2FSPAN%3E%3CSPAN%3ESamsung%20SSD%20850%20PRO%20512%20GB%2C%26nbsp%3B%3C%2FSPAN%3E%3CSPAN%3ESamsung%20SSD%20840%20PRO%20512%20GB%3C%2FSPAN%3E%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EBefore%20enabling%20S2D%20I%20turned%20off%20write%20cache%20for%20each%20SSD%20drive%20individually%20and%20tested%20their%20write%20performance%20by%20copying%2030%20GB%20large%20VHD%20file.%20Results%20were%20around%20130%20-%20160%20MB%2Fs%20for%20Samsung%20SSD%20840%20PRO%20drives%20and%20around%2060%20-%2070%20MB%2Fs%20for%20Samsung%20SSD%20850%20PRO%20drives.%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EAfter%20enabling%20S2D%20write%20performance%20drops%20to%26nbsp%3B40%20-%2044%20MB%2Fs%20(see%20attachment)%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-LABS%20id%3D%22lingo-labs-290238%22%20slang%3D%22en-US%22%3E%3CLINGO-LABEL%3EClustering%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3EStorage%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3EWindows%20Server%3C%2FLINGO-LABEL%3E%3C%2FLINGO-LABS%3E%3CLINGO-SUB%20id%3D%22lingo-sub-326427%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-326427%22%20slang%3D%22en-US%22%3E%3CP%3ESo%2C%20If%20I%20have%20column%20count%202%2C%20it%20means%20S2D%20takes%20two%20drives%20for%20a%20stripe%20per%20server.%20And%20only%20when%20these%20drives%20are%20full%2C%20it%20starts%20to%20write%20into%20rest%20pair%20of%20drives.%20Right%3F%3C%2FP%3E%3CP%3EThen%20what%20happens%2C%20if%20I%20leave%20column%20count%202%20and%20create%20a%20second%20tiered%20volume%20also%20with%20column%20count%202%3F%20Do%20S2D%20understands%20less%20loaded%20drives%20and%20distributes%20this%20volume%20around%20empty%20ones%3F%3C%2FP%3E%3CP%3EOr%20from%20performance%20perspective%20it%20is%20better%20to%20set%20column%20count%20of%204%20for%20my%20setup%3F%20(I%20understand%2C%20that%20if%20it%20is%20set%20to%204%2C%20I%20can%20extend%20my%20tiered%20volume%20only%20if%20I%20add%20appropriate%20count%20of%20drives)%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-326423%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-326423%22%20slang%3D%22en-US%22%3E%3CP%3EI%20tried%20to%20run%20command%20using%20column%20but%20it%20still%20returns%20the%20same%20error.%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EPS%20C%3A%5CWindows%5Csystem32%26gt%3B%20%24disk%3Dget-physicaldisk%20-FriendlyName%20%22*Samsung%20SSD*%22%20%7C%20%3F%20%7B%24_.size%20-eq%20%22512110190592%22%20-and%20%24_.deviceid%20-ne%200%7D%3CBR%20%2F%3ESet-ClusterS2DDisk%20-CanBeClaimed%3A%24false%20-PhysicalDisk%20%24disk%3CBR%20%2F%3ESet-ClusterS2DDisk%20%3A%20Failed%20to%20set%20cache%20mode%20on%20disks%20connected%20to%20node%20'h11'.%20Run%20cluster%20validation%2C%20including%20the%20Storage%20Spaces%20Direct%20tests%2C%20to%20verify%20the%20configuration%3CBR%20%2F%3EAt%20line%3A2%20char%3A1%3CBR%20%2F%3E%2B%20Set-ClusterS2DDisk%20-CanBeClaimed%3A%24false%20-PhysicalDisk%20%24disk%3CBR%20%2F%3E%2B%20~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%3CBR%20%2F%3E%2B%20CategoryInfo%20%3A%20NotSpecified%3A%20(%3A)%20%5BSet-ClusterStorageSpacesDirectDisk%5D%2C%20CimException%3CBR%20%2F%3E%2B%20FullyQualifiedErrorId%20%3A%20HRESULT%200x8007139f%2CMicrosoft.Management.Infrastructure.CimCmdlets.InvokeCimMethodCommand%2CSet-ClusterStorageSpacesDirectDisk%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EI%20enabled%20S2D%20with%20cache%20disabled%20(Enable-ClusterS2D%20-cachestate%20disabled)%3C%2FP%3E%3CP%3EI%20even%20did%20not%20know%20about%20cache%20tiers.%20How%20can%20I%20check%20their%20status%3F%20If%20I%20use%20Get-StorageTiers%20I%20only%20see%20my%20storage%20tier%3A%3C%2FP%3E%3CP%3EPS%20C%3A%5CWindows%5Csystem32%26gt%3B%20get-storagetier%3C%2FP%3E%3CP%3EFriendlyName%20TierClass%20MediaType%20ResiliencySettingName%20FaultDomainRedundancy%20Size%20FootprintOnPool%20StorageEfficiency%3CBR%20%2F%3E------------%20---------%20---------%20---------------------%20---------------------%20----%20---------------%20-----------------%3CBR%20%2F%3ECapacity%20Unknown%20SSD%20Mirror%201%200%20B%200%20B%3CBR%20%2F%3EMirrorOnSSD%20Unknown%20SSD%20Mirror%201%200%20B%200%20B%3CBR%20%2F%3Essd-volume-1-MirrorOnSSD%20Capacity%20SSD%20Mirror%201%20930%20GB%201.82%20TB%2050.00%25%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EOh%2C%20I%20remembered%20I%20have%20turned%20deduplication%20on.%20Might%20be%20it%20disturbs%20something%3F%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EPS%20C%3A%5CWindows%5Csystem32%26gt%3B%20Get-DedupStatus%20%7C%20fl%20*%3C%2FP%3E%3CP%3E%3CBR%20%2F%3EObjectId%20%3A%20%5C%5C%3F%5CVolume%7B079e5b9b-7f17-4bea-bbd6-6de7bed066fd%7D%5C%3CBR%20%2F%3ECapacity%20%3A%20998512787456%3CBR%20%2F%3EFreeSpace%20%3A%20503428747264%3CBR%20%2F%3EInPolicyFilesCount%20%3A%2013%3CBR%20%2F%3EInPolicyFilesSize%20%3A%20868141497962%3CBR%20%2F%3ELastGarbageCollectionResult%20%3A%200%3CBR%20%2F%3ELastGarbageCollectionResultMessage%20%3A%20The%20operation%20completed%20successfully.%3CBR%20%2F%3ELastGarbageCollectionTime%20%3A%201%2F26%2F2019%204%3A39%3A45%20AM%3CBR%20%2F%3ELastOptimizationResult%20%3A%200%3CBR%20%2F%3ELastOptimizationResultMessage%20%3A%20The%20operation%20completed%20successfully.%3CBR%20%2F%3ELastOptimizationTime%20%3A%201%2F28%2F2019%2010%3A41%3A21%20AM%3CBR%20%2F%3ELastScrubbingResult%20%3A%200%3CBR%20%2F%3ELastScrubbingResultMessage%20%3A%20The%20operation%20completed%20successfully.%3CBR%20%2F%3ELastScrubbingTime%20%3A%201%2F26%2F2019%204%3A40%3A48%20AM%3CBR%20%2F%3EOptimizedFilesCount%20%3A%2013%3CBR%20%2F%3EOptimizedFilesSavingsRate%20%3A%2050%3CBR%20%2F%3EOptimizedFilesSize%20%3A%20868141497962%3CBR%20%2F%3ESavedSpace%20%3A%20440480093290%3CBR%20%2F%3ESavingsRate%20%3A%2047%3CBR%20%2F%3EUnoptimizedSize%20%3A%20935564133482%3CBR%20%2F%3EUsedSpace%20%3A%20495084040192%3CBR%20%2F%3EVolume%20%3A%20C%3A%5CClusterStorage%5Cssd-volume-1%3CBR%20%2F%3EVolumeId%20%3A%20%5C%5C%3F%5CVolume%7B079e5b9b-7f17-4bea-bbd6-6de7bed066fd%7D%5C%3CBR%20%2F%3EPSComputerName%20%3A%3CBR%20%2F%3ECimClass%20%3A%20ROOT%2FMicrosoft%2FWindows%2FDeduplication%3AMSFT_DedupVolumeStatus%3CBR%20%2F%3ECimInstanceProperties%20%3A%20%7BCapacity%2C%20FreeSpace%2C%20InPolicyFilesCount%2C%20InPolicyFilesSize...%7D%3CBR%20%2F%3ECimSystemProperties%20%3A%20Microsoft.Management.Infrastructure.CimSystemProperties%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-325939%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-325939%22%20slang%3D%22en-US%22%3E%3CP%3ECan%20you%20try%26nbsp%3B%3CSPAN%20style%3D%22display%3A%20inline%20!important%3B%20float%3A%20none%3B%20background-color%3A%20%23ffffff%3B%20color%3A%20%23333333%3B%20font-family%3A%20'SegoeUI'%2C'Lato'%2C'Helvetica%20Neue'%2CHelvetica%2CArial%2Csans-serif%3B%20font-size%3A%2016px%3B%20font-style%3A%20normal%3B%20font-variant%3A%20normal%3B%20font-weight%3A%20300%3B%20letter-spacing%3A%20normal%3B%20orphans%3A%202%3B%20text-align%3A%20left%3B%20text-decoration%3A%20none%3B%20text-indent%3A%200px%3B%20text-transform%3A%20none%3B%20-webkit-text-stroke-width%3A%200px%3B%20white-space%3A%20normal%3B%20word-spacing%3A%200px%3B%22%3ESet-ClusterS2DDisk%20-CanBeClaimed%3CSTRONG%3E%3A%3C%2FSTRONG%3E%24false%20-PhysicalDisk%20%24disk%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3E%3CSPAN%20style%3D%22display%3A%20inline%20!important%3B%20float%3A%20none%3B%20background-color%3A%20%23ffffff%3B%20color%3A%20%23333333%3B%20font-family%3A%20'SegoeUI'%2C'Lato'%2C'Helvetica%20Neue'%2CHelvetica%2CArial%2Csans-serif%3B%20font-size%3A%2016px%3B%20font-style%3A%20normal%3B%20font-variant%3A%20normal%3B%20font-weight%3A%20300%3B%20letter-spacing%3A%20normal%3B%20orphans%3A%202%3B%20text-align%3A%20left%3B%20text-decoration%3A%20none%3B%20text-indent%3A%200px%3B%20text-transform%3A%20none%3B%20-webkit-text-stroke-width%3A%200px%3B%20white-space%3A%20normal%3B%20word-spacing%3A%200px%3B%22%3ENote%20the%20%3CSTRONG%3Ecolon%3C%2FSTRONG%3E.%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSPAN%20style%3D%22display%3A%20inline%20!important%3B%20float%3A%20none%3B%20background-color%3A%20%23ffffff%3B%20color%3A%20%23333333%3B%20font-family%3A%20'SegoeUI'%2C'Lato'%2C'Helvetica%20Neue'%2CHelvetica%2CArial%2Csans-serif%3B%20font-size%3A%2016px%3B%20font-style%3A%20normal%3B%20font-variant%3A%20normal%3B%20font-weight%3A%20300%3B%20letter-spacing%3A%20normal%3B%20orphans%3A%202%3B%20text-align%3A%20left%3B%20text-decoration%3A%20none%3B%20text-indent%3A%200px%3B%20text-transform%3A%20none%3B%20-webkit-text-stroke-width%3A%200px%3B%20white-space%3A%20normal%3B%20word-spacing%3A%200px%3B%22%3EOn%20the%20error%3A%26nbsp%3BSet-ClusterS2DDisk%20%3A%20Failed%20to%20set%20%3CSTRONG%3Ecache%20mode%3C%2FSTRONG%3E%20on%20disks%20connected%20to%20node%20'h11'%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EH%3CSPAN%20style%3D%22display%3A%20inline%20!important%3B%20float%3A%20none%3B%20background-color%3A%20%23ffffff%3B%20color%3A%20%23333333%3B%20font-family%3A%20'SegoeUI'%2C'Lato'%2C'Helvetica%20Neue'%2CHelvetica%2CArial%2Csans-serif%3B%20font-size%3A%2016px%3B%20font-style%3A%20normal%3B%20font-variant%3A%20normal%3B%20font-weight%3A%20300%3B%20letter-spacing%3A%20normal%3B%20orphans%3A%202%3B%20text-align%3A%20left%3B%20text-decoration%3A%20none%3B%20text-indent%3A%200px%3B%20text-transform%3A%20none%3B%20-webkit-text-stroke-width%3A%200px%3B%20white-space%3A%20normal%3B%20word-spacing%3A%200px%3B%22%3Eave%20your%20created%20cache%20tiers%20manually%3F%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSPAN%20style%3D%22display%3A%20inline%20!important%3B%20float%3A%20none%3B%20background-color%3A%20%23ffffff%3B%20color%3A%20%23333333%3B%20font-family%3A%20'SegoeUI'%2C'Lato'%2C'Helvetica%20Neue'%2CHelvetica%2CArial%2Csans-serif%3B%20font-size%3A%2016px%3B%20font-style%3A%20normal%3B%20font-variant%3A%20normal%3B%20font-weight%3A%20300%3B%20letter-spacing%3A%20normal%3B%20orphans%3A%202%3B%20text-align%3A%20left%3B%20text-decoration%3A%20none%3B%20text-indent%3A%200px%3B%20text-transform%3A%20none%3B%20-webkit-text-stroke-width%3A%200px%3B%20white-space%3A%20normal%3B%20word-spacing%3A%200px%3B%22%3EAlso%2C%20after%20running%20set-ClusterS2dDisk%2C%20check%20the%20Get-Disk%20output%20to%20see%20available%20disk.%26nbsp%3B%3C%2FSPAN%3E%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-325933%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-325933%22%20slang%3D%22en-US%22%3E%3CP%3EWell%20as%20you%20figured%20it%20out%2C%20updating%20the%20Column%20count%20post%20volume%20creation%20is%20not%20possible.%3C%2FP%3E%0A%3CP%3ERe-creating%20the%20volume%20should%20take%20the%20correct%20Column%20count.%26nbsp%3B%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-324579%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-324579%22%20slang%3D%22en-US%22%3E%3CP%3EAnd%20actually%20there%20is%20one%20more%20important%20question%20before%20I%20start%20to%20extend%20my%20storage.%3C%2FP%3E%3CP%3EAt%20the%20beginning%20I%20had%204x512gb%20ssd%20drives.%20I%20enabled%20S2D%20without%20cache%20and%20it%20automatically%20created%20pool%20and%20tiers.%20When%20I%20started%20to%20add%20drives%20and%20faced%20problems%20posted%20here%2C%20I%20found%20there%20exists%20tier%20parameter%20column%20number.%20I%20figured%20out%20for%20the%20default%20tier%20template%20this%20parameter%20is%20set%20to%20auto.%20But%20for%20my%20tiered%20volume%20column%20number%20has%20value%202.%20Now%2C%20when%20I%20have%208%20ssd%20drives%20(4%20on%20each%20server)%2C%20it%20was%20better%20for%20performance%20and%20drive%20wear%20equalization%20to%20set%20column%20number%20to%204%20so%20all%204%20drives%20at%20each%20server%20forms%20one%20stripe.%20Is%20it%20even%20possible%20to%20do%3F%20I%20was%20unable%20to%20find%20detailed%20specs%20about%20s2d%20operation%20in%20this%20level.%20And%20unfortunatelly%20some%20forums%20gave%20info%20that%20it%20is%20not%20possible%20to%20change%20column%20count%20after%20volume%20is%20created.%20Is%20it%20true%3F%20And%20if%20so%2C%20are%20there%20any%20technical%20recommendation%20how%20to%20choose%20this%20value%3F%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-324572%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-324572%22%20slang%3D%22en-US%22%3E%3CP%3EHello!%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3ERemove-PhysicalDisk%20worked%20for%20me%20even%20with%20asterisk%20(SSD*)%20and%20ssd%20get%20moved%20from%20s2d%20pool%20to%20primordial%20pool%2C%20but%20the%20problem%20is%20with%20the%20next%20step.%20I%20want%20to%20get%20out%20disks%20also%20from%20primordial%20pool%20to%20see%20them%20in%20disk%20management%20and%20use%20as%20standalone%20disks%20in%20windows%20system.%20To%20do%20this%20I%20understood%20I%20need%20to%20use%20command%26nbsp%3BSet-ClusterS2DDisk%20-CanBeClaimed%20%24false%2C%20but%20I%20got%20error.%20I%20provided%20command%20and%20error%20message%20below.%3C%2FP%3E%3CP%3EShortly%3A%20I%20had%204x%20512GB%20ssds%20(2%20pcs%20in%20each%20server%2C%20I%20have%20two).%20Then%20I%20added%208%20more%20ssd%20drives%20with%20size%201tb%20each%20(4%20pcs%20in%20each%20server).%20Then%20I%20had%20problem%2C%20that%20I%20am%20unable%20to%20use%20all%20of%20these%20space%20(problem%20I%20described%20earlier).%20As%20I%20was%20no%20success%20to%20extend%20my%20volume%2C%20I%20decided%20to%20remove%20512gb%20ssds%20out%20of%20the%20pool%20and%20see%20what%20happens.%20Then%20I%20run%20commands%20Set-PhysicalDisk%20-Usage%20Retired%2C%20Repair-VirtualDisk%20and%20Remove-Physical%20disk.%20So%20far%20all%20worked%20good.%20And%20then%20finally%20I%20wanted%20to%20get%20these%20512GB%20disks%20out%20of%20the%20Primordial%20pool%20using%20Set-ClusterS2DDisk%20-CanBeClaimed%20%24false%2C%20but%20it%20was%20unsuccessful.%20I%20got%20error.%3C%2FP%3E%3CP%3E!%20Interesting%20thing%20is%2C%20that%20after%20512GB%20ssd%20removal%20my%20StorageTier%20allowed%20maximum%20size%20changed%20and%20now%20it%20is%20around%202.8TB%20As%20I%20already%20have%20930%20GB%20volume%20that%20I%20want%20to%20extend%2C%20it%20means%20Tiear%20allows%20around%203.7TB%20This%20sounds%20much%20better%20and%20I%20believe%20it%20is%20the%20maximum%20for%208x1tb%20drives%20in%20mirror.%20But%20it%20is%20still%20strange%2C%20that%20with%204x512gb%20%2B%208x1tb%20my%20tier%20max%20size%20was%20only%20around%201.5TB%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3E%24disk%3Dget-physicaldisk%20-FriendlyName%20%22*Samsung%20SSD*%22%20%7C%20%3F%20%7B%24_.size%20-eq%20%22512110190592%22%20-and%20%24_.deviceid%20-ne%200%7D%3CBR%20%2F%3ESet-ClusterS2DDisk%20-CanBeClaimed%20%24false%20-PhysicalDisk%20%24disk%3CBR%20%2F%3ESet-ClusterS2DDisk%20%3A%20Failed%20to%20set%20cache%20mode%20on%20disks%20connected%20to%20node%20'h11'.%20Run%20cluster%20validation%2C%20including%20the%20Storage%20Spaces%3CBR%20%2F%3EDirect%20tests%2C%20to%20verify%20the%20configuration%3CBR%20%2F%3EAt%20line%3A2%20char%3A1%3CBR%20%2F%3E%2B%20Set-ClusterS2DDisk%20-CanBeClaimed%20%24false%20-PhysicalDisk%20%24disk%3CBR%20%2F%3E%2B%20~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%3CBR%20%2F%3E%2B%20CategoryInfo%20%3A%20NotSpecified%3A%20(%3A)%20%5BSet-ClusterStorageSpacesDirectDisk%5D%2C%20CimException%3CBR%20%2F%3E%2B%20FullyQualifiedErrorId%20%3A%20HRESULT%200x8007139f%2CMicrosoft.Management.Infrastructure.CimCmdlets.InvokeCimMethodCommand%2CSet-ClusterSt%3CBR%20%2F%3EorageSpacesDirectDisk%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EPhysical%20Disks%20now%20looks%20like%20this.%3C%2FP%3E%3CP%3EPS%20C%3A%5CWindows%5Csystem32%26gt%3B%20get-physicaldisk%20-FriendlyName%20%22*Samsung%20SSD*%22%20%7C%20%3F%20%7B%24_.size%20-eq%20%22512110190592%22%20-and%20%24_.deviceid%20-ne%200%7D%3C%2FP%3E%3CP%3EDeviceId%20FriendlyName%20SerialNumber%20MediaType%20CanPool%20OperationalStatus%20HealthStatus%20Usage%20Size%3CBR%20%2F%3E--------%20------------%20------------%20---------%20-------%20-----------------%20------------%20-----%20----%3CBR%20%2F%3E1004%20Samsung%20SSD%20850%20PRO%20512GB%20S250NSAG432476E%20SSD%20True%20OK%20Healthy%20Auto-Select%20476.94%20GB%3CBR%20%2F%3E1003%20Samsung%20SSD%20840%20PRO%20Series%20S1AXNSAD800683Y%20SSD%20True%20OK%20Healthy%20Auto-Select%20476.94%20GB%3CBR%20%2F%3E2016%20Samsung%20SSD%20850%20PRO%20512GB%20S250NSAG432479X%20SSD%20True%20OK%20Healthy%20Auto-Select%20476.94%20GB%3CBR%20%2F%3E2015%20Samsung%20SSD%20840%20PRO%20Series%20S1AXNSAF111936H%20SSD%20True%20OK%20Healthy%20Auto-Select%20476.94%20GB%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EPS%20C%3A%5CWindows%5Csystem32%26gt%3B%20get-physicaldisk%20-FriendlyName%20%22*Samsung%20SSD*%22%20%7C%20%3F%20%7B%24_.size%20-eq%20%22512110190592%22%20-and%20%24_.deviceid%20-ne%200%7D%20%7C%20Get-StoragePool%3C%2FP%3E%3CP%3EFriendlyName%20OperationalStatus%20HealthStatus%20IsPrimordial%20IsReadOnly%20Size%20AllocatedSize%3CBR%20%2F%3E------------%20-----------------%20------------%20------------%20----------%20----%20-------------%3CBR%20%2F%3EPrimordial%20OK%20Healthy%20True%20False%2011.53%20TB%207.45%20TB%3CBR%20%2F%3EPrimordial%20OK%20Healthy%20True%20False%2011.53%20TB%207.45%20TB%3CBR%20%2F%3EPrimordial%20OK%20Healthy%20True%20False%2011.53%20TB%207.45%20TB%3CBR%20%2F%3EPrimordial%20OK%20Healthy%20True%20False%2011.53%20TB%207.45%20TB%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-324557%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-324557%22%20slang%3D%22en-US%22%3E%3CP%3EHi%20Uedgars%2C%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EActually%20I%20was%20looking%20for%20physical%20disk%20and%20storage%20pools%20when%20you%20got%20into%20the%20bad%20state%2C%20to%20understand%20why%26nbsp%3B%3CSPAN%20style%3D%22display%3A%20inline%20!important%3B%20float%3A%20none%3B%20background-color%3A%20%23ffffff%3B%20color%3A%20%23333333%3B%20font-family%3A%20'SegoeUI'%2C'Lato'%2C'Helvetica%20Neue'%2CHelvetica%2CArial%2Csans-serif%3B%20font-size%3A%2016px%3B%20font-style%3A%20normal%3B%20font-variant%3A%20normal%3B%20font-weight%3A%20300%3B%20letter-spacing%3A%20normal%3B%20line-height%3A%201.7142%3B%20orphans%3A%202%3B%20overflow-wrap%3A%20break-word%3B%20text-align%3A%20left%3B%20text-decoration%3A%20none%3B%20text-indent%3A%200px%3B%20text-transform%3A%20none%3B%20-webkit-text-stroke-width%3A%200px%3B%20white-space%3A%20normal%3B%20word-spacing%3A%200px%3B%22%3ESystem.Storage.PhysicalDisk.AutoPool.Enabled%3C%2FSPAN%3E%20property%20value%20was%20not%20honored.%20Let%20me%20know%2C%20if%20you%20still%20face%20issues%20with%20it%20in%20future.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3ENow%20that%20you%20have%20fixed%2C%20the%20better%20way%20of%20removing%20the%20disk%20from%20the%20pool%20is%20Remove-PhysicalDisk%20as%20you%20tried%20earlier%2C%20though%20you%20have%20to%20specify%20the%20full%20FriendlyName%20of%20the%20S2D%20StoragePool%20rather%20than%20S2d*.%20Once%20this%20succeeds%2C%20you%20will%20see%20the%20CanPool%20value%20of%20the%20disk%20to%20be%20True.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3ELet%20me%20know%2C%20if%20this%20doesn't%20work%20for%20you.%3C%2FP%3E%0A%3CP%3EAlso%2C%20if%20you%20feel%2C%20system%20is%20not%20behaving%20as%20you%20expect%2C%20please%20follow%20this%20%3CA%20title%3D%22SDDC%20Diagnosticlogs%22%20href%3D%22https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fwindows-server%2Fstorage%2Fstorage-spaces%2Fdata-collection%22%20target%3D%22_self%22%20rel%3D%22noopener%20noreferrer%20noopener%20noreferrer%22%3Elink%3C%2FA%3E%20and%20share%20the%20zip%20file%20with%20us.%20We%20try%20to%20collect%20needed%20information%2C%20so%20that%20there%20is%20no%20back%20and%20forth%20%3A)%3C%2Fimg%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThanks%3C%2FP%3E%0A%3CP%3EGirdhar%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-320495%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-320495%22%20slang%3D%22en-US%22%3E%3CP%3EHi%2C%20Girdhar!%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EI%26nbsp%3Bphysically%20removed%26nbsp%3Bfrom%26nbsp%3Bserver%20the%20disk%26nbsp%3Bthat%20accidentally%26nbsp%3Bwent%20into%20S2D%20and%20then%20stuck%20in%20Primordial%20pool.%20Then%20I%20cleared%20it%20on%20another%20PC%20and%20created%20new%20partition.%20Then%20put%20back%20in%20cluter%20server%20and%20finally%20had%20option%20to%20use%20it%20without%20pooling.%3C%2FP%3E%3CP%3EBut%20I%20plan%20to%26nbsp%3Breplace%20s2d%26nbsp%3B512%20GB%20ssd%20driveswith%20larger%20ones%20so%20I%20still%20need%20to%20find%20option%20how%20to%20correctly%20remove%20disk%20from%20pool.%3C%2FP%3E%3CP%3E%3CFONT%20color%3D%22%23f5f5f5%22%20face%3D%22Lucida%20Console%22%20size%3D%221%22%3EPS%20C%3A%5CWindows%5Csystem32%26gt%3B%20Get-StoragePool%3C%2FFONT%3E%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3EFriendlyName%20OperationalStatus%20HealthStatus%20IsPrimordial%20IsReadOnly%20Size%20AllocatedSize%3C%2FP%3E%3CP%3E------------%20-----------------%20------------%20------------%20----------%20----%20-------------%3C%2FP%3E%3CP%3EPrimordial%20OK%20Healthy%20True%20False%2072.9%20TB%209.31%20TB%3C%2FP%3E%3CP%3ES2D%20on%20hc-cluster-1%20OK%20Healthy%20False%20False%209.31%20TB%201.84%20TB%3C%2FP%3E%3CP%3EPrimordial%20OK%20Healthy%20True%20False%2011.53%20TB%209.31%20TB%3C%2FP%3E%3CP%3E%E3%80%80%3C%2FP%3E%3CP%3E%E3%80%80%3C%2FP%3E%3CP%3EPS%20C%3A%5CWindows%5Csystem32%26gt%3B%20Get-PhysicalDisk%3C%2FP%3E%3CP%3EDeviceId%20FriendlyName%20SerialNumber%20MediaType%20CanPool%20OperationalStatus%20HealthStatus%20Usage%20Size%3C%2FP%3E%3CP%3E--------%20------------%20------------%20---------%20-------%20-----------------%20------------%20-----%20----%3C%2FP%3E%3CP%3E22%20ATA%20INTEL%20SSDSC2BB24%20PHDV7171021B240AGN%20SSD%20False%20OK%20Healthy%20Auto-Select%20223.57%20GB%3C%2FP%3E%3CP%3E1004%20Samsung%20SSD%20850%20PRO%20512GB%20S250NSAG432476E%20SSD%20False%20OK%20Healthy%20Auto-Select%20476.94%20GB%3C%2FP%3E%3CP%3E1003%20Samsung%20SSD%20840%20PRO%20Series%20S1AXNSAD800683Y%20SSD%20False%20OK%20Healthy%20Auto-Select%20476.94%20GB%3C%2FP%3E%3CP%3E1010%20Samsung%20SSD%20850%20PRO%201TB%20S252NWAG304907F%20SSD%20False%20OK%20Healthy%20Auto-Select%20953.87%20GB%3C%2FP%3E%3CP%3E2016%20Samsung%20SSD%20850%20PRO%20512GB%20S250NSAG432479X%20SSD%20False%20OK%20Healthy%20Auto-Select%20476.94%20GB%3C%2FP%3E%3CP%3E1009%20Samsung%20SSD%20850%20PRO%201TB%20S252NEAG301324Y%20SSD%20False%20OK%20Healthy%20Auto-Select%20953.87%20GB%3C%2FP%3E%3CP%3E2015%20Samsung%20SSD%20840%20PRO%20Series%20S1AXNSAF111936H%20SSD%20False%20OK%20Healthy%20Auto-Select%20476.94%20GB%3C%2FP%3E%3CP%3E1008%20Samsung%20SSD%20850%20PRO%201TB%20S252NWAG304891D%20SSD%20False%20OK%20Healthy%20Auto-Select%20953.87%20GB%3C%2FP%3E%3CP%3E2000%20ATA%20Samsung%20SSD%20850%20S1SRNWAF913328T%20SSD%20False%20OK%20Healthy%20Auto-Select%20953.87%20GB%3C%2FP%3E%3CP%3E1007%20Samsung%20SSD%20850%20PRO%201TB%20S252NWAG403194P%20SSD%20False%20OK%20Healthy%20Auto-Select%20953.87%20GB%3C%2FP%3E%3CP%3E2019%20ATA%20Samsung%20SSD%20850%20S1SRNWAF914370B%20SSD%20False%20OK%20Healthy%20Auto-Select%20953.87%20GB%3C%2FP%3E%3CP%3E2020%20ATA%20Samsung%20SSD%20850%20S2BBNEAG113774L%20SSD%20False%20OK%20Healthy%20Auto-Select%20953.87%20GB%3C%2FP%3E%3CP%3E2021%20ATA%20Samsung%20SSD%20850%20S2BBNEAG113775K%20SSD%20False%20OK%20Healthy%20Auto-Select%20953.87%20GB%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-319688%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-319688%22%20slang%3D%22en-US%22%3E%3CP%3ECan%20you%20please%20send%20the%20output%20of%20following%20cmdlets%3A%3C%2FP%3E%0A%3CP%3E1.%20Get-StoragePool%3C%2FP%3E%0A%3CP%3E2.%20Get-PhysicalDisk%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E~Girdhar%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-312621%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-312621%22%20slang%3D%22en-US%22%3EThanks!%20I%20tried%20to%20use%20DiskSPD%20and%20it%20works%20good%20and%20it%20seems%20simulates%20workload%20quite%20near%20to%20real%20world.%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-312618%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-312618%22%20slang%3D%22en-US%22%3EHello!%3CBR%20%2F%3E%3CBR%20%2F%3EThank%20You%20about%20Your%20comment!%20I%20understand%2C%20that%20my%20lab%20setup%20does%20not%20meet%20these%20requirements%2C%20but%20still%20I%20believe%20that%20fundamental%20thinks%20should%20work%20with%20such%20a%20setup%20too.%20The%20main%20point%20for%20me%20was%20to%20check%20if%20this%20tehnology%20works%20before%20invest%20into%20new%20and%20quite%20expensive%20parts.%3CBR%20%2F%3E%3CBR%20%2F%3EAnyway%2C%20now%20I%20have%20rebuilt%20my%20setup%20using%20two%20intel%20S2600WTF%2FY%20boxes%20and%20Intel%20CPUs.%20Initially%20each%20of%20them%20had%20two%20512%20GB%20SSD%20drives%20for%20S2D.%20I%20configured%20S2D%20with%20automatic%20settings%20successfully.%20After%20providing%20some%20performance%20tests%20I%20got%20much%20better%20results%20than%20earlier.%20Actually%20really%20acceptable%20results%20(even%20up%20to%20little%20more%20than%20200MB%2Fs%20write%20speed).%3CBR%20%2F%3E%3CBR%20%2F%3ENext%20I%20moved%20some%20VMs%20to%20the%20S2D%20and%20enabled%20High%20Availability%20for%20them.%20I%20provided%20some%20crush%20tests%20as%20well%20and%20they%20succeed%2C%20all%20worked%20great.%3CBR%20%2F%3E%3CBR%20%2F%3EBUT%20then%20I%20faced%20new%20problems.%20I%20wanted%20to%20add%20four%20new%201%20TB%20SSD%20drives%20per%20each%20node%20and%20extend%20my%20pool.%20I%20did%20reset%20all%20these%20drives%20and%20connected%20to%20servers.%3CBR%20%2F%3E1)%20First%20strange%20thing%20was%2C%20that%20they%20automatically%20were%20added%20to%20my%20S2D%20pool%20even%20I%20was%20previously%20disabled%20autopooling%20(Get-StorageSubSystem%20Clu*%20%7C%20Set-StorageHealthSetting%20-Name%20%E2%80%9CSystem.Storage.PhysicalDisk.AutoPool.Enabled%E2%80%9D%20-Value%20False).%3CBR%20%2F%3E2)%20Second%20and%20the%20most%20important%20-%20my%20SSD%20tier%20statistics%20shows%20available%20space%20only%20for%20670%20GB%2C%20but%20I%20connected%208%20x%201TB%20SSD%20drives%20and%20using%20mirrored%20storage%20it%20should%20be%20able%20to%20allocate%20around%204%20TB!%20I%20run%20Optimize-StoragePool%20and%20this%20did%20not%20helped.%3CBR%20%2F%3E3)%20I%20connected%20another%20SSD%20drive%20for%20other%20purposes%20and%20it%20again%20automatically%20got%20pooled.%20I%20tried%20to%20remove%20it%20from%20S2D%20pool%2C%20but%20this%20also%20was%20unsuccessful.%20The%20disk%20stuck%20into%20Primordial%20Pool.%20Things%20I%20did%20to%20try%20to%20get%20disk%20out%20of%20pool%3A%3CBR%20%2F%3E%24pool%20%3D%20Get-StoragePool%20S2D*%3CBR%20%2F%3E%24disk%20%3D%20Get-PhysicalDisk%20-SerialNumber%20%22XXXXXXXXXXXXXXXXXXX%22%3CBR%20%2F%3E%3CBR%20%2F%3E%24disk%20%7C%20Set-PhysicalDisk%20-Usage%20Retired%3CBR%20%2F%3E%3CBR%20%2F%3E%24vdisk%3DGet-VirtualDisk%3CBR%20%2F%3E%3CBR%20%2F%3ERepair-VirtualDisk%20%24vdisk.FriendlyName%3CBR%20%2F%3E%3CBR%20%2F%3EGet-StorageJob%3CBR%20%2F%3E%3CBR%20%2F%3EGet-StoragePool%20S2D*%20%7C%20Remove-PhysicalDisk%20-PhysicalDisks%20%24disk%3CBR%20%2F%3E%3CBR%20%2F%3ESet-ClusterS2DDisk%20-CanBeClaimed%20%24true%20-PhysicalDiskGuid%20%24disk.UniqueId%3CBR%20%2F%3E%3CBR%20%2F%3E%24disk%20%7C%20Reset-PhysicalDisk%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-303434%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-303434%22%20slang%3D%22en-US%22%3E%3CP%3EHi%20%2C%3C%2FP%3E%3CP%3EYour%20nodes%20don%E2%80%99t%20comply%20with%20S2D%20requirements.%20Additionally%2C%20I%20would%20not%20recommend%20to%20measure%20performance%20by%20windows%20file%20copying%2C%20you%E2%80%99ll%20find%20arguments%20here%3A%3C%2FP%3E%3CP%3E%3CA%20href%3D%22https%3A%2F%2Fblogs.technet.microsoft.com%2Fjosebda%2F2014%2F08%2F18%2Fusing-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead%2F%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noreferrer%20noopener%20noreferrer%22%3Ehttps%3A%2F%2Fblogs.technet.microsoft.com%2Fjosebda%2F2014%2F08%2F18%2Fusing-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead%2F%3C%2FA%3E%3C%2FP%3E%3CP%3EBetter%20use%20DiskSPD%20from%20MS.%3C%2FP%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%3CP%3ERegarding%20the%20storage%20solution%2C%20you%20can%20look%20at%20virtual%20SAN%20vendors.%20I%20have%20a%20good%20experience%20of%20using%20Starwind%20vSAN%20for%202%20servers%20cluster.%20The%20performance%20is%20better%20and%20no%20problem%20with%20configuration.%20You%20can%20find%20guide%20here%3A%3C%2FP%3E%3CP%3E%3CA%20href%3D%22https%3A%2F%2Fwww.starwindsoftware.com%2Fresource-library%2Fstarwind-virtual-san-hyperconverged-2-node-scenario-with-hyper-v-cluster-on-windows-server-2016%22%20target%3D%22_self%22%20rel%3D%22nofollow%20noopener%20noreferrer%20noopener%20noreferrer%22%3Ehttps%3A%2F%2Fwww.starwindsoftware.com%2Fresource-library%2Fstarwind-virtual-san-hyperconverged-2-node-scenario-with-hyper-v-cluster-on-windows-server-2016%3C%2FA%3E%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-SUB%20id%3D%22lingo-sub-302191%22%20slang%3D%22en-US%22%3ERe%3A%20Server%202019%3A%20Poor%20write%20performance%20on%20S2D%20two%20node%20cluster%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-302191%22%20slang%3D%22en-US%22%3E%3CP%3EThanks%20for%20evaluating%20S2D%20Clusters%20on%20Server%202019.%3C%2FP%3E%0A%3CP%3EThis%20configuration%20does%20not%20meet%20the%20fundamental%20requirement%20of%20S2D%2C%20as%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3ESSDs%20used%20are%20non-PLP%2C%20and%3C%2FLI%3E%0A%3CLI%3ENodes%20are%20heterogeneous.%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3E%26nbsp%3BPlease%20go%20over%20this%20blog%2C%20%3CA%20href%3D%22https%3A%2F%2Fnam06.safelinks.protection.outlook.com%2F%3Furl%3Dhttps%253A%252F%252Fblogs.technet.microsoft.com%252Ffilecab%252F2016%252F11%252F18%252Fdont-do-it-consumer-ssd%26amp%3Bdata%3D02%257C01%257C%257C1c7513447d284c6e4a4f08d66203d14c%257C72f988bf86f141af91ab2d7cd011db47%257C1%257C0%257C636804165862166092%26amp%3Bsdata%3DUFy8bH7W02HBmy%252F98qX%252F0UkCaN%252FG90sRV7DTEsxqWcw%253D%26amp%3Breserved%3D0%22%20target%3D%22_blank%22%20rel%3D%22nofollow%20noopener%20noreferrer%20noopener%20noreferrer%22%3Ehttps%3A%2F%2Fblogs.technet.microsoft.com%2Ffilecab%2F2016%2F11%2F18%2Fdont-do-it-consumer-ssd%3C%2FA%3E%20for%20more%20details.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EAlso%2C%20please%20refer%20this%20article%20as%20well%2C%20on%20evaluating%20storage%20perf%3A%3C%2FP%3E%0A%3CP%3E%3CA%20href%3D%22https%3A%2F%2Fnam06.safelinks.protection.outlook.com%2F%3Furl%3Dhttps%253A%252F%252Fblogs.technet.microsoft.com%252Fjosebda%252F2014%252F08%252F18%252Fusing-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead%252F%26amp%3Bdata%3D02%257C01%257C%257C1c7513447d284c6e4a4f08d66203d14c%257C72f988bf86f141af91ab2d7cd011db47%257C1%257C0%257C636804165862166092%26amp%3Bsdata%3DNiQHfXyTM5NPY7VnyQe2r4yi0egjMTztSrdzhqGyW9I%253D%26amp%3Breserved%3D0%22%20target%3D%22_blank%22%20rel%3D%22nofollow%20noopener%20noreferrer%20noopener%20noreferrer%22%3Ehttps%3A%2F%2Fblogs.technet.microsoft.com%2Fjosebda%2F2014%2F08%2F18%2Fusing-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead%2F%3C%2FA%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E~Girdhar%20Beriwal%3C%2FP%3E%3C%2FLINGO-BODY%3E
uedgars
Occasional Contributor

Hello!

 

I am trying to set up S2D on two node cluster for Hyper converged infrastructure. Unfortunately I observe significant write performance drop if we compare S2D storage with slowest physical hard drive performance participating in cluster.

 

What could cause this?

How to get better results?

 

My test environment

OS: Windows Server 2019 Datacenter Build 17723.rs5_release.180720-1452

Both nodes are connected directly using one 10 Gbps link for S2D

Each node have 1 Gbps link for management

S2D two node cluster configured with Cache disabled

 

Node 1

System: Supermicro X9SRH-7F/7TF

CPU: Intel Xeon E5-2620 2.00 GHz (6CPUs)

RAM: 32 GB DDR3

Network: Intel X540-AT2 10 Gbps copper

System drive: Samsung SSD 840 PRO 512 GB

Storage drives: Samsung SSD 850 PRO 512 GB, Samsung SSD 840 PRO 512 GB

 

Node 2

System: Intel S2600WTT

CPU: Genuine Intel CPU 2.30 GHz (ES) (28 CPUs)

RAM: 64 GB DDR4

Network: Intel X540-AT2 10 Gbps copper

System drive: INTEL SSDSC2BB240G7 240 GB

Storage drives: Samsung SSD 850 PRO 512 GB, Samsung SSD 840 PRO 512 GB

 

Before enabling S2D I turned off write cache for each SSD drive individually and tested their write performance by copying 30 GB large VHD file. Results were around 130 - 160 MB/s for Samsung SSD 840 PRO drives and around 60 - 70 MB/s for Samsung SSD 850 PRO drives.

 

After enabling S2D write performance drops to 40 - 44 MB/s (see attachment)

13 Replies

Thanks for evaluating S2D Clusters on Server 2019.

This configuration does not meet the fundamental requirement of S2D, as:

  • SSDs used are non-PLP, and
  • Nodes are heterogeneous.

 Please go over this blog, https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd for more details.

 

Also, please refer this article as well, on evaluating storage perf:

https://blogs.technet.microsoft.com/josebda/2014/08/18/using-file-copy-to-measure-storage-performanc...

 

~Girdhar Beriwal

Solution

Hi ,

Your nodes don’t comply with S2D requirements. Additionally, I would not recommend to measure performance by windows file copying, you’ll find arguments here:

https://blogs.technet.microsoft.com/josebda/2014/08/18/using-file-copy-to-measure-storage-performanc...

Better use DiskSPD from MS.

 

Regarding the storage solution, you can look at virtual SAN vendors. I have a good experience of using Starwind vSAN for 2 servers cluster. The performance is better and no problem with configuration. You can find guide here:

https://www.starwindsoftware.com/resource-library/starwind-virtual-san-hyperconverged-2-node-scenari...

Hello!

Thank You about Your comment! I understand, that my lab setup does not meet these requirements, but still I believe that fundamental thinks should work with such a setup too. The main point for me was to check if this tehnology works before invest into new and quite expensive parts.

Anyway, now I have rebuilt my setup using two intel S2600WTF/Y boxes and Intel CPUs. Initially each of them had two 512 GB SSD drives for S2D. I configured S2D with automatic settings successfully. After providing some performance tests I got much better results than earlier. Actually really acceptable results (even up to little more than 200MB/s write speed).

Next I moved some VMs to the S2D and enabled High Availability for them. I provided some crush tests as well and they succeed, all worked great.

BUT then I faced new problems. I wanted to add four new 1 TB SSD drives per each node and extend my pool. I did reset all these drives and connected to servers.
1) First strange thing was, that they automatically were added to my S2D pool even I was previously disabled autopooling (Get-StorageSubSystem Clu* | Set-StorageHealthSetting -Name “System.Storage.PhysicalDisk.AutoPool.Enabled” -Value False).
2) Second and the most important - my SSD tier statistics shows available space only for 670 GB, but I connected 8 x 1TB SSD drives and using mirrored storage it should be able to allocate around 4 TB! I run Optimize-StoragePool and this did not helped.
3) I connected another SSD drive for other purposes and it again automatically got pooled. I tried to remove it from S2D pool, but this also was unsuccessful. The disk stuck into Primordial Pool. Things I did to try to get disk out of pool:
$pool = Get-StoragePool S2D*
$disk = Get-PhysicalDisk -SerialNumber "XXXXXXXXXXXXXXXXXXX"

$disk | Set-PhysicalDisk -Usage Retired

$vdisk=Get-VirtualDisk

Repair-VirtualDisk $vdisk.FriendlyName

Get-StorageJob

Get-StoragePool S2D* | Remove-PhysicalDisk -PhysicalDisks $disk

Set-ClusterS2DDisk -CanBeClaimed $true -PhysicalDiskGuid $disk.UniqueId

$disk | Reset-PhysicalDisk
Thanks! I tried to use DiskSPD and it works good and it seems simulates workload quite near to real world.

Can you please send the output of following cmdlets:

1. Get-StoragePool

2. Get-PhysicalDisk

 

~Girdhar

Hi, Girdhar!

 

I physically removed from server the disk that accidentally went into S2D and then stuck in Primordial pool. Then I cleared it on another PC and created new partition. Then put back in cluter server and finally had option to use it without pooling.

But I plan to replace s2d 512 GB ssd driveswith larger ones so I still need to find option how to correctly remove disk from pool.

PS C:\Windows\system32> Get-StoragePool

 

FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly Size AllocatedSize

------------ ----------------- ------------ ------------ ---------- ---- -------------

Primordial OK Healthy True False 72.9 TB 9.31 TB

S2D on hc-cluster-1 OK Healthy False False 9.31 TB 1.84 TB

Primordial OK Healthy True False 11.53 TB 9.31 TB

 

 

PS C:\Windows\system32> Get-PhysicalDisk

DeviceId FriendlyName SerialNumber MediaType CanPool OperationalStatus HealthStatus Usage Size

-------- ------------ ------------ --------- ------- ----------------- ------------ ----- ----

22 ATA INTEL SSDSC2BB24 PHDV7171021B240AGN SSD False OK Healthy Auto-Select 223.57 GB

1004 Samsung SSD 850 PRO 512GB S250NSAG432476E SSD False OK Healthy Auto-Select 476.94 GB

1003 Samsung SSD 840 PRO Series S1AXNSAD800683Y SSD False OK Healthy Auto-Select 476.94 GB

1010 Samsung SSD 850 PRO 1TB S252NWAG304907F SSD False OK Healthy Auto-Select 953.87 GB

2016 Samsung SSD 850 PRO 512GB S250NSAG432479X SSD False OK Healthy Auto-Select 476.94 GB

1009 Samsung SSD 850 PRO 1TB S252NEAG301324Y SSD False OK Healthy Auto-Select 953.87 GB

2015 Samsung SSD 840 PRO Series S1AXNSAF111936H SSD False OK Healthy Auto-Select 476.94 GB

1008 Samsung SSD 850 PRO 1TB S252NWAG304891D SSD False OK Healthy Auto-Select 953.87 GB

2000 ATA Samsung SSD 850 S1SRNWAF913328T SSD False OK Healthy Auto-Select 953.87 GB

1007 Samsung SSD 850 PRO 1TB S252NWAG403194P SSD False OK Healthy Auto-Select 953.87 GB

2019 ATA Samsung SSD 850 S1SRNWAF914370B SSD False OK Healthy Auto-Select 953.87 GB

2020 ATA Samsung SSD 850 S2BBNEAG113774L SSD False OK Healthy Auto-Select 953.87 GB

2021 ATA Samsung SSD 850 S2BBNEAG113775K SSD False OK Healthy Auto-Select 953.87 GB

 

 

 

Hi Uedgars,

 

Actually I was looking for physical disk and storage pools when you got into the bad state, to understand why System.Storage.PhysicalDisk.AutoPool.Enabled property value was not honored. Let me know, if you still face issues with it in future.

 

Now that you have fixed, the better way of removing the disk from the pool is Remove-PhysicalDisk as you tried earlier, though you have to specify the full FriendlyName of the S2D StoragePool rather than S2d*. Once this succeeds, you will see the CanPool value of the disk to be True.

 

Let me know, if this doesn't work for you.

Also, if you feel, system is not behaving as you expect, please follow this link and share the zip file with us. We try to collect needed information, so that there is no back and forth :)

 

Thanks

Girdhar

Hello!

 

Remove-PhysicalDisk worked for me even with asterisk (SSD*) and ssd get moved from s2d pool to primordial pool, but the problem is with the next step. I want to get out disks also from primordial pool to see them in disk management and use as standalone disks in windows system. To do this I understood I need to use command Set-ClusterS2DDisk -CanBeClaimed $false, but I got error. I provided command and error message below.

Shortly: I had 4x 512GB ssds (2 pcs in each server, I have two). Then I added 8 more ssd drives with size 1tb each (4 pcs in each server). Then I had problem, that I am unable to use all of these space (problem I described earlier). As I was no success to extend my volume, I decided to remove 512gb ssds out of the pool and see what happens. Then I run commands Set-PhysicalDisk -Usage Retired, Repair-VirtualDisk and Remove-Physical disk. So far all worked good. And then finally I wanted to get these 512GB disks out of the Primordial pool using Set-ClusterS2DDisk -CanBeClaimed $false, but it was unsuccessful. I got error.

! Interesting thing is, that after 512GB ssd removal my StorageTier allowed maximum size changed and now it is around 2.8TB As I already have 930 GB volume that I want to extend, it means Tiear allows around 3.7TB This sounds much better and I believe it is the maximum for 8x1tb drives in mirror. But it is still strange, that with 4x512gb + 8x1tb my tier max size was only around 1.5TB

 

$disk=get-physicaldisk -FriendlyName "*Samsung SSD*" | ? {$_.size -eq "512110190592" -and $_.deviceid -ne 0}
Set-ClusterS2DDisk -CanBeClaimed $false -PhysicalDisk $disk
Set-ClusterS2DDisk : Failed to set cache mode on disks connected to node 'h11'. Run cluster validation, including the Storage Spaces
Direct tests, to verify the configuration
At line:2 char:1
+ Set-ClusterS2DDisk -CanBeClaimed $false -PhysicalDisk $disk
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Set-ClusterStorageSpacesDirectDisk], CimException
+ FullyQualifiedErrorId : HRESULT 0x8007139f,Microsoft.Management.Infrastructure.CimCmdlets.InvokeCimMethodCommand,Set-ClusterSt
orageSpacesDirectDisk

 

Physical Disks now looks like this.

PS C:\Windows\system32> get-physicaldisk -FriendlyName "*Samsung SSD*" | ? {$_.size -eq "512110190592" -and $_.deviceid -ne 0}

DeviceId FriendlyName SerialNumber MediaType CanPool OperationalStatus HealthStatus Usage Size
-------- ------------ ------------ --------- ------- ----------------- ------------ ----- ----
1004 Samsung SSD 850 PRO 512GB S250NSAG432476E SSD True OK Healthy Auto-Select 476.94 GB
1003 Samsung SSD 840 PRO Series S1AXNSAD800683Y SSD True OK Healthy Auto-Select 476.94 GB
2016 Samsung SSD 850 PRO 512GB S250NSAG432479X SSD True OK Healthy Auto-Select 476.94 GB
2015 Samsung SSD 840 PRO Series S1AXNSAF111936H SSD True OK Healthy Auto-Select 476.94 GB

 

PS C:\Windows\system32> get-physicaldisk -FriendlyName "*Samsung SSD*" | ? {$_.size -eq "512110190592" -and $_.deviceid -ne 0} | Get-StoragePool

FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly Size AllocatedSize
------------ ----------------- ------------ ------------ ---------- ---- -------------
Primordial OK Healthy True False 11.53 TB 7.45 TB
Primordial OK Healthy True False 11.53 TB 7.45 TB
Primordial OK Healthy True False 11.53 TB 7.45 TB
Primordial OK Healthy True False 11.53 TB 7.45 TB

And actually there is one more important question before I start to extend my storage.

At the beginning I had 4x512gb ssd drives. I enabled S2D without cache and it automatically created pool and tiers. When I started to add drives and faced problems posted here, I found there exists tier parameter column number. I figured out for the default tier template this parameter is set to auto. But for my tiered volume column number has value 2. Now, when I have 8 ssd drives (4 on each server), it was better for performance and drive wear equalization to set column number to 4 so all 4 drives at each server forms one stripe. Is it even possible to do? I was unable to find detailed specs about s2d operation in this level. And unfortunatelly some forums gave info that it is not possible to change column count after volume is created. Is it true? And if so, are there any technical recommendation how to choose this value?

Well as you figured it out, updating the Column count post volume creation is not possible.

Re-creating the volume should take the correct Column count. 

Can you try Set-ClusterS2DDisk -CanBeClaimed:$false -PhysicalDisk $disk

Note the colon.

 

On the error: Set-ClusterS2DDisk : Failed to set cache mode on disks connected to node 'h11'

Have your created cache tiers manually?

 

Also, after running set-ClusterS2dDisk, check the Get-Disk output to see available disk. 

I tried to run command using column but it still returns the same error.

 

PS C:\Windows\system32> $disk=get-physicaldisk -FriendlyName "*Samsung SSD*" | ? {$_.size -eq "512110190592" -and $_.deviceid -ne 0}
Set-ClusterS2DDisk -CanBeClaimed:$false -PhysicalDisk $disk
Set-ClusterS2DDisk : Failed to set cache mode on disks connected to node 'h11'. Run cluster validation, including the Storage Spaces Direct tests, to verify the configuration
At line:2 char:1
+ Set-ClusterS2DDisk -CanBeClaimed:$false -PhysicalDisk $disk
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Set-ClusterStorageSpacesDirectDisk], CimException
+ FullyQualifiedErrorId : HRESULT 0x8007139f,Microsoft.Management.Infrastructure.CimCmdlets.InvokeCimMethodCommand,Set-ClusterStorageSpacesDirectDisk

 

I enabled S2D with cache disabled (Enable-ClusterS2D -cachestate disabled)

I even did not know about cache tiers. How can I check their status? If I use Get-StorageTiers I only see my storage tier:

PS C:\Windows\system32> get-storagetier

FriendlyName TierClass MediaType ResiliencySettingName FaultDomainRedundancy Size FootprintOnPool StorageEfficiency
------------ --------- --------- --------------------- --------------------- ---- --------------- -----------------
Capacity Unknown SSD Mirror 1 0 B 0 B
MirrorOnSSD Unknown SSD Mirror 1 0 B 0 B
ssd-volume-1-MirrorOnSSD Capacity SSD Mirror 1 930 GB 1.82 TB 50.00%

 

Oh, I remembered I have turned deduplication on. Might be it disturbs something?

 

PS C:\Windows\system32> Get-DedupStatus | fl *


ObjectId : \\?\Volume{079e5b9b-7f17-4bea-bbd6-6de7bed066fd}\
Capacity : 998512787456
FreeSpace : 503428747264
InPolicyFilesCount : 13
InPolicyFilesSize : 868141497962
LastGarbageCollectionResult : 0
LastGarbageCollectionResultMessage : The operation completed successfully.
LastGarbageCollectionTime : 1/26/2019 4:39:45 AM
LastOptimizationResult : 0
LastOptimizationResultMessage : The operation completed successfully.
LastOptimizationTime : 1/28/2019 10:41:21 AM
LastScrubbingResult : 0
LastScrubbingResultMessage : The operation completed successfully.
LastScrubbingTime : 1/26/2019 4:40:48 AM
OptimizedFilesCount : 13
OptimizedFilesSavingsRate : 50
OptimizedFilesSize : 868141497962
SavedSpace : 440480093290
SavingsRate : 47
UnoptimizedSize : 935564133482
UsedSpace : 495084040192
Volume : C:\ClusterStorage\ssd-volume-1
VolumeId : \\?\Volume{079e5b9b-7f17-4bea-bbd6-6de7bed066fd}\
PSComputerName :
CimClass : ROOT/Microsoft/Windows/Deduplication:MSFT_DedupVolumeStatus
CimInstanceProperties : {Capacity, FreeSpace, InPolicyFilesCount, InPolicyFilesSize...}
CimSystemProperties : Microsoft.Management.Infrastructure.CimSystemProperties

So, If I have column count 2, it means S2D takes two drives for a stripe per server. And only when these drives are full, it starts to write into rest pair of drives. Right?

Then what happens, if I leave column count 2 and create a second tiered volume also with column count 2? Do S2D understands less loaded drives and distributes this volume around empty ones?

Or from performance perspective it is better to set column count of 4 for my setup? (I understand, that if it is set to 4, I can extend my tiered volume only if I add appropriate count of drives)

Related Conversations
Tabs and Dark Mode
cjc2112 in Discussions on
35 Replies
Extentions Synchronization
ChirmyRam in Discussions on
3 Replies
flashing a white screen while open new tab
Deleted in Discussions on
14 Replies
Stable version of Edge insider browser
HotCakeX in Discussions on
35 Replies