Storage Spaces Direct
Cannot create separate storage pool on Windows 2019 Failover cluster
Hello, I am trying to get rid of S2D on our failover cluster and have run into the inability to create separate storage pools (without using S2D) on Windows Server 2016/2019. Every time you set up a failover cluster, even without adding eligible disks to cluster storage, it still registers the Clustered Windows Storage subsystem and assigns all available disks to it. So all the disks are there, but any attempt to create a storage pool fails. I tried creating all the storage pools before forming the cluster, to no avail: the pools were created, but any attempt to create a virtual disk fails instantly. I tried using Set-ClusterS2DDisk to stop the storage subsystem from claiming those disks, to no avail. I tried to unregister the Clustered Windows Storage subsystem; that is not possible. I have no idea what to do now. So, is there a way to set up a failover cluster without using S2D?
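For reference, the sequence I am describing looks roughly like this (a sketch only; the pool name and disk selection are illustrative, and the last step is where it fails):

# The cluster has already claimed the disks under the clustered subsystem
Get-StorageSubSystem -FriendlyName "Clustered Windows Storage*"

# Pool creation may appear to succeed...
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Clustered Windows Storage*" -PhysicalDisks $disks

# ...but creating a virtual disk in that pool fails immediately
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Mirror -Size 500GB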
Feature request: S2D manual cache binding
Hello! Please make tools available for manually binding cache drives to data drives. First problem: in configurations with multiple storage pools, cache drives are bound to all physical disks, not just to the disks in their pool. When the binding goes wrong, specialists have to resort to elaborate workarounds such as evicting nodes, retiring drives, and so on, which is a real pain. Second problem: Set-ClusterStorageSpacesDirect -CacheState Enabled resets the states of journal drives. Use case: you want to dedicate some drives as hot spares, but after enabling the cache state all drives become journal drives and are bound. That is really not what you want if you are planning to use hot-spare drives, especially if you are using M.2 SSDs.
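Today the best we can do is inspect what happened and try to re-mark drives after the fact; a rough sketch of the kind of commands involved (the serial number is illustrative):

# See the current cache state and cache modes
Get-ClusterStorageSpacesDirect

# See which drives ended up as cache (journal) devices
Get-PhysicalDisk | Where-Object Usage -eq Journal |
    Select-Object FriendlyName, SerialNumber, Usage

# Try to re-mark a drive that was meant to be a hot spare
# (whether this sticks after -CacheState Enabled is exactly the problem)
Get-PhysicalDisk -SerialNumber "S3XXNX0Kxxxxxx" | Set-PhysicalDisk -Usage HotSpare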
Remote Desktop Services and Storage Spaces Direct
Hi everyone. I am currently reconfiguring our RDS deployment and would like some input on User Profile Disks (UPD). The RDS setup has an admin server running in Azure, which hosts the connection broker, web access, and license server roles. The rest of the setup is session hosts used for shared access. Each session host has a big disk (RAID 1+0) for the OS, applications, etc. All of the servers also have a smaller disk, and I was wondering whether I could do anything smart with that in relation to the UPDs. I had a quick look at Storage Spaces Direct and was wondering whether that would be the way to go. The servers are connected with 10-gig Ethernet and a 100-gig InfiniBand. Would it be a workable solution to run both the session host role and Storage Spaces Direct on the same servers, or would that impact the performance of the servers?
Server spec:
Windows Server 2022 Datacenter
ProLiant DL385 Gen10
CPU: 2x AMD EPYC 7502 (32 cores each)
RAM: 1 TB
Are there any better alternatives? BR.
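If I went the S2D route, I imagine the setup would look roughly like the sketch below (purely illustrative: the volume size, share, group, and collection names are made up, and a Scale-Out File Server role would presumably have to front the share):

# Pool the spare disks across the session hosts and create a CSV for the profile disks
Enable-ClusterStorageSpacesDirect
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "UPD" -FileSystem CSVFS_ReFS -Size 500GB

# Share the volume and point the session collection's User Profile Disks at it
New-SmbShare -Name "UPD" -Path "C:\ClusterStorage\UPD" -FullAccess "DOMAIN\RDS Session Hosts"
Set-RDSessionCollectionConfiguration -CollectionName "SharedDesktop" `
    -EnableUserProfileDisk -DiskPath "\\CLUSTER-FS\UPD" -MaxUserProfileDiskSizeGB 20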
Windows 2022 S2D Cache
Hello everyone, I have an S2D cluster of 8 nodes. When I created the cluster I disabled the S2D cache on purpose. Now I have enabled the S2D cache on the cluster, but so far CacheModeHDD has not been configured and is still Disabled. By mistake I configured CacheModeSSD as ReadOnly, and I would like to return it to the Disabled state, but I cannot find a way to do that. I just want to enable the cache for CacheModeHDD. Is there any way to return to the previous state or reset the cache mode settings to their defaults? Best regards
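For context, this is roughly what was run and what I am trying to end up with (a sketch; whether CacheModeSSD can simply be set back to Disabled is exactly my question):

# Show the current cache configuration
Get-ClusterStorageSpacesDirect | Format-List CacheState, CacheModeSSD, CacheModeHDD

# The change that was made by mistake
Set-ClusterStorageSpacesDirect -CacheModeSSD ReadOnly

# The intended end state: cache used only for the HDD capacity tier
Set-ClusterStorageSpacesDirect -CacheModeHDD ReadWrite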
Server 2019: Poor Storage Spaces Direct (S2D) Performance
Hello, I am trying to set up a two-node S2D cluster for hyperconverged infrastructure. I am underwhelmed by the performance compared with the storage performance when the drives are not part of an S2D cluster, and compared with some older hardware we have on hand. I want to ensure I am not missing anything.
Environment
Node 1:
System: Dell PowerEdge R730
CPU: (2) Intel Xeon E5-2660
RAM: 256 GB
Node 2:
System: Dell PowerEdge R630
CPU: (2) Intel Xeon E5-2698 v4
RAM: 256 GB
OS: Windows Server 2019 Datacenter
Network:
10G Ubiquiti USW-Pro-Aggregation switch
Each node has (1) Intel Ethernet Converged Network Adapter X710 set as the management NIC
Each node has (4) QLogic 10GE QL41164HMCU CNA ports connected to the Ubiquiti switch
The 4 NICs are put into a Switch Embedded Teaming (SET) switch
RDMA is enabled and set to iWARP; I have verified the RDMA connections
Storage: each node contains the following drives:
(2) Intel Optane DC 4800X SSDPED1K375GA 375GB SSD PCIe
(4) Samsung MZILS3T8HMLH0D3 3.49TB SSD SAS
(2) Toshiba KPM5XRUG3T84 3.49TB SSD SAS
(5) Toshiba AL15SEB18EP 1.64TB HDD SAS
I have been using DISKSPD, and the output below is for the cluster using the entire storage pool. I would have expected higher MiB/s and I/Os per second. If I change the config and use only SSDs, for example just one of the Samsung drives, I get about 900 MB/s and 15K+ I/Os per second. The cluster performance is about the same as isolating one of the Toshiba HDDs and running off there, which I tried as well. I would have expected it to be much faster. * This specific example tests write performance, but I am similarly underwhelmed by read performance.

diskspd -b60K -d60 -h -L -o32 -t1 -s -w100 -c1G C:\ClusterStorage\HV1\LogIO.dat

Total IO
thread | bytes       | I/Os   | MiB/s  | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
     0 | 21557882880 | 350877 | 342.66 |   5847.99 |  5.471 |     1.226 | C:\..\LogIO.dat (1024MiB)
-----------------------------------------------------------------------------------------------------
total:   21557882880 | 350877 | 342.66 |   5847.99 |  5.471 |     1.226

Read IO
thread | bytes       | I/Os   | MiB/s  | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
     0 | 0           | 0      | 0.00   |      0.00 |  0.000 |       N/A | C:\..\LogIO.dat (1024MiB)
-----------------------------------------------------------------------------------------------------
total:   0           | 0      | 0.00   |      0.00 |  0.000 |       N/A

Write IO
thread | bytes       | I/Os   | MiB/s  | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
     0 | 21557882880 | 350877 | 342.66 |   5847.99 |  5.471 |     1.226 | C:\..\LogIO.dat (1024MiB)
-----------------------------------------------------------------------------------------------------
total:   21557882880 | 350877 | 342.66 |   5847.99 |  5.471 |     1.226

I would appreciate any help. Thanks!
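For what it's worth, a single-thread sequential test may not stress the cluster much, so I also plan to try a heavier variant along these lines (a sketch only; the parameters are my guess at a more representative run, not a documented recommendation):

# 4K random writes, 8 threads, 8 outstanding I/Os per thread, software/hardware
# caching disabled, 120 s duration, 10 GiB test file on the same CSV as above
diskspd -b4K -d120 -t8 -o8 -r -Sh -w100 -L -c10G C:\ClusterStorage\HV1\LogIO.dat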
Storage Spaces Direct and Docker on Hyper-V
Hello, my IT department has received a request from our developers to put Docker on our four-host Storage Spaces Direct hyper-converged cluster running Hyper-V. I don't mean the Docker engine running in Linux VMs on top of the cluster, but the Docker engine running directly on the cluster hosts, with Docker running Linux containers on Windows using Hyper-V. Any thoughts on this? My initial reaction is to keep the Storage Spaces Direct cluster clean of any added roles apart from Hyper-V for the virtual machines. Thank you