I'm looking for feedback about the limitations of S2D. Has anyone been able to configure S2D with nodes that have different physical configurations?
I am moving to a cluster of 4 servers that, up to now, have been running in production independently. I want to configure high availability using 3 of these units in a Storage Spaces Direct configuration, with the 4th server running Active Directory. I have been struggling with this and would like feedback on my scenario.
The nodes are as follows: 2 Dell R730XP units and 1 Dell R640. All 3 run 2 CPUs, but not with exactly the same amount of memory (526, 480, and 256 GB of RAM). They also have dissimilar hard disks in their local storage.
We configured S2D following the limited documentation and YouTube videos. The 4th server is a Dell R620 running Active Directory. Everything runs Windows Server Datacenter edition.
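For context, the setup we followed boils down to roughly the steps below (standard Failover Clustering / S2D cmdlets; the cluster and node names are placeholders for ours):

```powershell
# Validate the nodes for S2D before enabling it; the report flags
# unsupported or mismatched storage configurations across nodes.
Test-Cluster -Node Node1, Node2, Node3 `
    -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the cluster without assigning storage, then enable S2D,
# which claims the eligible local disks on every node into one pool.
New-Cluster -Name S2DCluster -Node Node1, Node2, Node3 -NoStorage
Enable-ClusterStorageSpacesDirect
```

If anyone sees a step here that is known to break with dissimilar drives across nodes, that feedback would help.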
The cluster hosts 6 virtual machines running on Hyper-V, configured for high availability through Failover Cluster Manager.
At first, the VMs run fine. The problem comes when I run a test that involves moving a VM from one node to another. This operation disrupts the other VMs, putting them into a Paused or Failed state. Only a few of the VMs return to Running status after several minutes.
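For reference, the move test that triggers the problem is equivalent to something like this (VM and node names are placeholders), and the cmdlets after it are what I use to check the state of the other VMs and the S2D storage:

```powershell
# Live-migrate one clustered VM to another node.
Move-ClusterVirtualMachineRole -Name "VM1" -Node "Node2" -MigrationType Live

# After the move, check whether the other VMs and the S2D virtual
# disks are healthy or stuck in a degraded/repairing state.
Get-ClusterGroup  | Select-Object Name, OwnerNode, State
Get-VirtualDisk   | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-StorageJob    # shows any repair/resync jobs still running
```

When the other VMs go to Paused, `Get-VirtualDisk` typically shows the virtual disks as degraded, which is why I suspect the storage layer rather than Hyper-V itself.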
This is a terrible scenario and not what I would expect from a high-availability environment.