clustering
138 Topics

Failover Cluster Manager error when not running as administrator (on a PAW)

I've finally been trying (hard) to use a PAW, where the user I'm signed into the PAW as does NOT have local admin privileges on that machine, but DOES have admin privileges on the servers I'm trying to manage. The most recent hiccup is that Failover Cluster Manager (cluadmin.msc) doesn't seem to work properly if you don't have admin privileges on the machine you're running it from. Obviously on a PAW your server admin account is NOT supposed to be an admin on the PAW itself; you're just a standard user.

The error I get when opening Failover Cluster Manager is:

    The operation has failed. An unexpected error has occurred.
    Error Code: 0x800702e4
    The requested operation requires elevation.

Which is nice. I've never tried to run cluadmin as a non-admin, because historically everyone just ran everything as a domain admin (right?), so you were an admin on everything. But that is not so in the land of PAW.

I've run cluadmin on a different machine where I am a local admin, and it works fine. I do not need to run it elevated to make it work properly; it just works, e.g. open PowerShell (NOT opened via "Run as administrator"/UAC) and run cluadmin <enter>.

I've tried looking for some kind of access-denied message via procmon but can't see anything obvious (to my eyes anyway). A different person on a different PAW sees the same thing. Is anyone successfully able to run Failover Cluster Manager on a machine where you're just a standard user?
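
Not a fix for cluadmin.msc itself, but one possible fallback is driving the cluster from the FailoverClusters PowerShell module and pointing every cmdlet at the cluster by name. This is a minimal sketch, assuming a placeholder cluster name of CLUS01, and it is not verified to avoid the elevation check on a locked-down PAW:

```powershell
# Sketch only: query the remote cluster from a non-elevated session.
# CLUS01 is a placeholder cluster name.
Import-Module FailoverClusters

Get-Cluster -Name CLUS01 | Format-List Name, Domain
Get-ClusterNode -Cluster CLUS01 | Format-Table Name, State
Get-ClusterGroup -Cluster CLUS01 | Format-Table Name, OwnerNode, State
Get-ClusterResource -Cluster CLUS01 | Where-Object State -ne 'Online'
```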

S2D 2-Node Hyper-V Cluster

The resources: 2 physical Windows Server 2022 Datacenter servers:

1. 2 sockets, 24 physical CPU cores
2. 64 GB RAM
3. 2 x 480 GB SSD in RAID 1 (OS partition)
4. 4 x 3.37 TB SSD in RAID 5 (approximately 10-11 TB total after the RAID)
5. 4 x 10 Gb NICs

Plus 1 physical Windows Server 2016 Standard server with the DC role.

The desired outcome: I want to build a 2-node Hyper-V cluster with S2D that combines the two servers' storage into one clustered shared volume, store the VMs on it, and get a high-availability solution that fails over automatically when one of the nodes is down, without using an external storage source such as shared SAS. Is this scenario possible?
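
For reference, a rough outline of a two-node S2D build is sketched below, with placeholder names (NODE1, NODE2, S2DCLUS, \\DC01\Witness). Two caveats worth flagging: S2D wants the drives presented individually (HBA/pass-through mode), not sitting behind a RAID 1/RAID 5 controller volume, and a two-node cluster needs a file share or cloud witness to survive a node failure.

```powershell
# Rough outline only; node, cluster, share and volume names are placeholders,
# and the SSDs are assumed to be re-presented as individual pass-through disks.
Test-Cluster -Node NODE1, NODE2 -Include 'Storage Spaces Direct', Inventory, Network, 'System Configuration'
New-Cluster -Name S2DCLUS -Node NODE1, NODE2 -NoStorage

# A two-node cluster needs a witness to keep quorum when one node is down.
Set-ClusterQuorum -Cluster S2DCLUS -FileShareWitness '\\DC01\Witness'

Enable-ClusterStorageSpacesDirect -CimSession S2DCLUS
New-Volume -CimSession S2DCLUS -FriendlyName 'CSV01' -FileSystem CSVFS_ReFS -StoragePoolFriendlyName 'S2D*' -Size 5TB
```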

Why can't I run Enable-ClusterS2D more than once per Windows installation?

I have been trying to set up this bloody cluster Storage Spaces Direct configuration for over a month and have reinstalled everything on the server more than a dozen times. Every time I get to running the command for the first time, it makes a configuration that is not what I want, and there is absolutely no way I can undo or fix the changes except by completely wiping the server and reinstalling the entire operating system.

I have tried running Disable-ClusterS2D and then re-running Enable-ClusterS2D, and it hangs on "Waiting until all physical disks are reported by clustered storage subsystem". I try to completely detach the entire pool, wipe it manually, reclaim the disks and recreate the pool, and guess what: "Waiting until all physical disks are reported by clustered storage subsystem". I wipe the entire cluster configuration and recreate the cluster from scratch. Did that work? Of course not: "Waiting until all physical disks are reported by clustered storage subsystem".

This has happened on both Windows Server 2022 and 2025, Datacenter editions. Why can I not run the command more than once per operating system? If I need to replace a disk, am I just supposed to wipe the whole operating system? Isn't this **bleep** thing supposed to be enterprise grade and just work?
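
Not a guaranteed fix, but that hang usually points at leftover pool and partition metadata on the drives rather than at Enable-ClusterS2D itself. Below is a hedged sketch of the per-node clean-up that is commonly run after Disable-ClusterStorageSpacesDirect and before re-enabling; it destroys everything on the non-boot disks, so treat it as something to validate in a lab first.

```powershell
# Run on EVERY node after Disable-ClusterStorageSpacesDirect.
# Destructive: wipes all data on non-boot, non-system disks.
Update-StorageProviderCache
Get-StoragePool | Where-Object IsPrimordial -eq $false | Set-StoragePool -IsReadOnly:$false -ErrorAction SilentlyContinue
Get-StoragePool | Where-Object IsPrimordial -eq $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false -ErrorAction SilentlyContinue
Get-StoragePool | Where-Object IsPrimordial -eq $false | Remove-StoragePool -Confirm:$false -ErrorAction SilentlyContinue
Get-PhysicalDisk | Reset-PhysicalDisk -ErrorAction SilentlyContinue

# Clear any remaining partitions so the drives come back as raw, poolable disks.
Get-Disk | Where-Object { $_.Number -ne $null -and -not $_.IsBoot -and -not $_.IsSystem -and $_.PartitionStyle -ne 'RAW' } | ForEach-Object {
    $_ | Set-Disk -IsOffline:$false
    $_ | Set-Disk -IsReadOnly:$false
    $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
}
```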

Hyper-V Replica Network for failover clustering

I have a Server 2019 failover cluster that we use for S2D and are starting to move VMs over to. I've added a dedicated network card for Hyper-V and that works fine. I am now trying to add a dedicated network card for Hyper-V replication.

From what I can tell I am able to add it and the cluster sees it. I've made it available to the cluster and to clients, gave each member of the cluster a static IP with no gateway on that NIC, then installed the Hyper-V Replica Broker role and gave it an IP on that same network. However, it just doesn't seem to want to work correctly. I'm getting Event IDs 1205, 1069 and 21502:

    1205: The Cluster service failed to bring clustered role 'RDS-Replica' completely online or offline.
    21502: Hyper-V Replica Broker 'RDS-Replica' failed to start the network listener on destination node StorageNode02: no such host is known.

I'm guessing this is because I told it not to register the network card in DNS, since this network isn't able to reach anything other than the other Hyper-V hosts. My end goal is to replicate these to a dedicated Hyper-V box at the same site, which will have a dedicated adapter on the same layer 2 network. Any suggestions? Thank you
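
The "no such host is known" in event 21502 is a name-resolution failure, so the broker name and the destination node name both have to resolve from every node, whether via DNS or hosts entries on the replica subnet. A sketch of the checks is below; HVCLUS and the IP address are placeholders, and the hosts-file entry is just one possible way to satisfy resolution without registering the replica NICs in DNS.

```powershell
# Placeholder names; a sketch of checks, not a confirmed fix.
Get-ClusterNetwork -Cluster HVCLUS | Format-Table Name, Role, Address        # confirm the replica network's cluster role
Get-ClusterResource -Cluster HVCLUS -Name 'RDS-Replica' | Get-ClusterParameter   # broker client access point settings
Resolve-DnsName StorageNode02                                                # this is what 21502 says is failing

# If the replica NICs are deliberately not registered in DNS, a hosts entry per
# node on the replica subnet is one way to satisfy name resolution
# (the IP below is a placeholder):
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.50.2`tStorageNode02"
```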

How to manage garbage collection so it does not disrupt services on a web application

Greetings, I need your expertise to resolve a painful issue: the garbage collection (GC) process is constantly running on my stand-alone servers and causing disruption. I configured IIS to recycle the application pool after it reaches 20 GB of memory consumption, and I also set the site to use server GC, but my servers are still constantly undergoing GC, which disrupts the service. Any solutions out there to get rid of this nightmare?
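
For the IIS half of this, the private-memory recycling threshold is specified in KB, so it is worth confirming the configured value really is 20 GB; a sketch with a placeholder application pool name follows. The GC mode itself is a .NET setting (gcServer/gcConcurrent in aspnet.config for .NET Framework, or ServerGarbageCollection in the project file for .NET Core), so recycling limits alone will not change GC pause behaviour.

```powershell
# Sketch: set and verify the private-memory recycling threshold for an app pool.
# "MyAppPool" is a placeholder; the value is in KB, so 20 GB = 20971520 KB.
Import-Module WebAdministration
Set-ItemProperty 'IIS:\AppPools\MyAppPool' -Name recycling.periodicRestart.privateMemory -Value 20971520
Get-ItemProperty 'IIS:\AppPools\MyAppPool' -Name recycling.periodicRestart.privateMemory
```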

Expanding a drive in Failover Clustering

I've got a Storage Spaces Direct cluster and a volume I need to expand. It's an ReFS volume under the File Server role in the cluster. I've tried using Windows Admin Center to expand it and it errors out, but I can expand a volume under the Scale-Out File Server (SoFS) role with no issue. I also tried Server Manager, Disk Management and diskpart with no luck. Any suggestions?
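
When Windows Admin Center refuses, resizing from PowerShell sometimes still works. A sketch is below, assuming a non-tiered virtual disk with the placeholder name 'Data01'; if the volume was created with storage tiers, the tier is resized instead (Get-StorageTier | Resize-StorageTier) before growing the partition.

```powershell
# Placeholder names; sketch of expanding a non-tiered S2D volume from PowerShell.
# Grow the virtual disk first...
Get-VirtualDisk -FriendlyName 'Data01' | Resize-VirtualDisk -Size 6TB

# ...then grow the partition and file system into the new space.
$part = Get-VirtualDisk -FriendlyName 'Data01' | Get-Disk | Get-Partition | Where-Object Type -eq 'Basic'
$max  = ($part | Get-PartitionSupportedSize).SizeMax
$part | Resize-Partition -Size $max
```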

Windows Server 2022 Hyper-V Cluster: S2D disk add question

Hello, we have a healthy three-node Windows Server 2022 HCI build using a three-way S2D mirror on SSDs. We initially built the cluster with only SSDs (4 x 2GiB SSDs per node). We would like to add SAS HDDs to the disk pool so we can get a bigger bang for the buck on space (a lot of our VMs are calling for 1 TB partitions).

My question: if we add HDDs (equal number, type and capacity on each node) to the already-built SSD-only pool, will S2D automatically reconfigure the pool across all nodes to include the HDDs so we can expand our volumes?
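
With default settings S2D auto-claims eligible new drives into the pool, but I would not assume the existing SSDs are automatically repurposed as a cache for the new HDDs in a pool that started out all-flash; that is worth verifying after the drives go in. A sketch of post-insertion checks is below; S2DCLUS and the volume name are placeholders.

```powershell
# Placeholder cluster name; checks to run after inserting the HDDs.
Get-PhysicalDisk -CimSession S2DCLUS | Group-Object MediaType, CanPool | Format-Table Count, Name
Get-ClusterStorageSpacesDirect -CimSession S2DCLUS        # CacheState / CacheModeSSD / CacheModeHDD
Get-StorageJob -CimSession S2DCLUS                        # rebalance/optimize jobs triggered by the add

# New capacity-tier volumes would still be created explicitly, e.g.:
# New-Volume -CimSession S2DCLUS -FriendlyName 'Capacity01' -FileSystem CSVFS_ReFS -MediaType HDD -Size 4TB
```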

Cloud Witness WinRM

Hello everyone, new member of the community. I created this account to seek some advice: I have been having trouble adding a cloud witness to a cluster and keep getting an error message from WinRM. For background, we don't have any issues adding a cloud witness to other clusters and it has been working fine; it seems to be just this particular cluster. The cluster's servers all have access to the storage account over port 443, and a TNC (Test-NetConnection) to it works fine with no issues. I've already tried restarting the WinRM service on both nodes, with no luck. Any advice is welcome. Thanks.
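
The witness wizard drives its configuration over WinRM, so testing WinRM node to node and setting the witness directly from PowerShell can help separate a WinRM problem from a witness problem. Placeholder names below (NODE1, NODE2, PROBLEMCLUS, the storage account name); the access key is deliberately left as a placeholder.

```powershell
# Placeholder names; run the WinRM checks from each node in turn.
Test-WSMan -ComputerName NODE1
Test-WSMan -ComputerName NODE2
Test-NetConnection NODE2 -Port 5985            # default WinRM HTTP listener

# Bypassing the GUI and setting the witness directly from PowerShell:
Set-ClusterQuorum -Cluster PROBLEMCLUS -CloudWitness -AccountName 'mystorageaccount' -AccessKey '<storage account key>'
```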

NIC Team configuration issue in Windows Server 2019

Hi all, I've got a query. I successfully set up a NIC team on my Windows Server 2019 with three network adapters. After finishing the configuration, all the adapters showed an active state, but I couldn't establish an internet connection. I statically assigned the IP address, DNS and gateway, and the issue still persists. Is anyone else experiencing a similar problem? Any insights would be greatly appreciated.
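
A few checks that might narrow it down are sketched below; 'Team1' is a placeholder team name. Two common culprits with LBFO teams are an IP configuration left on a member NIC instead of the team interface, and a teaming mode (LACP vs. switch-independent) that does not match the physical switch configuration.

```powershell
# 'Team1' is a placeholder team name.
Get-NetLbfoTeam                                   # teaming mode and load-balancing algorithm
Get-NetLbfoTeamMember -Team 'Team1'               # members should all be Active
Get-NetLbfoTeamNic -Team 'Team1'

# Only the team interface should carry the IP configuration, not the member NICs.
Get-NetIPConfiguration -InterfaceAlias 'Team1'
Get-NetIPAddress | Format-Table InterfaceAlias, IPAddress

Test-NetConnection 8.8.8.8                        # gateway / Internet reachability
Test-NetConnection -ComputerName www.microsoft.com -Port 443
```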

Live Migration issue on a 4-node S2D cluster (switchless)

Hi Community, we've built a Windows Server 2022 Datacenter hyperconverged Hyper-V S2D cluster with a switchless topology and 4 nodes. Cluster connectivity is 6 x 100 GbE on each node, connecting with 2 x 100 GbE to each other node (Intel 810 with RDMA/iWARP and configured DCB), no SET teaming. VM connectivity is 4 x 10 GbE on each node (Intel 710, no RDMA) teamed into a SET. As usual in hyperconverged S2D there are no separate networks for cluster, storage, live migration, etc. In this setup there are 12 cluster networks which represent the "100 GbE RDMA full mesh", each containing two of the Intel 810 NICs.

Now to the problem: the system is performing very well, apart from one problem with live migration. Live migration is set to use the SMB connections and limited to 5 GB/s (equivalent to 40 Gbit/s). Moving a VM from one node to another takes very different amounts of time depending on the source and target node. For example, moving "Test-VM" from node1 to node2 takes 10 seconds and starts at once; node2 to node3 likewise; but moving from node3 to node4 OR back from node3 to node2 takes 1-2 minutes, with a 30-60 second delay after the migration is initiated. When the problem occurs, no errors are logged (cluster manager or system log).

After a longer investigation, I figured out the following: under Failover Cluster Manager -> Networks -> Live Migration Settings, all 12 networks are listed and checked (management and VM networks unchecked). The live migration paths represented by the first 3 or 4 networks in the list work as expected (fast); the other ones do not (very slow, but still working without errors). So the problem can be influenced here: moving all networks of a single node to the top of the list (e.g. all networks on node3) results in "normal" migration speed to all other nodes when node3 is the source of the migration. All other migrations (even back to node3) are very slow.

I've already checked the network configs on all nodes and set manual metrics for all connections and networks (the same for all SMB networks). So this seems to be a problem with the switchless design rather than a bad config on a single node, since the behaviour can only be changed by editing the order of the connections in Cluster Manager. Maybe anyone can help me with this; I'll be grateful for any tips. Thanks, VoNovo
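
Since SMB-based live migration follows the cluster's migration network order and SMB Multichannel picks interfaces per connection, comparing what each node actually reports during a slow migration might help pin this down. A sketch is below with S2DCLUS as a placeholder cluster name; MigrationNetworkOrder and MigrationExcludeNetworks are, as far as I know, private properties of the 'Virtual Machine' resource type, so treat that part as an assumption to verify.

```powershell
# Placeholder cluster name; inspect what the cluster advertises for SMB-based
# live migration in the switchless mesh.
Get-ClusterNetwork -Cluster S2DCLUS | Sort-Object Metric |
    Format-Table Name, Role, Metric, Address

# Network order used for live migration (private properties of the
# "Virtual Machine" resource type):
Get-ClusterResourceType -Cluster S2DCLUS -Name 'Virtual Machine' |
    Get-ClusterParameter -Name MigrationNetworkOrder, MigrationExcludeNetworks

# On each node, while a slow migration is running, check which interfaces SMB
# Multichannel actually selected and whether RDMA is in play:
Get-SmbMultichannelConnection
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable, LinkSpeed
Get-SmbBandwidthLimit -Category LiveMigration
```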