clustering
148 Topics

Problems with file server role in Windows failover cluster
I have a Windows failover cluster that consists of 3 virtual servers running Windows Server 2019, with 2 file server roles on it, so currently I have one file server role on each server and one server in reserve. Each file server role has its own shared disks and a connection to an Azure file share through Azure File Sync version 20.0. Recently I have faced a problem where, during normal operation, a file server role becomes stuck and partially unresponsive; the only solution is to reboot the node the role is currently running on. The only predictor of this problem I've been able to find is Event ID 5783, which relates to NETLOGON: "The session setup to the Windows Domain Controller for the domain is not responsive." Maybe somebody has faced a similar problem or has ideas about what it might be or how to fix it; I'd be glad to hear about them.
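Since Event ID 5783 points at the node's NETLOGON secure channel to a domain controller, one first diagnostic step is to test (and, if necessary, reset) that channel from the affected node. A minimal sketch; contoso.com and dc01.contoso.com are placeholders for your own domain and a known-good DC:

# Show which domain controller the node's secure channel currently uses
nltest /sc_query:contoso.com

# Verify the secure channel from PowerShell
Test-ComputerSecureChannel -Verbose

# If the test fails, reset the channel against a specific, known-good DC
Test-ComputerSecureChannel -Repair -Server dc01.contoso.com -Verbose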
Active Directory documentation and best practices

Dear all, I have to fix a critical situation. I have recovered a DC VM on an old Hyper-V host; this DC is corrupted and does not appear to be working properly. I have installed 2 new physical hosts (a cluster) with Windows Server 2025 Datacenter and configured the Hyper-V role, networks, etc. I have also prepared 2 new Windows Server 2025 VMs (outside the cluster) that I would like to set up as the new DC01 and DC02. I assume it is better to configure the DCs outside the cluster so that the DC role remains available even if the cluster is not running (feel free to correct me if I'm wrong; I would appreciate it). I need links to videos or documents explaining how to correctly configure the new DCs. The old DCs are running Windows Server 2012 R2 Standard. Is it possible to migrate to Windows Server 2025 Datacenter? Note that at the end I have to assign the same IP addresses to the new DCs. Thank you very much for your support. BR, Alex.
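There is no in-place upgrade from a 2012 R2 DC here; the usual route is to join the new 2025 VMs to the existing domain, promote them as additional DCs, transfer the FSMO roles, and then demote the old DC (after which the old IP addresses can be reassigned). Note that, as far as I know, Windows Server 2025 DCs require at least a Windows Server 2016 forest functional level, so an intermediate step may be needed. A minimal sketch; contoso.com and DC01 are placeholders:

# On the new Windows Server 2025 VM: install AD DS and promote it as an additional DC
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomainController -DomainName "contoso.com" -InstallDns -Credential (Get-Credential)

# After replication completes, move all five FSMO roles to the new DC
Move-ADDirectoryServerOperationMasterRole -Identity "DC01" -OperationMasterRole SchemaMaster,DomainNamingMaster,PDCEmulator,RIDMaster,InfrastructureMaster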
HyperV 12 on Windows 2025 ready for production?

I'm fighting with three Hyper-V servers in a failover cluster. I understand that you now have to use SET teaming and that LBFO is no longer supported. I also understand that SET is still quite buggy. I have created three teams with two adapters in each, but it confuses me how they appear in "Network and internet > Ethernet" vs. "Advanced network settings". I have set the vEthernet adapters to the static IPs that I need, but the way the NICs show up seems random: some of them show the IP of the vEthernet adapters, and they are all set to DHCP. The network seems very unstable if I use more than one NIC in a team, and as far as I can google, I am not the only one having these issues. Is it correct that SET networking does not work stably with more than 1 NIC? If one needs an HA setup, how should NIC teaming be handled correctly? Can Hyper-V on Windows Server 2025 be used in a professional live setup? Thanks...
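For reference, a SET team is created as part of the Hyper-V virtual switch rather than as a standalone team, and host IPs belong on management vNICs, not on the physical members; the member NICs then intentionally show no IP configuration of their own. A minimal sketch, with the NIC names, switch name, and IP as placeholders:

# Create a Switch Embedded Teaming (SET) switch over two physical NICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add a dedicated management vNIC and put the static IP on it, not on the physical NICs
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "SETswitch"
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.1.10 -PrefixLength 24

# Switch-independent teaming with Hyper-V port load balancing is the common choice for SET
Set-VMSwitchTeam -Name "SETswitch" -LoadBalancingAlgorithm HyperVPort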
Workgroup Failover Cluster backup service account

Hello, we have built a workgroup Hyper-V cluster. Live migration works well when draining a node, but the only account we can use is the one used at cluster creation. I found some posts about creating the same user/password on both nodes and granting the account full cluster access, but this account gets access denied in the cluster manager. I would like to have a specific account for backup and also a nominative account for administration. I just read Orin Thomas's post, but it did not help. Has anyone ever been able to use a different local account to manage a workgroup cluster? Or, to achieve this, must I stick to AD-joined servers? Thanks for any help. Jean Marie
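For what it's worth, the usual recipe for an additional local admin account on a workgroup cluster has four parts: an identical user/password on every node, local Administrators membership, the LocalAccountTokenFilterPolicy registry value (so remote connections keep the full admin token), and Grant-ClusterAccess on the cluster. A sketch, with clusadmin2 as a placeholder name; run the first two blocks on every node:

# On every node: create the identical local account and make it a local administrator
$pwd = Read-Host -AsSecureString -Prompt "Password"
New-LocalUser -Name "clusadmin2" -Password $pwd
Add-LocalGroupMember -Group "Administrators" -Member "clusadmin2"

# On every node: let remote connections from local accounts keep their admin token
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" -Name "LocalAccountTokenFilterPolicy" -Value 1 -PropertyType DWord -Force

# Once, on any node: grant the account full access to the cluster
Grant-ClusterAccess -User "clusadmin2" -Full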
Dynamic processor compatibility mode

Hi, I was reading up on the new dynamic processor compatibility mode in Windows Server 2025 and have been doing some testing, and I'm not happy with the result. We have about 400 blades, which comes to about 8 different CPU types across those blades. As our customers have very dynamic demands, we're constantly resizing clusters, and the blades give me a lot of flexibility in this. In the past, the CPU compatibility setting gave us even more flexibility to live migrate between different CPU families, but it also set the CPU back to 1970 levels feature-wise. Now, with the updated dynamic processor compatibility mode, many more CPU functions are exposed, which is good. The bad thing, though, is that the CPU level of the cluster is dynamic, and my VMs can get different CPU features with every power off / power on. For example, when I start a new cluster with some fresh blades I just received from my supplier, the cluster will determine the common CPU level to be the latest (say XYZ). The VMs I run on it all have CPU compatibility enabled, so they see level XYZ. Now the customer asks for some quick expansion of the cluster, and I have to add an older type of blade. My own testing shows that the cluster now determines the common level to be somewhat lower, say RST. The VMs that are already running keep seeing XYZ (as expected), but:
- they can't live migrate to the older host
- on the next power off and restart, they fall back to level RST

This gives me two major issues. One is that I can't update my clusters anymore without VM downtime, since I can't move VMs to the older hosts. The bigger issue is that VMs can sometimes have, and sometimes not have, a specific CPU feature. I would love an option to manually set a CPU feature set for a cluster: I would take my oldest blade, get its feature set, and apply it on all clusters, and when that blade type is gone, I'd update all clusters to a new lowest level. Also, I can't find anywhere how to see, through PowerShell or the GUI, what the common CPU feature set for a cluster is. Love to hear everyone's thoughts about this.
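For reference, the per-VM choice between the old static floor and the new dynamic cluster level is made with Set-VMProcessor; a sketch follows, assuming Windows Server 2025's -CompatibilityForMigrationMode parameter, with VM01 as a placeholder. As far as I can tell, there is no supported way to pin a custom feature set, which is exactly the gap described above:

# Dynamic mode: the VM gets the common feature set calculated across current cluster nodes
Set-VMProcessor -VMName "VM01" -CompatibilityForMigrationEnabled $true -CompatibilityForMigrationMode CommonClusterFeatureSet

# Static mode: the old minimal, generation-independent feature set (stable but limited)
Set-VMProcessor -VMName "VM01" -CompatibilityForMigrationEnabled $true -CompatibilityForMigrationMode MinimumFeatureSet

# Inspect the current setting for a VM
Get-VMProcessor -VMName "VM01" | Select-Object CompatibilityForMigrationEnabled, CompatibilityForMigrationMode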
PowerShell counterpart for Failover Cluster Manager "Live Migration Settings"

In Failover Cluster Manager, there's "Live Migration Settings", where I can define which cluster networks I want to carry live migration traffic. Even after some research, I cannot find a PowerShell cmdlet that lets me do the same...
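As far as I can tell there is no dedicated cmdlet; the dialog appears to map to the MigrationExcludeNetworks and MigrationNetworkOrder private properties of the "Virtual Machine" cluster resource type, which can be read and set with the cluster parameter cmdlets. A sketch, assuming a cluster network named "Management" should be excluded from live migration:

# List cluster networks with their IDs and roles
Get-ClusterNetwork | Format-Table Name, ID, Role

# Exclude a network from live migration (use a semicolon-separated list of IDs for several)
$excluded = (Get-ClusterNetwork -Name "Management").ID
Get-ClusterResourceType -Name "Virtual Machine" | Set-ClusterParameter -Name MigrationExcludeNetworks -Value $excluded

# Inspect the current live-migration network settings
Get-ClusterResourceType -Name "Virtual Machine" | Get-ClusterParameter | Where-Object Name -like "Migration*"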
iSCSI Target servers - high availability?

Hello, I have two VMs configured as iSCSI target servers with a 600 GB VHDX file on each, and two VMs configured as file servers in a failover cluster. The iSCSI servers should serve out the data that is shared by the failover cluster file server. I would also like to configure the iSCSI target servers in a high-availability mode, so that they replicate their data and, if one of the iSCSI target servers goes down, the shares and data are still accessible. How would I go about doing this? So far I've tried setting up Storage Replica, but since I only have one site, it doesn't let me replicate from the iSCSI disk that currently has data to the second one. I also tried the iSCSI Target role in Failover Cluster Manager, but that puts me back in the same situation: if the storage server holding the virtual iSCSI disk goes down, I lose access to all data.
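For what it's worth, Storage Replica's server-to-server mode does not require two sites, but it does require a separate data volume and log volume on each server, which is a common reason the setup is refused. A minimal sketch with hypothetical server names (ISCSI01/ISCSI02) and volume letters (D: holds the iSCSI VHDX, L: is the Storage Replica log volume):

# Check that both servers and their volumes are eligible for replication first
Test-SRTopology -SourceComputerName "ISCSI01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" -DestinationComputerName "ISCSI02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" -DurationInMinutes 5 -ResultPath "C:\Temp"

# Create the server-to-server replication partnership
New-SRPartnership -SourceComputerName "ISCSI01" -SourceRGName "RG01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" -DestinationComputerName "ISCSI02" -DestinationRGName "RG02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"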
BLOG: Windows Server / Azure Local keeps setting Live Migration to 1 - here is why

Affected products: Windows Server 2022, Windows Server 2025; Azure Local 21H2, 22H2, 23H2; Network ATC

Dear Community, I have seen numerous reports from customers running Windows Server 2022 or Azure Local (Azure Stack HCI) that the live migration setting is constantly changed back to 1 on each Hyper-V host, as mirrored in PowerShell and the Hyper-V host settings. One customer had previously set the value to 4 via PowerShell, so he could prove it had been a different value at a certain time. At first I didn't research intensively why the configuration altered over time, but then I stumbled across the cause, quite accidentally, while fetching all parameters of Get-Cluster. According to an article, an LCU back in September 2022 changed the default behaviour and allows the number of parallel live migrations to be specified at cluster level. The new live migration default appears to be 1 at cluster level, and this forces the values on the Hyper-V nodes to 1 accordingly. In contrast to the cmdlet documentation, the value is not 2, which would make more sense. This is quite unknown, as it is not documented in the LCU KB5017381 itself, but only referenced in the documentation for the PowerShell cmdlet Get-Cluster. Frankly, none of these are areas customers or partners check regularly, so such relevant feature changes are easy to miss.

"Beginning with the 2022-09 Cumulative Update, you can now configure the number of parallel live migrations within a cluster. For more information, see KB5017381 for Windows Server 2022 and KB5017382 for Azure Stack HCI (Azure Local), version 21H2. (Get-Cluster).MaximumParallelMigrations = 2 The example above sets the cluster property MaximumParallelMigrations to a value of 2, limiting the number of live migrations that a cluster node can participate in. Both existing and new cluster nodes inherit this value of 2 because it's a cluster property. Setting the cluster property overrides any values configured using the Set-VMHost command."

Network ATC in Azure Local 22H2+ and Windows Server 2025+: When using Network ATC in Windows Server 2025 and Azure Local, it sets the live migration limit to 1 by default and enforces this across all cluster nodes, disregarding the cluster setting above or local Hyper-V settings. To change the number of live migrations, you can specify a cluster-wide override in Network ATC; a sketch follows below.
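A sketch of that cluster-wide override, assuming the documented Network ATC override object and its MaximumVirtualMachineMigrations property:

# Build a global cluster override object and set the desired number of parallel live migrations
$override = New-NetIntentGlobalClusterOverrides
$override.MaximumVirtualMachineMigrations = 2

# Apply the override cluster-wide; Network ATC pushes it to every node and corrects drift
Set-NetIntent -GlobalClusterOverrides $override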
Conclusion: The default values for live migration have been changed. The global cluster setting, or Network ATC, forces them down to the Hyper-V hosts on Windows Server 2022+ / Azure Local nodes and ensures consistency. Previously we thought this happened after opening the cluster settings in Windows Admin Center (WAC), but that was not the initial cause.

Finding references: Later that day, as my interest in this change grew, I found an official announcement. In agreement with another article on optimizing live migrations, the default value should be 2, but for some reason at most customers, even on fresh installations and clusters, it is set to 1.

TLDR:
1. Stop bothering with changing the live migration setting manually, via PowerShell, or via DSC / policy.
2. Today and in the future, train your muscle memory to change live migration at cluster level with Get-Cluster, or via Network ATC overrides. These are forced down to all nodes almost immediately and are corrected automatically if there is any configuration drift on a node.
3. Check and set the live migration value to 2, as per the default, and follow these recommendations: Optimizing Hyper-V Live Migrations on an Hyperconverged Infrastructure | Microsoft Community Hub and Optimizing your Hyper-V hosts | Microsoft Community Hub.
4. You can stop blaming WAC or overeager colleagues for changing the LM settings to undesirable values over and over. Starting with Windows Admin Center (WAC) 2306, you can set the live migration settings at cluster level in Cluster > Settings.

Happy Clustering! 😀
Feature Installation Error

I am facing this issue on Windows Server 2019 Standard. I have also tried to solve it by selecting the sources\sxs path from the OS media, but I still get the same error. I mistakenly removed the .NET Framework from this server, and since then I have been facing this issue. Please help me to solve it.
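For reference, reinstalling the .NET Framework payload from mounted installation media usually looks like the following; D: is a placeholder for the ISO drive letter, and the media build should match the server's patch level, since a mismatch is a frequent cause of this error:

# Reinstall .NET Framework 3.5 directly from the side-by-side store on the install media
Dism /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:D:\sources\sxs

# Or the Server Manager equivalent in PowerShell
Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs

# If component store corruption is suspected, repair it from the same media first
Dism /Online /Cleanup-Image /RestoreHealth /Source:WIM:D:\sources\install.wim:1 /LimitAccess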