Two-node Azure Local cluster updated to different versions
Yes, this can happen when one node completes the cumulative update successfully while the other node either:
failed during orchestration,
exited maintenance mode early,
lost connectivity,
or had the update staged but not committed.
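Before repairing anything, it is worth checking whether an orchestrated updating run is still in flight. A minimal sketch, assuming classic Cluster-Aware Updating is the orchestrator and a hypothetical cluster name CLUSTER1:

# Show any in-progress CAU run (errors if none is running)
Get-CauRun -ClusterName CLUSTER1
# Show the last completed run, including per-node results
Get-CauReport -ClusterName CLUSTER1 -Last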
Your versions indicate:
26100.32690 → updated node (latest CU applied)
26100.32522 → lagging node (still on the previous CU)
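To see exactly where each node stands, the registry UBR value gives the full revision. A minimal sketch, assuming the node names NODE1 and NODE2:

# CurrentBuild is the base build (26100); UBR is the revision (32690 vs 32522)
Invoke-Command -ComputerName NODE1, NODE2 -ScriptBlock {
    Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' |
        Select-Object CurrentBuild, UBR
}
# PowerShell remoting tags each result with PSComputerName, so you can tell the nodes apart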
In Azure Local / Azure Stack HCI clusters, mixed patch levels are tolerated only temporarily, while an orchestrated update run is in progress. Once that run breaks, the cluster update service may refuse to continue because its version-consistency checks fail.
The good news is that you normally do NOT need to downgrade the updated node.
The usual approach is to manually bring the lagging node up to the same build.
I would recommend:
Pause and drain the older node:
Suspend-ClusterNode -Name NODE2 -Drain
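Before touching updates, confirm the drain actually completed. A quick check, assuming the lagging node is NODE2:

# The node should report State = Paused once the drain finishes
Get-ClusterNode -Name NODE2
# No roles should still be owned by the paused node
Get-ClusterGroup | Where-Object OwnerNode -eq 'NODE2'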
Verify the currently installed updates:
Get-HotFix
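To see which updates actually diverge between the nodes, compare the two hotfix lists. A minimal sketch, again assuming NODE1 and NODE2:

$n1 = Get-HotFix -ComputerName NODE1
$n2 = Get-HotFix -ComputerName NODE2
# KBs present on only one node show up with <= / => side indicators
Compare-Object $n1.HotFixID $n2.HotFixID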
Manually install the missing cumulative update (KB5082063) on the older node. You can:
use Windows Update,
download it from the Microsoft Update Catalog,
or install the .msu manually (see the sketch after this list).
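If you go the manual route, the install itself is a one-liner. A minimal sketch, with a hypothetical path and file name for the KB5082063 package downloaded from the Catalog:

# /quiet /norestart lets you control the reboot yourself
wusa.exe C:\Updates\windows11.0-kb5082063-x64.msu /quiet /norestart
# Alternative if wusa is blocked by policy (newer DISM builds accept .msu directly):
# DISM /Online /Add-Package /PackagePath:C:\Updates\windows11.0-kb5082063-x64.msu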
Reboot the node completely.
Verify the build now matches:
winver
or:
Get-ComputerInfo | Select OsName,OsVersion,OsBuildNumber
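To confirm the CU actually took, check both the KB and the full revision; note that Get-ComputerInfo's OsBuildNumber reports only the base build (26100) without the revision:

# The KB from this thread should now be listed
Get-HotFix -Id KB5082063
# CurrentBuild plus UBR gives the full 26100.xxxxx build
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' | Select-Object CurrentBuild, UBR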
Resume the node:
Resume-ClusterNode -Name NODE2
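If you want the roles that were drained away to move back immediately, Resume-ClusterNode can fail them back in the same step:

# Bring the node online and immediately fail back the roles it previously owned
Resume-ClusterNode -Name NODE2 -Failback Immediate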
Finally validate cluster health:
Test-Cluster
Get-ClusterNode
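On a two-node Azure Local cluster the storage pool also has to resync once the node is back. A quick check, assuming Storage Spaces Direct underneath:

# Repair/resync jobs should run and then drain to none
Get-StorageJob
# All virtual disks should return to Healthy before you call it done
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus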
Also check whether any update orchestration service is stuck:
Get-Service *update*
Get-Service *cluster*
Then review the related event logs:
Microsoft-Windows-ClusterAwareUpdating
Microsoft-Windows-WindowsUpdateClient
FailoverClustering
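You can pull recent errors from those channels without opening Event Viewer. A minimal sketch; exact channel names vary by build, so list them first:

# Find the exact log names present on this build
Get-WinEvent -ListLog *ClusterAware*, *WindowsUpdateClient*, *FailoverClustering* | Select-Object LogName, RecordCount
# Then pull the most recent errors from a channel found above, e.g.:
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-FailoverClustering/Operational'; Level = 2 } -MaxEvents 20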
One important note:
Do NOT attempt to uninstall the newer CU from the updated node unless Microsoft specifically advises it. In most cases, bringing the older node forward is the supported recovery path.
If the older node refuses the CU because of cluster validation/version checks, you may need to (see the sketch after this list):
temporarily evict the node,
update it standalone,
then rejoin it to the cluster.
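A minimal sketch of that evict/rejoin flow, run from the healthy node and assuming the lagging node is NODE2; treat it as the last resort it is, and on newer Azure Local deployments check the documented add-node flow first, since node lifecycle may be managed by the orchestrator:

# Evict the lagging node from the cluster
Remove-ClusterNode -Name NODE2
# ...patch and reboot NODE2 standalone, then rejoin it:
Add-ClusterNode -Name NODE2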
But usually manual CU installation on the lagging node resolves the mismatch.