clustering
How to opt out of the Windows Insider Program on Windows Server
Is it possible to opt out of the Windows Insider Program on Windows Server Core build 17723, and how can it be done from PowerShell? We currently have a 2-node hyper-converged cluster; it runs stably, but I want to upgrade it to the 1809 release.
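There is no single documented PowerShell switch for this that I know of; below is a hedged sketch of the usual manual approach. The WindowsSelfHost registry path is where Insider enrollment settings live on recent builds, but the exact value names vary by build, so treat everything here as an assumption to verify before deleting anything. Also note that getting from a preview build onto the 1809 GA bits will most likely still mean installing from the release media; clearing the enrollment mainly stops further flights.

# Hedged sketch: inspect, then clear, the Insider/flighting enrollment.
# The path and the effect of removing it are assumptions to verify on
# your own build first.
$flight = 'HKLM:\SOFTWARE\Microsoft\WindowsSelfHost'
if (Test-Path $flight) {
    # Dump the current enrollment values for review
    Get-ChildItem $flight -Recurse | Get-ItemProperty | Format-List
    # Opt out by clearing the enrollment keys, then reboot:
    # Remove-Item $flight -Recurse -Force
}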
ISSUE: Server Manager remote management error for clustered servers

Dear Windows Server Insider Team,

After a long puzzle, someone finally found a solution so that Windows Server 2022 and vNext nodes appear "online" with no errors in Server Manager. Warning: do not recompile WMI MOFs, especially when they are not broken; there is an article on techcommunity.microsoft.com about it. Refer [1].

Scenario:
Windows Server 2022
Azure Stack HCI 22H2
Windows Server vNext
Part of a Hyper-V failover cluster

Issue: Server Manager reports nodes as Online, yet indicates an error/limitation. Refer [2].

Reproducible: yes, 100% on cluster nodes.

Impact: Errors in the event log; customers might needlessly try to troubleshoot or contact support.

Workaround: Add the cluster computer account to the local Event Log Readers group on each cluster node.

References
[1] https://techcommunity.microsoft.com/t5/ask-the-performance-team/wmi-repository-corruption-or-not/ba-p/375484
[2] https://techcommunity.microsoft.com/t5/windows-server-for-it-pro/server-manager-problem-online-data-retrieval-failures-occurred/m-p/3966709/highlight/false#M10789
[3] https://techcommunity.microsoft.com/t5/windows-server-for-it-pro/server-manager-problem-online-data-retrieval-failures-occurred/m-p/3935651/highlight/true#M10567
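For anyone who wants to apply the workaround from PowerShell, here is a minimal sketch; CONTOSO\CLUS01$ is a placeholder for your cluster name object (note the trailing $, since it is a computer account):

# Fan the workaround out to every node from one session.
# CONTOSO\CLUS01$ is a placeholder for your cluster name object.
Invoke-Command -ComputerName (Get-ClusterNode).Name -ScriptBlock {
    Add-LocalGroupMember -Group 'Event Log Readers' -Member 'CONTOSO\CLUS01$'
}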
Hyper-V 2019 not finding network cards, but... they work and are there when loading the VM in 2016

I am trying to put some build 17629 servers into use to evaluate performance. For that I exported some of my development VMs, copied them over, and imported them into the new cluster. They were exported from a 2016 server: Gen 2, configuration version 8. After import they lose the network card configuration. I take the same export, copy it onto a Windows 2016 and a Windows 2019 cluster machine (just saying, maybe it is the cluster?), and import the same image on both (through simple registration):

* Windows 2016 shows me one network adapter is present.
* Windows 2019 shows me no network adapter present. Get-VMNetworkAdapter shows the same: simply no adapter present.

I start the Windows 2019 version (the guest is a 2016 Server Core) and do some command-line checking... lo and behold, the network card is still there. Something is not handling this well. Now I have a configuration I cannot change in Hyper-V (including changing switches or the VLAN ID), but the NIC is active.
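Until this is fixed, here is a hedged sketch for scripting around it after import; 'DevVM01', 'vSwitch', and VLAN 10 are placeholders for your own VM name, switch, and VLAN:

# Re-create and reconnect the adapter the import dropped (placeholders:
# DevVM01, vSwitch, VLAN 10).
$vm = Get-VM -Name 'DevVM01'
if (-not (Get-VMNetworkAdapter -VM $vm)) {
    Add-VMNetworkAdapter -VM $vm -SwitchName 'vSwitch'
    Get-VMNetworkAdapter -VM $vm |
        Set-VMNetworkAdapterVlan -Access -VlanId 10
}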
S2D performance very very poor

We're facing a problem of very slow performance with an S2D three-node cluster:

Lenovo SR650 7X05
Windows Server 2019 build 17738
384 GB RAM
4x HDD 6TB ST6000NM0115
2x NVMe 900GB PX04PMB096
2x Mellanox ConnectX-4

The only clue is this "Threshold Exceeded" warning on two of the NVMe physical disks:

Get-PhysicalDisk

DeviceId FriendlyName       SerialNumber         MediaType   CanPool OperationalStatus        HealthStatus Usage
-------- ------------       ------------         ---------   ------- -----------------        ------------ -----
0        ThinkSystem M.2 VD b344a54d77360010     Unspecified False   OK                       Healthy      Auto-Select
2002     ATA ST6000NM0115   ZAD29F2X             HDD         False   OK                       Healthy      Auto-Select
2005     PX04PMB096         8CE3_8E07_0503_B800. SSD         False   {OK, Threshold Exceeded} Healthy      Journal
3006     PX04PMB096         8CE3_8E07_0503_D600. SSD         False   {OK, Threshold Exceeded} Healthy      Journal
3004     ATA ST6000NM0115   ZAD29YNC             HDD         False   OK                       Healthy      Auto-Select
3002     ATA ST6000NM0115   ZAD29V4X             HDD         False   OK                       Healthy      Auto-Select
2006     PX04PMB096         8CE3_8E07_0503_C000. SSD         False   OK                       Healthy      Journal
1006     PX04PMB096         8CE3_8E07_0503_D200. SSD         False   OK                       Healthy      Journal
3003     ATA ST6000NM0115   ZAD29S6W             HDD         False   OK                       Healthy      Auto-Select
1004     ATA ST6000NM0115   ZAD29SE4             HDD         False   OK                       Healthy      Auto-Select
2004     ATA ST6000NM0115   ZAD29VAR             HDD         False   OK                       Healthy      Auto-Select
2003     ATA ST6000NM0115   ZAD29SMK             HDD         False   OK                       Healthy      Auto-Select
3001     ATA ST6000NM0115   ZAD29YL6             HDD         False   OK                       Healthy      Auto-Select
2001     ATA ST6000NM0115   ZAD29YK2             HDD         False   OK                       Healthy      Auto-Select
1003     ATA ST6000NM0115   ZAD29Z0A             HDD         False   OK                       Healthy      Auto-Select
1001     ATA ST6000NM0115   ZAD06HEJ             HDD         False   OK                       Healthy      Auto-Select
3005     PX04PMB096         8CE3_8E07_0503_D100. SSD         False   OK                       Healthy      Journal
1005     PX04PMB096         8CE3_8E07_0503_D300. SSD         False   OK                       Healthy      Journal
1002     ATA ST6000NM0115   ZAD29YQ5             HDD         False   OK                       Healthy      Auto-Select

Testing the disks with the Lenovo SSD tool I see that the Life Remaining of the disks is still at 99%:

(c)Copyright Lenovo 2016. Portions (c)Copyright IBM Corporation.
SSDCLI -- Display SMART Info v:7.3.2 [Tue Jan 9 15:24:37 2018]
-------------------------------------------------------------------------
1 PN:00YK145-01GT679 SN:S3YAM3EN FW:300BT11L
  Number bytes written to SSD: 98028.9GB
  Number bytes supported by warranty: 17520000GB
  Life Remaining Gauge: 99%
  SSD temperature: 22(c)  Spec Max: 70(c)
  PFA trip: No  Warranty Exceed: No
2 PN:00YK145-01GT679 SN:S3YAM3FN FW:300BT11L
  Number bytes written to SSD: 96597.2GB
  Number bytes supported by warranty: 17520000GB
  Life Remaining Gauge: 99%
  SSD temperature: 22(c)  Spec Max: 70(c)
  PFA trip: No  Warranty Exceed: No
2 Device(s) Found (SATA:0 SAS:0 NVME:2)

Any help is really appreciated.
Alex
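One hedged starting point for the "Threshold Exceeded" flag is to pull the storage reliability counters for the flagged journal devices; the cmdlet is standard, though not every drive populates every field:

# Wear/error counters for the NVMe journal drives flagged above.
Get-PhysicalDisk -FriendlyName 'PX04PMB096' |
    Get-StorageReliabilityCounter |
    Select-Object DeviceId, Wear, Temperature, ReadErrorsTotal, WriteErrorsTotal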
b25110: Enabling nested virtualization is only visible in PowerShell and WAC but not accepted

I have created new VMs with WAC 2110.2 GA and enabled nested virtualization, trying to create an Azure Stack HCI cluster with AzStack HCI 21H2 on Windows Server vNext b25110.

Problem: Nested virtualization is enabled and visible in PowerShell via Get-VMProcessor, but installing Hyper-V in the nested VM is not possible. The AzStackHCI cluster creation wizard also fails with an error. Both Install-WindowsFeature and the wizard behave as if nested virtualization is not enabled.

Expected behaviour: According to the settings, the requirements are met, and as such it should be possible to install Hyper-V on the VMs.

[Screenshot: error in the WAC cluster creation wizard when trying to add the feature manually]
[Screenshot: with these commands I cannot see the enablement of the feature]
[Screenshot: according to Get-VMProcessor the feature is enabled; same in WAC, which says it is enabled]
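For comparison, this is the checklist I would expect to be sufficient on a working build; a sketch, assuming a VM named 'HCINode1' (the name is a placeholder):

# Usual nested-virtualization prerequisites: expose VT-x/AMD-V to the
# guest and enable MAC spoofing; the VM must be off for the first step.
Stop-VM -Name 'HCINode1'
Set-VMProcessor -VMName 'HCINode1' -ExposeVirtualizationExtensions $true
Get-VMNetworkAdapter -VMName 'HCINode1' |
    Set-VMNetworkAdapter -MacAddressSpoofing On
Start-VM -Name 'HCINode1'
# Then, inside the guest, check whether virtualization is actually exposed:
# (Get-ComputerInfo).HyperVRequirementVirtualizationFirmwareEnabled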
Cloud witness problem in 17093?

The cloud witness in a 17093 cluster fails when one node reboots or goes offline (even gracefully), and this takes down the whole cluster. A file share witness works. The cloud witness works on LTSC Windows Server 2016, so it appears to be an Insider preview problem.
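For anyone reproducing this, a sketch of inspecting the witness and falling back to a file share witness; 'Cloud Witness' is the default resource name and '\\fs01\witness' is a placeholder share:

# Inspect the quorum configuration and the cloud witness resource.
Get-ClusterQuorum
Get-ClusterResource -Name 'Cloud Witness' | Get-ClusterParameter
# Fall back to a file share witness until this is resolved:
Set-ClusterQuorum -FileShareWitness '\\fs01\witness'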
Feedback --- [WSFC - CNO]: Provision to change the ManagementPointNetworkType of an existing cluster

Hey all,

Please add your votes here [https://aka.ms/AA1etow]. It's about enhancing the ManagementPointNetworkType (MPNT) feature introduced in RS3, which would improve cluster connectivity over the behaviour we see today.

History: Until RS3 this wasn't configurable, and the cluster's CNO was always created with an MPNT of Singleton. From RS3 onwards it can be set to Distributed. You'll ask: what's the difference? A CNO of type Singleton needs an IP address, while a Distributed one doesn't. This is a boon for Azure-based WSFC deployments, which currently use 169.254.x.y addresses that are useless for any remote connections and do nothing but take up space in AD, and which otherwise eat an IP address in all cases.

When we create a cluster with a CNO type of Distributed instead of Singleton, the benefits are:

* Remote cluster access
* Remote PerfMon
* RDP
* Ping tests, etc.
* Saving an IP resource
* Getting rid of NetBIOS

All of the above have been tested and work well.

My ask of Microsoft: this CNO (resource) property isn't switchable. Once a cluster is created with either a Singleton (Network Name) or Distributed (Distributed Network Name) type, it can't be swapped. Please make amendments so we can flip between them.

Where will it help? Cluster migrations from on-premises to Azure, by keeping the relocated CNO usable (which it currently isn't). Here's a rough illustration of migrating an on-premises SQL AOAG to the cloud, with nodes N1 and N2 on-premises and aN1 and aN2 in Azure. Sequence of steps:

1. Start with an on-premises solution consisting of two nodes. The CNO (type Singleton / Network Name) is remotely accessible, as it is based on a usable IP address. CNO is accessible.
2. Extend the WSFC to Azure, where it will finally be migrated. Add an additional IP resource as a dependency under the CNO and assign a link-local IP (169.254.x.y) to the cluster's network name for the Azure network's subnet. As a side effect, when the core resource group is on aN1/aN2, the WSFC CAP can't be used for operations and one has to use a live node name. CNO is accessible using the on-premises IP.
3. Remove the on-premises nodes, leaving the cluster network name based on the link-local IP only. At this point the CNO, hosted on 169.254.x.y, isn't usable. CNO isn't accessible.
4. What I'm asking for is a feature addition to flip the CNO type to Distributed. CNO is accessible.

By the end, if the CNO type can be changed (to Distributed), it will be remotely accessible and available for use. This will certainly benefit everyone. The cmdlets Remove-ClusterNameAccount and New-ClusterNameAccount are useful when migrating a cluster between domains; incorporating the above into those commands would do the job. This would also help in migration testing, by allowing flips before permanent settlement.

Here are a few snaps to illustrate:

[Screenshot: CNO and dependent resources as created today, with ManagementPointNetworkType set to DNN - it saves you from the perils of setting and living with the dead load of 169.254.x.y]
[Screenshot: cluster CNO properties for Azure-based VMs - currently Singleton; RS3 onwards - Distributed]

Summary: WSFC builds on Azure using RS3 onwards will automatically use a ManagementPointNetworkType of DNN. Both on-premises and in Azure, the type can also be chosen at creation time as NN or DNN (via the CLI), but it can't be changed afterwards. My ask is a feature that lets users change ManagementPointNetworkType between NN and DNN. Comment your say on this. Thanks.
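For reference, a sketch of how the type is chosen today, at creation time only ('CLUS01', 'N1', and 'N2' are placeholders); the feature being requested is an equivalent switch for an existing cluster:

# Today ManagementPointNetworkType is set once, at cluster creation:
New-Cluster -Name 'CLUS01' -Node 'N1','N2' -NoStorage `
    -ManagementPointNetworkType Distributed
# There is no supported way to flip an existing CNO between Singleton
# and Distributed afterwards -- which is exactly the ask above.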
Project Honolulu - Build 1803 - Cluster Management

Not sure if this is my lab or not, but I have built a 2-node Hyper-V S2D cluster and I'm seeing how it works in Project Honolulu. What is interesting is that the Failover Cluster Manager works with it fine, but the Hyper-Converged Manager can't get it together; it sits at "Getting things ready". Has anyone seen this behavior? Oh, this is on the Windows Server 2016 platform.
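If I remember the guidance correctly, the Hyper-Converged view needs more than Failover Cluster Manager does on Windows Server 2016: a recent cumulative update plus the SDDC Management cluster resource type registered on the cluster. A sketch, to be verified against the current Honolulu documentation:

# Register the SDDC Management resource type the hyper-converged UI
# queries (run once, on any cluster node, after patching).
Add-ClusterResourceType -Name 'SDDC Management' `
    -dll "$env:SystemRoot\Cluster\sddcres.dll" `
    -DisplayName 'SDDC Management'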