VMQs and Windows Server 2019 cluster

We have a pair of identical servers running Windows Server 2019. They are clustered for Hyper-V and service our remote desktop infrastructure. Each server has two onboard 1 GbE NICs, a dual-port Intel X550-T2 10 GbE NIC, and a quad-port Intel i350-T4 1 GbE NIC.

Our virtual switches are shared by multiple VMs and generally work well until a VM is moved to one of the servers. Server A never logs these errors. Server B logs one every time a VM is moved to it, but only when the VM's vNIC is attached to a switch bound to a port on the i350 card. The error never appears on the onboard NICs or the 10 GbE NIC.
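
For reference, this is roughly how I've been comparing queue allocation on the two hosts (a minimal PowerShell sketch; 'NODE-B' stands in for our actual partner host name):

    # List VMQ capability and queue budget for every physical adapter
    Get-NetAdapterVmq | Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors, NumberOfReceiveQueues

    # Show which VM currently holds each hardware queue
    Get-NetAdapterVmqQueue | Format-Table Name, QueueID, VmFriendlyName

    # Run the same checks on the cluster partner for comparison
    Invoke-Command -ComputerName 'NODE-B' -ScriptBlock {
        Get-NetAdapterVmq
        Get-NetAdapterVmqQueue
    }

If I'm reading the error right, the i350 exposes far fewer hardware queues per port than the X550, so it is much easier to exhaust when several vNICs land on the same port.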

Most of the time the error appears to have no effect, but occasionally it completely hangs every VM attached to the particular i350 port. The VM hangs on shutdown, hangs while being moved to the cluster partner, and blocks other VMs from being moved. This persists until the server is rebooted.
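
When a VM gets stuck like that, this is how I've been identifying everything connected to the affected switch (a sketch; 'vNIC2' is the switch friendly name from the error below):

    # List every VM network adapter connected to the affected virtual switch
    Get-VM | Get-VMNetworkAdapter |
        Where-Object { $_.SwitchName -eq 'vNIC2' } |
        Select-Object VMName, SwitchName, Status, MacAddress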

My question is: are VMQs necessary? The 10 GbE NIC ports default to having them off, and they appear to work fine. If I turn them off, I assume it's best to turn them off on both servers, correct?
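
If turning them off is the right call, I assume it would look something like this on each node (a sketch; the -Name values are placeholders for however the i350 ports are named on our hosts):

    # Disable VMQ on the i350 ports; repeat on the cluster partner so both
    # hosts present identical settings to live migration
    Disable-NetAdapterVmq -Name 'Ethernet 3'
    Disable-NetAdapterVmq -Name 'Ethernet 4'

    # Reverse later with Enable-NetAdapterVmq if needed

Alternatively, VMQ can be turned off per-VM by setting VmqWeight to 0 (Set-VMNetworkAdapter -VMName 'SomeVM' -VmqWeight 0, where 'SomeVM' is a placeholder), so a given vNIC stops requesting a hardware queue without touching the physical adapter.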

This is the error:

Source: Hyper-V-VMSwitch
Event ID: 113
Level: Error
User: NT VIRTUAL MACHINE\<VM Hyper-V ID>
Message: Failed to allocate VMQ for NIC 035A5989-59D8-464B-9704-D8AEA45D558D--1262076C-2B37-4EC4-927C-0523947B3942 (Friendly Name: Network Adapter) on switch 01E94FE8-CF1A-4E7D-9C44-F293C0A59278 (Friendly Name: vNIC2). Reason - Maximum number of VMQs supported on the Protocol NIC is exceeded. Status = Insufficient system resources exist to complete the API.
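
In case it helps anyone reproduce the pattern, this is how I pull the recent occurrences out of the event log (a sketch; I believe the channel is named as below, but the -ListLog query will confirm the exact name on a given host):

    # Confirm the exact VMSwitch event channel name on this host
    Get-WinEvent -ListLog *VMSwitch*

    # Pull the 20 most recent 'Failed to allocate VMQ' events (ID 113)
    Get-WinEvent -MaxEvents 20 -FilterHashtable @{
        LogName = 'Microsoft-Windows-Hyper-V-VMSwitch-Operational'
        Id      = 113
    } | Format-Table TimeCreated, Message -AutoSize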
