Hyper-V
6 Topics

Why does the API NdisMEnableVirtualization return NDIS_STATUS_RESOURCES?
In Windows Server 2022, when a Hyper-V VMSwitch is created, the NDIS miniport driver calls the API NdisMEnableVirtualization. Sometimes the function returns NDIS_STATUS_RESOURCES even though plenty of memory and CPU resources remain. Why does the API return NDIS_STATUS_RESOURCES? In my case, restarting the operating system made the problem disappear. Is there a solution that does not require restarting the operating system?

Multiple GPU Assignments to a Single Hyper-V VM with DDA
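NdisMEnableVirtualization is part of the SR-IOV enablement path, so before digging into the driver it may be worth checking what the host reports about SR-IOV health. A diagnostic sketch (read-only cmdlets; no guarantee it explains this particular failure):

```powershell
# Hypothetical diagnostic sketch -- run on the Hyper-V host.
# Check the SR-IOV state the NIC driver reports for each adapter.
Get-NetAdapterSriov | Format-List Name, SriovSupport, NumVFs

# Check whether each vSwitch was able to enable IOV, and if not, why.
# IovSupportReasons often names the blocking condition in plain text.
Get-VMSwitch | Format-List Name, IovEnabled, IovSupport, IovSupportReasons
```

If IovSupportReasons lists a BIOS/chipset limitation (e.g. missing interrupt-remapping support), the NDIS_STATUS_RESOURCES may be reporting a platform resource, not host memory or CPU.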
I recently configured Discrete Device Assignment (DDA) on my Windows Server with Hyper-V and successfully assigned a GPU to a virtual machine using the steps outlined in the following reference manuals:

https://docs.nvidia.com/grid/5.0/grid-vgpu-user-guide/index.html#using-gpu-pass-through-windows-server-hyper-v
https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda
https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment

My Setup:
- Windows Server with Hyper-V
- Multiple GPUs available (example: NVIDIA RTX A400)

What I've Done: Successfully assigned one GPU to a VM using DDA:
- Obtain the location path of the GPU that I want to assign to a VM: "PCIROOT(36)#PCI(0000)#PCI(0000)"
- Dismount the device: Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -Force
- Assign the device to the VM: Add-VMAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -VMName Debian12_Dev

Power on the VM, and the guest OS (Debian) is able to use the GPU.

Now, I want to add multiple GPUs to a single VM using Hyper-V DDA. I tried the following:
- Obtain the location paths of GPU1 and GPU2 that I want to assign to the VM:
  - GPU1 device location path: `PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)`
  - GPU2 device location path: `PCIROOT(36)#PCI(0000)#PCI(0000)`
- Dismount the devices:
  Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)" -Force
  Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -Force
- Assign the devices to the VM:
  Add-VMAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -VMName Debian12_Dev
  Add-VMAssignableDevice -LocationPath "PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)" -VMName Debian12_Dev

Power on the VM, but the guest OS (Debian) identifies only one GPU.
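One thing the steps above don't cover, which the Microsoft planning guide linked earlier discusses, is the VM's memory-mapped I/O (MMIO) space: with more than one GPU assigned, the default MMIO allocation may be too small for both devices' BARs, which can leave one device non-functional in the guest. A sketch of the adjustment, with the sizes shown as example values only (the required sizes must be computed from the actual devices per the planning guide):

```powershell
# Sketch, not a confirmed fix for this setup: enlarge the stopped VM's
# MMIO space before assigning multiple GPUs. The size values below are
# examples from Microsoft's docs -- derive real values from your GPUs' BARs.
Set-VM -VMName Debian12_Dev `
       -GuestControlledCacheTypes $true `
       -LowMemoryMappedIoSpace 3Gb `
       -HighMemoryMappedIoSpace 33280Mb
```

The VM must be powered off when changing these settings, and `Get-VM | Format-List *MemoryMappedIoSpace*` can be used to verify the values took effect.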
Question: Has anyone tried adding multiple GPUs to a single VM using Hyper-V DDA? If so, what steps did you follow, and did you encounter any challenges? I'm seeking to optimize GPU resources for specific workloads within a single VM and would appreciate any insights, experiences, or tips from the community. Thanks in advance!

Looking for some guidance regarding networking configuration
Hello. We are a start-up, and we are using some very cool technology from Microsoft R&D called the Microsoft Graph Engine, not to be confused with the Microsoft Graph. The Graph Engine is a very advanced, high-performing native TCP/IP stack, and I've written a native Windows Server service wrapper around the Graph Engine application service. We are in our demo and beta phases now, and I need to configure our small but safe network so that our native WPF (.NET 5.0) and WinUI 3 clients can access these Windows services. We will deploy to Azure VMs with all the networking trimmings in place, but right now I need to configure our network so that our investors and beta clients can run the software. I've done this to save on cost, so it's necessary that we find a way to do this. I'll look at Azure Arc as a method and vehicle to move forward. This is what I've done thus far:
- Using Azure DNS to publish FQDNs to the WWW
- Using ZyXEL firewall routers combined with MikroTik 10GB switches with fiber connectivity to NAS
- Running Windows Server 2022 Datacenter, hosting 4 Hyper-V VM instances in a cluster configuration with filesystem failover
- All machines attached to Active Directory
- Graph Engine is an in-memory distributed graph-powered memory cloud with Azure Service Fabric capability, using Availability Groups for failover and load balancing
- I can't run System Center because of cost limitations, and I don't have compatible network appliances in place

How should I proceed? I know you are crazy busy; thanks for your time and attention. Tavi

HID device redirected with RDP RemoteFX does not appear at Winlogon desktop
Hello, I have successfully set up HID device (U2F keys) redirection with RemoteFX to a Windows 2019 Terminal Server (on premises): https://docs.microsoft.com/en-us/troubleshoot/windows-client/remote/usb-devices-unavailable-remotefx-usb-redirection I used the device GUID "{745a17a0-74d3-11d0-b6fe-00a0c90f57da}", and the devices appear successfully once the user reaches the desktop, so desktop applications can access the HID device.

Problem: the HID device does not appear in the HID hardware list while the RDP session is still at the Winlogon desktop. I have developed a Credential Provider that needs to interact with such a redirected U2F device to continue the user's login into the desktop session. Is there a solution for this problem? Is there a registry setting or other API for RemoteFX RDP that enables earlier device redirection, during the Winlogon desktop state? Thanks!

VM to Parent Connectivity
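For context, the client-side opt-in that the linked troubleshoot article describes (redirecting a device class by its interface GUID) looks roughly like the sketch below. This configures what the poster already has working; it is shown only to make the setup reproducible, and the registry path and value naming should be verified against the article before use. This setting does not by itself change when in the logon sequence redirection occurs:

```powershell
# Sketch of the client-side RemoteFX USB opt-in per the linked article.
# Verify the path on your client OS; the value name here ('100') is an
# arbitrary placeholder -- the data is the HID interface class GUID.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\Client\UsbSelectDeviceByInterfaces'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name '100' -PropertyType String `
    -Value '{745a17a0-74d3-11d0-b6fe-00a0c90f57da}' -Force | Out-Null
```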
I just installed a VM on a WinSrv 2019 host that also hosts DNS and AD. The VM has a dynamic IP address with Internet connectivity. However, it can't ping the parent, although it can ping other systems on the parent's subnet. Researching, I found that the VM switch was using the same address as the parent. Thinking that was the problem, I moved the server and switch to different subnet addresses. That didn't resolve the problem. Is there another setting needed to enable the connection? I want to add the VM to the domain.
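Two things commonly behind this symptom are worth checking before touching addressing again (offered as a hedged diagnostic, not a confirmed diagnosis): the host's firewall blocks inbound ICMP echo by default on Windows Server, which makes the parent unpingable even when connectivity is fine, and an external vSwitch is only reachable from the host when the management OS shares it. A sketch, with the switch name as a placeholder:

```powershell
# Diagnostic sketch -- 'ExternalSwitch' is a placeholder name.
# 1. Confirm switch type and whether the management OS shares the switch.
Get-VMSwitch | Format-List Name, SwitchType, AllowManagementOS

# 2. If the host should be reachable through the switch, enable sharing.
Set-VMSwitch -Name 'ExternalSwitch' -AllowManagementOS $true

# 3. Check the host vNIC's address against the VM's subnet.
Get-NetIPAddress -InterfaceAlias 'vEthernet (ExternalSwitch)'

# 4. Allow inbound ICMPv4 echo on the host so ping can succeed at all.
Enable-NetFirewallRule -Name 'FPS-ICMP4-ERQ-In'
```

If domain join is the real goal, also verify the VM's DNS points at the DC's address rather than an external resolver; ping is not required for the join itself.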