GPU
3 Topics

Windows Server 2022 Standard - Limitation in RDP sessions with active GPU on Hyper-V DDA?
Hello community, we have a test setup and are trying to find out whether there is any limit on the number of users when a GPU is installed on a Windows Server 2022 single session host.

Our test setup:
- 32-core AMD EPYC CPU
- 256 GB RAM
- 2x NVIDIA RTX A5000
- 2 TB NVMe storage
- Hyper-V, GPU passed through to one VM via DDA

We created 30 test users and set the following group policies (a registry-level sketch of these settings follows after this post):
- Disabled the UDP protocol, so only TCP is used
- Disabled the WDDM graphics display driver
- Set the physical graphics adapter to be used for all RDP sessions

But currently we hit this issue: DWM.exe crashes once the 19th session opens, and users 20, 21, 22, 23, and so on can never connect. A user who disconnects and tries to reconnect gets an error while the RDP session is starting. It makes no difference whether one or two GPUs are attached to the VM, and when we check the hardware usage there are still plenty of free resources.

Is there any limitation, or any idea what we can do? Splitting the workload into several smaller VMs is not an option for us because of the software we need to run. When we deactivate the setting "Set the physical graphics adapter to be used for all RDP sessions", all users can log in to the server and the GPUs appear to be used, perhaps for browsers, Office, etc., but OpenGL, DirectX, etc. are not available, which is bad.

I hope you can help here and explain whether there are any relevant settings or limitations. Thanks!
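For reference, here is a minimal sketch of how the three group policies listed above are commonly set at the registry level. The policy key and value names (SelectTransport, fEnableWddmDriver, bEnumerateHWBeforeSW) are the usual Group Policy mappings and are not stated in the post itself, so treat them as assumptions and verify against gpedit.msc before relying on them.

```powershell
# Sketch only: assumed registry equivalents of the Group Policy settings listed above.
$rds = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
New-Item -Path $rds -Force | Out-Null

# "Select RDP transport protocols" -> Use only TCP (disables UDP)
Set-ItemProperty -Path $rds -Name 'SelectTransport' -Value 1 -Type DWord

# "Do not use WDDM graphics display driver for Remote Desktop Connections" -> Enabled
Set-ItemProperty -Path $rds -Name 'fEnableWddmDriver' -Value 0 -Type DWord

# "Use hardware graphics adapters for all Remote Desktop Services sessions" -> Enabled
Set-ItemProperty -Path $rds -Name 'bEnumerateHWBeforeSW' -Value 1 -Type DWord
```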
Multiple GPU Assignments to a Single Hyper-V VM with DDA

I recently configured Discrete Device Assignment (DDA) on my Windows Server with Hyper-V and successfully assigned a GPU to a virtual machine using the steps outlined in the following reference manuals:
- https://docs.nvidia.com/grid/5.0/grid-vgpu-user-guide/index.html#using-gpu-pass-through-windows-server-hyper-v
- https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda
- https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment

My setup:
- Windows Server with Hyper-V
- Multiple GPUs available (example: NVIDIA RTX A400)

What I've done: successfully assigned one GPU to a VM using DDA.
- Obtain the location path of the GPU that I want to assign to the VM: "PCIROOT(36)#PCI(0000)#PCI(0000)"
- Dismount the device:
  Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -Force
- Assign the device to the VM:
  Add-VMAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -VMName Debian12_Dev
- Power on the VM; the guest OS (Debian) is able to use the GPU.

Now I want to add multiple GPUs to a single VM using Hyper-V DDA. I tried the following:
- Obtain the location paths of GPU1 and GPU2 that I want to assign to the VM:
  - GPU1 device location path: `PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)`
  - GPU2 device location path: `PCIROOT(36)#PCI(0000)#PCI(0000)`
- Dismount the devices:
  Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)" -Force
  Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -Force
- Assign the devices to the VM:
  Add-VMAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -VMName Debian12_Dev
  Add-VMAssignableDevice -LocationPath "PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)" -VMName Debian12_Dev
- Power on the VM, but the guest OS (Debian) identifies only one GPU.

Question: Has anyone tried adding multiple GPUs to a single VM using Hyper-V DDA? If so, what steps did you follow, and did you encounter any challenges? I'm looking to optimize GPU resources for specific workloads within a single VM and would appreciate any insights, experiences, or tips from the community. Thanks in advance!
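Not an answer from the original thread, but one detail worth checking: the Microsoft DDA planning guide linked above notes that the VM must have enough MMIO (memory-mapped I/O) space configured for all assigned devices, and with two GPUs the defaults can be too small for the second device to enumerate in the guest. Below is a minimal sketch of that preparation; the VM name comes from the post, while the MMIO sizes are placeholder assumptions that should be derived from the actual BAR requirements of your GPUs (for example via the SurveyDDA script referenced in the planning guide).

```powershell
# Sketch: prepare the VM for multiple DDA devices (sizes are illustrative assumptions).
$vm = 'Debian12_Dev'

Stop-VM -Name $vm -Force                               # the VM must be off for these changes

Set-VM -Name $vm -AutomaticStopAction TurnOff          # required for DDA
Set-VM -Name $vm -GuestControlledCacheTypes $true
Set-VM -Name $vm -LowMemoryMappedIoSpace 3GB           # 32-bit MMIO space
Set-VM -Name $vm -HighMemoryMappedIoSpace 33280MB      # 64-bit MMIO space; must cover every assigned GPU

# After dismounting and assigning both GPUs as in the post, confirm both are attached:
Get-VMAssignableDevice -VMName $vm
```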
Use DDA to pass GPU to container

Dear community, I want to pass a Quadro GPU to a container on a Windows Server host. What I want to change in our build infrastructure is the number of idling machines: we have Linux machines (several different distributions) and Windows machines, and some of them are idling most of the time. So I thought about moving everything, Linux and Windows, into containers and running the GPU tests (CUDA and OpenGL) there. Since Linux containers don't have access to the GPU, I was wondering whether I can assign the GPU to any of my containers at runtime. If this is possible, can you please explain how it is done? I also asked in the Docker forums: https://forums.docker.com/t/gpu-in-container/54058
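For context on what "assigning a GPU at runtime" usually looks like on the Docker side: on a native Linux host with the NVIDIA Container Toolkit installed, Docker 19.03+ lets you request GPUs per container with the --gpus flag, as sketched below. Whether an equivalent path exists for Linux containers on a Windows Server host, with or without DDA, is exactly the open question in this post, so take this only as a reference for the native-Linux case; the CUDA image tag is just an illustrative example.

```sh
# Sketch: GPU access for a Linux container on a *native Linux* Docker host with the
# NVIDIA Container Toolkit installed. Not confirmed for the Windows Server / DDA
# scenario described above. The image tag is illustrative.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Request a specific device instead of all GPUs:
docker run --rm --gpus '"device=0"' nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```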