Jul 13 2018 04:33 PM
Dear community,
I want to pass a Quadro GPU to a container on a Windows Server host.
What I want to change in our build infrastructure is the number of idling machines. We have Linux machines and Windows machines, with different Linux distributions, and some of them are idle most of the time. So I thought about moving everything, Linux and Windows, into containers and running the GPU tests (CUDA and OpenGL) there. Since Linux containers don't have access to the GPU, I was wondering whether I can assign the GPU to any of my containers at runtime?
If this is possible, can you please explain how this is done?
I also asked in the docker forums: https://forums.docker.com/t/gpu-in-container/54058
Aug 09 2018 08:25 AM
Hey Daniel,
I appreciate your question. At this time, we do not support passing GPUs to Windows Server containers or Linux containers on Windows. You stated this would be used for running GPU-related tests; I am curious to know what your expectation would be around how the GPU gets shared.
You called out DDA: would you want to give the GPU exclusively to a container, as DDA would imply? Would you rather share it amongst several containers? Share the GPU between the container and the host? Or something else entirely?
I'm also curious to know the percentage split between Windows Server containers and Linux containers for these GPU test workloads. Assuming Linux containers on Windows could support the GPU, would that be better for your use cases? Do you have a 50/50 split between OS types, etc.?
Aug 09 2018 12:17 PM
Hi Craig,
Thank you for your reply. I also found your answer on UserVoice about GPU support for WSL and tweeted at you, but I guess you don't use Twitter that much :).
So, more about the issue I'm currently facing. I assume DDA would be one possible solution. My expectation would be that I exclusively hand the GPU over to one container, which then runs the GPU tests. Once the tests are done, I would take the GPU away from that container and assign it to the next one, so only one container has the GPU at a time (see the sketch below). This would be sufficient for my scenario. It would also ensure that I test the actual driver for the OS installed in the container, right?
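Just to illustrate what I mean by "only one container has the GPU at a time", here is a minimal sketch of how I would drive the rotation. The image names are hypothetical, and of course no GPU passthrough flag exists for containers on Windows today, so this only shows the intended serialization:

```python
import subprocess

# Hypothetical test images: one Windows container plus several
# Linux distributions, as described above.
TEST_IMAGES = [
    "gpu-tests:windowsservercore",
    "gpu-tests:ubuntu",
    "gpu-tests:centos",
]

def run_gpu_tests(image):
    # Run one test container to completion before starting the next,
    # so the single GPU would only ever be attached to one container.
    # A real GPU passthrough flag would go here; none exists today.
    return subprocess.run(["docker", "run", "--rm", image]).returncode

for image in TEST_IMAGES:
    rc = run_gpu_tests(image)
    status = "PASS" if rc == 0 else f"FAIL ({rc})"
    print(f"{image}: {status}")
```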
As an example of how this would look if GPU access were possible in containers: each machine runs a WindowsServerCore container and around 4 to 5 containers with different Linux distributions. The number of Linux containers might increase over time, and some might be dropped in the future, but I guess there will never be more than 7 different distributions. We might also have more images with different GPU drivers in the future, but that isn't necessary at the moment.
This means I could run any test on any machine at any time, whereas right now some machines are idling because, for example, older branches for older Linux distributions don't need all the fixes and trigger builds less frequently.
I hope this gives you an idea of how I would use GPU access in containers. Are there any other ways you could think of to get the same benefits as with DDA? Any other suggestions?
There is also nvidia-docker. Unfortunately, it is only available on Linux. I found a request on GitHub where someone asked for a Windows version, but their statement was that they would need to make use of DDA, and that feature is only available on Windows Server, not Windows 10. So I replied to that request and asked whether they are working on a Windows version, since the latest Windows 10 supports DDA. I haven't received an answer yet.
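For comparison, this is roughly what the nvidia-docker 2 flow looks like on a Linux host. This is just a sketch; it assumes the NVIDIA runtime is registered with the Docker daemon and an NVIDIA driver is installed on the host:

```python
import subprocess

# With nvidia-docker 2 on a Linux host, the "nvidia" runtime exposes
# the host GPU to the container; running nvidia-smi inside the
# container should then list the Quadro card.
subprocess.run([
    "docker", "run", "--runtime=nvidia", "--rm",
    "nvidia/cuda:9.0-base",  # official CUDA base image on Docker Hub
    "nvidia-smi",
])
```

This is exactly the kind of GPU access I would like to have for both the Windows and the Linux containers in our build infrastructure.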