
Security, Compliance, and Identity Blog

GPU Requirements for RemoteFX on Windows Server 2012 R2

MicrosoftSecurityandComplianceTeam
Sep 08, 2018
First published on CloudBlogs on Nov 05, 2013
Hi everyone! I’m Derrick Isoka, a program manager on the RemoteFX team responsible for the virtual GPU component. We’ve received a lot of feedback and questions about the cards we recommend for the RemoteFX virtual graphics processing unit (vGPU). In this blog post we share our recommendations to help you understand the available options and, most importantly, to help you choose the cards to consider as you deploy a VDI solution with RemoteFX vGPU.

RemoteFX GPU Requirements

Use of RemoteFX with GPU acceleration on Windows Server 2012 R2 requires a compatible graphics card. Because the servers hosting RemoteFX workloads will most likely be located in a datacenter, we recommend using passively cooled, server-class graphics cards. However, depending on your needs, it’s also acceptable to use a workstation card for testing or in small deployments. At a minimum, a graphics card used with RemoteFX must support:

• DirectX 11.0 or later
• WDDM 1.2 driver or later

Windows Server 2012 R2 provides support for DirectX 11.0, DirectCompute, and C++ AMP. Most of the latest graphics cards will also support OpenGL 4.0 and OpenCL 1.1 or later, but these APIs are currently unsupported by RemoteFX in Windows Server 2012 R2.
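As a quick way to verify the requirements above on a given host, the Hyper-V PowerShell module includes RemoteFX cmdlets that report whether Hyper-V considers an installed GPU compatible for virtualization. The sketch below assumes the Hyper-V role (with the RemoteFX feature) is installed and an elevated PowerShell session; the `"*FirePro*"` filter and the VM name `TestVM` are placeholder examples.

```powershell
# List physical GPUs and whether Hyper-V considers them RemoteFX-compatible
Get-VMRemoteFXPhysicalVideoAdapter |
    Format-List Name, CompatibleForVirtualization, Enabled

# Mark a compatible GPU for use with RemoteFX (example name filter)
Get-VMRemoteFXPhysicalVideoAdapter -Name "*FirePro*" |
    Enable-VMRemoteFXPhysicalVideoAdapter

# Attach a RemoteFX 3D video adapter to a VM (placeholder VM name)
Add-VMRemoteFx3dVideoAdapter -VMName "TestVM"
```

If `CompatibleForVirtualization` reports false, the card or its driver does not meet the DirectX 11.0 / WDDM 1.2 bar described above.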

RemoteFX-Compatible GPUs

The following list is not meant to be exhaustive but representative of the mainstream cards from NVIDIA and AMD.

| Rank | NVIDIA | AMD |
|------|--------|-----|
| Best | NVIDIA Grid:<br>1. Grid K1<br>2. Grid K2 | AMD FirePro series:<br>1. AMD FirePro™ S10000<br>2. AMD FirePro™ S9000<br>3. AMD FirePro™ S7000 |
| Better | NVIDIA Quadro:<br>1. Quadro K6000<br>2. Quadro K5000 | AMD FirePro series:<br>1. AMD FirePro™ V9800P<br>2. ATI FirePro™ V9800 |
| Good | — | AMD FirePro series:<br>1. ATI FirePro™ V8800<br>2. ATI FirePro™ V7800<br>3. AMD FirePro™ V7800P<br>4. ATI FirePro™ V5800 |

Note:

1. Best: These are server-class cards, designed and certified for VDI workloads by hardware vendors like NVIDIA and AMD. They target the best application performance, experience, and virtual machine densities. Some of these cards are particularly recommended for designer and engineering workloads (such as Autodesk Inventor or AutoCAD).
2. Better: These are workstation-class cards that provide acceptable performance and densities. They are especially capable cards for knowledge worker workloads (such as Microsoft Office or Internet Explorer).
3. Good: These are lower-end cards that provide acceptable densities for knowledge worker workloads.

Notes on Performance and Scale

In addition to a GPU’s total memory and power consumption, the performance and scale of a VDI system is determined by a variety of additional factors such as storage speed, system memory speed, amount of system memory, number of CPU cores, NUMA implementation, and CPU clock frequency. We're completing some tests for a select set of the cards mentioned in this post, and will be sharing those results in a separate blog post.

In conclusion, deploying a VDI solution requires assembling many components. We hope this blog post reduces some of that complexity and helps you select the right GPU to deliver the appropriate experience for your end users.

Version 1.0
    Marc Chevez
    Copper Contributor

    I just purchased and installed an NVIDIA Tesla P4 8 GB GPU for my HPE ProLiant DL360 Gen9 server. On the NVIDIA website it said this was a supported setup. Now that it is installed and the drivers are loaded, I am not seeing it under Physical GPUs in the Hyper-V settings. Do you have any advice?