Jan 10 2018 10:19 AM
According to the Azure GPU VM size guide (https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu):
When I deploy an NV instance, the M60 GPU defaults to TCC mode, but I believe it should be in WDDM mode: TCC is intended for compute workloads (NC or ND sizes), while WDDM is appropriate for graphics workloads (NV sizes). In my experience, an RDP session cannot leverage the GPU while it is in TCC mode. Switching the mode to WDDM with the nvidia-smi tool:
nvidia-smi -g {GPU_ID} -dm 0
and then rebooting the VM allows the RDP session to leverage the GPU. However, this setting does not persist: if the VM is shut down and de-provisioned, the GPU is back in TCC mode the next time the VM starts and has to be switched to WDDM again. My research suggests this is because the setting is stored in the EEPROM of the GPU.
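As a workaround sketch (not a fix for the underlying default), the switch could be automated with a Windows startup task: query the current driver model via nvidia-smi's `driver_model.current` field, and only if it reports TCC, flip it to WDDM and reboot once. The nvidia-smi path and the GPU index 0 below are assumptions about a standard NV-series deployment; the task name is hypothetical.

```
@echo off
REM Hypothetical startup script (e.g. registered via Task Scheduler at boot):
REM re-apply WDDM mode after the VM is re-provisioned on fresh hardware.
REM Assumes nvidia-smi is on PATH (typically C:\Program Files\NVIDIA Corporation\NVSMI).

for /f "delims=" %%m in ('nvidia-smi --query-gpu^=driver_model.current --format^=csv^,noheader') do set MODE=%%m

if "%MODE%"=="TCC" (
    REM -dm 0 selects WDDM; a reboot is required for the change to take effect.
    nvidia-smi -g 0 -dm 0
    shutdown /r /t 0
)
REM If the GPU is already in WDDM mode, do nothing and let the boot continue.
```

The guard on the current mode matters: without it, the script would reboot the VM on every start, even when the GPU is already in WDDM mode.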
Am I missing something from a configuration standpoint to either get an RDP session to use the GPU in TCC mode or to always start the VM in WDDM mode?