DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
The preview of GPU compute is now available within WSL 2 to Windows Insiders (Build 20150 or higher)! This preview will initially support artificial intelligence (AI) and machine learning (ML) workflows, enabling professionals and students alike to run ML training workloads across the breadth of GPUs in the Windows ecosystem.
NVIDIA CUDA support
NVIDIA's CUDA is the optimized path for GPU hardware acceleration on NVIDIA GPUs, and data scientists typically rely on it to hardware-accelerate their training scripts.
NVIDIA CUDA support has been available on Windows for years. However, a variety of CUDA compute applications run only in a native Linux environment. A preview of CUDA for WSL 2 is now available. This preview includes support for existing ML tools, libraries, and popular frameworks, including PyTorch and TensorFlow, as well as all of the Docker and NVIDIA Container Toolkit support available in a native Linux environment, allowing containerized GPU workloads built for Linux to run as-is inside WSL 2.
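Once the preview driver and a CUDA-enabled framework are installed inside WSL 2, a quick sanity check confirms that the GPU is visible. The sketch below assumes PyTorch is installed (`pip install torch`); the helper name `cuda_status` is illustrative, and the function degrades gracefully when PyTorch or a GPU is absent.

```python
def cuda_status():
    """Report whether CUDA acceleration is usable from this environment.

    Returns "pytorch-missing", "cpu-only", or the CUDA device name.
    """
    try:
        import torch  # assumes: pip install torch
    except ImportError:
        return "pytorch-missing"
    if not torch.cuda.is_available():
        return "cpu-only"
    # Inside WSL 2 with the preview driver, the Windows-side NVIDIA GPU
    # is exposed to CUDA just as it would be on native Linux.
    return torch.cuda.get_device_name(0)

print(cuda_status())
```

Running this inside a WSL 2 distribution with the preview driver installed should report the name of the Windows-side GPU rather than falling back to CPU.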
Empowering educators and students through DirectML
The DirectML API enables accelerated inference for machine learning models on any DirectX 12 based GPU, and we are extending its capabilities to support training.
In addition, we intend to integrate DirectML with popular machine learning tools, libraries, and frameworks so that they can automatically use it as a hardware-acceleration backend on Windows.
DirectML works in both Windows and the Windows Subsystem for Linux (WSL). By supporting both, our intent is to fully empower students to learn in the Windows or Linux environment that works for them, on the hardware they already have.
We have a preview package of TensorFlow with a DirectML backend. Students and beginners can start with the TensorFlow tutorial models or our examples to start building the foundation for their future. In line with this, we are also engaging with the TensorFlow community through their RFC process. We plan to open source our extension of the TensorFlow code base that works with DirectML.
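As a sketch of what getting started with the preview looks like, the snippet below lists the DirectML devices TensorFlow can see. It assumes the `tensorflow-directml` preview package is installed (`pip install tensorflow-directml`, based on TensorFlow 1.15) and that DirectML adapters are surfaced through the experimental device-listing API as `DML` devices; the helper name `list_dml_devices` is illustrative.

```python
def list_dml_devices():
    """Return the names of DirectML devices visible to TensorFlow.

    Assumes the tensorflow-directml preview package; returns an empty
    list when TensorFlow is not installed.
    """
    try:
        import tensorflow as tf  # assumes: pip install tensorflow-directml
    except ImportError:
        return []
    # The preview surfaces DirectML adapters as 'DML' physical devices,
    # so any DirectX 12-capable GPU should appear here.
    return [d.name for d in tf.config.experimental.list_physical_devices("DML")]

print(list_dml_devices())
```

On a machine with a DirectX 12-capable GPU and the preview package installed, this should print one entry per adapter; an empty list means TensorFlow is absent or no DirectML device was found.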
When used standalone, the DirectML API is a low-level DirectX 12 library suitable for high-performance, low-latency applications such as frameworks, games, and other real-time applications. Its seamless interoperability with Direct3D 12, low overhead, and conformance across hardware make DirectML ideal for accelerating machine learning when high performance is desired and the reliability and predictability of results across hardware is critical.