GPU accelerated ML training in WSL

Machine learning (ML) is becoming a key part of many development workflows. Whether you're a data scientist, an ML engineer, or just starting your learning journey with ML, the Windows Subsystem for Linux (WSL) offers a great environment to run the most common and popular GPU accelerated ML tools.

There are several different ways to set up these tools. For example, NVIDIA CUDA in WSL, TensorFlow-DirectML, and PyTorch-DirectML each offer a different way to use your GPU for ML with WSL. To learn more about the reasons for choosing one versus another, see GPU accelerated ML training.

This guide will show how to set up:

  • NVIDIA CUDA if you have an NVIDIA graphics card and run a sample ML framework container
  • TensorFlow-DirectML and PyTorch-DirectML on your AMD, Intel, or NVIDIA graphics card

Prerequisites

Setting up NVIDIA CUDA with Docker

  1. Download and install the latest driver for your NVIDIA GPU

  2. Install Docker Desktop, or install the Docker engine directly in WSL by running the following commands

    curl https://get.docker.com | sh
    sudo service docker start
  3. If you installed the Docker engine directly, install the NVIDIA Container Toolkit by following the steps below.

    Set up the stable repository for the NVIDIA Container Toolkit by running the following commands:

    distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-docker-keyring.gpg
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-docker-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

    Install the NVIDIA runtime packages and dependencies by running the commands:

    sudo apt-get update
    sudo apt-get install -y nvidia-docker2
  4. Run a machine learning framework container and sample.

    To run a machine learning framework container and start using your GPU with this NVIDIA NGC TensorFlow container, enter the command:

    docker run --gpus all -it --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/tensorflow:20.03-tf2-py3

    You can run a pre-trained model sample that is built into this container by running the commands:

    cd nvidia-examples/cnn/
    python resnet.py --batch_size=64

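    To confirm the container can actually see your GPU before training, you can run a quick check from the container's Python interpreter. This is a minimal sketch, assuming the TensorFlow 2.x API that ships in this NGC image:

    # inside the container, start python3 and list the GPUs TensorFlow can see
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print("GPUs visible to TensorFlow:", gpus)  # expect at least one entry if CUDA in WSL is working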

Additional ways to set up and use NVIDIA CUDA can be found in the NVIDIA CUDA on WSL User Guide.

Setting up TensorFlow-DirectML or PyTorch-DirectML

  1. Download and install the latest driver from your GPU vendor's website: AMD, Intel, or NVIDIA.

  2. Set up a Python environment.

    We recommend setting up a virtual Python environment. There are many tools you can use to set up a virtual Python environment; for these instructions, we'll use Anaconda's Miniconda.

    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    bash Miniconda3-latest-Linux-x86_64.sh
    conda create --name directml python=3.7 -y
    conda activate directml
  3. Install the DirectML-backed machine learning framework of your choice.

    TensorFlow-DirectML:

    pip install tensorflow-directml

    PyTorch-DirectML:

    sudo apt install libblas3 libomp5 liblapack3
    pip install pytorch-directml
  4. Run a quick addition sample in an interactive Python session for TensorFlow-DirectML or PyTorch-DirectML to make sure everything is working.
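
    For TensorFlow-DirectML, a minimal sketch of such a check (using the TF 1.15-based API that the tensorflow-directml package provides) looks like this:

    import tensorflow.compat.v1 as tf

    # log device placement so you can see the op land on the DirectML device
    tf.enable_eager_execution(tf.ConfigProto(log_device_placement=True))
    print(tf.add([1.0, 2.0], [3.0, 4.0]))

    For PyTorch-DirectML, a similar sketch (the pytorch-directml package exposes DirectML devices through the "dml" device string) is:

    import torch

    # move two tensors to the DirectML device and add them there
    tensor1 = torch.tensor([1]).to("dml")
    tensor2 = torch.tensor([2]).to("dml")
    dml_sum = tensor1 + tensor2
    print(dml_sum.item())  # expect 3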

If you have questions or run into issues, visit the DirectML repo on GitHub.

Multiple GPUs

If you have multiple GPUs on your machine, you can also access them inside of WSL. However, you will only be able to access one at a time. To choose a specific GPU, set the environment variable below to the name of your GPU as it appears in Device Manager:

export MESA_D3D12_DEFAULT_ADAPTER_NAME="<NameFromDeviceManager>"

This will do a string match, so if you set it to "NVIDIA" it will match the first GPU that starts with "NVIDIA".
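
To check which adapter TensorFlow-DirectML actually picked up, you can list the local devices and inspect their descriptions. This is a sketch using the TF 1.15-style device_lib API; the exact device names and descriptions depend on your hardware:

# list the devices tensorflow-directml can see; the DirectML adapter name shows up in the description
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.name, device.physical_device_desc)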

Additional Resources
