How to get your CUDA application running in a Docker container

This short tutorial shows you all the necessary steps to set up Docker on Ubuntu Linux so that it can run CUDA applications.


24 January 2020, by Boyang Xia


Nowadays, it's almost impossible to find any Machine Learning application that does not run on an NVIDIA GPU.

In this tutorial, we show you how to scale up your CUDA-enabled application with Docker.

Prerequisites

  • Ubuntu 14.04 or newer (x86_64), required by NVIDIA container runtime
  • Docker Community Edition 19.03 or newer (with API v1.40) for built-in NVIDIA support
  • NVIDIA GPU with CUDA support
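You can quickly verify the prerequisites above from a terminal. This is a minimal sketch; `lspci` and `lsb_release` ship with most Ubuntu installs, and the `docker version` check only applies once Docker is installed:

```shell
# Check that an NVIDIA GPU is visible on the PCI bus
lspci | grep -i nvidia

# Confirm the Ubuntu release (14.04 or newer) and architecture (x86_64)
lsb_release -rs
uname -m

# Once Docker is installed: confirm version 19.03+ with API v1.40+
docker version --format '{{.Server.Version}} (API {{.Server.APIVersion}})'
```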

Setup Docker

  1. Install Docker:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
  2. Add your user to the docker group:
sudo usermod -aG docker $USER

⚠️ You need to start a new session to update the groups.
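After starting a new session (or running `newgrp docker` in the current shell), you can confirm the group membership and smoke-test Docker without sudo:

```shell
# The docker group should appear in the current user's group list
id -nG "$USER" | grep -w docker

# Run a tiny test image; this should print a "Hello from Docker!" message
docker run --rm hello-world
```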

Setup NVIDIA driver and runtime

  1. Verify the driver installation with the command nvidia-smi. You should see output similar to the following:
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 390.116                Driver Version: 390.116                   |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 560 Ti  Off  | 00000000:01:00.0 N/A |                  N/A |
    | 40%   46C    P8    N/A /  N/A |    790MiB /  1217MiB |     N/A      Default |
    +-------------------------------+----------------------+----------------------+
                                                                                
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0                    Not Supported                                       |
    +-----------------------------------------------------------------------------+

If the NVIDIA driver is not pre-installed with your Ubuntu distribution, download the appropriate NVIDIA driver and run the installer binary with sudo.

⚠️ Secure Boot: If you want to install the NVIDIA driver with UEFI Secure Boot enabled, check out NVIDIA's official guide.

  2. Install the NVIDIA container runtime:
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list |\
    sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install nvidia-container-runtime
  3. Restart Docker:
sudo systemctl stop docker
sudo systemctl start docker
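To confirm that Docker picked up the NVIDIA runtime after the restart, you can inspect the daemon's runtime list (a quick sanity check; the exact output depends on your Docker version):

```shell
# "nvidia" should appear among the runtimes Docker knows about
docker info | grep -i runtimes
```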

Now you are ready to run your first CUDA application in Docker!

Run CUDA in Docker

  1. Choose the right base image for your application (tags take the form {version}-cudnn*-{devel|runtime}). The newest one is 10.2-cudnn7-devel.
  2. Check that NVIDIA runs in Docker with:
docker run --gpus all nvidia/cuda:10.2-cudnn7-devel nvidia-smi

💡 You can specify the number of GPUs and even the specific GPUs with the --gpus flag. Check out Docker's reference.
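For example, the --gpus flag accepts a count or an explicit device list. These are illustrative invocations against the same CUDA image:

```shell
# Use all available GPUs
docker run --gpus all nvidia/cuda:10.2-cudnn7-devel nvidia-smi

# Use any two GPUs
docker run --gpus 2 nvidia/cuda:10.2-cudnn7-devel nvidia-smi

# Use specific GPUs by index (note the extra quoting needed for the shell)
docker run --gpus '"device=0,1"' nvidia/cuda:10.2-cudnn7-devel nvidia-smi
```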

  3. Give yourself a pat on the back if you get the same output as when running nvidia-smi on the host machine.

Congrats! Now you can hack away on your own Dockerfile and scale up your CUDA-fuelled application!
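As a starting point, a minimal Dockerfile for a CUDA application might look like this. This is a sketch assuming a hypothetical my_app.cu source file compiled with nvcc; adapt the paths and base-image tag to your project:

```dockerfile
# The devel image includes the CUDA toolchain (nvcc) needed to build
FROM nvidia/cuda:10.2-cudnn7-devel

WORKDIR /app

# Copy the source and build it inside the image
COPY my_app.cu .
RUN nvcc -O2 -o my_app my_app.cu

CMD ["./my_app"]
```

Build and run it with `docker build -t my-cuda-app .` followed by `docker run --gpus all my-cuda-app`.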