How to get your CUDA application running in a Docker container

This short tutorial walks you through all the steps needed to set up Docker on Ubuntu Linux so that it can run CUDA applications.

24 January 2020, by Boyang Xia

Updated on May 5th, 2020.

Nowadays, it's almost impossible to find a Machine Learning application that does not run on an NVIDIA GPU.

In this tutorial, we show you how to scale up your CUDA-enabled application with Docker.


Prerequisites

  • Ubuntu 14.04 or newer (x86_64), required by the NVIDIA container runtime
  • Docker Community Edition 19.03 or newer (with API v1.40) for built-in NVIDIA support
  • NVIDIA GPU with CUDA support

Setup Docker

  1. Install Docker:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli
  2. Add your user to the docker group:
sudo usermod -aG docker $USER

⚠️ You need to log out and log back in (or start a new session) for the group change to take effect.
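If you don't want to log out, you can pick up the new group in a subshell and run a quick sanity check (a sketch; hello-world is Docker's standard test image):

```shell
# Start a shell with the updated group membership (alternative to re-login)
newgrp docker

# Sanity check: this should run without sudo and print a greeting
docker run --rm hello-world
```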

Setup NVIDIA driver and runtime

  1. Verify that the NVIDIA driver is installed with the command nvidia-smi. You will see output similar to the following:
    | NVIDIA-SMI 390.116                Driver Version: 390.116                   |
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |   0  GeForce GTX 560 Ti  Off  | 00000000:01:00.0 N/A |                  N/A |
    | 40%   46C    P8    N/A /  N/A |    790MiB /  1217MiB |     N/A      Default |

    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |    0                    Not Supported                                       |

If the NVIDIA driver is not pre-installed with your Ubuntu distribution, you can install it with sudo apt install nvidia-XXX (where XXX is the version; at the time of writing, the newest is 440) or download the appropriate NVIDIA driver and execute the installer as root.

⚠️ Secure Boot: If you want to install the NVIDIA driver with UEFI Secure Boot enabled, check out NVIDIA's official guide.
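On recent Ubuntu releases, a typical driver install looks like this (a sketch; the exact package name depends on your release and GPU, so check what ubuntu-drivers recommends first):

```shell
# List the recommended driver packages for the detected GPU
ubuntu-drivers devices

# Install a specific driver version (440 was the newest at the time of writing)
sudo apt install nvidia-driver-440

# Reboot so the new kernel module is loaded, then re-run nvidia-smi to verify
sudo reboot
```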

  2. Install the NVIDIA container runtime:
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list |\
    sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install nvidia-container-runtime
  3. Restart Docker:
sudo systemctl stop docker
sudo systemctl start docker

Now you are ready to run your first CUDA application in Docker!

Run CUDA in Docker

  1. Choose the right base image for your application (tags are of the form {version}-cudnn*-{devel|runtime}). The newest one is 10.2-cudnn7-devel.
  2. Check that Docker can see the GPU:
docker run --gpus all nvidia/cuda:10.2-cudnn7-devel nvidia-smi

💡 You can specify the number of GPUs and even the specific GPUs with the --gpus flag. Check out Docker's reference.
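A few variants of the --gpus flag introduced with Docker 19.03 (the image tag is the one used above):

```shell
# All GPUs
docker run --gpus all nvidia/cuda:10.2-cudnn7-devel nvidia-smi

# Any two GPUs
docker run --gpus 2 nvidia/cuda:10.2-cudnn7-devel nvidia-smi

# Specific GPUs by index (note the nested quoting)
docker run --gpus '"device=0,1"' nvidia/cuda:10.2-cudnn7-devel nvidia-smi
```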

  3. Give yourself a pat on the back if you get the same output as when running nvidia-smi on the host machine.
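To go one step beyond nvidia-smi, here is a minimal CUDA program you can compile and run inside the devel image (a sketch; the file name and kernel are made up for illustration):

```cuda
// vector_add.cu — compile inside the container with: nvcc vector_add.cu -o vector_add
#include <cstdio>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Unified memory keeps the example short; accessible from host and device
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0 on a working GPU
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

You can compile and run it in one shot by mounting your working directory into the container, e.g. docker run --gpus all -v "$PWD":/src nvidia/cuda:10.2-cudnn7-devel bash -c "nvcc /src/vector_add.cu -o /tmp/va && /tmp/va".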

Congrats! Now you can hack away on your own Dockerfile and scale up your CUDA-fuelled application!
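As a starting point, a Dockerfile for a CUDA application might look like this (a sketch; app.cu and the final image name are placeholders for your own project):

```dockerfile
# Build stage: full CUDA toolchain (nvcc, headers)
FROM nvidia/cuda:10.2-cudnn7-devel AS build
WORKDIR /app
COPY app.cu .
RUN nvcc app.cu -o app

# Runtime stage: smaller image with only the CUDA runtime libraries
FROM nvidia/cuda:10.2-cudnn7-runtime
COPY --from=build /app/app /usr/local/bin/app
CMD ["app"]
```

Build and run it with docker build -t my-cuda-app . and docker run --gpus all my-cuda-app; the two-stage build keeps the devel toolchain out of the image you ship.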

We are looking for fellow machine learners and software developers

If you are interested in creating state-of-the-art ML models and deploying them in a high-availability, high-scalability cloud environment, drop us an email and have a chat with us!
