How to get your CUDA application running in a Docker container
This short tutorial shows you all the necessary steps to set up Docker on Ubuntu Linux so that it can run CUDA applications.
24 January 2020, by Boyang Xia
Updated on May 5th, 2020.
Nowadays, it's almost impossible to find any Machine Learning application that does not run on an NVIDIA GPU.
In this tutorial, we show you how to scale up your CUDA-enabled application with Docker.
Prerequisites
- Ubuntu 14.04 or newer (x86_64), required by the NVIDIA container runtime
- Docker Community Edition 19.03 or newer (with API v1.40) for built-in NVIDIA support
- NVIDIA GPU with CUDA support
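If you want to double-check these prerequisites before starting, a few standard Ubuntu commands (listed here purely as a convenience) report your release, CPU architecture and whether an NVIDIA GPU is visible on the PCI bus:
lsb_release -ds          # prints the Ubuntu release, e.g. "Ubuntu 18.04.4 LTS"
uname -m                 # should print x86_64
lspci | grep -i nvidia   # lists NVIDIA devices detected on the PCI bus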
Setup Docker
- Install Docker:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
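Before moving on, you can optionally confirm that the Docker daemon is installed and running:
docker --version                          # e.g. "Docker version 19.03.x"
sudo systemctl status docker --no-pager   # the service should be reported as "active (running)"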
- Add your user to the docker group:
sudo usermod -aG docker $USER
⚠️ You need to start a new session (log out and back in) for the new group membership to take effect.
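Once you have logged back in, a quick way to confirm that the group change took effect is to run a container without sudo; hello-world is Docker's official test image:
groups                        # the output should now include "docker"
docker run --rm hello-world   # should work without sudo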
Setup NVIDIA driver and runtime
- Verify the driver installation with the command nvidia-smi. You should see output similar to the following:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 560 Ti  Off  | 00000000:01:00.0 N/A |                  N/A |
| 40%   46C    P8    N/A /  N/A |    790MiB /  1217MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+
If the NVIDIA driver is not pre-installed with your Ubuntu distribution, you can install it with sudo apt install nvidia-XXX (where XXX is the version; the newest one is 440), or download the appropriate NVIDIA driver and execute the binary with sudo.
⚠️ Secure Boot: If you want to install the NVIDIA driver with UEFI Secure Boot enabled, check out NVIDIA's official guide.
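If you are unsure which driver version to install via apt, Ubuntu's ubuntu-drivers tool (from the ubuntu-drivers-common package) can list the recommended one; a possible workflow, shown here only as a sketch, looks like this:
sudo apt update
ubuntu-drivers devices        # lists available NVIDIA drivers and marks one as "recommended"
sudo apt install nvidia-440   # the exact package name (nvidia-440 vs. nvidia-driver-440) depends on your Ubuntu release
sudo reboot                   # reboot so the new kernel module is loaded
nvidia-smi                    # verify the driver after the reboot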
- Install NVIDIA container runtime:
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list |\
sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install nvidia-container-runtime
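As an optional sanity check, independent of Docker, you can query the GPU through nvidia-container-cli, which is pulled in as a dependency of nvidia-container-runtime:
nvidia-container-cli info   # should list your driver version and GPU(s); errors here point to a driver problem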
- Restart Docker:
sudo systemctl stop docker
sudo systemctl start docker
Now you are ready to run your first CUDA application in Docker!
Run CUDA in Docker
- Choose the right base image for your application (the tag will be in the form {version}-cudnn*-{devel|runtime}). The newest one is 10.2-cudnn7-devel.
- Check that NVIDIA runs in Docker with:
docker run --gpus all nvidia/cuda:10.2-cudnn7-devel nvidia-smi
💡 You can specify the number of GPUs, and even which specific GPUs, with the --gpus flag. Check out Docker's reference.
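For example, following Docker's documented syntax for the --gpus flag (image tag as above):
docker run --gpus 2 nvidia/cuda:10.2-cudnn7-devel nvidia-smi               # use any 2 GPUs
docker run --gpus '"device=0,1"' nvidia/cuda:10.2-cudnn7-devel nvidia-smi  # use only GPUs 0 and 1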
- Give yourself a pat on the back if you get the same output as when running nvidia-smi on the host machine.
Congrats! Now you can hack away on your own Dockerfile and scale up your CUDA-fuelled application!
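As a starting point for your own Dockerfile, here is a minimal, hypothetical sketch (the image name my-cuda-app and the COPY path are made up for illustration); it bakes your application into the CUDA base image used above, builds it, and runs it with GPU access:
cat > Dockerfile <<'EOF'
# Base image with the CUDA toolkit and cuDNN (pick the tag matching your application)
FROM nvidia/cuda:10.2-cudnn7-devel
# Copy your application into the image and make it the working directory
COPY . /app
WORKDIR /app
# Placeholder command; replace with the build/run steps of your application
CMD ["nvidia-smi"]
EOF
docker build -t my-cuda-app .       # build the image from the Dockerfile above
docker run --gpus all my-cuda-app   # run it with access to all GPUs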