This repository provides a reproducible VS Code DevContainer with immediate access to NVIDIA GPUs: `nvidia-smi` works as soon as the container is built, with no additional manual steps, even when the container runs inside Kubernetes (for example via Devpod or Rancher Desktop).
- `.devcontainer/devcontainer.json`: Uses the `nvidia/cuda:12.1.0-devel-ubuntu22.04` image and passes `--gpus=all` to Docker (see the sketch after this list).
- `.devcontainer/install-dev-tools.sh`: Installs CUDA 12.1, cuDNN 8.9.7, and the exact `nvidia-utils` / `libnvidia-compute` 570.133.20 packages that match the host driver.
- `docs/gpu-setup.md`: Detailed information on how to fix the various NVML "driver/library mismatch" errors.
- `docs/devpod-setup.md`: Instructions for setting up Devpod with the correct provider options.
- `notebooks/gpu-test.ipynb`: A minimal Jupyter notebook that verifies PyTorch GPU visibility.
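For orientation, here is a minimal sketch of what the GPU-relevant parts of `.devcontainer/devcontainer.json` look like. The image name and the `--gpus=all` flag come from the description above; the exact field layout and the `postCreateCommand` wiring of the install script are assumptions, so treat the file in the repository as authoritative:

```jsonc
{
  // Base image with the CUDA 12.1 toolchain preinstalled
  "image": "nvidia/cuda:12.1.0-devel-ubuntu22.04",
  // Expose all host GPUs to the container
  "runArgs": ["--gpus=all"],
  // Assumed wiring: run the repo's install script once the container exists
  "postCreateCommand": "bash .devcontainer/install-dev-tools.sh"
}
```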
1. **Configure Devpod.** If you haven't done so yet, follow docs/devpod-setup.md to add the `aihpi` provider with GPU support.
2. **Create a new workspace.** In the Devpod UI, click Workspaces → Create → Git Repo and enter the following workspace source: `https://github.com/aihpi/devcontainer-nvidia.git`
3. **Pick the provider.** Select `aihpi` as the provider (the GPU-enabled provider you created in step 1).
4. **Set the devcontainer path.** Use `.devcontainer/devcontainer.json`.
5. **Create the workspace.** Click Create Workspace and wait for the image to build. The build installs CUDA 12.1, cuDNN 8.9.7, and NVIDIA user-space libraries matching driver 570.133.20 (see the version-pinning sketch after this list).
6. **Verify GPU access.** Open a terminal inside the new VS Code window and run `nvidia-smi`. You should see the GPU table with no NVML mismatch. If you don't see a GPU, work through the checks after this list and confirm that:
   - the node exposes `nvidia.com/gpu` resources,
   - the provider is configured with `--gpus=all`, and
   - the user-space library versions match the host driver (see docs/gpu-setup.md).
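The version pinning in step 5 matters because NVML fails with a "driver/library mismatch" when the container's user-space NVIDIA libraries differ from the host kernel driver. The following is a hedged sketch of what that pinning can look like on Ubuntu 22.04; the package names and the `-0ubuntu1` revision suffix are assumptions, and `.devcontainer/install-dev-tools.sh` is the authoritative version:

```bash
# Sketch only, not the repository's actual script: pin the user-space NVIDIA
# packages to the host driver version so NVML sees matching versions.
# Package names and the "-0ubuntu1" Debian revision are assumptions.
DRIVER_VERSION="570.133.20-0ubuntu1"
sudo apt-get update
sudo apt-get install -y --allow-downgrades \
  "nvidia-utils-570=${DRIVER_VERSION}" \
  "libnvidia-compute-570=${DRIVER_VERSION}"
```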
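If `nvidia-smi` alone is not conclusive, the commands below cover the three troubleshooting bullets from step 6. The `python -c` line is a terminal-level stand-in for the check that notebooks/gpu-test.ipynb performs; `<node-name>` in the `kubectl` example is a placeholder for your GPU node:

```bash
# Driver and NVML agree: prints the GPU table on success
nvidia-smi

# PyTorch sees the GPU (stand-in for notebooks/gpu-test.ipynb)
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"

# The Kubernetes node actually advertises GPU resources
# (requires kubectl access to the cluster)
kubectl describe node <node-name> | grep "nvidia.com/gpu"
```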
License: MIT
The AI Service Centre Berlin Brandenburg is funded by the Federal Ministry of Research, Technology and Space under the funding code 01IS22092.