Docker Images Reference

Pre-built Docker images commonly used with Clore.ai GPU rentals.

Official NVIDIA Images

CUDA Base Images

| Image | Size | Use Case |
| --- | --- | --- |
| `nvidia/cuda:12.8.0-base-ubuntu22.04` | ~2GB | Minimal CUDA |
| `nvidia/cuda:12.8.0-runtime-ubuntu22.04` | ~3GB | CUDA runtime |
| `nvidia/cuda:12.8.0-devel-ubuntu22.04` | ~4GB | CUDA development |
| `nvidia/cuda:11.8.0-base-ubuntu22.04` | ~2GB | Older CUDA |
| `nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04` | ~5GB | CUDA 11.8 + cuDNN |

```
# Recommended for most workloads
nvidia/cuda:12.8.0-base-ubuntu22.04
```

PyTorch Images

| Image | Size | PyTorch | CUDA |
| --- | --- | --- | --- |
| `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-runtime` | ~8GB | 2.7.1 | 12.8 |
| `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel` | ~10GB | 2.7.1 | 12.8 |
| `pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime` | ~5GB | 2.0.1 | 11.7 |
| `pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime` | ~5GB | 1.13.1 | 11.6 |

```
# Recommended for ML training
pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel
```

TensorFlow Images

| Image | Size | TensorFlow | CUDA |
| --- | --- | --- | --- |
| `tensorflow/tensorflow:2.14.0-gpu` | ~6GB | 2.14.0 | 11.8 |
| `tensorflow/tensorflow:2.13.0-gpu` | ~6GB | 2.13.0 | 11.8 |
| `tensorflow/tensorflow:2.12.0-gpu-jupyter` | ~7GB | 2.12.0 | 11.8 |

Hugging Face Images

| Image | Size | Use Case |
| --- | --- | --- |
| `huggingface/transformers-pytorch-gpu` | ~10GB | Transformers + PyTorch |
| `huggingface/transformers-tensorflow-gpu` | ~10GB | Transformers + TensorFlow |

Inference Servers

| Image | Use Case |
| --- | --- |
| `vllm/vllm-openai:latest` | vLLM inference server |
| `ghcr.io/huggingface/text-generation-inference:latest` | Hugging Face Text Generation Inference (TGI) |
| `nvcr.io/nvidia/tritonserver:23.10-py3` | Triton Inference Server |

Development Environments

| Image | Description |
| --- | --- |
| `jupyter/pytorch-notebook` | Jupyter + PyTorch |
| `jupyter/tensorflow-notebook` | Jupyter + TensorFlow |
| `cschranz/gpu-jupyter` | Full GPU Jupyter stack |

Using Custom Images

You can use any public Docker image. For private images, use the CCR (Custom Container Registry) feature.

Build and Push
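A minimal build-and-push flow might look like the following; `username/my-image:v1` is a placeholder for your own Docker Hub repository and tag:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t username/my-image:v1 .

# Authenticate against Docker Hub (or your registry of choice)
docker login

# Push so that Clore hosts can pull the image
docker push username/my-image:v1
```

Pushing to a public repository is enough for public images; private repositories additionally need credentials configured via CCR.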

Use with Clore

When configuring a rental order, enter the full image reference (for example, `username/my-image:v1`). For private registries, set up access through CCR as noted above.

Example Dockerfile
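As a sketch, a custom training image could extend the recommended CUDA devel base; `requirements.txt` and `train.py` here are hypothetical project files:

```dockerfile
FROM nvidia/cuda:12.8.0-devel-ubuntu22.04

# Install Python and pip (the CUDA base images ship without them)
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install dependencies first so the layer is cached across code changes
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python3", "train.py"]
```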

Image Selection Guide

For Training

| Workload | Recommended Image |
| --- | --- |
| PyTorch training | `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel` |
| TensorFlow training | `tensorflow/tensorflow:2.14.0-gpu` |
| LLM fine-tuning | `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel` |
| Distributed training | `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel` |

For Inference

| Workload | Recommended Image |
| --- | --- |
| LLM serving | `vllm/vllm-openai:latest` |
| General inference | `nvidia/cuda:12.8.0-runtime-ubuntu22.04` |
| API server | `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-runtime` |

For Development

| Workload | Recommended Image |
| --- | --- |
| Jupyter notebooks | `cschranz/gpu-jupyter` |
| General dev | `nvidia/cuda:12.8.0-devel-ubuntu22.04` |

Tips

1. Use specific tags - avoid `:latest` so deployments are reproducible.

2. Choose runtime vs. devel - runtime images are smaller; devel images include the compiler toolchain (`nvcc`) and headers needed to build CUDA code.

3. Match CUDA versions - make sure your framework build supports the image's CUDA version and that the host driver is new enough for it.

4. Consider size - smaller images pull and deploy faster.
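To sanity-check CUDA compatibility (tip 3), you can run `nvidia-smi` inside the minimal base image; this assumes Docker with the NVIDIA Container Toolkit and a GPU on the host:

```shell
# The driver version reported by nvidia-smi determines the maximum
# CUDA version that containers on this host can use.
docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu22.04 nvidia-smi
```

If your framework's CUDA build is newer than what the driver supports, pick an older-CUDA image (e.g. the 11.8 variants above).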

