# Docker Images Reference

Pre-built Docker images commonly used with Clore.ai GPU rentals.

## Official NVIDIA Images

### CUDA Base Images

| Image                                         | Size  | Use Case          |
| --------------------------------------------- | ----- | ----------------- |
| `nvidia/cuda:12.8.0-base-ubuntu22.04`         | \~2GB | Minimal CUDA      |
| `nvidia/cuda:12.8.0-runtime-ubuntu22.04`      | \~3GB | CUDA runtime      |
| `nvidia/cuda:12.8.0-devel-ubuntu22.04`        | \~4GB | CUDA development  |
| `nvidia/cuda:11.8.0-base-ubuntu22.04`         | \~2GB | Older CUDA        |
| `nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04` | \~5GB | CUDA 11.8 + cuDNN |

```bash
# Recommended for most workloads
nvidia/cuda:12.8.0-base-ubuntu22.04
```
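Assuming Docker and the NVIDIA Container Toolkit are installed on the host (Clore rentals provide this), GPU visibility in any of these images can be verified with a quick one-off run:

```shell
# Pull the minimal CUDA base image
docker pull nvidia/cuda:12.8.0-base-ubuntu22.04

# Run nvidia-smi inside the container; it should list the host's GPUs
docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu22.04 nvidia-smi
```

If `nvidia-smi` fails here, the problem is the host's driver or container toolkit setup, not your image.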

## PyTorch Images

| Image                                            | Size   | PyTorch | CUDA |
| ------------------------------------------------ | ------ | ------- | ---- |
| `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-runtime`  | \~8GB  | 2.7.1   | 12.8 |
| `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel`    | \~10GB | 2.7.1   | 12.8 |
| `pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime`  | \~5GB  | 2.0.1   | 11.7 |
| `pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime` | \~5GB  | 1.13.1  | 11.6 |

```bash
# Recommended for ML training
pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel
```

## TensorFlow Images

| Image                                      | Size  | TensorFlow | CUDA |
| ------------------------------------------ | ----- | ---------- | ---- |
| `tensorflow/tensorflow:2.14.0-gpu`         | \~6GB | 2.14.0     | 11.8 |
| `tensorflow/tensorflow:2.13.0-gpu`         | \~6GB | 2.13.0     | 11.8 |
| `tensorflow/tensorflow:2.12.0-gpu-jupyter` | \~7GB | 2.12.0     | 11.8 |

```bash
# Recommended for TensorFlow
tensorflow/tensorflow:2.14.0-gpu
```

## Hugging Face Images

| Image                                     | Size   | Use Case                  |
| ----------------------------------------- | ------ | ------------------------- |
| `huggingface/transformers-pytorch-gpu`    | \~10GB | Transformers + PyTorch    |
| `huggingface/transformers-tensorflow-gpu` | \~10GB | Transformers + TensorFlow |

## Inference Servers

| Image                                                  | Use Case                                |
| ------------------------------------------------------ | --------------------------------------- |
| `vllm/vllm-openai:latest`                              | vLLM inference server                   |
| `ghcr.io/huggingface/text-generation-inference:latest` | Hugging Face Text Generation Inference  |
| `nvcr.io/nvidia/tritonserver:23.10-py3`                | NVIDIA Triton Inference Server          |

## Development Environments

| Image                         | Description            |
| ----------------------------- | ---------------------- |
| `jupyter/pytorch-notebook`    | Jupyter + PyTorch      |
| `jupyter/tensorflow-notebook` | Jupyter + TensorFlow   |
| `cschranz/gpu-jupyter`        | Full GPU Jupyter stack |

## Using Custom Images

You can use any public Docker image. For private images, use Clore's Custom Container Registry (CCR) feature.

### Build and Push

```bash
# Build your image
docker build -t myuser/my-ml-image:v1 .

# Push to Docker Hub
docker push myuser/my-ml-image:v1
```

### Use with Clore

```python
order_data = {
    "renting_server": server_id,
    "type": "on-demand",
    "currency": "CLORE-Blockchain",
    "image": "myuser/my-ml-image:v1",  # Your custom image
    "ports": {"22": "tcp"},
    "env": {"NVIDIA_VISIBLE_DEVICES": "all"},
    "ssh_password": "YourPassword123!"
}
```
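Before submitting, the payload can be assembled and sanity-checked locally. This sketch uses the field names from the example above; the required-field check is an illustrative assumption, not part of the Clore API:

```python
import json

def build_order(server_id: int, image: str, ssh_password: str) -> str:
    """Assemble a create-order payload like the example above; return it as JSON."""
    order_data = {
        "renting_server": server_id,
        "type": "on-demand",
        "currency": "CLORE-Blockchain",
        "image": image,
        "ports": {"22": "tcp"},
        "env": {"NVIDIA_VISIBLE_DEVICES": "all"},
        "ssh_password": ssh_password,
    }
    # Fail fast if a key field is missing or empty (illustrative check)
    for key in ("renting_server", "image", "ssh_password"):
        if not order_data.get(key):
            raise ValueError(f"missing required field: {key}")
    return json.dumps(order_data)

payload = build_order(1234, "myuser/my-ml-image:v1", "YourPassword123!")
```

Serializing up front catches malformed values (e.g. a `None` server ID) before the request ever reaches the API.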

## Example Dockerfile

```dockerfile
# Custom ML training image
FROM pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel

# Install dependencies
RUN pip install --upgrade pip && \
    pip install transformers datasets accelerate bitsandbytes wandb

# Install system tools
RUN apt-get update && apt-get install -y \
    git \
    vim \
    htop \
    && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /workspace

# Default command
CMD ["/bin/bash"]
```

## Image Selection Guide

### For Training

| Workload             | Recommended Image                             |
| -------------------- | --------------------------------------------- |
| PyTorch training     | `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel` |
| TensorFlow training  | `tensorflow/tensorflow:2.14.0-gpu`            |
| LLM fine-tuning      | `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel` |
| Distributed training | `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-devel` |

### For Inference

| Workload          | Recommended Image                               |
| ----------------- | ----------------------------------------------- |
| LLM serving       | `vllm/vllm-openai:latest`                       |
| General inference | `nvidia/cuda:12.8.0-runtime-ubuntu22.04`        |
| API server        | `pytorch/pytorch:2.7.1-cuda12.8-cudnn9-runtime` |

### For Development

| Workload          | Recommended Image                      |
| ----------------- | -------------------------------------- |
| Jupyter notebooks | `cschranz/gpu-jupyter`                 |
| General dev       | `nvidia/cuda:12.8.0-devel-ubuntu22.04` |

## Tips

1. **Use specific tags** - Avoid `:latest` for reproducibility
2. **Choose runtime vs devel** - Runtime images are smaller; devel images add the compilers and headers (e.g. `nvcc`) needed to build CUDA extensions
3. **Match CUDA versions** - Ensure your code and framework build are compatible with the image's CUDA version
4. **Consider size** - Smaller images deploy faster

## See Also

* [API Quick Reference](https://docs.clore.ai/dev/reference/api-reference)
* [Quick Start Guide](https://docs.clore.ai/dev/getting-started/quick-start)
