# Available Docker Images

When renting a GPU server on Clore.ai, you can choose from pre-configured Docker images or use your own custom images.

## Clore Official Images

Pre-built images maintained by Clore.ai, optimized for the decentralized GPU marketplace.

### General Purpose

| Docker Image                              | Description                       | Ports    | Updated     |
| ----------------------------------------- | --------------------------------- | -------- | ----------- |
| `cloreai/jupyter:ubuntu24.04-v2`          | Jupyter Lab + SSH on Ubuntu 24.04 | 22, 8888 | Jan 2025 ✅  |
| `cloreai/ml-tools:0.1`                    | Jupyter Lab + VS Code Web server  | 22, 8888 | Jul 2023 ⚠️ |
| `cloreai/ubuntu20.04-jupyter:latest`      | Ubuntu 20.04 + SSH + Jupyter      | 22, 8888 | Nov 2022 ⚠️ |
| `cloreai/ubuntu-20.04-remote-desktop:1.2` | Ubuntu with remote desktop GUI    | 22, 3389 | May 2023 ⚠️ |
| `cloreai/torch:2.0.1`                     | PyTorch 2.0.1 + CUDA              | 22       | Jul 2023 ⚠️ |

### AI/ML Inference

| Docker Image                            | Description                 | Ports | Updated     |
| --------------------------------------- | --------------------------- | ----- | ----------- |
| `cloreai/deepseek-r1-8b:latest`         | DeepSeek R1 8B ready to run | 8000  | Jan 2025 ✅  |
| `cloreai/stable-diffusion-webui:latest` | AUTOMATIC1111 SD WebUI      | 7860  | Sep 2023 ⚠️ |
| `cloreai/oobabooga:1.5`                 | Text Generation WebUI       | 7860  | Aug 2023 ⚠️ |

### Infrastructure & Mining

| Docker Image                | Description                 | Updated     |
| --------------------------- | --------------------------- | ----------- |
| `cloreai/monitoring:0.7`    | Server monitoring agent     | Sep 2024 ✅  |
| `cloreai/hiveos:0.4`        | HiveOS integration          | Feb 2025 ✅  |
| `cloreai/openvpn-proxy:0.2` | VPN/proxy forwarding        | Feb 2025 ✅  |
| `cloreai/proxy:0.2`         | Port forwarding system      | Jan 2024    |
| `cloreai/automining:0.1`    | Auto mining setup           | Jun 2023 ⚠️ |
| `cloreai/kuzco:latest`      | Kuzco distributed inference | Jun 2025 ✅  |
| `cloreai/partner:nastya-01` | Partner integration tools   | Apr 2025 ✅  |

> ⚠️ Images marked with ⚠️ haven't been updated in over a year. For ML workloads, consider using the **community images** below which offer newer CUDA and framework versions.

## Recommended Community Images

Battle-tested images from the broader AI/ML community with active maintenance and recent versions.

### Deep Learning Frameworks

| Image                   | Tag                              | Description                   | Use Case                         | Ports              |
| ----------------------- | -------------------------------- | ----------------------------- | -------------------------------- | ------------------ |
| `pytorch/pytorch`       | `2.10.0-cuda13.0-cudnn9-runtime` | Latest PyTorch with CUDA 13.0 | Deep learning training/inference | 8888 (Jupyter)     |
| `nvidia/cuda`           | `13.1.1-runtime-ubuntu22.04`     | NVIDIA CUDA runtime           | Custom CUDA applications         | -                  |
| `nvidia/cuda`           | `13.1.1-devel-ubuntu22.04`       | CUDA development tools        | Building CUDA projects           | -                  |
| `tensorflow/tensorflow` | `2.15.0-gpu`                     | TensorFlow GPU support        | TensorFlow workflows             | 6006 (TensorBoard) |

### LLM Inference Engines

| Image                                           | Tag      | Description                  | Use Case                  | Ports |
| ----------------------------------------------- | -------- | ---------------------------- | ------------------------- | ----- |
| `vllm/vllm-openai`                              | `latest` | High-performance LLM serving | Production LLM APIs       | 8000  |
| `ghcr.io/huggingface/text-generation-inference` | `3.3.5`  | Hugging Face TGI             | Enterprise LLM serving    | 80    |
| `ollama/ollama`                                 | `latest` | Local LLM runner             | Local/edge LLM deployment | 11434 |

### Image Generation

| Image                             | Tag      | Description            | Use Case                 | Ports |
| --------------------------------- | -------- | ---------------------- | ------------------------ | ----- |
| `goolashe/automatic1111-sd-webui` | `latest` | Stable Diffusion WebUI | AI art generation        | 7860  |
| `sinfallas/comfyui`               | `latest` | ComfyUI node-based SD  | Advanced image workflows | 8188  |

### Development Environments

| Image                         | Tag                            | Description          | Use Case               | Ports |
| ----------------------------- | ------------------------------ | -------------------- | ---------------------- | ----- |
| `jupyter/tensorflow-notebook` | `latest`                       | Jupyter + TensorFlow | ML experimentation     | 8888  |
| `jupyter/pytorch-notebook`    | `latest`                       | Jupyter + PyTorch    | Deep learning research | 8888  |
| `cschranz/gpu-jupyter`        | `v1.10_cuda-12.9_ubuntu-24.04` | GPU-enabled Jupyter  | GPU computing          | 8888  |

## Selecting an Image

### Via Web Interface

1. Go to **Marketplace** → Find a server → **Rent**
2. In the order form, select a **Docker Image** from the dropdown
3. Choose from the pre-configured options or enter a custom image
4. Configure exposed ports (comma-separated: `8888,7860,8000`)
5. Add environment variables if needed
6. Submit order

### Via API

```json
{
  "image": "pytorch/pytorch:2.10.0-cuda13.0-cudnn9-runtime",
  "ports": [8888, 8000],
  "env": {
    "JUPYTER_ENABLE_LAB": "yes"
  }
}
```
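
For orientation, a request submitting this configuration might look like the sketch below. The endpoint path, `auth` header, and exact field names are assumptions for illustration only; consult the Clore.ai API reference for the authoritative request format.

```bash
# Hypothetical sketch of submitting an order over HTTP.
# Endpoint, auth header, and payload fields are illustrative, not authoritative.
curl -X POST "https://api.clore.ai/v1/create_order" \
  -H "auth: $CLORE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "image": "pytorch/pytorch:2.10.0-cuda13.0-cudnn9-runtime",
        "ports": [8888, 8000],
        "env": { "JUPYTER_ENABLE_LAB": "yes" }
      }'
```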

## Custom Docker Images

### Supported Registries

* **Docker Hub**: `username/image:tag`
* **GitHub Container Registry**: `ghcr.io/user/image:tag`
* **NVIDIA NGC**: `nvcr.io/nvidia/image:tag`
* **Google Container Registry**: `gcr.io/project/image:tag`

### Examples

```bash
# PyTorch 2.1.0 with CUDA 12.1
pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime

# NVIDIA CUDA base
nvidia/cuda:12.2.0-base-ubuntu22.04

# Hugging Face Transformers
huggingface/transformers-pytorch-gpu:4.35.0

# Custom image
your-username/my-ai-model:v1.0
```

### Requirements for Custom Images

* ✅ Must be publicly accessible
* ✅ Should include NVIDIA GPU support for GPU instances
* ✅ Base on CUDA-enabled images for GPU acceleration (see the build sketch below)
* ✅ Include necessary drivers and libraries
* ⚠️ Large images (>10GB) may take longer to download
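
A minimal sketch of a custom image that meets these requirements is shown below. The base image, package list, and repository name are placeholders; adapt them to your workload.

```bash
# Sketch: build and publish a GPU-ready custom image (names are placeholders)
cat > Dockerfile <<'EOF'
# CUDA-enabled base so GPU libraries work inside the container
FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04

# Install Python plus the libraries your workload needs
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install --no-cache-dir torch transformers

WORKDIR /workspace
CMD ["bash"]
EOF

# Push to a public registry so the rented server can pull the image
docker build -t your-username/my-ai-model:v1.0 .
docker push your-username/my-ai-model:v1.0
```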

## Port Configuration

Expose the ports your applications listen on so they are reachable from outside the container:

| Port  | Common Use           | Framework             |
| ----- | -------------------- | --------------------- |
| 22    | SSH access           | System                |
| 8888  | Jupyter Notebook/Lab | Jupyter               |
| 7860  | Gradio interfaces    | SD WebUI, Gradio apps |
| 8000  | API servers          | vLLM, FastAPI         |
| 3000  | Web applications     | React, Node.js        |
| 8080  | HTTP services        | General web services  |
| 11434 | Ollama API           | Ollama                |
| 8188  | ComfyUI interface    | ComfyUI               |

### Setting Ports in Order Form

```
8888,7860,8000
```

## Environment Variables

Pass configuration to your containers:

### Common Examples

```bash
# Hugging Face token
HF_TOKEN=hf_your_token_here

# Model configuration
MODEL_NAME=meta-llama/Llama-2-7b-chat-hf
CUDA_VISIBLE_DEVICES=0

# Jupyter configuration
JUPYTER_ENABLE_LAB=yes
JUPYTER_TOKEN=your_secure_token

# vLLM configuration
VLLM_WORKER_MULTIPROC_METHOD=spawn
```
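
As an example of how a container can consume these variables, here is a minimal startup-script sketch. It assumes the image already ships the `huggingface_hub` CLI and Jupyter Lab; adjust it to the services you actually run.

```bash
#!/usr/bin/env bash
# Sketch of a startup script that reads the variables set in the order form.
set -euo pipefail

# Log in to the Hugging Face Hub only if a token was provided
if [ -n "${HF_TOKEN:-}" ]; then
    huggingface-cli login --token "$HF_TOKEN"
fi

# Start Jupyter Lab on all interfaces, protected by the configured token
jupyter lab --ip=0.0.0.0 --allow-root --no-browser \
    --ServerApp.token="${JUPYTER_TOKEN:?JUPYTER_TOKEN must be set}"
```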

### Security Notes

* ❌ Don't put long-lived secrets or credentials in environment variables
* ✅ Prefer temporary, scoped tokens and API keys
* ✅ Mount secrets as files/volumes when possible (see the sketch below)
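
A small sketch of the file-based approach; the mount path below is an assumption, so adjust it to wherever your secrets are actually mounted.

```bash
# Sketch: read a secret from a mounted file instead of the environment
if [ -f /run/secrets/hf_token ]; then
    HF_TOKEN="$(cat /run/secrets/hf_token)"
    export HF_TOKEN
fi
```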

## Persistent Storage

### Storage Locations

* `/workspace` - Usually persistent during rental period
* `/root` - May be reset on container restart
* `/tmp` - Temporary storage, not persistent

### Best Practices

* Store important data in `/workspace`
* Use external storage for backups (S3, GCS, etc.)
* Download models to persistent directories (see the sketch below)
* Use volume mounts for large datasets
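
For example, pointing the Hugging Face cache at `/workspace` keeps downloaded weights on the persistent path. A minimal sketch (the model ID is reused from the examples above):

```bash
# Sketch: keep model downloads under the persistent /workspace directory
export HF_HOME=/workspace/hf-cache
mkdir -p "$HF_HOME"

# Downloads now land in /workspace and survive container restarts
huggingface-cli download meta-llama/Llama-2-7b-chat-hf
```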

## Best Practices

### Image Selection

1. **Use recent tags** - Avoid `latest` in production, prefer versioned tags
2. **Choose appropriate base** - Match CUDA version with your framework
3. **Consider image size** - Smaller images download faster
4. **Check maintenance** - Prefer actively maintained images

### Security

1. **Expose minimal ports** - Only expose ports you need
2. **Use authentication** - Set tokens for Jupyter/web interfaces
3. **Update regularly** - Use recent image versions
4. **Monitor access** - Check who connects to your services

### Performance

1. **GPU compatibility** - Verify CUDA version matches your needs
2. **Pre-download models** - Include models in custom images for faster startup (see the sketch below)
3. **Optimize containers** - Use multi-stage builds, minimize layers
4. **Cache management** - Leverage Docker layer caching
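
A sketch of pre-downloading a model at build time follows. The base image, model ID, and cache path are placeholders, and gated models additionally need a token available during the build.

```bash
# Sketch: bake model weights into a custom image for faster startup
cat > Dockerfile <<'EOF'
FROM pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime
RUN pip install --no-cache-dir "huggingface_hub[cli]"

# Download weights at build time so they ship inside the image layers
ENV HF_HOME=/opt/hf-cache
RUN huggingface-cli download microsoft/DialoGPT-medium
EOF
docker build -t your-username/my-ai-model:v1.0-preloaded .
```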

## Troubleshooting

### Image Won't Start

```bash
# Check if image exists and is public
docker pull your-image:tag

# Verify CUDA compatibility
nvidia-smi
nvcc --version
```

**Common Issues:**

* Image doesn't exist or is private
* Incompatible CUDA version
* Insufficient disk space
* Wrong architecture (arm64 vs x86\_64)

### GPU Not Accessible

```bash
# Check GPU availability
nvidia-smi

# Test CUDA in container
python -c "import torch; print(torch.cuda.is_available())"
```

**Solutions:**

* Use GPU-compatible base images
* Ensure NVIDIA Container Toolkit is available
* Check CUDA driver compatibility

### Can't Access Exposed Ports

1. Wait for the container to fully start (check the logs)
2. Verify the service is running inside the container: `netstat -tlnp`
3. Check that the service binds to `0.0.0.0`, not `127.0.0.1` (see the check below)
4. Confirm the port is exposed in the order form
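
A quick check from inside the container (a sketch; substitute the port of your service):

```bash
# Confirm the service listens on all interfaces, not just localhost
netstat -tlnp | grep 8888    # expect 0.0.0.0:8888, not 127.0.0.1:8888

# Most servers accept an explicit bind address, e.g. Jupyter:
jupyter lab --ip=0.0.0.0 --allow-root --no-browser
```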

### Performance Issues

* Use local SSD storage for model weights
* Optimize batch sizes for available GPU memory
* Monitor GPU utilization: `nvidia-smi -l 1`
* Check CPU/memory usage: `htop`

## Quick Start Examples

### Deploy Jupyter with PyTorch

```bash
Image: pytorch/pytorch:2.10.0-cuda13.0-cudnn9-runtime
Ports: 8888
Command: jupyter lab --ip=0.0.0.0 --allow-root --no-browser
```

### Deploy vLLM API Server

```bash
Image: vllm/vllm-openai:latest
Ports: 8000
Env: MODEL_NAME=microsoft/DialoGPT-medium
Command: python -m vllm.entrypoints.openai.api_server --model $MODEL_NAME --host 0.0.0.0
```

### Deploy Stable Diffusion WebUI

```bash
Image: goolashe/automatic1111-sd-webui:latest
Ports: 7860
Command: --listen --api --xformers
```

### Deploy Ollama

```bash
Image: ollama/ollama:latest
Ports: 11434
Command: ollama serve
# Then: ollama run llama2 (inside container)
```

