Available Docker Images

When renting a GPU server on Clore.ai, you can choose from pre-configured Docker images or use your own custom images.

Clore Official Images

Pre-built images maintained by Clore.ai, optimized for the decentralized GPU marketplace.

General Purpose

| Docker Image | Description | Ports | Updated |
| --- | --- | --- | --- |
| cloreai/jupyter:ubuntu24.04-v2 | Jupyter Lab + SSH on Ubuntu 24.04 | 22, 8888 | Jan 2025 ✅ |
| cloreai/ml-tools:0.1 | Jupyter Lab + VS Code Web server | 22, 8888 | Jul 2023 ⚠️ |
| cloreai/ubuntu20.04-jupyter:latest | Ubuntu 20.04 + SSH + Jupyter | 22, 8888 | Nov 2022 ⚠️ |
| cloreai/ubuntu-20.04-remote-desktop:1.2 | Ubuntu with remote desktop GUI | 22, 3389 | May 2023 ⚠️ |
| cloreai/torch:2.0.1 | PyTorch 2.0.1 + CUDA | 22 | Jul 2023 ⚠️ |

AI/ML Inference

| Docker Image | Description | Ports | Updated |
| --- | --- | --- | --- |
| cloreai/deepseek-r1-8b:latest | DeepSeek R1 8B ready to run | 8000 | Jan 2025 ✅ |
| cloreai/stable-diffusion-webui:latest | AUTOMATIC1111 SD WebUI | 7860 | Sep 2023 ⚠️ |
| cloreai/oobabooga:1.5 | Text Generation WebUI | 7860 | Aug 2023 ⚠️ |

Infrastructure & Mining

| Docker Image | Description | Updated |
| --- | --- | --- |
| cloreai/monitoring:0.7 | Server monitoring agent | Sep 2024 ✅ |
| cloreai/hiveos:0.4 | HiveOS integration | Feb 2025 ✅ |
| cloreai/openvpn-proxy:0.2 | VPN/proxy forwarding | Feb 2025 ✅ |
| cloreai/proxy:0.2 | Port forwarding system | Jan 2024 |
| cloreai/automining:0.1 | Auto mining setup | Jun 2023 ⚠️ |
| cloreai/kuzco:latest | Kuzco distributed inference | Jun 2025 ✅ |
| cloreai/partner:nastya-01 | Partner integration tools | Apr 2025 ✅ |

⚠️ Images marked with ⚠️ haven't been updated in over a year. For ML workloads, consider using the community images below which offer newer CUDA and framework versions.

Community Images

Battle-tested images from the broader AI/ML community, actively maintained and tracking recent framework releases.

Deep Learning Frameworks

| Image | Tag | Description | Use Case | Ports |
| --- | --- | --- | --- | --- |
| pytorch/pytorch | 2.10.0-cuda13.0-cudnn9-runtime | Latest PyTorch with CUDA 13.0 | Deep learning training/inference | 8888 (Jupyter) |
| nvidia/cuda | 13.1.1-runtime-ubuntu22.04 | NVIDIA CUDA runtime | Custom CUDA applications | - |
| nvidia/cuda | 13.1.1-devel-ubuntu22.04 | CUDA development tools | Building CUDA projects | - |
| tensorflow/tensorflow | 2.15.0-gpu | TensorFlow GPU support | TensorFlow workflows | 8888 (TensorBoard) |

LLM Inference Engines

| Image | Tag | Description | Use Case | Ports |
| --- | --- | --- | --- | --- |
| vllm/vllm-openai | latest | High-performance LLM serving | Production LLM APIs | 8000 |
| ghcr.io/huggingface/text-generation-inference | 3.3.5 | Hugging Face TGI | Enterprise LLM serving | 80 |
| ollama/ollama | latest | Local LLM runner | Local/edge LLM deployment | 11434 |

Image Generation

| Image | Tag | Description | Use Case | Ports |
| --- | --- | --- | --- | --- |
| goolashe/automatic1111-sd-webui | latest | Stable Diffusion WebUI | AI art generation | 7860 |
| sinfallas/comfyui | latest | ComfyUI node-based SD | Advanced image workflows | 8188 |

Development Environments

| Image | Tag | Description | Use Case | Ports |
| --- | --- | --- | --- | --- |
| jupyter/tensorflow-notebook | latest | Jupyter + TensorFlow | ML experimentation | 8888 |
| jupyter/pytorch-notebook | latest | Jupyter + PyTorch | Deep learning research | 8888 |
| cschranz/gpu-jupyter | v1.10_cuda-12.9_ubuntu-24.04 | GPU-enabled Jupyter | GPU computing | 8888 |

Selecting an Image

Via Web Interface

  1. Go to Marketplace → Find a server → Rent

  2. In the order form, select a Docker image from the dropdown

  3. Choose a pre-configured option or enter a custom image reference

  4. Configure exposed ports (comma-separated: 8888,7860,8000)

  5. Add environment variables if needed

  6. Submit order

Via API
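
The same fields can be submitted programmatically. The sketch below only assembles the order body; the endpoint URL, auth header, and field names (`image`, `ports`, `env`) are assumptions modeled on the web form, so check the Clore.ai API reference before relying on them.

```python
import json

def build_order(image, ports, env=None):
    """Assemble a rental order body from the same fields as the web form.
    Field names here are illustrative, not the confirmed Clore.ai schema."""
    return {
        "image": image,                            # e.g. "cloreai/jupyter:ubuntu24.04-v2"
        "ports": ",".join(str(p) for p in ports),  # comma-separated, as in the form
        "env": env or {},
    }

order = build_order("vllm/vllm-openai:latest", [8000], {"HF_TOKEN": "<token>"})
print(json.dumps(order, indent=2))

# Submitting would look roughly like (hypothetical endpoint and auth header):
#   requests.post("https://api.clore.ai/v1/create_order",
#                 headers={"auth": "<api-key>"}, json=order)
```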

Custom Docker Images

Supported Registries

  • Docker Hub: username/image:tag

  • GitHub Container Registry: ghcr.io/user/image:tag

  • NVIDIA NGC: nvcr.io/nvidia/image:tag

  • Google Container Registry: gcr.io/project/image:tag

Examples
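
A few valid image references, one per registry (the NGC tag and the GCR project name are illustrative):

```
pytorch/pytorch:2.10.0-cuda13.0-cudnn9-runtime        # Docker Hub
ghcr.io/huggingface/text-generation-inference:3.3.5   # GitHub Container Registry
nvcr.io/nvidia/pytorch:24.08-py3                      # NVIDIA NGC
gcr.io/my-project/inference:v1                        # Google Container Registry (hypothetical)
```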

Requirements for Custom Images

  • ✅ Must be publicly accessible

  • ✅ Should include NVIDIA GPU support for GPU instances

  • ✅ Base on CUDA-enabled images for GPU acceleration

  • ✅ Include necessary drivers and libraries

  • ⚠️ Large images (>10GB) may take longer to download
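
A minimal custom image that satisfies these requirements might look like the following sketch (package choices are illustrative):

```
# CUDA runtime base gives GPU access on GPU instances
FROM nvidia/cuda:13.1.1-runtime-ubuntu22.04

# One RUN layer: install, then clean apt lists to keep the image small
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip openssh-server && \
    rm -rf /var/lib/apt/lists/*

RUN pip3 install --no-cache-dir jupyterlab torch

EXPOSE 22 8888
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--port=8888", "--allow-root"]
```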

Port Configuration

Expose the ports your applications listen on so they are reachable from outside the container:

| Port | Common Use | Framework |
| --- | --- | --- |
| 22 | SSH access | System |
| 8888 | Jupyter Notebook/Lab | Jupyter |
| 7860 | Gradio interfaces | SD WebUI, Gradio apps |
| 8000 | API servers | vLLM, FastAPI |
| 3000 | Web applications | React, Node.js |
| 8080 | HTTP services | General web services |
| 11434 | Ollama API | Ollama |
| 8188 | ComfyUI interface | ComfyUI |

Setting Ports in Order Form
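
The form takes the same comma-separated format shown above. For example, SSH plus Jupyter plus a Gradio app:

```
Exposed ports: 22,8888,7860
```

The server's public address and the externally mapped ports appear in the order details once the container is running.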

Environment Variables

Pass configuration to your containers:

Common Examples
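
Typical KEY=VALUE pairs; the variable names come from the respective tools, and the values are placeholders:

```
JUPYTER_TOKEN=<random-token>      # protects the Jupyter interface
HF_TOKEN=<hf-token>               # Hugging Face model downloads (vLLM, TGI)
OLLAMA_HOST=0.0.0.0:11434         # make the Ollama API reachable externally
COMMANDLINE_ARGS=--listen --api   # extra flags for AUTOMATIC1111 SD WebUI
```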

Security Notes

  • ❌ Don't put secrets in environment variables

  • ✅ Use temporary tokens or API keys

  • ✅ Mount secrets as volumes when possible

Persistent Storage

Storage Locations

  • /workspace - Usually persistent during rental period

  • /root - May be reset on container restart

  • /tmp - Temporary storage, not persistent

Best Practices

  • Store important data in /workspace

  • Use external storage for backups (S3, GCS, etc.)

  • Download models to persistent directories

  • Use volume mounts for large datasets

Best Practices

Image Selection

  1. Use recent tags - Avoid latest in production, prefer versioned tags

  2. Choose appropriate base - Match CUDA version with your framework

  3. Consider image size - Smaller images download faster

  4. Check maintenance - Prefer actively maintained images

Security

  1. Expose minimal ports - Only expose ports you need

  2. Use authentication - Set tokens for Jupyter/web interfaces

  3. Update regularly - Use recent image versions

  4. Monitor access - Check who connects to your services

Performance

  1. GPU compatibility - Verify CUDA version matches your needs

  2. Pre-download models - Include models in custom images for faster startup

  3. Optimize containers - Use multi-stage builds, minimize layers

  4. Cache management - Leverage Docker layer caching
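
Points 2 and 3 combine naturally in a multi-stage build: fetch model weights in a builder stage, then copy only the result into the runtime image. In this sketch, 'org/model' is a placeholder repo id:

```
# Stage 1: download weights (repo id is a placeholder)
FROM python:3.11-slim AS downloader
RUN pip install --no-cache-dir huggingface_hub && \
    python -c "from huggingface_hub import snapshot_download; snapshot_download('org/model', local_dir='/models')"

# Stage 2: lean runtime image with the weights baked in
FROM nvidia/cuda:13.1.1-runtime-ubuntu22.04
COPY --from=downloader /models /models
```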

Troubleshooting

Image Won't Start

Common Issues:

  • Image doesn't exist or is private

  • Incompatible CUDA version

  • Insufficient disk space

  • Wrong architecture (arm64 vs x86_64)

GPU Not Accessible

Solutions:

  • Use GPU-compatible base images

  • Ensure NVIDIA Container Toolkit is available

  • Check CUDA driver compatibility

Can't Access Exposed Ports

  1. Wait for container to fully start (check logs)

  2. Verify service is running inside container: netstat -tlnp

  3. Check if service binds to 0.0.0.0, not 127.0.0.1

  4. Confirm port is exposed in order form
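
Point 3 trips up most setups: a server bound to 127.0.0.1 accepts connections only from inside the container. A minimal Python illustration of the difference:

```python
import socket

# Bound to all interfaces: reachable through the container's exposed port.
public = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
public.bind(("0.0.0.0", 0))   # port 0 lets the OS pick a free port

# Bound to loopback only: unreachable from outside the container.
local = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local.bind(("127.0.0.1", 0))

print(public.getsockname()[0])  # 0.0.0.0
print(local.getsockname()[0])   # 127.0.0.1

public.close()
local.close()
```

This is why Jupyter is typically started with --ip=0.0.0.0 and SD WebUI with --listen.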

Performance Issues

  • Use local SSD storage for model weights

  • Optimize batch sizes for available GPU memory

  • Monitor GPU utilization: nvidia-smi -l 1

  • Check CPU/memory usage: htop

Quick Start Examples

Deploy Jupyter with PyTorch
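
Order-form settings for a notebook server with PyTorch preinstalled, using an image from the Development Environments table (the token is a value you choose; for a CUDA-enabled build, cschranz/gpu-jupyter is an alternative):

```
Image:  jupyter/pytorch-notebook:latest
Ports:  8888
Env:    JUPYTER_TOKEN=<your-token>
```

Open the exposed 8888 port in a browser and log in with the token.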

Deploy vLLM API Server
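
vLLM serves an OpenAI-compatible API on port 8000. The model name below is an example, and gated models additionally require an HF_TOKEN; passing extra container arguments depends on what the order form supports:

```
Image:  vllm/vllm-openai:latest
Ports:  8000
Env:    HF_TOKEN=<hf-token>
Args:   --model Qwen/Qwen2.5-7B-Instruct
```

Once running, send requests to /v1/completions or /v1/chat/completions as with the OpenAI API.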

Deploy Stable Diffusion WebUI
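
Either the official image or the newer community build from the Image Generation table works; the WebUI listens on 7860:

```
Image:  goolashe/automatic1111-sd-webui:latest
Ports:  7860
```

If the UI binds to localhost by default, set COMMANDLINE_ARGS=--listen so it is reachable through the exposed port.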

Deploy Ollama
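
Ollama exposes its API on port 11434; models are pulled after the container starts (the model name below is an example):

```
Image:  ollama/ollama:latest
Ports:  11434
Env:    OLLAMA_HOST=0.0.0.0:11434
```

After startup, pull a model through the API, e.g. curl http://SERVER:PORT/api/pull -d '{"name": "llama3.2"}'.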
