Quickstart


Step 1: Create Account & Add Funds

  1. Go to clore.ai → Sign Up

  2. Verify your email

  3. Go to Account → Deposit

  4. Add funds via CLORE, BTC, USDT, or USDC (minimum ~$5 to start)

Step 2: Pick a GPU

Go to the Marketplace and choose based on your task:

| What I Want To Do | Minimum GPU | Budget/Day |
| --- | --- | --- |
| Chat with AI (7B models) | RTX 3060 12GB | ~$0.15 |
| Chat with AI (32B models) | RTX 4090 24GB | ~$0.50 |
| Generate images (FLUX) | RTX 3090 24GB | ~$0.30 |
| Generate videos | RTX 4090 24GB | ~$0.50 |
| Generate music | Any GPU 4GB+ | ~$0.15 |
| Voice cloning / TTS | RTX 3060 6GB+ | ~$0.15 |
| Transcribe audio | RTX 3060 8GB+ | ~$0.15 |
| Fine-tune a model | RTX 4090 24GB | ~$0.50 |
| Run 70B+ models | A100 80GB | ~$2.00 |


Quick GPU Guide

| GPU | VRAM | Price | Sweet Spot For |
| --- | --- | --- | --- |
| RTX 3060 | 12GB | $0.15–0.30/day | TTS, music, small models |
| RTX 3090 | 24GB | $0.30–1.00/day | Image gen, 32B models |
| RTX 4090 | 24GB | $0.50–2.00/day | Everything up to 35B, fast inference |
| RTX 5090 | 32GB | $1.50–3.00/day | 70B quantized, fastest |
| A100 80GB | 80GB | $2.00–4.00/day | 70B FP16, serious training |
| H100 80GB | 80GB | $3.00–6.00/day | 400B+ MoE models |

Step 3: Deploy

Click Rent on your chosen server, then configure:

  • Order type: On-Demand (guaranteed) or Spot (30–50% cheaper, can be interrupted)

  • Docker image: See recipes below

  • Ports: Always include 22/tcp (SSH) + your app port

  • Environment: Add any API keys needed

🚀 One-Click Recipes

Chat with AI (Ollama + Open WebUI)

The easiest way to run local AI — ChatGPT-like interface with any open model.

After deploy, open the HTTP URL → create account → pick a model (Llama 4 Scout, Gemma 3, Qwen3.5) → chat!
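If your template doesn't bundle both services, the stack can also be started by hand. A minimal sketch using the projects' official images (the port mappings and volume names here are assumptions; adjust them to the ports you exposed in Step 3):

```shell
# Ollama API server on port 11434 (official ollama/ollama image)
docker run -d --gpus=all -p 11434:11434 \
  -v ollama:/root/.ollama --name ollama ollama/ollama

# Open WebUI on port 8080, pointed at the Ollama container
docker run -d -p 8080:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main
```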

Image Generation (ComfyUI)

Node-based workflow for FLUX, Stable Diffusion, and more.
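On a bare CUDA image you can also install ComfyUI by hand from its repository. A minimal sketch; the port is ComfyUI's default, and binding to 0.0.0.0 is needed so the HTTP proxy can reach it:

```shell
# Clone and install ComfyUI (assumes Python + CUDA PyTorch already on the image)
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt

# Listen on all interfaces so the exposed port is reachable from outside
python main.py --listen 0.0.0.0 --port 8188
```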

Image Generation (Stable Diffusion WebUI)

Classic UI for Stable Diffusion, SDXL, and SD 3.5.
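A manual install sketch for the AUTOMATIC1111 WebUI, assuming a CUDA-ready server; the flags shown are from the project's documented options:

```shell
# Clone the WebUI; webui.sh sets up its own venv on first run
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui

# --listen binds 0.0.0.0 so the HTTP proxy can reach it;
# --xformers enables memory-efficient attention on NVIDIA GPUs
./webui.sh --listen --xformers
```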

LLM API Server (vLLM)

Production-grade serving with OpenAI-compatible API.
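A minimal sketch using the official vllm/vllm-openai image; the model name is only an example, so swap in whatever fits your GPU's VRAM:

```shell
# Serve a model behind an OpenAI-compatible API on port 8000
docker run -d --gpus=all -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  vllm/vllm-openai:latest \
  --model Qwen/Qwen2.5-7B-Instruct

# Query it exactly like the OpenAI chat completions API
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-7B-Instruct",
       "messages": [{"role": "user", "content": "Hello"}]}'
```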

Music Generation (ACE-Step)

Generate full songs with vocals — works on any 4GB+ GPU!

SSH in and set it up from the ACE-Step repository.
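A rough sketch of a manual install; the `acestep` launch command and port follow the project's README at time of writing and may change, so treat this as a starting point rather than a recipe:

```shell
# Clone and install ACE-Step (assumes Python 3.10+ on the server)
git clone https://github.com/ace-step/ACE-Step
cd ACE-Step
pip install -e .

# Launch the Gradio UI (port flag assumed from the README)
acestep --port 7865
```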

Step 4: Connect

After your order starts:

  1. Go to My Orders → find your active order

  2. Web UI: Click the HTTP URL (e.g., https://xxx.clorecloud.net)

  3. SSH: ssh -p <port> root@<proxy-address>

| Recipe | Typical Startup |
| --- | --- |
| Ollama + Open WebUI | 3–5 min |
| ComfyUI | 10–15 min |
| vLLM | 5–15 min (depends on model size) |
| SD WebUI | 10–20 min |

Step 5: Start Creating

Once your service is running, explore the guides for your specific use case:

🤖 Language Models (Chat, Code, Reasoning)

  • Ollama — easiest model management

  • Llama 4 Scout — Meta's latest, 10M context

  • Gemma 3 — Google's 27B that beats 405B models

  • Qwen3.5 — beat Claude 4.5 on math (Feb 2026!)

  • DeepSeek-R1 — chain-of-thought reasoning

  • vLLM — production API serving
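As a quick taste of the Ollama workflow once you've connected over SSH (the model tag is only an example):

```shell
ollama pull llama3.1:8b   # download a model from the registry
ollama run llama3.1:8b    # open an interactive chat in the terminal
ollama list               # show models already downloaded
```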

🎨 Image Generation

🎬 Video Generation

🔊 Audio & Voice

  • Qwen3-TTS — voice cloning, 10+ languages

  • WhisperX — transcription + speaker diarization

  • Dia TTS — multi-speaker dialog

  • Kokoro — tiny TTS, only 2GB VRAM

🎵 Music

💻 AI Coding

  • TabbyML — self-hosted Copilot for $4.50/month

  • Aider — terminal AI coding assistant

🧠 Training

  • Unsloth — 2x faster, 70% less VRAM

  • Axolotl — YAML-based fine-tuning

💡 Tips for Beginners

  1. Start with Ollama — it's the easiest way to try AI locally

  2. RTX 4090 is the sweet spot — handles 90% of use cases at $0.50–2/day

  3. Use Spot orders for experiments — 30–50% cheaper

  4. Use On-Demand for important work — guaranteed, no interruptions

  5. Download your outputs before the order ends — files are deleted after

  6. Pay with CLORE token — often better rates than stablecoins

  7. Check RAM and network — low RAM is the #1 cause of failures

Troubleshooting

| Problem | Solution |
| --- | --- |
| HTTP 502 for a long time | Wait 10–20 min for first startup; check RAM ≥ 16GB |
| Service won't start | RAM too low (need 16GB+) or VRAM too small for the model |
| Slow model download | Normal on first run; prefer 500Mbps+ servers |
| CUDA out of memory | Use a smaller model or a bigger GPU; try quantized versions |
| Can't SSH | Check that 22/tcp is in the port config; wait for the server to fully start |

Need Help?
