# Quickstart

{% hint style="success" %}
No prior GPU or AI experience needed. This guide gets you from zero to running AI in 5 minutes.
{% endhint %}

## Step 1: Create Account & Add Funds

1. Go to [clore.ai](https://clore.ai) → **Sign Up**
2. Verify your email
3. Go to **Account** → **Deposit**
4. Add funds via **CLORE**, **BTC**, **USDT**, or **USDC** (minimum \~$5 to start)

## Step 2: Pick a GPU

Go to the [Marketplace](https://clore.ai/marketplace) and choose based on your task:

| What I Want To Do         | Minimum GPU   | Budget/Day |
| ------------------------- | ------------- | ---------- |
| Chat with AI (7B models)  | RTX 3060 12GB | \~$0.15    |
| Chat with AI (32B models) | RTX 4090 24GB | \~$0.50    |
| Generate images (FLUX)    | RTX 3090 24GB | \~$0.30    |
| Generate videos           | RTX 4090 24GB | \~$0.50    |
| Generate music            | Any GPU 4GB+  | \~$0.15    |
| Voice cloning / TTS       | RTX 3060 6GB+ | \~$0.15    |
| Transcribe audio          | RTX 3060 8GB+ | \~$0.15    |
| Fine-tune a model         | RTX 4090 24GB | \~$0.50    |
| Run 70B+ models           | A100 80GB     | \~$2.00    |
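The table above can be sanity-checked with back-of-envelope VRAM math: weights take roughly `parameters × bits ÷ 8` bytes, plus headroom for KV cache and activations. A minimal sketch; the function name and the 20% overhead factor are rough assumptions of ours, not official sizing rules:

```python
def vram_estimate_gb(params_billions: float, bits: int = 4) -> float:
    """Rough VRAM needed to load a model: weight bytes plus ~20%
    overhead for KV cache and activations. A rule of thumb, not a guarantee."""
    weights_gb = params_billions * bits / 8
    return round(weights_gb * 1.2, 1)

# 7B at 4-bit quantization fits comfortably on a 12GB RTX 3060:
print(vram_estimate_gb(7, 4))   # → 4.2
# 32B at 4-bit needs a 24GB card (RTX 3090/4090):
print(vram_estimate_gb(32, 4))  # → 19.2
```

If the estimate lands near your card's VRAM limit, size up: the KV cache grows with context length, so long prompts need more headroom than this sketch suggests.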

{% hint style="danger" %}
**Important — check more than just GPU!**

* **RAM:** 16GB+ minimum for most AI workloads
* **Network:** 500Mbps+ recommended (models download from HuggingFace)
* **Disk:** 50GB+ free space for model storage
{% endhint %}

### Quick GPU Guide

| GPU           | VRAM | Price          | Sweet Spot For                       |
| ------------- | ---- | -------------- | ------------------------------------ |
| **RTX 3060**  | 12GB | $0.15–0.30/day | TTS, music, small models             |
| **RTX 3090**  | 24GB | $0.30–1.00/day | Image gen, 32B models                |
| **RTX 4090**  | 24GB | $0.50–2.00/day | Everything up to 35B, fast inference |
| **RTX 5090**  | 32GB | $1.50–3.00/day | 70B quantized, fastest               |
| **A100 80GB** | 80GB | $2.00–4.00/day | 70B FP16, serious training           |
| **H100 80GB** | 80GB | $3.00–6.00/day | 400B+ MoE models                     |

## Step 3: Deploy

Click **Rent** on your chosen server, then configure:

* **Order type:** On-Demand (guaranteed) or Spot (30–50% cheaper, can be interrupted)
* **Docker image:** See recipes below
* **Ports:** Always include `22/tcp` (SSH) + your app port
* **Environment:** Add any API keys needed
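The Spot discount above turns into simple budget math. A quick sketch, assuming the 40% midpoint of the quoted 30–50% range (the function and default are illustrative, not Clore's billing formula):

```python
def rent_cost_usd(price_per_day: float, days: float, spot: bool = False,
                  spot_discount: float = 0.4) -> float:
    """Estimate total rental cost. Spot orders run 30-50% cheaper
    (40% midpoint assumed here), but they can be interrupted."""
    rate = price_per_day * (1 - spot_discount) if spot else price_per_day
    return round(rate * days, 2)

# A $1.00/day RTX 4090 for a 3-day experiment:
print(rent_cost_usd(1.00, 3))             # → 3.0 on-demand
print(rent_cost_usd(1.00, 3, spot=True))  # → 1.8 spot
```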

### 🚀 One-Click Recipes

#### Chat with AI (Ollama + Open WebUI)

The easiest way to run local AI — ChatGPT-like interface with any open model.

```
Image: ghcr.io/open-webui/open-webui:ollama
Ports: 22/tcp, 8080/http
```

After deploy, open the HTTP URL → create account → pick a model (Llama 4 Scout, Gemma 3, Qwen3.5) → chat!

#### Image Generation (ComfyUI)

Node-based workflow for FLUX, Stable Diffusion, and more.

```
Image: yanwk/comfyui-boot:cu126-slim
Ports: 22/tcp, 8188/http
Environment: CLI_ARGS=--listen 0.0.0.0
```

#### Image Generation (Stable Diffusion WebUI)

Classic UI for Stable Diffusion, SDXL, and SD 3.5.

```
Image: universonic/stable-diffusion-webui:latest
Ports: 22/tcp, 8080/http
```

#### LLM API Server (vLLM)

Production-grade serving with OpenAI-compatible API.

```
Image: vllm/vllm-openai:latest
Ports: 22/tcp, 8000/http
Command: vllm serve Qwen/Qwen3.5-9B-Instruct --host 0.0.0.0 --max-model-len 8192
```
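Once vLLM is up, it serves the standard OpenAI chat-completions schema on port 8000. A minimal sketch of the request body; the base URL is a placeholder for whatever your order shows under **My Orders**:

```python
BASE_URL = "https://xxx.clorecloud.net"  # placeholder: your order's HTTP URL

def chat_payload(prompt: str,
                 model: str = "Qwen/Qwen3.5-9B-Instruct",
                 max_tokens: int = 256) -> dict:
    """JSON body for POST {BASE_URL}/v1/chat/completions (OpenAI schema)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# To send it against a running order (needs the `requests` package):
# import requests
# r = requests.post(f"{BASE_URL}/v1/chat/completions", json=chat_payload("Hello!"))
# print(r.json()["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client works the same way: point its base URL at your order and use the model name you passed to `vllm serve`.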

#### Music Generation (ACE-Step)

Generate full songs with vocals — works on any 4GB+ GPU!

```
Image: cloreai/ubuntu22.04-cuda12
Ports: 22/tcp, 7860/http
```

SSH in, then:

```bash
git clone https://github.com/ACE-Step/ACE-Step-1.5.git && cd ACE-Step-1.5
pip install -r requirements.txt
python app.py --port 7860 --listen 0.0.0.0
```

## Step 4: Connect

After your order starts:

1. Go to **My Orders** → find your active order
2. **Web UI:** Click the HTTP URL (e.g., `https://xxx.clorecloud.net`)
3. **SSH:** `ssh -p <port> root@<proxy-address>`

{% hint style="warning" %}
**First launch takes 5–20 minutes** — the server downloads AI models from HuggingFace. HTTP 502 errors during this time are normal. Wait and refresh.
{% endhint %}

| Deploy              | Typical Startup                  |
| ------------------- | -------------------------------- |
| Ollama + Open WebUI | 3–5 min                          |
| ComfyUI             | 10–15 min                        |
| vLLM                | 5–15 min (depends on model size) |
| SD WebUI            | 10–20 min                        |
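Rather than refreshing by hand during these startup windows, you can poll the order's HTTP URL from your own machine. A standard-library sketch, not official tooling; the status logic mirrors the 502 note above:

```python
import time
import urllib.request
import urllib.error

def is_up(status: int) -> bool:
    """502/503/504 while models download are expected; any status
    below 500 means the service is answering."""
    return status < 500

def wait_for(url: str, interval: int = 30, timeout_min: int = 25) -> bool:
    """Poll until the web UI answers or the timeout expires."""
    deadline = time.time() + timeout_min * 60
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if is_up(resp.status):
                    return True
        except urllib.error.HTTPError as e:
            if is_up(e.code):
                return True
        except urllib.error.URLError:
            pass  # proxy not reachable yet, keep waiting
        time.sleep(interval)
    return False

# wait_for("https://xxx.clorecloud.net")  # your order's HTTP URL
```

If `wait_for` still returns `False` after 25 minutes, check the troubleshooting table below: low RAM is the usual culprit.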

## Step 5: Start Creating

Once your service is running, explore the guides for your specific use case:

### 🤖 Language Models (Chat, Code, Reasoning)

* [**Ollama**](/guides/language-models/ollama.md) — easiest model management
* [**Llama 4 Scout**](/guides/language-models/llama4.md) — Meta's latest, 10M context
* [**Gemma 3**](/guides/language-models/gemma3.md) — Google's 27B that beats 405B models
* [**Qwen3.5**](/guides/language-models/qwen35.md) — beats Claude 4.5 on math benchmarks (Feb 2026)
* [**DeepSeek-R1**](/guides/language-models/deepseek-r1.md) — chain-of-thought reasoning
* [**vLLM**](/guides/language-models/vllm.md) — production API serving

### 🎨 Image Generation

* [**FLUX.2 Klein**](/guides/image-generation/flux2-klein.md) — < 0.5 sec per image!
* [**ComfyUI**](/guides/image-generation/comfyui.md) — node-based workflows
* [**FLUX.1**](/guides/image-generation/flux.md) — highest quality with LoRA + ControlNet
* [**Stable Diffusion 3.5**](/guides/image-generation/stable-diffusion-3-5.md) — best text rendering

### 🎬 Video Generation

* [**FramePack**](/guides/video-generation/framepack.md) — only 6GB VRAM needed!
* [**Wan2.1**](/guides/video-generation/wan-video.md) — high quality T2V + I2V
* [**LTX-2**](/guides/video-generation/ltx-video-2.md) — video WITH audio
* [**CogVideoX**](/guides/video-generation/cogvideox.md) — Zhipu AI's video model

### 🔊 Audio & Voice

* [**Qwen3-TTS**](/guides/audio-and-voice/qwen3-tts.md) — voice cloning, 10+ languages
* [**WhisperX**](/guides/audio-and-voice/whisperx.md) — transcription + speaker diarization
* [**Dia TTS**](/guides/audio-and-voice/dia-tts.md) — multi-speaker dialog
* [**Kokoro**](/guides/audio-and-voice/kokoro-tts.md) — tiny TTS, only 2GB VRAM

### 🎵 Music

* [**ACE-Step**](/guides/music-generation/ace-step.md) — full songs on < 4GB VRAM

### 💻 AI Coding

* [**TabbyML**](/guides/ai-coding-tools/tabby.md) — self-hosted Copilot for $4.50/month
* [**Aider**](/guides/ai-coding-tools/aider.md) — terminal AI coding assistant

### 🧠 Training

* [**Unsloth**](/guides/training/unsloth-finetune.md) — 2x faster, 70% less VRAM
* [**Axolotl**](/guides/training/axolotl-training.md) — YAML-based fine-tuning

## 💡 Tips for Beginners

1. **Start with Ollama** — it's the easiest way to try AI locally
2. **RTX 4090 is the sweet spot** — handles 90% of use cases at $0.50–2/day
3. **Use Spot orders** for experiments — 30–50% cheaper
4. **Use On-Demand** for important work — guaranteed, no interruptions
5. **Download your outputs** before the order ends — files are deleted after
6. **Pay with CLORE token** — often better rates than stablecoins
7. **Check RAM and network** — low RAM is the #1 cause of failures

## Troubleshooting

| Problem                  | Solution                                                         |
| ------------------------ | ---------------------------------------------------------------- |
| HTTP 502 for a long time | Wait 10–20 min for first startup; check RAM ≥ 16GB               |
| Service won't start      | RAM too low (need 16GB+) or VRAM too small for the model         |
| Slow model download      | Normal on first run; prefer 500Mbps+ servers                     |
| CUDA out of memory       | Use smaller model or bigger GPU; try quantized versions          |
| Can't SSH                | Check port is `22/tcp` in config; wait for server to fully start |

## 🐍 Python SDK & CLI (Recommended)

Prefer code over clicking? Install the official SDK:

```bash
pip install clore-ai
clore search --gpu "RTX 4090" --max-price 5.0
clore deploy 123 --image cloreai/ubuntu22.04-cuda12 --type on-demand --currency bitcoin --ssh-password mypass --port 22:tcp
clore ssh 456
```

Or use Python directly:

```python
from clore_ai import CloreAI

client = CloreAI()
servers = client.marketplace(gpu="RTX 4090", max_price_usd=5.0)
order = client.create_order(server_id=servers[0].id, image="cloreai/ubuntu22.04-cuda12", type="on-demand", currency="bitcoin")
```

→ [Full Python Quickstart](/guides/getting-started/python-quickstart.md) | [SDK Guide](/guides/advanced/python-sdk.md) | [CLI Automation](/guides/advanced/cli-automation.md)

## Need Help?

* 📖 [Full Troubleshooting Guide](/guides/getting-started/clore-troubleshooting.md)
* 📊 [GPU Comparison Chart](/guides/getting-started/gpu-comparison.md)
* 💰 [Pricing Reference](/guides/getting-started/pricing.md)
* 💬 [Discord](https://discord.com/invite/clore-ai)
* 💬 [Telegram](https://t.me/clorechat)
* 📧 <support@clore.ai>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.clore.ai/guides/quickstart.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
