# InvokeAI

InvokeAI is a professional-grade Stable Diffusion toolkit featuring a node-based workflow editor, a unified canvas, full SDXL/SD1.5/SD2.x support, ControlNet, IP-Adapter, LoRA management, and a polished web UI. It's designed for artists and creative professionals who need precise control over their image-generation workflow. CLORE.AI's high-VRAM GPUs let you run SDXL at full resolution with multiple ControlNets simultaneously.

{% hint style="success" %}
All examples can be run on GPU servers rented through [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Server Requirements

| Parameter | Minimum              | Recommended              |
| --------- | -------------------- | ------------------------ |
| RAM       | 12 GB                | 32 GB+                   |
| VRAM      | 6 GB                 | 12 GB+                   |
| Disk      | 40 GB                | 200 GB+                  |
| GPU       | NVIDIA GTX 1060 6GB+ | RTX 3090, RTX 4090, A100 |

{% hint style="info" %}
For SDXL (1024×1024) without compromises, 12 GB VRAM is recommended. For SD1.5 (512×512 or 768×768), 6 GB VRAM is sufficient. More VRAM = higher resolution, faster generation, and more ControlNets simultaneously.
{% endhint %}

## Quick Deploy on CLORE.AI

**Docker Image:** `ghcr.io/invoke-ai/invokeai:latest`

**Ports:** `22/tcp`, `9090/http`

**Environment Variables:**

| Variable        | Example     | Description                           |
| --------------- | ----------- | ------------------------------------- |
| `INVOKEAI_ROOT` | `/invokeai` | Root directory for models and outputs |

## Step-by-Step Setup

### 1. Rent a GPU Server on CLORE.AI

Visit [CLORE.AI Marketplace](https://clore.ai/marketplace) and look for:

* **Budget creative work**: RTX 3080/3090 (10–24 GB VRAM)
* **Professional SDXL**: RTX 4090 (24 GB VRAM)
* **Maximum quality**: A100 80GB — run multiple models simultaneously

### 2. SSH into Your Server

```bash
ssh -p <PORT> root@<SERVER_IP>
```
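
Before creating directories, it's worth a quick sanity check that the GPU is visible and Docker's NVIDIA runtime works (the CUDA image tag below is only an example; any CUDA base image that ships `nvidia-smi` will do):

```bash
# GPU visible on the host?
nvidia-smi

# GPU reachable from inside a container?
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```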

### 3. Create the InvokeAI Directory Structure

```bash
mkdir -p /root/invokeai
```

### 4. Pull the InvokeAI Docker Image

```bash
docker pull ghcr.io/invoke-ai/invokeai:latest
```

### 5. Launch InvokeAI

**Basic launch:**

```bash
docker run -d \
  --name invokeai \
  --gpus all \
  -p 9090:9090 \
  -v /root/invokeai:/invokeai \
  -e INVOKEAI_ROOT=/invokeai \
  ghcr.io/invoke-ai/invokeai:latest \
  invokeai-web --host 0.0.0.0 --port 9090
```

**With extra shared memory and an additional models volume:**

```bash
docker run -d \
  --name invokeai \
  --gpus all \
  --shm-size 8g \
  -p 9090:9090 \
  -v /root/invokeai:/invokeai \
  -v /root/models:/root/models \
  -e INVOKEAI_ROOT=/invokeai \
  ghcr.io/invoke-ai/invokeai:latest \
  invokeai-web --host 0.0.0.0 --port 9090
```

**With specific GPU (multi-GPU server):**

```bash
docker run -d \
  --name invokeai \
  --gpus '"device=0"' \
  -p 9090:9090 \
  -v /root/invokeai:/invokeai \
  -e INVOKEAI_ROOT=/invokeai \
  -e CUDA_VISIBLE_DEVICES=0 \
  ghcr.io/invoke-ai/invokeai:latest \
  invokeai-web --host 0.0.0.0 --port 9090
```

### 6. Wait for Initialization

```bash
docker logs -f invokeai
```

Look for: `Uvicorn running on http://0.0.0.0:9090`
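
If you'd rather script the wait than watch the logs, a minimal sketch that polls the web server until it responds (assumes the port mapping from step 5):

```bash
# Poll until InvokeAI answers on port 9090
until curl -sf http://localhost:9090 > /dev/null; do
  echo "Waiting for InvokeAI to start..."
  sleep 5
done
echo "InvokeAI is up"
```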

### 7. Access via CLORE.AI HTTP Proxy

Open your CLORE.AI dashboard and find the `http_pub` URL for port 9090:

```
https://<order-id>-9090.clore.ai/
```

This opens the full InvokeAI web interface in your browser.
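
You can verify the proxy responds before opening a browser (substitute your actual order ID):

```bash
curl -sI https://<order-id>-9090.clore.ai/ | head -1
# A 200 status line means the UI is reachable
```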

### 8. Download Your First Model

In the InvokeAI UI:

1. Click **Model Manager** (cube icon in left sidebar)
2. Click **Add Model → HuggingFace**
3. Enter model ID (e.g., `stabilityai/stable-diffusion-xl-base-1.0`)
4. Click **Add Model**

Or download from CivitAI directly:

1. Go to **Model Manager → Add Model → URL**
2. Paste the CivitAI download URL
3. Set model type (Checkpoint, LoRA, etc.)
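
You can also download model files straight into the mounted volume from the shell; a sketch assuming the volume mapping from step 5 (the subdirectory InvokeAI scans varies by version, so import the file afterwards via **Model Manager → Scan Folder**):

```bash
# Example: fetch the SDXL base checkpoint from HuggingFace into the models volume
wget "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors" \
  -O /root/invokeai/models/sd_xl_base_1.0.safetensors
```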

***

## Usage Examples

### Example 1: Basic Image Generation via Web UI

1. Open InvokeAI at your CLORE.AI `http_pub` URL
2. Click **Text to Image** in the workflow selector
3. Enter a prompt: `"a majestic dragon perched on a crystal mountain, digital art, 4k"`
4. Set negative prompt: `"blurry, low quality, watermark"`
5. Set resolution to `1024x1024` (SDXL) or `512x512` (SD1.5)
6. Click **Invoke**

### Example 2: Using the Node-Based Canvas

The Workflow editor is InvokeAI's signature feature:

1. Click **Workflows** in the top nav
2. Click **New Workflow**
3. Add nodes: **Text → Image**, connect to **Save Image**
4. Add a **ControlNet** node for guided generation:
   * Right-click → Add Node → **ControlNet**
   * Connect your reference image
   * Select processor: `Canny`, `Depth`, `Pose`, etc.
5. Click **Invoke** to run the full pipeline

### Example 3: LoRA Usage

1. Download a LoRA from CivitAI (via Model Manager → URL import)
2. In the generation panel, find **LoRA** section
3. Click **+** and select your LoRA
4. Set weight (0.5–1.0 typical)
5. Add trigger word to prompt (listed on CivitAI model page)

Example prompt with a trigger word. Note that InvokeAI applies the LoRA and its weight from the panel, so unlike A1111-style UIs there is no `<lora:...>` prompt syntax; only the trigger word goes in the prompt:

```
portrait of a woman, <trigger word>, hyperrealistic, studio lighting
```

### Example 4: Using IP-Adapter for Style Transfer

1. Enable **IP-Adapter** in the generation panel
2. Upload a reference style image
3. Set weight (0.5 = subtle influence, 1.0 = strong influence)
4. Generate with any prompt — output will match the reference style

### Example 5: API Usage (Headless)

InvokeAI exposes a REST API for programmatic use:

```python
import requests

BASE_URL = "http://localhost:9090"  # or your CLORE.AI http_pub URL

# List available models (the response key differs between InvokeAI versions;
# inspect the raw JSON if neither key below matches)
data = requests.get(f"{BASE_URL}/api/v1/models").json()
entries = data.get("models") or data.get("items") or []
print("Available models:", [m.get("name") for m in entries])

# Queue a generation. NOTE: this graph is a minimal skeleton for illustration;
# a working graph also needs a model-loader node and `edges` wiring the nodes
# together (text_encoder -> noise -> denoise_latents -> l2i). The easiest way
# to get a valid graph for your version is to build a workflow in the UI and
# export it.
payload = {
    "batch": {
        "graph": {
            "nodes": {
                "text_encoder": {
                    "type": "compel",
                    "id": "text_encoder",
                    "prompt": "a futuristic city at sunset, photorealistic",
                },
                "noise": {
                    "type": "noise",
                    "id": "noise",
                    "width": 1024,
                    "height": 1024,
                    "seed": 42,
                },
                "denoise_latents": {
                    "type": "denoise_latents",
                    "id": "denoise_latents",
                    "steps": 30,
                    "cfg_scale": 7.5,
                    "scheduler": "dpmpp_2m",
                },
                "l2i": {  # latents-to-image (VAE decode)
                    "type": "l2i",
                    "id": "l2i",
                },
            },
            "edges": [],  # fill in the connections between nodes
        },
        "runs": 1,
    }
}

response = requests.post(f"{BASE_URL}/api/v1/queue/default/enqueue_batch", json=payload)
print(response.status_code)
```

***

## Configuration

### invokeai.yaml Configuration File

Located at `/root/invokeai/invokeai.yaml`:

```yaml
InvokeAI:
  Web Server:
    host: 0.0.0.0
    port: 9090
    allow_origins: []
    
  Features:
    esrgan: true          # ESRGAN upscaling
    internet_available: true
    
  Memory/Performance:
    ram: 12.0             # Max RAM for model cache (GB)
    vram: 0.25            # VRAM reserved for model cache (GB, not a fraction)
    lazy_offload: true    # Offload models to CPU when not in use
    
  Paths:
    models_path: /invokeai/models
    db_path: /invokeai/databases/invokeai.db
    outdir: /invokeai/outputs
```
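
Configuration changes take effect after a restart:

```bash
docker restart invokeai
```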

### Recommended Settings by GPU

**RTX 3090 / 4090 (24 GB VRAM):**

```yaml
Memory/Performance:
  ram: 24.0
  vram: 0.5   # VRAM model cache (GB); raise this if you keep several models hot
  lazy_offload: false  # Plenty of VRAM; keep models resident
```

**RTX 3080 (10 GB VRAM):**

```yaml
Memory/Performance:
  ram: 16.0
  vram: 0.25
  lazy_offload: true
```

**Smaller GPUs (6-8 GB VRAM):**

```yaml
Memory/Performance:
  ram: 8.0
  vram: 0.1
  lazy_offload: true
```

***

## Performance Tips

### 1. Use SDXL-Turbo or SDXL-Lightning for Fast Generation

Instead of SDXL base (25–50 steps), use:

* **SDXL-Turbo**: 1–4 steps, near real-time generation (guidance-distilled; use a CFG scale around 1.0)
* **SDXL-Lightning**: 4–8 steps, near-SDXL quality

Download via Model Manager → HuggingFace:

* `stabilityai/sdxl-turbo`
* `ByteDance/SDXL-Lightning`

### 2. Choose the Right Scheduler

| Scheduler      | Quality   | Speed  | Best For        |
| -------------- | --------- | ------ | --------------- |
| `euler_a`      | Good      | Fast   | General use     |
| `dpmpp_2m`     | Excellent | Fast   | Photorealistic  |
| `dpmpp_2m_sde` | Excellent | Medium | High detail     |
| `ddim`         | Good      | Fast   | Inpainting      |
| `lms`          | Good      | Fast   | Artistic styles |

### 3. Enable xFormers Memory Optimization

InvokeAI enables this automatically when available. Verify in logs:

```
xFormers is available
```
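
To check from the shell:

```bash
docker logs invokeai 2>&1 | grep -i xformers
```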

### 4. Use Model Caching

Keep your most-used models in the cache. In invokeai.yaml:

```yaml
ram: 32.0  # Larger = more models cached
```

### 5. Tile for Large Resolutions

For images larger than your VRAM allows (e.g., 2048×2048 on a 12 GB GPU):

* Use **Tiled VAE** in the workflow editor
* Or generate at 1024×1024 then upscale with **ESRGAN**

***

## Troubleshooting

### Problem: "CUDA out of memory"

```
RuntimeError: CUDA out of memory
```

**Solutions:**

1. Lower resolution (1024→768 or 512)
2. Reduce batch size to 1
3. Enable lazy offloading in invokeai.yaml
4. Use a smaller model (SD1.5 instead of SDXL)
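
To see how close you are to the limit while a generation is running:

```bash
docker exec invokeai nvidia-smi --query-gpu=memory.used,memory.total --format=csv
```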

### Problem: Web UI not accessible

```bash
# Check if the container is running
docker ps | grep invokeai

# Check logs for errors
docker logs invokeai 2>&1 | tail -50

# Verify port mapping
docker port invokeai
```

Make sure port 9090 is listed in your CLORE.AI order's port configuration.

### Problem: Model download fails inside container

```bash
# Download manually via exec
docker exec -it invokeai bash
cd /invokeai/models/main
wget "https://civitai.com/api/download/models/XXX" -O mymodel.safetensors
```

### Problem: Slow generation (< 1 it/s)

* Check GPU utilization: `docker exec -it invokeai nvidia-smi`
* Make sure xFormers is enabled in logs
* Try `euler_a` scheduler (fastest)

### Problem: Black/broken images

Usually a VAE issue. Try:

1. Model Manager → Edit model → Change VAE to `sdxl-vae-fp16-fix`
2. Or set `precision: float32` in `invokeai.yaml` (slower, but avoids fp16 VAE artifacts)

### Problem: Container won't start

```bash
docker logs invokeai
# Common: port 9090 already in use
# Fix:
docker stop $(docker ps -q --filter "publish=9090")
docker start invokeai
```

***

## Links

* [GitHub](https://github.com/invoke-ai/InvokeAI)
* [Documentation](https://invoke-ai.github.io/InvokeAI/)
* [Docker Hub / GHCR](https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai)
* [CivitAI (Models)](https://civitai.com)
* [CLORE.AI Marketplace](https://clore.ai/marketplace)

***

## Clore.ai GPU Recommendations

| Use Case            | Recommended GPU | Est. Cost on Clore.ai |
| ------------------- | --------------- | --------------------- |
| Development/Testing | RTX 3090 (24GB) | \~$0.12/gpu/hr        |
| Production          | RTX 4090 (24GB) | \~$0.70/gpu/hr        |
| Large Scale         | A100 80GB       | \~$1.20/gpu/hr        |

> 💡 All examples in this guide can be deployed on [Clore.ai](https://clore.ai/marketplace) GPU servers. Browse available GPUs and rent by the hour — no commitments, full root access.

