InvokeAI

Run InvokeAI, a professional Stable Diffusion toolkit with a node-based canvas, on Clore.ai GPUs

InvokeAI is a professional-grade Stable Diffusion toolkit featuring an advanced node-based canvas editor, full SDXL/SD1.5/SD2.x support, ControlNet, IP-Adapter, LoRA management, and a polished web UI. It's designed for artists and creative professionals who need precise control over their image generation workflow. CLORE.AI's high-VRAM GPUs let you run SDXL at full resolution with multiple ControlNets simultaneously.


Server Requirements

| Parameter | Minimum | Recommended |
|-----------|---------|-------------|
| RAM | 12 GB | 32 GB+ |
| VRAM | 6 GB | 12 GB+ |
| Disk | 40 GB | 200 GB+ |
| GPU | NVIDIA GTX 1060 6GB+ | RTX 3090, RTX 4090, A100 |


For SDXL (1024×1024) without compromises, 12 GB VRAM is recommended. For SD1.5 (512×512 or 768×768), 6 GB VRAM is sufficient. More VRAM = higher resolution, faster generation, and more ControlNets simultaneously.

Quick Deploy on CLORE.AI

Docker Image: ghcr.io/invoke-ai/invokeai:latest

Ports: 22/tcp, 9090/http

Environment Variables:

| Variable | Example | Description |
|----------|---------|-------------|
| INVOKEAI_ROOT | /invokeai | Root directory for models and outputs |

Step-by-Step Setup

1. Rent a GPU Server on CLORE.AI

Visit the CLORE.AI Marketplace and look for:

  • Budget creative work: RTX 3080/3090 (10–24 GB VRAM)

  • Professional SDXL: RTX 4090 (24 GB VRAM)

  • Maximum quality: A100 80GB — run multiple models simultaneously

2. SSH into Your Server
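Connect using the SSH command from your order card. The host, port, and user below are placeholders; CLORE.AI shows the exact values in your dashboard:

```shell
# Placeholders — copy the real IP and port from your CLORE.AI order card
ssh -p <SSH_PORT> root@<SERVER_IP>
```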

3. Create the InvokeAI Directory Structure
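A simple layout on the host works well; the directory will be bind-mounted into the container as `/invokeai` in the launch commands below (the exact subfolders are a suggestion — InvokeAI creates what it needs on first run):

```shell
# Host directory that will be mounted at /invokeai inside the container
mkdir -p /root/invokeai/models
mkdir -p /root/invokeai/outputs
mkdir -p /root/invokeai/configs
```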

4. Pull the InvokeAI Docker Image
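Pull the image listed in the Quick Deploy section above:

```shell
docker pull ghcr.io/invoke-ai/invokeai:latest
```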

5. Launch InvokeAI

Basic launch:
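A minimal sketch of the launch command (flags may vary slightly between image versions — check the image's README if startup fails):

```shell
docker run -d --name invokeai \
  --gpus all \
  -p 9090:9090 \
  -v /root/invokeai:/invokeai \
  ghcr.io/invoke-ai/invokeai:latest
```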

With custom root and increased resources:
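The same launch, setting the root directory explicitly via the `INVOKEAI_ROOT` variable from the table above and enlarging shared memory (the `--shm-size` value is a suggestion, useful for data-loader workers):

```shell
docker run -d --name invokeai \
  --gpus all \
  -p 9090:9090 \
  -e INVOKEAI_ROOT=/invokeai \
  -v /root/invokeai:/invokeai \
  --shm-size=8g \
  ghcr.io/invoke-ai/invokeai:latest
```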

With specific GPU (multi-GPU server):
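On a multi-GPU server you can pin the container to one device with Docker's `--gpus` device selector (device `1` here is just an example — pick the index shown by `nvidia-smi`):

```shell
docker run -d --name invokeai \
  --gpus '"device=1"' \
  -p 9090:9090 \
  -v /root/invokeai:/invokeai \
  ghcr.io/invoke-ai/invokeai:latest
```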

6. Wait for Initialization
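First launch can take a few minutes while the container initializes. You can follow progress in the logs (assuming the container was started with `--name invokeai`, matching the `docker exec` commands later in this guide):

```shell
docker logs -f invokeai
```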

Look for: `Uvicorn running on http://0.0.0.0:9090`

7. Access via CLORE.AI HTTP Proxy

Open your CLORE.AI dashboard and find the http_pub URL for port 9090.

This opens the full InvokeAI web interface in your browser.

8. Download Your First Model

In the InvokeAI UI:

  1. Click Model Manager (cube icon in left sidebar)

  2. Click Add Model → HuggingFace

  3. Enter model ID (e.g., stabilityai/stable-diffusion-xl-base-1.0)

  4. Click Add Model

Or download from CivitAI directly:

  1. Go to Model Manager → Add Model → URL

  2. Paste the CivitAI download URL

  3. Set model type (Checkpoint, LoRA, etc.)


Usage Examples

Example 1: Basic Image Generation via Web UI

  1. Open InvokeAI at your CLORE.AI http_pub URL

  2. Click Text to Image in the workflow selector

  3. Enter a prompt: "a majestic dragon perched on a crystal mountain, digital art, 4k"

  4. Set negative prompt: "blurry, low quality, watermark"

  5. Set resolution to 1024×1024 (SDXL) or 512×512 (SD1.5)

  6. Click Invoke

Example 2: Using the Node-Based Canvas

The Workflow editor is InvokeAI's signature feature:

  1. Click Workflows in the top nav

  2. Click New Workflow

  3. Add nodes: Text → Image, connect to Save Image

  4. Add a ControlNet node for guided generation:

    • Right-click → Add Node → ControlNet

    • Connect your reference image

    • Select processor: Canny, Depth, Pose, etc.

  5. Click Invoke to run the full pipeline

Example 3: LoRA Usage

  1. Download a LoRA from CivitAI (via Model Manager → URL import)

  2. In the generation panel, find LoRA section

  3. Click + and select your LoRA

  4. Set weight (0.5–1.0 typical)

  5. Add trigger word to prompt (listed on CivitAI model page)

Example prompt with LoRA trigger:
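For instance, assuming a hypothetical LoRA whose trigger word is `inkstyle` (real trigger words are listed on each model's CivitAI page — substitute yours):

```text
inkstyle, portrait of an old sailor, dramatic lighting, highly detailed
Negative prompt: blurry, low quality, watermark
```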

Example 4: Using IP-Adapter for Style Transfer

  1. Enable IP-Adapter in the generation panel

  2. Upload a reference style image

  3. Set weight (0.5 = subtle influence, 1.0 = strong influence)

  4. Generate with any prompt — output will match the reference style

Example 5: API Usage (Headless)

InvokeAI exposes a REST API for programmatic use:
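A minimal sketch with curl. The version endpoint below is an assumption; the authoritative, version-specific endpoint list is the interactive OpenAPI documentation the server serves at `/docs` on the same port:

```shell
# Replace with your http_pub URL from the CLORE.AI dashboard
BASE=http://localhost:9090

# Assumed endpoint — verify against the live docs at $BASE/docs
curl -s "$BASE/api/v1/app/version"

# Full machine-readable API schema
curl -s "$BASE/openapi.json" | head -c 300
```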


Configuration

invokeai.yaml Configuration File

Located at /root/invokeai/invokeai.yaml:
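A starting point might look like the following. The keys shown are assumptions based on recent InvokeAI releases; compare against the `invokeai.yaml` the container generates on first launch before editing:

```yaml
# Example invokeai.yaml — keys are assumptions, verify against your generated file
host: 0.0.0.0
port: 9090
ram: 12            # CPU RAM model cache size, GB
vram: 8            # GPU VRAM model cache size, GB
lazy_offload: true # move idle models out of VRAM
precision: auto
```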

RTX 3090 / 4090 (24 GB VRAM):
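With 24 GB you can keep models resident and skip offloading (values are suggestions, same assumed keys as above):

```yaml
ram: 24
vram: 20
lazy_offload: false
```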

RTX 3080 (10 GB VRAM):
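A middle ground for 10 GB cards (suggested values):

```yaml
ram: 16
vram: 8
lazy_offload: true
```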

Smaller GPUs (6-8 GB VRAM):
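On 6–8 GB cards, keep the VRAM cache small, offload aggressively, and force half precision (suggested values):

```yaml
ram: 12
vram: 4
lazy_offload: true
precision: float16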


Performance Tips

1. Use SDXL-Turbo or SDXL-Lightning for Fast Generation

Instead of SDXL base (25–50 steps), use:

  • SDXL-Turbo: 1–4 steps, realtime generation

  • SDXL-Lightning: 4–8 steps, near-SDXL quality

Download via Model Manager → HuggingFace:

  • stabilityai/sdxl-turbo

  • ByteDance/SDXL-Lightning

2. Choose the Right Scheduler

| Scheduler | Quality | Speed | Best For |
|-----------|---------|-------|----------|
| euler_a | Good | Fast | General use |
| dpmpp_2m | Excellent | Fast | Photorealistic |
| dpmpp_2m_sde | Excellent | Medium | High detail |
| ddim | Good | Fast | Inpainting |
| lms | Good | Fast | Artistic styles |

3. Enable xFormers Memory Optimization

InvokeAI enables this automatically when available. Verify in logs:
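One way to check is to search the container logs for a mention of xFormers (the exact log wording varies by version):

```shell
docker logs invokeai 2>&1 | grep -i xformers
```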

4. Use Model Caching

Keep your most-used models in the cache. In invokeai.yaml:
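Larger cache values keep recently used models loaded between generations (sizes in GB; the keys are the same assumed ones as in the configuration section — adjust to your hardware):

```yaml
ram: 16    # CPU RAM model cache
vram: 10   # GPU VRAM model cache
```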

5. Tile for Large Resolutions

For images larger than your VRAM allows (e.g., 2048×2048 on 12 GB GPU):

  • Use Tiled VAE in the workflow editor

  • Or generate at 1024×1024 then upscale with ESRGAN


Troubleshooting

Problem: "CUDA out of memory"

Solutions:

  1. Lower resolution (1024→768 or 512)

  2. Reduce batch size to 1

  3. Enable lazy offloading in invokeai.yaml

  4. Use a smaller model (SD1.5 instead of SDXL)

Problem: Web UI not accessible

Make sure port 9090 is listed in your CLORE.AI order's port configuration.

Problem: Model download fails inside container
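A reasonable first check is whether the container can reach the model host at all, and whether the mounted volume has free space (commands assume the container name `invokeai` used throughout this guide):

```shell
# Can the container resolve and reach Hugging Face?
docker exec invokeai curl -sI https://huggingface.co | head -n 1

# Is the models volume out of disk space?
df -h /root/invokeai
```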

Problem: Slow generation (< 1 it/s)

  • Check GPU utilization: docker exec -it invokeai nvidia-smi

  • Make sure xFormers is enabled in logs

  • Try euler_a scheduler (fastest)

Problem: Black/broken images

Usually a VAE issue. Try:

  1. Model Manager → Edit model → Change VAE to sdxl-vae-fp16-fix

  2. Or add the --fp32-vae flag

Problem: Container won't start
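Inspect the container's logs and exit status for the failure reason:

```shell
docker logs invokeai
# If it exited immediately, check the status and exit code:
docker ps -a --filter name=invokeai
```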



Clore.ai GPU Recommendations

| Use Case | Recommended GPU | Est. Cost on Clore.ai |
|----------|-----------------|-----------------------|
| Development/Testing | RTX 3090 (24GB) | ~$0.12/gpu/hr |
| Production | RTX 4090 (24GB) | ~$0.70/gpu/hr |
| Large Scale | A100 80GB | ~$1.20/gpu/hr |

💡 All examples in this guide can be deployed on Clore.ai GPU servers. Browse available GPUs and rent by the hour — no commitments, full root access.
