# ComfyUI

Node-based interface for Stable Diffusion with ultimate flexibility on CLORE.AI GPUs.

{% hint style="success" %}
All examples can be run on GPU servers rented through [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Server Requirements

| Parameter    | Minimum      | Recommended |
| ------------ | ------------ | ----------- |
| RAM          | 16GB         | 32GB+       |
| VRAM         | 8GB (SDXL)   | 12GB+       |
| Network      | 500Mbps      | 1Gbps+      |
| Startup Time | 5-10 minutes | -           |

{% hint style="warning" %}
**Startup Time:** First launch downloads dependencies and models (5-10 minutes depending on network speed). HTTP 502 during this time is normal.
{% endhint %}

{% hint style="danger" %}
**Important:** ComfyUI with FLUX models requires 16GB+ VRAM. For SDXL with ControlNet, ensure at least 10GB VRAM.
{% endhint %}

## Why ComfyUI?

* **Node-based workflow** - Visual programming for image generation
* **Maximum control** - Fine-tune every step of the pipeline
* **Efficient** - Lower VRAM usage than alternatives
* **Extensible** - Huge ecosystem of custom nodes
* **Workflow sharing** - Import/export as JSON

## Quick Deploy on CLORE.AI

**Docker Image:**

```
yanwk/comfyui-boot:cu126-slim
```

**Ports:**

```
22/tcp
8188/http
```

**Environment:**

```
CLI_ARGS=--listen 0.0.0.0
```

### Verify It's Working

After deployment, find your `http_pub` URL in **My Orders**:

```bash
# Check if UI is accessible (may take 5-10 min on first run)
curl https://your-http-pub.clorecloud.net/
```
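If you'd rather script the wait, the sketch below polls ComfyUI's `/system_stats` endpoint until it answers. The `probe` argument is only an illustrative seam so the helper can be exercised without a live server:

```python
import time
import urllib.error
import urllib.request

def wait_for_comfyui(base_url, timeout=900, interval=15, probe=None):
    """Poll ComfyUI's /system_stats endpoint until it responds or timeout expires."""
    if probe is None:
        def probe(url):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.status == 200
            except (urllib.error.URLError, OSError):
                return False  # 502 / refused connections are normal while booting
    deadline = time.time() + timeout
    while time.time() < deadline:
        if probe(f"{base_url}/system_stats"):
            return True
        time.sleep(interval)
    return False
```

Call it as `wait_for_comfyui("https://your-http-pub.clorecloud.net")`; it returns `True` once the UI is reachable.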

{% hint style="info" %}
If you get HTTP 502 for more than 15 minutes, check:

1. Server has 16GB+ RAM
2. Server has 8GB+ VRAM for SDXL, 16GB+ for FLUX
3. Network speed is adequate for downloading models
{% endhint %}


## Accessing Your Service

When deployed on CLORE.AI, access ComfyUI via the `http_pub` URL:

* **Web UI:** `https://your-http-pub.clorecloud.net/`
* **API:** `https://your-http-pub.clorecloud.net/prompt`
* **WebSocket:** `wss://your-http-pub.clorecloud.net/ws`

{% hint style="info" %}
All `localhost:8188` examples below work when connected via SSH. For external access, replace with your `https://your-http-pub.clorecloud.net/` URL.
{% endhint %}

## Installation

### Using Docker (Recommended)

```bash
docker run -d --gpus all \
  -p 8188:8188 \
  -v comfyui-data:/root \
  -e CLI_ARGS="--listen 0.0.0.0" \
  yanwk/comfyui-boot:cu126-slim
```

### Manual Installation

```bash
# Clone repository
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Create virtual environment
python -m venv venv
source venv/bin/activate

# Install PyTorch with CUDA
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124

# Install requirements
pip install -r requirements.txt

# Run
python main.py --listen 0.0.0.0
```

## Directory Structure

```
ComfyUI/
├── models/
│   ├── checkpoints/     # SD models (.safetensors)
│   ├── loras/           # LoRA models
│   ├── vae/             # VAE models
│   ├── controlnet/      # ControlNet models
│   ├── upscale_models/  # Upscalers
│   └── clip/            # CLIP models
├── input/               # Input images
├── output/              # Generated images
└── custom_nodes/        # Extensions
```
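To sanity-check this layout from a script after install, something like the following works (a minimal sketch; `missing_dirs` is an illustrative name, not part of ComfyUI):

```python
import os

# Mirrors the directory tree above
EXPECTED_DIRS = [
    "models/checkpoints", "models/loras", "models/vae",
    "models/controlnet", "models/upscale_models", "models/clip",
    "input", "output", "custom_nodes",
]

def missing_dirs(comfy_root="ComfyUI"):
    """Return the expected ComfyUI subdirectories that don't exist yet."""
    return [d for d in EXPECTED_DIRS
            if not os.path.isdir(os.path.join(comfy_root, d))]
```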

## Download Models

### Stable Diffusion Checkpoints

```bash
cd ComfyUI/models/checkpoints

# SDXL Base
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors

# SDXL Refiner
wget https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors

# SD 1.5 (smaller, faster; the original runwayml repo was removed, use the community mirror)
wget https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors

# Realistic Vision (photorealistic)
wget https://huggingface.co/SG161222/Realistic_Vision_V6.0_B1_noVAE/resolve/main/Realistic_Vision_V6.0_B1_fp16.safetensors
```

### VAE

```bash
cd ComfyUI/models/vae

# SDXL VAE
wget https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors

# SD 1.5 VAE (better colors)
wget https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors
```

### LoRAs

```bash
cd ComfyUI/models/loras

# Download from CivitAI or HuggingFace
# Place .safetensors files here
```
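All the `wget` commands above follow the same Hugging Face `resolve` URL pattern, so downloads into the right folder can be scripted. A sketch (helper names are illustrative, and gated repos would additionally need an `Authorization` header):

```python
import os
import urllib.request

# Folder per model kind, matching the Directory Structure section
MODEL_DIRS = {
    "checkpoint": "models/checkpoints",
    "vae": "models/vae",
    "lora": "models/loras",
    "controlnet": "models/controlnet",
}

def hf_resolve_url(repo_id, filename, revision="main"):
    """Direct-download URL for a file in a Hugging Face repo
    (same pattern as the wget commands above)."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

def download_model(repo_id, filename, kind, comfy_root="ComfyUI"):
    """Fetch a model into the folder ComfyUI expects for its kind."""
    target_dir = os.path.join(comfy_root, MODEL_DIRS[kind])
    os.makedirs(target_dir, exist_ok=True)
    target = os.path.join(target_dir, filename)
    urllib.request.urlretrieve(hf_resolve_url(repo_id, filename), target)
    return target
```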

## Basic Workflow

### Text to Image

1. Add nodes:
   * **Load Checkpoint** → select model
   * **CLIP Text Encode** (x2) → positive & negative prompts
   * **Empty Latent Image** → set dimensions
   * **KSampler** → connect all
   * **VAE Decode** → latent to image
   * **Save Image** → output
2. Connect:

```
[Checkpoint] → MODEL → [KSampler]
[Checkpoint] → CLIP → [CLIP Text Encode +]
[Checkpoint] → CLIP → [CLIP Text Encode -]
[Checkpoint] → VAE → [VAE Decode]
[Text Encode +] → CONDITIONING → [KSampler]
[Text Encode -] → CONDITIONING → [KSampler]
[Empty Latent] → LATENT → [KSampler]
[KSampler] → LATENT → [VAE Decode]
[VAE Decode] → IMAGE → [Save Image]
```
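For scripting, the same wiring can be expressed in ComfyUI's API prompt format, where each node input references `[source_node_id, output_index]`. A minimal sketch (node ids, prompt text, and the checkpoint filename are illustrative):

```python
# CheckpointLoaderSimple outputs: MODEL (0), CLIP (1), VAE (2)
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a cozy cabin in the woods"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "txt2img"}},
}
```

A dict like this is what gets POSTed to `/prompt` in the API Usage section below.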

### Image to Image

Replace **Empty Latent Image** with:

1. **Load Image** → your source image
2. **VAE Encode** → convert to latent
3. Adjust **denoise** in KSampler (0.5-0.8)

## ComfyUI Manager

ComfyUI Manager is an **essential extension** that adds a GUI for installing, updating, and managing custom nodes. It is the standard way to extend ComfyUI.

### Installation

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
# Restart ComfyUI — a "Manager" button will appear in the toolbar
```

### Using ComfyUI Manager

After restart, a **Manager** button appears in the top-right of the ComfyUI interface.

**Key features:**

| Feature               | How to Access                          |
| --------------------- | -------------------------------------- |
| Install custom nodes  | Manager → Install Custom Nodes         |
| Update all nodes      | Manager → Update All                   |
| Disable/enable nodes  | Manager → Custom Nodes Manager         |
| Install missing nodes | Manager → Install Missing Custom Nodes |
| Fetch model info      | Manager → Model Manager                |
| Restore snapshot      | Manager → Snapshot Manager             |

**Workflow: installing a new node pack**

1. Click **Manager** button
2. Select **Install Custom Nodes**
3. Search by name (e.g., "FLUX", "AnimateDiff")
4. Click **Install** on the desired pack
5. Click **Restart** when prompted
6. New nodes appear in the right-click add menu

**Auto-install missing nodes:** When you import a workflow JSON that uses nodes you don't have, Manager detects them and offers to install automatically via **Install Missing Custom Nodes**.

### Keeping Nodes Updated

```bash
# From CLI (alternative to GUI):
cd ComfyUI/custom_nodes/ComfyUI-Manager
git pull

# Or use Manager → Update All in the UI
```

***

## FLUX Workflow in ComfyUI

FLUX uses a different node structure than standard SD models. Below is a complete FLUX.1-dev workflow.

### Required Files

Before running the workflow, download:

```bash
# FLUX model (dev or schnell)
# Note: FLUX.1-dev is a gated repo. Accept the license on Hugging Face first,
# then authenticate, e.g. wget --header="Authorization: Bearer $HF_TOKEN" ...
cd ComfyUI/models/unet
wget https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors

# Text encoders
cd ../clip
wget https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors
wget https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors

# VAE
cd ../vae
wget https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/ae.safetensors
```

### FLUX.1-dev Workflow JSON

Save as `flux_dev_workflow.json` and import via **Load** button in ComfyUI:

```json
{
  "last_node_id": 12,
  "last_link_id": 20,
  "nodes": [
    {
      "id": 1,
      "type": "UNETLoader",
      "pos": [100, 100],
      "size": [300, 60],
      "inputs": [],
      "outputs": [{"name": "MODEL", "type": "MODEL", "links": [1]}],
      "properties": {},
      "widgets_values": ["flux1-dev.safetensors", "default"]
    },
    {
      "id": 2,
      "type": "DualCLIPLoader",
      "pos": [100, 200],
      "size": [350, 80],
      "inputs": [],
      "outputs": [{"name": "CLIP", "type": "CLIP", "links": [2, 3]}],
      "properties": {},
      "widgets_values": ["clip_l.safetensors", "t5xxl_fp16.safetensors", "flux"]
    },
    {
      "id": 3,
      "type": "CLIPTextEncode",
      "pos": [500, 150],
      "size": [425, 180],
      "inputs": [{"name": "clip", "type": "CLIP", "link": 2}],
      "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [4]}],
      "properties": {},
      "widgets_values": ["A stunning photorealistic landscape, golden hour lighting, 8K"]
    },
    {
      "id": 4,
      "type": "EmptySD3LatentImage",
      "pos": [100, 350],
      "size": [300, 100],
      "inputs": [],
      "outputs": [{"name": "LATENT", "type": "LATENT", "links": [5]}],
      "properties": {},
      "widgets_values": [1024, 1024, 1]
    },
    {
      "id": 5,
      "type": "ModelSamplingFlux",
      "pos": [500, 350],
      "size": [300, 80],
      "inputs": [{"name": "model", "type": "MODEL", "link": 1}],
      "outputs": [{"name": "MODEL", "type": "MODEL", "links": [6]}],
      "properties": {},
      "widgets_values": [1.15, 0.5, 1024, 1024]
    },
    {
      "id": 6,
      "type": "KSampler",
      "pos": [850, 250],
      "size": [350, 240],
      "inputs": [
        {"name": "model", "type": "MODEL", "link": 6},
        {"name": "positive", "type": "CONDITIONING", "link": 4},
        {"name": "negative", "type": "CONDITIONING", "link": 7},
        {"name": "latent_image", "type": "LATENT", "link": 5}
      ],
      "outputs": [{"name": "LATENT", "type": "LATENT", "links": [8]}],
      "properties": {},
      "widgets_values": [42, "fixed", 20, 3.5, "euler", "simple", 1.0]
    },
    {
      "id": 7,
      "type": "CLIPTextEncode",
      "pos": [500, 500],
      "size": [300, 60],
      "inputs": [{"name": "clip", "type": "CLIP", "link": 3}],
      "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [7]}],
      "properties": {},
      "widgets_values": [""]
    },
    {
      "id": 8,
      "type": "VAELoader",
      "pos": [100, 500],
      "size": [300, 60],
      "inputs": [],
      "outputs": [{"name": "VAE", "type": "VAE", "links": [9]}],
      "properties": {},
      "widgets_values": ["ae.safetensors"]
    },
    {
      "id": 9,
      "type": "VAEDecode",
      "pos": [1250, 300],
      "size": [210, 46],
      "inputs": [
        {"name": "samples", "type": "LATENT", "link": 8},
        {"name": "vae", "type": "VAE", "link": 9}
      ],
      "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [10]}],
      "properties": {}
    },
    {
      "id": 10,
      "type": "SaveImage",
      "pos": [1500, 300],
      "size": [300, 270],
      "inputs": [{"name": "images", "type": "IMAGE", "link": 10}],
      "outputs": [],
      "properties": {},
      "widgets_values": ["flux_output"]
    }
  ],
  "links": [
    [1, 1, 0, 5, 0, "MODEL"],
    [2, 2, 0, 3, 0, "CLIP"],
    [3, 2, 0, 7, 0, "CLIP"],
    [4, 3, 0, 6, 1, "CONDITIONING"],
    [5, 4, 0, 6, 3, "LATENT"],
    [6, 5, 0, 6, 0, "MODEL"],
    [7, 7, 0, 6, 2, "CONDITIONING"],
    [8, 6, 0, 9, 0, "LATENT"],
    [9, 8, 0, 9, 1, "VAE"],
    [10, 9, 0, 10, 0, "IMAGE"]
  ],
  "groups": [],
  "config": {},
  "extra": {"ds": {"scale": 0.8, "offset": [0, 0]}},
  "version": 0.4
}
```

### FLUX.1-schnell Workflow (4 Steps)

For schnell, change these KSampler widget values in the JSON above:

* `steps`: `4`
* `cfg`: `1.0`
* `scheduler`: `"simple"`
* Model file (UNETLoader): `flux1-schnell.safetensors`

Or set via UI: KSampler → steps: **4**, cfg: **1.0**, sampler: **euler**, scheduler: **simple**
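The same edit can be scripted against the workflow JSON; `convert_to_schnell` is an illustrative helper (KSampler widget order: seed, control, steps, cfg, sampler, scheduler, denoise):

```python
import json

def convert_to_schnell(path_in, path_out):
    """Patch the dev workflow JSON for FLUX.1-schnell:
    4 steps, cfg 1.0, and the schnell model file."""
    with open(path_in) as f:
        wf = json.load(f)
    for node in wf["nodes"]:
        if node["type"] == "KSampler":
            node["widgets_values"][2] = 4    # steps
            node["widgets_values"][3] = 1.0  # cfg
        elif node["type"] == "UNETLoader":
            node["widgets_values"][0] = "flux1-schnell.safetensors"
    with open(path_out, "w") as f:
        json.dump(wf, f, indent=2)
```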

### Key Node Differences: FLUX vs SD

| Node            | SD/SDXL            | FLUX                            |
| --------------- | ------------------ | ------------------------------- |
| Model loader    | Load Checkpoint    | UNETLoader                      |
| Text encoder    | CLIPTextEncode     | DualCLIPLoader + CLIPTextEncode |
| Latent          | Empty Latent Image | EmptySD3LatentImage             |
| Extra           | —                  | ModelSamplingFlux               |
| Negative prompt | Required           | Optional (leave empty)          |

***

## Essential Custom Nodes

### Recommended Node Packs

| Node Pack                | GitHub                                  | Use Case                         |
| ------------------------ | --------------------------------------- | -------------------------------- |
| **ComfyUI-Manager**      | ltdrdata/ComfyUI-Manager                | Install & manage all other nodes |
| **ComfyUI-FLUX**         | XLabs-AI/x-flux-comfyui                 | FLUX ControlNet nodes            |
| **was-node-suite**       | WASasquatch/was-node-suite-comfyui      | 100+ utility nodes               |
| ComfyUI-Impact-Pack      | ltdrdata/ComfyUI-Impact-Pack            | Face detection, SAM, ADetailer   |
| ComfyUI-Inspire-Pack     | ltdrdata/ComfyUI-Inspire-Pack           | Advanced samplers, workflows     |
| ComfyUI-AnimateDiff      | Kosinkadink/ComfyUI-AnimateDiff-Evolved | Video / animation generation     |
| ComfyUI-VideoHelperSuite | Kosinkadink/ComfyUI-VideoHelperSuite    | Video I/O handling               |
| ComfyUI-GGUF             | city96/ComfyUI-GGUF                     | Run quantized GGUF models        |
| ComfyUI-KJNodes          | kijai/ComfyUI-KJNodes                   | Utility & mask nodes             |
| rgthree-comfy            | rgthree/rgthree-comfy                   | Workflow helpers, better UI      |

### ComfyUI-FLUX (XLabs-AI)

Adds ControlNet support for FLUX inside ComfyUI:

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/XLabs-AI/x-flux-comfyui
cd x-flux-comfyui
pip install -r requirements.txt
```

Adds nodes: `Apply ControlNet (FLUX)`, `Load ControlNet Model (FLUX)`, `XFlux Sampler`

### was-node-suite

Over 100 utility nodes for advanced workflows:

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/WASasquatch/was-node-suite-comfyui
cd was-node-suite-comfyui
pip install -r requirements.txt
```

Key nodes: Image Batch, Text Operations, Image Analyze, Cache Node, Bus Node, Upscale, Mask operations

### Install via Manager

1. Click **Manager** button
2. **Install Custom Nodes**
3. Search and install
4. Restart ComfyUI

## Advanced Workflows

### ControlNet

```bash
# Download ControlNet models
cd ComfyUI/models/controlnet

# Canny
wget https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny.pth

# Depth
wget https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth.pth

# OpenPose
wget https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose.pth
```

Workflow:

1. Load Image → Canny Edge Detector
2. Apply ControlNet → KSampler
3. Generate with pose/edge guidance

### Upscaling

```bash
# Download upscaler
cd ComfyUI/models/upscale_models
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
```

Workflow:

1. Generate image at lower res (768x768)
2. Upscale Image (Model) node
3. Optional: img2img pass for detail

### SDXL + Refiner

1. Generate with SDXL base (steps 1-20)
2. Pass latent to SDXL refiner (steps 21-30)
3. VAE Decode final result
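This hand-off is typically built with two **KSamplerAdvanced** nodes. A sketch of just the sampler inputs in API prompt format (node ids are placeholders): the base covers steps 0-20 and returns leftover noise, the refiner resumes at step 20 without adding new noise.

```python
# Illustrative fragment: only the two sampler nodes of a base+refiner graph
base_sampler = {
    "class_type": "KSamplerAdvanced",
    "inputs": {
        "model": ["base_ckpt", 0], "positive": ["pos", 0], "negative": ["neg", 0],
        "latent_image": ["latent", 0],
        "add_noise": "enable", "noise_seed": 42, "steps": 30, "cfg": 7.0,
        "sampler_name": "euler", "scheduler": "normal",
        "start_at_step": 0, "end_at_step": 20,
        "return_with_leftover_noise": "enable",  # hand unfinished latent to refiner
    },
}
refiner_sampler = {
    "class_type": "KSamplerAdvanced",
    "inputs": {
        "model": ["refiner_ckpt", 0], "positive": ["pos_r", 0], "negative": ["neg_r", 0],
        "latent_image": ["base_sampler", 0],  # continue from the base latent
        "add_noise": "disable", "noise_seed": 42, "steps": 30, "cfg": 7.0,
        "sampler_name": "euler", "scheduler": "normal",
        "start_at_step": 20, "end_at_step": 30,
        "return_with_leftover_noise": "disable",
    },
}
```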

## Keyboard Shortcuts

| Key                | Action               |
| ------------------ | -------------------- |
| `Ctrl+Enter`       | Queue prompt         |
| `Ctrl+Shift+Enter` | Queue prompt (front) |
| `Ctrl+Z`           | Undo                 |
| `Ctrl+Y`           | Redo                 |
| `Ctrl+S`           | Save workflow        |
| `Ctrl+O`           | Load workflow        |
| `Ctrl+A`           | Select all           |
| `Delete`           | Delete selected      |
| `Ctrl+M`           | Mute node            |
| `Ctrl+B`           | Bypass node          |

## API Usage

### Queue Prompt

```python
import json
import urllib.request

# For external access, use your http_pub URL:
SERVER = "your-http-pub.clorecloud.net"
# Or via SSH tunnel: SERVER = "localhost:8188" (then use http:// instead of https://)

def queue_prompt(prompt, server=SERVER):
    data = json.dumps({"prompt": prompt}).encode('utf-8')
    req = urllib.request.Request(
        f"https://{server}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # contains the "prompt_id"

# Load a workflow exported in API format ("Save (API Format)" button,
# available after enabling Dev mode options in settings) and queue it
with open("workflow.json") as f:
    workflow = json.load(f)
result = queue_prompt(workflow)
print(result["prompt_id"])
```
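Once a prompt finishes, ComfyUI's `/history/<prompt_id>` endpoint reports the output files. A small parsing sketch (assuming the standard history JSON shape; `extract_outputs` is an illustrative name):

```python
def extract_outputs(history, prompt_id):
    """Pull (filename, subfolder, type) triples from ComfyUI's
    /history/<prompt_id> response for a finished prompt."""
    images = []
    for node_id, node_out in history[prompt_id]["outputs"].items():
        for img in node_out.get("images", []):
            images.append((img["filename"], img["subfolder"], img["type"]))
    return images

# Each file is then downloadable from:
#   https://<server>/view?filename=<f>&subfolder=<s>&type=<t>
```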

### WebSocket for Progress

```python
import json
import websocket  # pip install websocket-client

# For external access, use wss:// with your http_pub URL
SERVER = "your-http-pub.clorecloud.net"  # or "localhost:8188" with ws://

ws = websocket.WebSocket()
ws.connect(f"wss://{SERVER}/ws")

while True:
    msg = json.loads(ws.recv())
    if msg['type'] == 'progress':
        print(f"Step {msg['data']['value']}/{msg['data']['max']}")
    elif msg['type'] == 'executed':
        print("Done!")
        break
```

## Performance Tips

1. **Enable --lowvram** for <8GB VRAM
2. **Use fp16** models when possible
3. **Batch size 1** for limited VRAM
4. **Tiled VAE** for high-res images
5. **Disable preview** for faster generation

## GPU Requirements

| Model             | Minimum VRAM | Recommended VRAM | Min RAM |
| ----------------- | ------------ | ---------------- | ------- |
| SD 1.5            | 4GB          | 8GB              | 16GB    |
| SDXL              | 8GB          | 12GB             | 16GB    |
| SDXL + ControlNet | 10GB         | 16GB             | 16GB    |
| FLUX              | 16GB         | 24GB             | 32GB    |

## GPU Presets

### RTX 3060 12GB (Budget)

```bash
# Launch with optimizations
python main.py --lowvram --force-fp16

# Recommended settings:
# - SDXL: 768x768, batch 1
# - SD 1.5: 512x512, batch 4
# - Use VAE tiling
# - 20-30 steps
```

**Best for:** SD 1.5, SDXL (with limits)

### RTX 3090 24GB (Optimal)

```bash
# Standard launch
python main.py --force-fp16

# Recommended settings:
# - SDXL: 1024x1024, batch 2
# - FLUX schnell: 1024x1024, batch 1
# - ControlNet + SDXL works well
# - 30-50 steps
```

**Best for:** SDXL, ControlNet workflows, moderate FLUX

### RTX 4090 24GB (Performance)

```bash
# Full speed launch
python main.py

# Recommended settings:
# - SDXL: 1024x1024, batch 4
# - FLUX dev: 1024x1024, batch 1-2
# - Complex workflows with multiple ControlNets
# - 50+ steps for quality
```

**Best for:** FLUX, complex workflows, batch generation

### A100 40GB/80GB (Production)

```bash
# Maximum performance
python main.py --highvram

# Recommended settings:
# - SDXL: 1024x1024, batch 8+
# - FLUX: 1024x1024, batch 2-4
# - Multiple models loaded simultaneously
# - High-res outputs (2048x2048)
```

**Best for:** Production workloads, FLUX, high-res generation

## Cost Estimate

Typical CLORE.AI marketplace rates:

| GPU      | VRAM | Price/day  | SDXL Speed   |
| -------- | ---- | ---------- | ------------ |
| RTX 3060 | 12GB | $0.15–0.30 | \~15 sec/img |
| RTX 3090 | 24GB | $0.30–1.00 | \~8 sec/img  |
| RTX 4090 | 24GB | $0.50–2.00 | \~4 sec/img  |
| A100     | 40GB | $1.50–3.00 | \~3 sec/img  |

*Prices in USD/day. Rates vary by provider — check* [*CLORE.AI Marketplace*](https://clore.ai/marketplace) *for current rates.*
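As a back-of-envelope check on what these rates imply per image (a sketch assuming full utilization; real throughput varies with resolution and steps):

```python
SECONDS_PER_DAY = 86_400

def cost_per_1000_images(price_per_day, sec_per_image, utilization=1.0):
    """Rough $/1000 images from a day rate and per-image generation time."""
    images_per_day = SECONDS_PER_DAY * utilization / sec_per_image
    return 1000 * price_per_day / images_per_day

# e.g. RTX 3090 at $0.50/day and ~8 s/img:
# 86400 / 8 = 10800 images/day, so roughly $0.046 per 1000 images
```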

## Troubleshooting

### HTTP 502 for a long time

1. **Check RAM:** Server must have 16GB+ RAM
2. **Check VRAM:** 8GB+ for SDXL, 16GB+ for FLUX
3. **Dependencies downloading:** First run takes 5-10 min
4. **Model downloading:** Large models take longer

### Out of Memory

```bash
# Run with low VRAM mode
python main.py --lowvram

# Or force fp16
python main.py --force-fp16
```

### Black Images

* Check VAE is loaded
* Try different VAE
* Reduce image size

### Slow Generation

* Confirm the GPU is actually in use (`nvidia-smi` should show the Python process)
* Use fp16 models
* Reduce steps (20-30 is often enough)

## Workflow Examples

Import these JSON workflows in ComfyUI:

* [Basic txt2img](https://comfyworkflows.com)
* [SDXL + Refiner](https://comfyworkflows.com)
* [ControlNet Canny](https://comfyworkflows.com)
* [AnimateDiff Video](https://comfyworkflows.com)

## Next Steps

* [ControlNet Guide](https://docs.clore.ai/guides/image-processing/controlnet-advanced)
* [Real-ESRGAN Upscaling](https://docs.clore.ai/guides/image-processing/real-esrgan-upscaling)
* [Kohya Training](https://docs.clore.ai/guides/training/kohya-training) - Train custom LoRAs
* [Fooocus](https://docs.clore.ai/guides/image-generation/fooocus-simple-sd) - Simpler alternative
