# Stable Diffusion WebUI

AUTOMATIC1111's Stable Diffusion WebUI is the most popular web interface for Stable Diffusion. This guide covers deploying it on CLORE.AI GPUs.

{% hint style="success" %}
All examples can be run on GPU servers rented through [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Server Requirements

| Parameter    | Minimum       | Recommended |
| ------------ | ------------- | ----------- |
| RAM          | 16GB          | 32GB+       |
| VRAM         | 8GB           | 12GB+       |
| Network      | 500Mbps       | 1Gbps+      |
| Startup Time | 10-20 minutes | -           |

{% hint style="warning" %}
**Startup Time:** First launch installs Python dependencies and downloads the base model (10-20 minutes depending on network speed). HTTP 502 during this time is normal.
{% endhint %}

## Why SD WebUI?

* **Feature-rich** - txt2img, img2img, inpainting, outpainting
* **Extensions** - Huge ecosystem of plugins
* **User-friendly** - Intuitive web interface
* **Well documented** - Large community support

> 📚 See also: [How to Run Stable Diffusion on a Cloud GPU](https://blog.clore.ai/how-to-run-stable-diffusion-cloud-gpu/)

## Quick Deploy on CLORE.AI

**Docker Image:**

```
universonic/stable-diffusion-webui:latest
```

**Ports:**

```
22/tcp
8080/http
```

**Command:**

```bash
./webui.sh --listen --xformers
```

### Verify It's Working

After deployment, find your `http_pub` URL in **My Orders**:

```bash
# Check if UI is accessible (may take 10-20 min on first run)
curl https://your-http-pub.clorecloud.net/
```

{% hint style="info" %}
If you get HTTP 502 for more than 20 minutes, check:

1. Server has 16GB+ RAM
2. Server has 8GB+ VRAM
3. Network speed is adequate for downloading dependencies
{% endhint %}
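If you prefer to script the wait, the following minimal sketch polls the URL until it stops returning 502. The hostname is a placeholder; substitute the `http_pub` URL from **My Orders**:

```python
import time
import requests

# Replace with your http_pub URL from My Orders
URL = "https://your-http-pub.clorecloud.net/"

# Poll every 30 seconds for up to 30 minutes
for attempt in range(60):
    try:
        status = requests.get(URL, timeout=10).status_code
    except requests.RequestException:
        status = None  # connection refused or timeout while the stack boots
    if status == 200:
        print("SD WebUI is up")
        break
    print(f"Attempt {attempt + 1}: not ready yet (status {status}), retrying in 30s...")
    time.sleep(30)
```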

## Accessing Your Service

When deployed on CLORE.AI, access SD WebUI via the `http_pub` URL:

* **Web UI:** `https://your-http-pub.clorecloud.net/`
* **API (if enabled):** `https://your-http-pub.clorecloud.net/sdapi/v1/`

{% hint style="info" %}
All `localhost:7860` examples below work when connected via SSH. For external access, replace with your `https://your-http-pub.clorecloud.net/` URL.
{% endhint %}

## Installation

### Using Docker (Recommended)

```bash
docker run -d --gpus all \
    -p 8080:8080 \
    -v sd-webui-data:/app/stable-diffusion-webui \
    universonic/stable-diffusion-webui:latest
```

### Manual Installation

```bash
# Install dependencies
sudo apt update
sudo apt install -y python3.10 python3.10-venv git wget

# Clone repository
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

# Run (auto-installs everything)
./webui.sh --listen --xformers
```

## Directory Structure

```
stable-diffusion-webui/
├── models/
│   ├── Stable-diffusion/   # Main models (.safetensors)
│   ├── Lora/               # LoRA models
│   ├── VAE/                # VAE models
│   ├── ControlNet/         # ControlNet models
│   └── ESRGAN/             # Upscalers
├── embeddings/             # Textual inversions
├── extensions/             # Installed extensions
├── outputs/                # Generated images
└── scripts/                # Custom scripts
```
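After dropping files into these folders, a quick sanity check can confirm what the UI will see on its next restart or checkpoint refresh. A minimal sketch, assuming you run it from the directory that contains `stable-diffusion-webui/`:

```python
from pathlib import Path

# List the model files the WebUI will pick up
root = Path("stable-diffusion-webui")
for folder in ("models/Stable-diffusion", "models/Lora", "models/VAE"):
    files = sorted(p.name for p in (root / folder).glob("*.safetensors"))
    print(f"{folder}: {files or 'empty'}")
```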

## Download Models

### Checkpoints

```bash
cd models/Stable-diffusion

# SD 1.5
wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors

# SDXL
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors

# Realistic Vision (Photorealistic)
wget "https://civitai.com/api/download/models/245598" -O realisticVision_v60B1.safetensors

# DreamShaper (Artistic)
wget "https://civitai.com/api/download/models/351306" -O dreamshaper_8.safetensors
```

### VAE (Better Colors)

```bash
cd models/VAE

# SD 1.5 VAE
wget https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors

# SDXL VAE
wget https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors
```
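Downloads can also be scripted with `huggingface_hub` (install with `pip install huggingface_hub`). A hedged sketch that mirrors two of the `wget` commands above; verify the repository and file names are still available before relying on it:

```python
from huggingface_hub import hf_hub_download

# Fetch the SD 1.5 checkpoint and the ft-MSE VAE into the WebUI model folders
hf_hub_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    filename="v1-5-pruned.safetensors",
    local_dir="models/Stable-diffusion",
)
hf_hub_download(
    repo_id="stabilityai/sd-vae-ft-mse-original",
    filename="vae-ft-mse-840000-ema-pruned.safetensors",
    local_dir="models/VAE",
)
```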

## Basic Usage

### txt2img (Text to Image)

1. Select model from dropdown
2. Enter positive prompt: what you want
3. Enter negative prompt: what to avoid
4. Set dimensions (512x512 for SD1.5, 1024x1024 for SDXL)
5. Click Generate

### img2img (Image to Image)

1. Go to img2img tab
2. Upload source image
3. Enter prompt describing desired changes
4. Adjust **Denoising strength** (0.3-0.8)
5. Generate

### Inpainting

1. Go to img2img → Inpaint
2. Upload image
3. Draw mask over area to change
4. Enter prompt for masked area
5. Generate

## Essential Settings

### Generation Settings

| Setting   | SD 1.5          | SDXL            |
| --------- | --------------- | --------------- |
| Width     | 512             | 1024            |
| Height    | 512             | 1024            |
| Steps     | 20-30           | 20-40           |
| CFG Scale | 7               | 5-7             |
| Sampler   | DPM++ 2M Karras | DPM++ 2M Karras |
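These defaults can be encoded once and reused for API calls. A rough sketch, assuming the `--api` flag and the `/sdapi/v1/txt2img` endpoint described in the API Usage section below; on newer WebUI builds the sampler and scheduler are split, so `"DPM++ 2M"` plus a `"scheduler": "Karras"` field may be required instead:

```python
# Baseline generation settings per model family, taken from the table above
PRESETS = {
    "sd15": {"width": 512, "height": 512, "steps": 25, "cfg_scale": 7,
             "sampler_name": "DPM++ 2M Karras"},
    "sdxl": {"width": 1024, "height": 1024, "steps": 30, "cfg_scale": 6,
             "sampler_name": "DPM++ 2M Karras"},
}

def txt2img_payload(prompt, family="sd15", **overrides):
    """Build a /sdapi/v1/txt2img request body with these defaults."""
    body = {"prompt": prompt, **PRESETS[family]}
    body.update(overrides)
    return body

# Example: SDXL defaults, but with more steps
# txt2img_payload("a castle at dawn", family="sdxl", steps=40)
```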

### Command Line Arguments

```bash
# --listen                             Allow external access
# --port 7860                          Port number
# --xformers                           Memory optimization
# --enable-insecure-extension-access   Allow extensions
# --api                                Enable API
# --no-half-vae                        Fix black images

./webui.sh --listen --port 7860 --xformers \
    --enable-insecure-extension-access --api --no-half-vae
```

### For Low VRAM

```bash
# 6-8GB VRAM
./webui.sh --listen --medvram --xformers

# 4GB VRAM
./webui.sh --listen --lowvram --xformers
```

## Popular Extensions

### Must Have

| Extension                 | Purpose              |
| ------------------------- | -------------------- |
| ControlNet                | Guided generation    |
| ADetailer                 | Auto face/hand fix   |
| Ultimate SD Upscale       | Better upscaling     |
| sd-webui-segment-anything | Segmentation         |
| Regional Prompter         | Multi-region prompts |

### Install Extensions

1. Go to **Extensions** tab
2. Click **Available**
3. Click **Load from**
4. Search and Install
5. Apply and Restart UI

Or manually:

```bash
cd extensions
git clone https://github.com/Mikubill/sd-webui-controlnet.git
```

## ControlNet

### Installation

```bash
# Install extension
cd extensions
git clone https://github.com/Mikubill/sd-webui-controlnet.git

# Download models
cd ../models/ControlNet
wget https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny.pth
wget https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose.pth
wget https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth.pth
```

### Usage

1. Expand ControlNet section
2. Upload control image
3. Select preprocessor (Canny, OpenPose, etc.)
4. Select model matching preprocessor
5. Generate
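ControlNet can also be driven through the txt2img API via the extension's `alwayson_scripts` hook. A hedged sketch; argument names vary between extension versions (newer builds expect `"image"` instead of `"input_image"`), so check the Swagger UI at `/docs` on your instance before relying on them:

```python
import base64
import requests

API_URL = "https://your-http-pub.clorecloud.net"  # or http://localhost:7860 over SSH

with open("pose_reference.png", "rb") as f:
    control_image = base64.b64encode(f.read()).decode()

response = requests.post(
    f"{API_URL}/sdapi/v1/txt2img",
    json={
        "prompt": "a dancer on stage, dramatic lighting",
        "steps": 25,
        "width": 512,
        "height": 512,
        # The ControlNet extension hooks into generation via alwayson_scripts
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": control_image,
                    "module": "openpose",
                    "model": "control_v11p_sd15_openpose",
                    "weight": 1.0,
                }]
            }
        },
    },
)
with open("controlnet_result.png", "wb") as f:
    f.write(base64.b64decode(response.json()["images"][0]))
```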

## API Usage

Enable with `--api` flag, then:

```python
import requests
import base64

# For external access, use your http_pub URL:
API_URL = "https://your-http-pub.clorecloud.net"
# Or via SSH: API_URL = "http://localhost:7860"

def txt2img(prompt, negative="", steps=20, width=512, height=512):
    response = requests.post(
        f"{API_URL}/sdapi/v1/txt2img",
        json={
            "prompt": prompt,
            "negative_prompt": negative,
            "steps": steps,
            "width": width,
            "height": height,
        }
    )
    return base64.b64decode(response.json()["images"][0])

# Generate and save
image_data = txt2img("A beautiful sunset over mountains")
with open("output.png", "wb") as f:
    f.write(image_data)
```

### img2img API

```python
import base64
import requests

def img2img(prompt, image_path, denoising=0.5):
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = requests.post(
        f"{API_URL}/sdapi/v1/img2img",
        json={
            "prompt": prompt,
            "init_images": [image_b64],
            "denoising_strength": denoising,
            "steps": 30,
        }
    )
    return base64.b64decode(response.json()["images"][0])
```
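Inpainting uses the same `/sdapi/v1/img2img` endpoint with a base64-encoded mask added (white areas are regenerated, black areas are kept). A hedged sketch reusing `API_URL` from above; field names may differ slightly between WebUI versions:

```python
def inpaint(prompt, image_path, mask_path, denoising=0.75):
    def encode(path):
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode()

    response = requests.post(
        f"{API_URL}/sdapi/v1/img2img",
        json={
            "prompt": prompt,
            "init_images": [encode(image_path)],
            "mask": encode(mask_path),          # white = regenerate, black = keep
            "denoising_strength": denoising,
            "inpainting_fill": 1,               # start from the original pixels
            "inpaint_full_res": True,           # work at full resolution around the mask
            "steps": 30,
        },
    )
    return base64.b64decode(response.json()["images"][0])
```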

## Prompt Writing

### Basic Structure

```
[subject], [style], [details], [lighting], [quality tags]
```

### Example Prompts

```
# Portrait
portrait of a young woman, professional photography,
soft lighting, shallow depth of field, 8k uhd,
highly detailed, photorealistic

# Landscape
majestic mountain landscape at sunset, dramatic clouds,
golden hour lighting, national geographic style,
8k wallpaper, highly detailed

# Anime
1girl, silver hair, blue eyes, school uniform,
cherry blossoms, spring, masterpiece, best quality
```

### Negative Prompts

```
# General
lowres, bad anatomy, bad hands, text, error,
missing fingers, cropped, worst quality, low quality,
jpeg artifacts, signature, watermark, blurry

# For photorealistic
cartoon, anime, illustration, painting, drawing,
3d render, cgi
```

## Performance Tips

1. **Enable xFormers** - Significant speed boost
2. **Use VAE** - Better colors
3. **Batch size 1** - For limited VRAM
4. **Hires Fix** - Generate small, then upscale (see the API sketch below)
5. **ADetailer** - Auto-fix faces
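Hires fix is also exposed through the txt2img API. A minimal sketch, reusing `requests` and `API_URL` from the API Usage section above; the parameter values here are illustrative:

```python
# Hires fix over the API: generate at base resolution, then upscale in the same call
response = requests.post(
    f"{API_URL}/sdapi/v1/txt2img",
    json={
        "prompt": "majestic mountain landscape at sunset, golden hour lighting",
        "width": 512,
        "height": 512,
        "steps": 25,
        "enable_hr": True,          # turn on hires fix
        "hr_scale": 2,              # 512x512 -> 1024x1024
        "hr_upscaler": "Latent",    # or an upscaler installed under models/ESRGAN
        "denoising_strength": 0.5,  # second-pass denoising
    },
)
```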

## GPU Requirements

| Model           | Minimum VRAM | Recommended VRAM | Min RAM |
| --------------- | ------------ | ---------------- | ------- |
| SD 1.5          | 4GB          | 8GB              | 16GB    |
| SD 2.1          | 6GB          | 8GB              | 16GB    |
| SDXL            | 8GB          | 12GB             | 16GB    |
| With ControlNet | +2GB         | +4GB             | 16GB    |

## GPU Presets

### RTX 3060 12GB (Budget)

```bash
# Launch command
./webui.sh --medvram --xformers

# Recommended settings:
# - SD 1.5: 512x512, batch 4, 20-30 steps
# - SDXL: 768x768, batch 1, 20 steps
# - Enable VAE tiling in Settings
# - Use ADetailer for faces
```

**Best models:** SD 1.5, DreamShaper, RealisticVision

### RTX 3090 24GB (Optimal)

```bash
# Launch command
./webui.sh --xformers

# Recommended settings:
# - SD 1.5: 512x512, batch 8, 30 steps
# - SDXL: 1024x1024, batch 2, 30 steps
# - ControlNet + SD 1.5 works great
# - Hires fix to 2x works
```

**Best models:** SDXL, Juggernaut, RealVisXL

### RTX 4090 24GB (Performance)

```bash
# Launch command
./webui.sh --xformers --opt-sdp-attention

# Recommended settings:
# - SD 1.5: 512x512, batch 16, 30 steps
# - SDXL: 1024x1024, batch 4, 40 steps
# - Multiple ControlNets simultaneously
# - Hires fix to 4x
```

**Best models:** SDXL, Pony Diffusion, Any high-res model

### A100 40GB/80GB (Production)

```bash
# Launch command
./webui.sh --opt-sdp-attention --no-half-vae

# Recommended settings:
# - SDXL: 1024x1024, batch 8+, 50 steps
# - Multiple ControlNet + IP-Adapter
# - Hires fix to 4096x4096
# - AnimateDiff for video
```

**Best for:** Batch generation, complex workflows, video
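For batch generation over the API, the txt2img endpoint's `batch_size` and `n_iter` fields produce several images per request. A minimal sketch, reusing `requests`, `base64`, and `API_URL` from the API Usage section above; prompts and filenames are placeholders:

```python
prompts = [
    "product photo of a ceramic mug, studio lighting",
    "product photo of a leather wallet, studio lighting",
]

for i, prompt in enumerate(prompts):
    response = requests.post(
        f"{API_URL}/sdapi/v1/txt2img",
        json={
            "prompt": prompt,
            "width": 1024,
            "height": 1024,
            "steps": 30,
            "batch_size": 4,   # images generated in parallel (VRAM-bound)
            "n_iter": 2,       # sequential batches per prompt, 8 images total
        },
    )
    for j, img in enumerate(response.json()["images"]):
        with open(f"batch_{i}_{j}.png", "wb") as f:
            f.write(base64.b64decode(img))
```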

## Cost Estimate

Typical CLORE.AI marketplace rates:

| GPU      | VRAM | Price/day  | SD1.5 Speed | SDXL Speed |
| -------- | ---- | ---------- | ----------- | ---------- |
| RTX 3060 | 12GB | $0.15–0.30 | \~4 sec     | \~12 sec   |
| RTX 3090 | 24GB | $0.30–1.00 | \~2 sec     | \~6 sec    |
| RTX 4090 | 24GB | $0.50–2.00 | \~1 sec     | \~3 sec    |
| A100     | 40GB | $1.50–3.00 | \~0.5 sec   | \~2 sec    |

*Prices in USD/day. Rates vary by provider — check* [*CLORE.AI Marketplace*](https://clore.ai/marketplace) *for current rates.*

## Troubleshooting

### HTTP 502 for a long time

1. **Check RAM:** Server must have 16GB+ RAM
2. **Check VRAM:** Must have 8GB+ for SDXL
3. **Dependencies installing:** First run takes 10-20 min
4. **Slow network:** Low bandwidth causes longer startup

### Black Images

```bash
# Add to launch args
--no-half-vae
```

### Out of Memory

```bash
# Use optimization
--medvram
--xformers
```

### Extensions Not Loading

```bash
--enable-insecure-extension-access
```

### Slow Generation

1. Enable xFormers
2. Reduce image size
3. Lower steps (20 is often enough)

## Next Steps

* [ComfyUI](https://docs.clore.ai/guides/image-generation/comfyui) - More advanced workflows
* [FLUX.1](https://docs.clore.ai/guides/image-generation/flux) - Latest model
* [ControlNet](https://docs.clore.ai/guides/image-processing/controlnet-advanced) - Guided generation
* [LoRA Training](https://docs.clore.ai/guides/training/kohya-training) - Custom models


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.clore.ai/guides/image-generation/stable-diffusion-webui.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
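A minimal sketch of such a request in Python; the question text is just an example:

```python
import requests

# Ask this page a question; the response contains an answer plus source excerpts
PAGE = "https://docs.clore.ai/guides/image-generation/stable-diffusion-webui.md"
params = {"ask": "Which launch flags are recommended for a GPU with 6GB of VRAM?"}
print(requests.get(PAGE, params=params).text)
```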
