# Fooocus

Generate images with Fooocus - the easiest way to use Stable Diffusion.

{% hint style="success" %}
All examples can be run on GPU servers rented through [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Renting on CLORE.AI

1. Visit [CLORE.AI Marketplace](https://clore.ai/marketplace)
2. Filter by GPU type, VRAM, and price
3. Choose **On-Demand** (fixed rate) or **Spot** (bid price)
4. Configure your order:
   * Select Docker image
   * Set ports (TCP for SSH, HTTP for web UIs)
   * Add environment variables if needed
   * Enter startup command
5. Select payment: **CLORE**, **BTC**, or **USDT/USDC**
6. Create order and wait for deployment

### Access Your Server

* Find connection details in **My Orders**
* Web interfaces: Use the HTTP port URL
* SSH: `ssh -p <port> root@<proxy-address>`

## What is Fooocus?

Fooocus is a streamlined Stable Diffusion interface that:

* Requires zero configuration
* Uses SDXL by default
* Has built-in styles and presets
* Handles all optimizations automatically

## Requirements

| Quality      | Min VRAM | Recommended   |
| ------------ | -------- | ------------- |
| Basic        | 4GB      | RTX 3060      |
| Standard     | 8GB      | RTX 3070      |
| High Quality | 12GB+    | RTX 3090/4090 |

## Quick Deploy

**Docker Image:**

```
pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel
```

**Ports:**

```
22/tcp
7865/http
```

**Command:**

```bash
apt-get update && apt-get install -y git libgl1 libglib2.0-0 && \
cd /workspace && \
git clone https://github.com/lllyasviel/Fooocus.git && \
cd Fooocus && \
pip install -r requirements_versions.txt && \
python launch.py --listen 0.0.0.0 --port 7865
```

## Accessing Your Service

After deployment, find your `http_pub` URL in **My Orders**:

1. Go to **My Orders** page
2. Click on your order
3. Find the `http_pub` URL (e.g., `abc123.clorecloud.net`)

Use `https://YOUR_HTTP_PUB_URL` instead of `localhost` in examples below.

## First Launch

On first run, Fooocus automatically downloads:

* SDXL base model (\~6.5GB)
* SDXL refiner (\~6GB)
* Required embeddings

Depending on network speed, this initial download takes roughly 10-15 minutes.

## Using Fooocus

### Basic Generation

1. Open your `http_pub` URL from **My Orders** (or `http://<proxy>:<port>`)
2. Enter your prompt
3. Click "Generate"

That's it! No settings needed.

### Styles

Fooocus includes 200+ built-in styles:

**Popular Styles:**

* Fooocus Enhance - Better details
* Fooocus Sharp - Crisp edges
* Cinematic - Movie look
* Anime - Japanese animation
* Photographic - Realistic photos

### Quality Presets

| Preset        | Speed   | Quality |
| ------------- | ------- | ------- |
| Speed         | Fast    | Good    |
| Quality       | Medium  | Great   |
| Extreme Speed | Fastest | Basic   |

## Advanced Features

### Enable Advanced Mode

Click "Advanced" checkbox to access:

* Negative prompts
* Aspect ratios
* Image count
* Random seed control

### Image-to-Image

1. Enable "Input Image" tab
2. Upload source image
3. Choose mode:
   * **Upscale** - Enhance resolution
   * **Vary** - Create variations
   * **Inpaint** - Edit parts

### Inpainting

1. Upload an image
2. Click "Inpaint or Outpaint"
3. Draw a mask over the areas to change
4. Describe what to generate
5. Click Generate

### Outpainting

Extend images beyond borders:

1. Upload image
2. Select "Inpaint or Outpaint"
3. Check direction boxes (Left, Right, Top, Bottom)
4. Generate to extend canvas

## Using LoRAs

### Download LoRAs

```bash
cd /workspace/Fooocus/models/loras

# Some Civitai downloads require an API token (append ?token=<your_token>)
wget "https://civitai.com/api/download/models/<model_id>" -O my_lora.safetensors
```
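A failed download often leaves an HTML error page saved under a `.safetensors` name. As a quick sanity check: the safetensors format begins with an 8-byte little-endian header length followed by a JSON header, so tensor names can be listed without loading any weights (a minimal sketch):

```python
import json
import struct

def list_safetensors_tensors(path: str) -> list[str]:
    """Return tensor names from a .safetensors file by parsing its JSON header.
    Raises on truncated/corrupt files (e.g. an HTML error page saved by wget)."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return [name for name in header if name != "__metadata__"]
```

If this raises `json.JSONDecodeError` (or `struct.error` on a tiny file), re-download the LoRA.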

### Apply LoRA

1. Go to "Model" tab
2. Select LoRA in dropdown
3. Adjust weight (0.5-1.0)

## Custom Models

### Add Custom Checkpoints

```bash
cd /workspace/Fooocus/models/checkpoints

# Download a model (Hugging Face direct links use the /resolve/<branch>/ path)
wget https://huggingface.co/<user>/<repo>/resolve/main/<file>.safetensors
```

Refresh UI or restart to see new models.

### Recommended Models

| Model          | Style          | Size  |
| -------------- | -------------- | ----- |
| Juggernaut XL  | Photorealistic | 6.5GB |
| DreamShaper XL | Artistic       | 6.5GB |
| RealVisXL      | Realistic      | 6.5GB |
| Animagine XL   | Anime          | 6.5GB |

## Face Swap

Built-in face swap feature:

1. Enable "Image Prompt" tab
2. Upload face image
3. Set "FaceSwap" as type
4. Generate with face prompt

## Upscaling

### Built-in Upscaler

1. Upload image to "Upscale or Vary"
2. Select "Upscale (2x)"
3. Generate

### Vary Options

* **Vary (Subtle)** - Small changes
* **Vary (Strong)** - Significant changes

## Describe Image

Reverse prompt engineering:

1. Go to "Describe" tab
2. Upload image
3. Get prompt suggestions

## Performance Optimization

### For 8GB VRAM

```bash
python launch.py --listen 0.0.0.0 --always-offload-from-vram
```

### For 6GB VRAM

```bash
python launch.py --listen 0.0.0.0 --always-low-vram
```

### For 4GB VRAM

```bash
python launch.py --listen 0.0.0.0 --always-cpu
```
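The three commands above boil down to a VRAM-to-flag mapping; a small sketch (hypothetical helper, thresholds taken from the headings above):

```python
def vram_flag(vram_gb: float) -> str:
    """Map available VRAM to the matching Fooocus launch flag (per the sections above)."""
    if vram_gb >= 12:
        return ""                            # plenty of VRAM: no extra flag needed
    if vram_gb >= 8:
        return "--always-offload-from-vram"  # 8GB cards
    if vram_gb >= 6:
        return "--always-low-vram"           # 6GB cards
    return "--always-cpu"                    # 4GB: fall back to CPU

cmd = f"python launch.py --listen 0.0.0.0 {vram_flag(6)}".strip()
print(cmd)  # → python launch.py --listen 0.0.0.0 --always-low-vram
```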

## Batch Processing

### Generate Multiple Images

In Advanced mode:

* Set "Image Number" to desired count
* All images generate with different seeds

### Prompt Queue

Use wildcards for variations:

```
a {red|blue|green} car on the street
```

Each generation picks one of the options, so setting a higher Image Number samples the different colors.
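To preview every concrete prompt such a brace pattern can produce, here is an illustrative expander (not Fooocus's internal wildcard engine):

```python
import itertools
import re

def expand_braces(prompt: str) -> list[str]:
    """Expand {a|b|c} groups into all concrete prompt combinations (illustrative)."""
    groups = re.findall(r"\{([^{}]*)\}", prompt)     # contents of each {...} group
    options = [g.split("|") for g in groups]         # alternatives per group
    template = re.sub(r"\{[^{}]*\}", "{}", prompt)   # replace groups with slots
    return [template.format(*combo) for combo in itertools.product(*options)]

print(expand_braces("a {red|blue|green} car on the street"))
# → ['a red car on the street', 'a blue car on the street', 'a green car on the street']
```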

## API Access

The core Fooocus web UI does not expose a REST API of its own. The example below assumes the community **Fooocus-API** wrapper project, which runs Fooocus behind an HTTP server (on port 8888 by default - check the wrapper's README for exact setup steps):

### Start the API Wrapper

```bash
# Assumes the separate Fooocus-API wrapper project, not Fooocus itself
python main.py --host 0.0.0.0 --port 8888
```

### API Endpoint

```python
import requests

response = requests.post(
    "http://localhost:8888/v1/generation/text-to-image",  # Fooocus-API default port
    json={
        "prompt": "a beautiful sunset over mountains",
        "negative_prompt": "",
        "style_selections": ["Fooocus Enhance"],
        "performance_selection": "Quality",
        "aspect_ratios_selection": "1024*1024",
        "image_number": 1,
        "image_seed": -1,
    },
)

# The response is JSON; images come back as base64 strings or URLs,
# depending on the wrapper's configuration
results = response.json()
```
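If the API is configured to return images as base64 strings (an assumption; it may return URLs instead, depending on settings), decoding them is straightforward:

```python
import base64

def save_base64_image(b64_payload: str, path: str) -> None:
    """Decode a base64 image payload from the API response and write it to disk."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_payload))

# Hypothetical usage, assuming the parsed response is a list of results
# with a "base64" field:
# save_base64_image(parsed_response[0]["base64"], "sunset.png")
```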

## Presets

### Create Custom Preset

Edit `presets/default.json`:

```json
{
    "default_model": "juggernautXL.safetensors",
    "default_refiner": "None",
    "default_loras": [
        ["detail_lora.safetensors", 0.5]
    ],
    "default_styles": ["Fooocus Enhance", "Cinematic"]
}
```

### Launch with Preset

```bash
python launch.py --preset anime
```
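Presets are just JSON files under `presets/`; `--preset anime` loads `presets/anime.json`. A minimal sketch that creates a custom preset programmatically (the preset name and model/LoRA filenames are placeholders):

```python
import json
from pathlib import Path

# Hypothetical preset; model and LoRA filenames are placeholders
preset = {
    "default_model": "juggernautXL.safetensors",
    "default_refiner": "None",
    "default_loras": [["detail_lora.safetensors", 0.5]],
    "default_styles": ["Fooocus Enhance", "Cinematic"],
}

preset_path = Path("presets/photoreal.json")
preset_path.parent.mkdir(exist_ok=True)
preset_path.write_text(json.dumps(preset, indent=4))
# Launch with: python launch.py --preset photoreal
```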

## Comparison: Fooocus vs Others

| Feature        | Fooocus   | A1111       | ComfyUI    |
| -------------- | --------- | ----------- | ---------- |
| Setup          | None      | Medium      | Complex    |
| Learning Curve | Easy      | Medium      | Hard       |
| Flexibility    | Low       | High        | Highest    |
| Best For       | Beginners | Power users | Developers |

## Troubleshooting

### Out of Memory

```bash
# Use low VRAM mode
python launch.py --always-low-vram
```

Or disable the refiner in the UI: **Model** tab > **Refiner** > **None**.

### Slow Generation

* Enable "Extreme Speed" preset
* Disable refiner
* Use smaller resolution

### Model Not Loading

```bash
# Check the model file exists
ls -la /workspace/Fooocus/models/checkpoints/

# Check file size (an SDXL checkpoint should be ~6GB)
du -h /workspace/Fooocus/models/checkpoints/*.safetensors
```

### Black Images

* Reduce CFG scale
* Change seed
* Try different prompt

## Download Results

```bash
# Generated images are saved in the outputs folder
scp -P <port> -r root@<proxy>:/workspace/Fooocus/outputs/ ./
```

## Cost Estimate

Typical CLORE.AI marketplace rates (as of 2024):

| GPU       | Hourly Rate | Daily Rate | 4-Hour Session |
| --------- | ----------- | ---------- | -------------- |
| RTX 3060  | \~$0.03     | \~$0.70    | \~$0.12        |
| RTX 3090  | \~$0.06     | \~$1.50    | \~$0.25        |
| RTX 4090  | \~$0.10     | \~$2.30    | \~$0.40        |
| A100 40GB | \~$0.17     | \~$4.00    | \~$0.70        |
| A100 80GB | \~$0.25     | \~$6.00    | \~$1.00        |

*Prices vary by provider and demand. Check* [*CLORE.AI Marketplace*](https://clore.ai/marketplace) *for current rates.*
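As a sanity check on the table's arithmetic, a session estimate is simply rate × hours (sketch; rates are the approximate figures from the table):

```python
def session_cost(hourly_rate_usd: float, hours: float) -> float:
    """Estimated rental cost for a session: rate x duration, rounded to cents."""
    return round(hourly_rate_usd * hours, 2)

print(session_cost(0.06, 4))   # RTX 3090, 4-hour session → 0.24 (table shows ~$0.25)
print(session_cost(0.10, 24))  # RTX 4090, one day → 2.4 (table shows ~$2.30)
```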

**Save money:**

* Use **Spot** market for flexible workloads (often 30-50% cheaper)
* Pay with **CLORE** tokens
* Compare prices across different providers

## Next Steps

* Stable Diffusion WebUI - More control
* ComfyUI Workflows - Node-based
* FLUX Generation - Newer model


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.clore.ai/guides/image-generation/fooocus-simple-sd.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
