# FaceFusion

Professional face swapping and enhancement tool.

{% hint style="success" %}
All examples can be run on GPU servers rented through [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Renting on CLORE.AI

1. Visit [CLORE.AI Marketplace](https://clore.ai/marketplace)
2. Filter by GPU type, VRAM, and price
3. Choose **On-Demand** (fixed rate) or **Spot** (bid price)
4. Configure your order:
   * Select Docker image
   * Set ports (TCP for SSH, HTTP for web UIs)
   * Add environment variables if needed
   * Enter startup command
5. Select payment: **CLORE**, **BTC**, or **USDT/USDC**
6. Create order and wait for deployment

### Access Your Server

* Find connection details in **My Orders**
* Web interfaces: Use the HTTP port URL
* SSH: `ssh -p <port> root@<proxy-address>`

## What is FaceFusion?

FaceFusion provides:

* High-quality face swapping
* Face enhancement
* Age modification
* Expression transfer
* Video processing

## Resources

* **GitHub:** [facefusion/facefusion](https://github.com/facefusion/facefusion)
* **Documentation:** [docs.facefusion.io](https://docs.facefusion.io/)
* **Discord:** [FaceFusion Community](https://discord.gg/facefusion)

## Recommended Hardware

| Component | Minimum       | Recommended   | Optimal       |
| --------- | ------------- | ------------- | ------------- |
| GPU       | RTX 3060 12GB | RTX 4080 16GB | RTX 4090 24GB |
| VRAM      | 8GB           | 16GB          | 24GB          |
| CPU       | 8 cores       | 16 cores      | 32 cores      |
| RAM       | 16GB          | 32GB          | 64GB          |
| Storage   | 50GB SSD      | 100GB NVMe    | 200GB NVMe    |
| Internet  | 100 Mbps      | 500 Mbps      | 1 Gbps        |

## Quick Deploy on CLORE.AI

**Docker Image:**

```
pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel
```

**Ports:**

```
22/tcp
7860/http
```

**Command:**

```bash
pip install facefusion && \
facefusion run --ui-layouts default
```

## Accessing Your Service

After deployment, find your `http_pub` URL in **My Orders**:

1. Go to **My Orders** page
2. Click on your order
3. Find the `http_pub` URL (e.g., `abc123.clorecloud.net`)

Use `https://YOUR_HTTP_PUB_URL` instead of `localhost` in examples below.
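As a quick sanity check after deployment, the `http_pub` hostname can be turned into the service URL programmatically. A minimal sketch; `abc123.clorecloud.net` is the placeholder hostname from the example above:

```python
# Build the public URL for a CLORE.AI order from its http_pub hostname.
import urllib.request

def service_url(http_pub: str) -> str:
    """Return the HTTPS URL for an http_pub hostname."""
    return f"https://{http_pub}"

def is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the service answers an HTTP GET."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except OSError:
        return False

if __name__ == "__main__":
    url = service_url("abc123.clorecloud.net")  # placeholder hostname
    print(url)
    # is_up(url) performs a real request; call it once the order is deployed
```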

## Installation

```bash

# Install FaceFusion
pip install facefusion

# Or from source
git clone https://github.com/facefusion/facefusion.git
cd facefusion
pip install -r requirements.txt
```

## What You Can Create

### Entertainment

* Movie dubbing preparation
* Fan-made content
* Meme creation

### Professional

* Virtual try-on for fashion
* Personalized marketing
* Privacy protection in videos

### Creative Projects

* Art installations
* Music video effects
* Short film production

**Important:** Always use face swapping responsibly and only with the consent of the people depicted.

## Basic Usage

### Web Interface

```bash

# Start web UI
facefusion run --ui-layouts default

# Access at http://localhost:7860 (or your http_pub URL on CLORE.AI)
```

### Command Line

```bash

# Single image face swap
facefusion headless-run \
    --source-paths source_face.jpg \
    --target-path target_image.jpg \
    --output-path result.jpg \
    --face-swapper-model inswapper_128

# Video face swap
facefusion headless-run \
    --source-paths source_face.jpg \
    --target-path input_video.mp4 \
    --output-path output_video.mp4 \
    --face-swapper-model inswapper_128 \
    --execution-providers cuda
```

### Python API

```python
from facefusion.processors.frame.face_swapper import FaceSwapper

# Initialize
swapper = FaceSwapper(
    model="inswapper_128",
    device="cuda"
)

# Process image
result = swapper.process(
    source_image="source_face.jpg",
    target_image="target.jpg"
)

result.save("swapped.jpg")
```

## Face Enhancement

```bash

# Enhance face quality
facefusion headless-run \
    --target-path input.jpg \
    --output-path enhanced.jpg \
    --face-enhancer-model gfpgan_1.4 \
    --face-enhancer-blend 80
```

## Multiple Faces

```bash

# Swap specific faces
facefusion headless-run \
    --source-paths face1.jpg face2.jpg \
    --target-path group_photo.jpg \
    --output-path result.jpg \
    --face-selector-mode reference \
    --reference-face-position 0 1
```

## Video Processing

```bash

# Full video face swap with enhancement
facefusion headless-run \
    --source-paths source_face.jpg \
    --target-path input_video.mp4 \
    --output-path output_video.mp4 \
    --face-swapper-model inswapper_128 \
    --face-enhancer-model gfpgan_1.4 \
    --execution-providers cuda \
    --execution-thread-count 4 \
    --video-encoder libx264 \
    --video-quality 18
```

## Batch Processing

```python
import os
from facefusion.processors.frame.face_swapper import FaceSwapper

swapper = FaceSwapper(model="inswapper_128", device="cuda")

# Source face
source = "my_face.jpg"

# Target images
targets = [f for f in os.listdir("./targets") if f.endswith(('.jpg', '.png'))]

os.makedirs("./results", exist_ok=True)

for target in targets:
    print(f"Processing: {target}")

    result = swapper.process(
        source_image=source,
        target_image=f"./targets/{target}"
    )

    result.save(f"./results/swapped_{target}")
```

## Gradio Custom Interface

```python
import gradio as gr
from facefusion.processors.frame.face_swapper import FaceSwapper

swapper = FaceSwapper(model="inswapper_128", device="cuda")

def swap_faces(source_image, target_image, enhance):
    result = swapper.process(
        source_image=source_image,
        target_image=target_image,
        enhance=enhance
    )
    return result

demo = gr.Interface(
    fn=swap_faces,
    inputs=[
        gr.Image(type="filepath", label="Source Face"),
        gr.Image(type="filepath", label="Target Image"),
        gr.Checkbox(label="Enhance Result", value=True)
    ],
    outputs=gr.Image(label="Result"),
    title="FaceFusion - Face Swapping",
    description="Professional face swapping on CLORE.AI servers"
)

demo.launch(server_name="0.0.0.0", server_port=7860)
```

## Quality Settings

### Swap Models

| Model                | Quality | Speed  | VRAM |
| -------------------- | ------- | ------ | ---- |
| inswapper\_128       | Good    | Fast   | 4GB  |
| inswapper\_128\_fp16 | Good    | Faster | 2GB  |
| simswap\_256         | Better  | Medium | 6GB  |
| simswap\_512         | Best    | Slow   | 10GB |
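
The VRAM column above can drive a simple model picker: choose the highest-quality swap model that fits the card's memory. A sketch; the VRAM figures are the rough values from this table, not official requirements:

```python
# Pick the best swap model that fits in the available VRAM.
# VRAM figures are the rough values from the table above.
MODELS = [
    ("simswap_512", 10),        # best quality, slowest
    ("simswap_256", 6),
    ("inswapper_128", 4),
    ("inswapper_128_fp16", 2),  # fastest, lowest memory
]

def pick_swap_model(vram_gb: float) -> str:
    """Return the highest-quality model that fits in vram_gb."""
    for name, needed_gb in MODELS:
        if vram_gb >= needed_gb:
            return name
    raise ValueError(f"Need at least 2 GB of VRAM, got {vram_gb}")

print(pick_swap_model(24))  # simswap_512 (RTX 4090)
print(pick_swap_model(8))   # simswap_256
```

In practice, leave some VRAM headroom if you stack a face enhancer on top of the swapper.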

### Enhancement Models

| Model          | Quality | Speed  |
| -------------- | ------- | ------ |
| gfpgan\_1.4    | Great   | Medium |
| codeformer     | Best    | Slow   |
| gpen\_bfr\_512 | Good    | Fast   |

## Performance

| Task          | Resolution | GPU      | Time  |
| ------------- | ---------- | -------- | ----- |
| Single image  | 1024x1024  | RTX 3090 | 0.5s  |
| Single image  | 1024x1024  | RTX 4090 | 0.3s  |
| Video (1 min) | 1080p      | RTX 4090 | 5 min |
| Video (1 min) | 1080p      | A100     | 3 min |
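
These timings can be turned into a rough throughput estimate for longer clips. A back-of-the-envelope sketch; the per-GPU figures come from the "1 min of 1080p video" rows above, and 30 fps is an assumption:

```python
# Rough processing-time estimate derived from the benchmark table above.
# Assumes roughly constant per-frame cost; real times vary with content.
THROUGHPUT_FPS = {
    # 60 s * 30 fps = 1800 frames, processed in 5 min (RTX 4090) or 3 min (A100)
    "rtx_4090": 1800 / (5 * 60),  # ~6 frames/s
    "a100": 1800 / (3 * 60),      # ~10 frames/s
}

def estimate_minutes(video_seconds: float, fps: float, gpu: str) -> float:
    """Estimated wall-clock processing time in minutes."""
    frames = video_seconds * fps
    return frames / THROUGHPUT_FPS[gpu] / 60

# A 10-minute 30 fps clip on an RTX 4090:
print(round(estimate_minutes(600, 30, "rtx_4090")))  # 50
```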

## Common Problems & Solutions

### Face Not Detected

**Problem:** "No face found in source/target"

**Solutions:**

* Ensure face is clearly visible
* Face should be at least 64x64 pixels
* Use frontal or slight angle photos
* Good lighting required

### Poor Quality Swap

**Problem:** Result looks unnatural

**Solutions:**

* Use higher-resolution, well-lit source images
* Switch to a higher-quality swap model and blend in a face enhancer

```bash

# Use better model and enhancement
facefusion headless-run \
    --source-paths source.jpg \
    --target-path target.jpg \
    --output-path result.jpg \
    --face-swapper-model simswap_256 \
    --face-enhancer-model gfpgan_1.4 \
    --face-enhancer-blend 80
```

### Video Processing Slow

**Problem:** Video processing takes too long

**Solutions:**

* Use CUDA execution provider
* Increase thread count
* Use fp16 model for speed
* Process on A100 for best performance

```bash
facefusion headless-run \
    --execution-providers cuda \
    --execution-thread-count 8 \
    --face-swapper-model inswapper_128_fp16
```

### Color Mismatch

**Problem:** The swapped face's color tone does not match the rest of the image

**Solutions:**

* Enable face color correction
* Match lighting between source and target
* Post-process with color grading
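
If the built-in color correction is not enough, a simple statistical color transfer (matching per-channel mean and standard deviation, in the spirit of Reinhard et al.) can be applied as a post-process. This sketch assumes NumPy RGB arrays and is not part of FaceFusion itself:

```python
import numpy as np

def match_color(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift source's per-channel mean/std to match reference.

    Both arrays are HxWx3 uint8 or float RGB images; returns uint8.
    """
    src = source.astype(np.float64)
    ref = reference.astype(np.float64)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        scale = r_std / s_std if s_std > 0 else 1.0
        out[..., c] = (src[..., c] - s_mean) * scale + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

Apply it to the swapped face region only (e.g. using the face mask) so the rest of the frame is not shifted.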

### Memory Issues

**Problem:** Out-of-memory errors during video processing

**Solutions:**

```bash

# Reduce memory usage
facefusion headless-run \
    --video-memory-strategy tolerant \
    --execution-thread-count 2
```

### Video Processing Stuck

* Large videos need more RAM
* Process in smaller segments
* Use SSD storage for temp files
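
Splitting can be done losslessly with ffmpeg's segment muxer before running FaceFusion on each piece. A sketch; file names are placeholders, and `-c copy` cuts on keyframes, so segment lengths are approximate:

```python
# Build the ffmpeg command that splits a long video into ~60 s segments,
# so each piece can be processed with FaceFusion separately.
# File names here are placeholders.
import subprocess

def split_cmd(video: str, seconds: int = 60) -> list[str]:
    """ffmpeg command that splits `video` into numbered segments."""
    return [
        "ffmpeg", "-i", video,
        "-c", "copy",                # no re-encode; cuts on keyframes
        "-f", "segment",
        "-segment_time", str(seconds),
        "-reset_timestamps", "1",
        "segment_%03d.mp4",
    ]

cmd = split_cmd("input_video.mp4")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment when ffmpeg is installed
```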

### Models Not Downloading

* Check your internet connection
* Pre-download all models with the `force-download` sub-command
* Manually download from HuggingFace

## Cost Estimate

Typical CLORE.AI marketplace rates (as of 2024):

| GPU       | Hourly Rate | Daily Rate | 4-Hour Session |
| --------- | ----------- | ---------- | -------------- |
| RTX 3060  | \~$0.03     | \~$0.70    | \~$0.12        |
| RTX 3090  | \~$0.06     | \~$1.50    | \~$0.25        |
| RTX 4090  | \~$0.10     | \~$2.30    | \~$0.40        |
| A100 40GB | \~$0.17     | \~$4.00    | \~$0.70        |
| A100 80GB | \~$0.25     | \~$6.00    | \~$1.00        |

*Prices vary by provider and demand. Check* [*CLORE.AI Marketplace*](https://clore.ai/marketplace) *for current rates.*

**Save money:**

* Use **Spot** market for flexible workloads (often 30-50% cheaper)
* Pay with **CLORE** tokens
* Compare prices across different providers
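
With the sample rates above, a session cost is simply rate × hours. A sketch using the illustrative 2024 rates from the table; check the marketplace for live prices:

```python
# Estimate rental cost from the sample hourly rates above (USD, illustrative).
RATES = {
    "rtx_3060": 0.03,
    "rtx_3090": 0.06,
    "rtx_4090": 0.10,
    "a100_40gb": 0.17,
    "a100_80gb": 0.25,
}

def session_cost(gpu: str, hours: float, spot_discount: float = 0.0) -> float:
    """Cost in USD; spot_discount is e.g. 0.3 for a 30% cheaper Spot price."""
    return RATES[gpu] * hours * (1 - spot_discount)

# A 4-hour RTX 4090 session, On-Demand vs a hypothetical 40% cheaper Spot bid:
print(round(session_cost("rtx_4090", 4), 2))       # 0.4
print(round(session_cost("rtx_4090", 4, 0.4), 2))  # 0.24
```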

## Next Steps

* [InstantID](https://docs.clore.ai/guides/face-and-identity/instantid) - Identity preservation
* [LivePortrait](https://docs.clore.ai/guides/talking-heads/liveportrait) - Portrait animation
* [GFPGAN](https://docs.clore.ai/guides/image-processing/gfpgan-face-restore) - Face restoration


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.clore.ai/guides/face-and-identity/facefusion.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
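
For example, with `curl` (the question text below is a hypothetical example; spaces must be URL-encoded):

```shell
# Query this page's documentation endpoint with a natural-language question.
QUESTION="Which GPU is recommended for video processing"
ENCODED=$(printf '%s' "$QUESTION" | sed 's/ /%20/g')
URL="https://docs.clore.ai/guides/face-and-identity/facefusion.md?ask=${ENCODED}"
echo "$URL"
# curl -s "$URL"   # uncomment to send the request
```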
