# GFPGAN Face Restore

Restore and enhance faces in photos using GFPGAN.

{% hint style="success" %}
All examples can be run on GPU servers rented through [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Renting on CLORE.AI

1. Visit [CLORE.AI Marketplace](https://clore.ai/marketplace)
2. Filter by GPU type, VRAM, and price
3. Choose **On-Demand** (fixed rate) or **Spot** (bid price)
4. Configure your order:
   * Select Docker image
   * Set ports (TCP for SSH, HTTP for web UIs)
   * Add environment variables if needed
   * Enter startup command
5. Select payment: **CLORE**, **BTC**, or **USDT/USDC**
6. Create order and wait for deployment

### Access Your Server

* Find connection details in **My Orders**
* Web interfaces: Use the HTTP port URL
* SSH: `ssh -p <port> root@<proxy-address>`

## What is GFPGAN?

GFPGAN (Generative Facial Prior GAN) specializes in:

* Restoring old/damaged photos
* Enhancing blurry faces
* Improving AI-generated faces
* Fixing low-resolution portraits

## Quick Deploy

**Docker Image:**

```
pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
```

**Ports:**

```
22/tcp
7860/http
```

**Command:**

```bash
pip install gfpgan gradio && \
python -c "
import gradio as gr
from gfpgan import GFPGANer
import cv2
import numpy as np

restorer = GFPGANer(
    model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth',
    upscale=2,
    arch='clean',
    channel_multiplier=2,
    bg_upsampler=None
)

def restore(image):
    img = np.array(image)
    img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    _, _, output = restorer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True)
    output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
    return output

demo = gr.Interface(fn=restore, inputs=gr.Image(), outputs=gr.Image(), title='GFPGAN Face Restorer')
demo.launch(server_name='0.0.0.0', server_port=7860)
"
```

## Accessing Your Service

After deployment, find your `http_pub` URL in **My Orders**:

1. Go to **My Orders** page
2. Click on your order
3. Find the `http_pub` URL (e.g., `abc123.clorecloud.net`)

Use `https://YOUR_HTTP_PUB_URL` instead of `localhost` in examples below.
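If you script against the deployed service, it helps to build the public URL once. A minimal helper sketch; `abc123.clorecloud.net` is a placeholder for your own `http_pub` value:

```python
def service_url(http_pub: str, scheme: str = "https") -> str:
    """Build the public base URL for a CLORE.AI order's HTTP endpoint."""
    return f"{scheme}://{http_pub.strip().rstrip('/')}"

# Substitute your own http_pub value from My Orders
print(service_url("abc123.clorecloud.net"))  # https://abc123.clorecloud.net
```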

## CLI Usage

### Installation

```bash
pip install gfpgan

# The inference_gfpgan.py script below ships with the repository, not the pip package
git clone https://github.com/TencentARC/GFPGAN.git
cd GFPGAN
```

### Download Models

```bash
# Download face restoration model
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth -P ./models

# Download detection model
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/detection_Resnet50_Final.pth -P ./models

# Download parsing model
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/parsing_parsenet.pth -P ./models
```

### Basic Usage

```bash
# Restore single image
python inference_gfpgan.py -i input.jpg -o results -v 1.4 -s 2

# Restore folder
python inference_gfpgan.py -i ./inputs -o ./results -v 1.4 -s 2
```

### Options

```bash
python inference_gfpgan.py \
    -i input.jpg \      # Input image/folder
    -o results \        # Output folder
    -v 1.4 \            # GFPGAN version (1.2, 1.3, 1.4)
    -s 2 \              # Upscale factor
    --bg_upsampler realesrgan \  # Background upscaler
    --only_center_face  # Only restore center face
```

## Python API

### Basic Face Restoration

```python
from gfpgan import GFPGANer
import cv2

# Initialize
restorer = GFPGANer(
    model_path='GFPGANv1.4.pth',
    upscale=2,
    arch='clean',
    channel_multiplier=2,
    bg_upsampler=None
)

# Load image
img = cv2.imread('photo.jpg')

# Restore faces
cropped_faces, restored_faces, restored_img = restorer.enhance(
    img,
    has_aligned=False,
    only_center_face=False,
    paste_back=True
)

# Save result
cv2.imwrite('restored.jpg', restored_img)
```

### With Background Enhancement

```python
from gfpgan import GFPGANer
from realesrgan import RealESRGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
import cv2

# Setup background upscaler
bg_model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)
bg_upsampler = RealESRGANer(
    scale=2,
    model_path='RealESRGAN_x2plus.pth',
    model=bg_model,
    half=True
)

# Setup face restorer with background enhancement
restorer = GFPGANer(
    model_path='GFPGANv1.4.pth',
    upscale=2,
    arch='clean',
    channel_multiplier=2,
    bg_upsampler=bg_upsampler
)

# Process
img = cv2.imread('old_photo.jpg')
_, _, output = restorer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True)
cv2.imwrite('enhanced.jpg', output)
```

### Process Only Faces (No Paste Back)

```python
# Reuses `restorer` and `img` from the basic example above

# Get individual restored faces
cropped_faces, restored_faces, _ = restorer.enhance(
    img,
    has_aligned=False,
    only_center_face=False,
    paste_back=False
)

# Save each face separately
for i, face in enumerate(restored_faces):
    cv2.imwrite(f'face_{i}.jpg', face)
```

## Batch Processing

```python
import os
from gfpgan import GFPGANer
import cv2
from tqdm import tqdm

restorer = GFPGANer(
    model_path='GFPGANv1.4.pth',
    upscale=2,
    arch='clean',
    channel_multiplier=2
)

input_dir = './old_photos'
output_dir = './restored'
os.makedirs(output_dir, exist_ok=True)

for filename in tqdm(os.listdir(input_dir)):
    if filename.lower().endswith(('.png', '.jpg', '.jpeg')):
        img = cv2.imread(os.path.join(input_dir, filename))

        try:
            _, _, output = restorer.enhance(
                img,
                has_aligned=False,
                only_center_face=False,
                paste_back=True
            )
            cv2.imwrite(os.path.join(output_dir, filename), output)
        except Exception as e:
            print(f"Failed: {filename} - {e}")
```

## CodeFormer (Alternative)

CodeFormer is another strong face restorer. The snippet below assumes the `codeformer-pip` wrapper API; check that package's documentation for the current interface:

```bash
pip install codeformer-pip
```

```python
from codeformer import CodeFormer
import cv2

restorer = CodeFormer()
img = cv2.imread('blurry_face.jpg')
result = restorer.restore(img)
cv2.imwrite('restored.jpg', result)
```

## Video Face Restoration

```python
import cv2
from gfpgan import GFPGANer
from tqdm import tqdm

restorer = GFPGANer(
    model_path='GFPGANv1.4.pth',
    upscale=1,  # Keep original size for video
    arch='clean',
    channel_multiplier=2
)

cap = cv2.VideoCapture('video.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

out = cv2.VideoWriter('restored_video.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))

for _ in tqdm(range(total)):
    ret, frame = cap.read()
    if not ret:
        break

    try:
        _, _, restored = restorer.enhance(frame, paste_back=True)
        out.write(restored)
    except Exception:
        out.write(frame)  # Keep the original frame if restoration fails

cap.release()
out.release()
```

## API Server

```python
from fastapi import FastAPI, UploadFile
from fastapi.responses import Response
from gfpgan import GFPGANer
import cv2
import numpy as np

app = FastAPI()

restorer = GFPGANer(
    model_path='GFPGANv1.4.pth',
    upscale=2,
    arch='clean',
    channel_multiplier=2
)

@app.post("/restore")
async def restore_face(file: UploadFile):
    contents = await file.read()
    nparr = np.frombuffer(contents, np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)

    _, _, output = restorer.enhance(img, paste_back=True)

    _, encoded = cv2.imencode('.jpg', output)
    return Response(content=encoded.tobytes(), media_type="image/jpeg")

# Run: uvicorn server:app --host 0.0.0.0 --port 8000
```
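A sketch of a matching client, assuming the server above is running on port 8000 (swap `localhost` for your `http_pub` host); `restore_image` is a hypothetical helper, not part of GFPGAN, and it relies on the common `requests` library:

```python
import requests

API_URL = "http://localhost:8000/restore"  # replace localhost with your http_pub host

def restore_image(in_path: str, out_path: str, url: str = API_URL) -> int:
    """POST an image to the /restore endpoint and save the returned JPEG."""
    with open(in_path, "rb") as f:
        resp = requests.post(url, files={"file": f})
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
    return len(resp.content)

if __name__ == "__main__":
    size = restore_image("photo.jpg", "restored.jpg")
    print(f"Wrote {size} bytes")
```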

## Model Versions

| Version | Quality | Speed   | Notes        |
| ------- | ------- | ------- | ------------ |
| v1.4    | Best    | Medium  | Recommended  |
| v1.3    | Great   | Fast    | Good balance |
| v1.2    | Good    | Fastest | Legacy       |

## Use Cases

### Old Photo Restoration

```python
# Best settings for old photos
restorer = GFPGANer(
    model_path='GFPGANv1.4.pth',
    upscale=4,  # Higher upscale for old low-res photos
    arch='clean',
    channel_multiplier=2,
    bg_upsampler=bg_upsampler  # Real-ESRGAN upsampler from the example above
)
```

### AI Art Enhancement

```python
# For AI-generated images with face artifacts
restorer = GFPGANer(
    model_path='GFPGANv1.4.pth',
    upscale=1,  # Keep original size
    arch='clean',
    channel_multiplier=2
)

# only_center_face is an enhance() option, not a constructor argument
_, _, output = restorer.enhance(img, only_center_face=True, paste_back=True)
```

### Group Photos

```python
# Process all faces in a group photo
restorer = GFPGANer(
    model_path='GFPGANv1.4.pth',
    upscale=2,
    arch='clean',
    channel_multiplier=2
)

# only_center_face=False restores every detected face
_, _, output = restorer.enhance(img, only_center_face=False, paste_back=True)
```

## Performance

| Image Size | Faces | GPU      | Time   |
| ---------- | ----- | -------- | ------ |
| 512x512    | 1     | RTX 3090 | \~0.2s |
| 1024x1024  | 1     | RTX 3090 | \~0.3s |
| 1024x1024  | 5     | RTX 3090 | \~0.8s |
| 2048x2048  | 1     | RTX 4090 | \~0.3s |

## Troubleshooting

### No Face Detected

* Face detection (RetinaFace via facexlib) can miss small, rotated, or heavily degraded faces
* Upscale the image first (e.g. with Real-ESRGAN) so faces are larger for the detector
* Or crop the face region manually, restore the crop, and paste it back yourself

### Over-smoothed Results

* Use CodeFormer with lower fidelity weight
* Blend with original using alpha compositing
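The blending tip can be sketched with plain NumPy (`blend_with_original` is a hypothetical helper, not part of GFPGAN); `alpha` controls how much restored detail survives:

```python
import numpy as np

def blend_with_original(original: np.ndarray, restored: np.ndarray,
                        alpha: float = 0.6) -> np.ndarray:
    """Alpha-composite: alpha of the restored image, (1 - alpha) of the original.

    Both images must share the same shape (resize the original first if the
    restorer upscaled its output).
    """
    mix = alpha * restored.astype(np.float32) + (1.0 - alpha) * original.astype(np.float32)
    return np.clip(mix, 0, 255).astype(np.uint8)
```

Values of `alpha` around 0.5-0.7 usually keep natural skin texture while still removing most of the blur.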

### VRAM Issues

```python
import torch

# Free cached GPU memory between runs
torch.cuda.empty_cache()

# Restore only the main face to reduce peak memory usage
_, _, output = restorer.enhance(img, only_center_face=True, paste_back=True)
```

## Cost Estimate

Typical CLORE.AI marketplace rates (as of 2024):

| GPU       | Hourly Rate | Daily Rate | 4-Hour Session |
| --------- | ----------- | ---------- | -------------- |
| RTX 3060  | \~$0.03     | \~$0.70    | \~$0.12        |
| RTX 3090  | \~$0.06     | \~$1.50    | \~$0.25        |
| RTX 4090  | \~$0.10     | \~$2.30    | \~$0.40        |
| A100 40GB | \~$0.17     | \~$4.00    | \~$0.70        |
| A100 80GB | \~$0.25     | \~$6.00    | \~$1.00        |

*Prices vary by provider and demand. Check* [*CLORE.AI Marketplace*](https://clore.ai/marketplace) *for current rates.*

**Save money:**

* Use **Spot** market for flexible workloads (often 30-50% cheaper)
* Pay with **CLORE** tokens
* Compare prices across different providers
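As a sanity check on the table above, a session cost is just rate x hours. A tiny estimator, using the approximate 2024 figures from the table rather than live marketplace prices:

```python
# Approximate on-demand rates (USD/hour) from the table above
HOURLY_USD = {"RTX 3060": 0.03, "RTX 3090": 0.06, "RTX 4090": 0.10, "A100 80GB": 0.25}

def session_cost(gpu: str, hours: float, spot_discount: float = 0.0) -> float:
    """Estimated cost in USD; spot_discount=0.4 models a 40% cheaper Spot price."""
    return round(HOURLY_USD[gpu] * hours * (1.0 - spot_discount), 2)

print(session_cost("RTX 3090", 4))                     # on-demand 4-hour session
print(session_cost("RTX 3090", 4, spot_discount=0.4))  # same session on Spot
```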

## Next Steps

* [Real-ESRGAN Upscaling](https://docs.clore.ai/guides/image-processing/real-esrgan-upscaling)
* Stable Diffusion WebUI
* [AI Video Generation](https://docs.clore.ai/guides/video-generation/ai-video-generation)
