# Real-ESRGAN Upscaling

Upscale and enhance images using Real-ESRGAN on GPU.

{% hint style="success" %}
All examples can be run on GPU servers rented through [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Renting on CLORE.AI

1. Visit [CLORE.AI Marketplace](https://clore.ai/marketplace)
2. Filter by GPU type, VRAM, and price
3. Choose **On-Demand** (fixed rate) or **Spot** (bid price)
4. Configure your order:
   * Select Docker image
   * Set ports (TCP for SSH, HTTP for web UIs)
   * Add environment variables if needed
   * Enter startup command
5. Select payment: **CLORE**, **BTC**, or **USDT/USDC**
6. Create order and wait for deployment

### Access Your Server

* Find connection details in **My Orders**
* Web interfaces: Use the HTTP port URL
* SSH: `ssh -p <port> root@<proxy-address>`

## What is Real-ESRGAN?

Real-ESRGAN is a practical image restoration model that:

* Upscales images 2x-4x
* Removes noise and artifacts
* Enhances details
* Works on photos, anime, and art

## Model Variants

| Model                         | Best For                       | Speed  |
| ----------------------------- | ------------------------------ | ------ |
| RealESRGAN\_x4plus            | General photos                 | Medium |
| RealESRGAN\_x4plus\_anime\_6B | Anime/drawings                 | Fast   |
| RealESRGAN\_x2plus            | 2x upscale                     | Fast   |
| RealESRNet\_x4plus            | Smoother, artifact-free output | Medium |
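The variants differ mainly in network depth and output scale. A minimal sketch of the matching `RRDBNet` configurations (the block counts follow the official Real-ESRGAN checkpoints; the dictionary and helper names are illustrative):

```python
# Architecture parameters for each pretrained variant.
# num_block is the number of RRDB blocks; anime_6B uses a
# shallower 6-block network, which is why it runs faster.
MODEL_CONFIGS = {
    'RealESRGAN_x4plus':          {'num_block': 23, 'scale': 4},
    'RealESRGAN_x4plus_anime_6B': {'num_block': 6,  'scale': 4},
    'RealESRGAN_x2plus':          {'num_block': 23, 'scale': 2},
    'RealESRNet_x4plus':          {'num_block': 23, 'scale': 4},
}

def build_model(name):
    """Create the RRDBNet backbone matching a pretrained checkpoint."""
    from basicsr.archs.rrdbnet_arch import RRDBNet
    cfg = MODEL_CONFIGS[name]
    return RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                   num_block=cfg['num_block'], num_grow_ch=32,
                   scale=cfg['scale'])
```

Loading a checkpoint into a mismatched architecture (for example the anime weights into a 23-block network) fails with shape errors, so keep the model name and config paired.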

## Quick Deploy

**Docker Image:**

```
pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
```

**Ports:**

```
22/tcp
7860/http
```

**Command:**

```bash
pip install realesrgan gradio && \
python -c "
import gradio as gr
from realesrgan import RealESRGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
import numpy as np
from PIL import Image
import torch

# Load model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(
    scale=4,
    model_path='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth',
    model=model,
    tile=0,
    tile_pad=10,
    pre_pad=0,
    half=True
)

def upscale(image, scale):
    img = np.array(image)
    output, _ = upsampler.enhance(img, outscale=scale)
    return Image.fromarray(output)

demo = gr.Interface(
    fn=upscale,
    inputs=[gr.Image(type='pil'), gr.Slider(2, 4, value=4, step=1, label='Scale')],
    outputs=gr.Image(type='pil'),
    title='Real-ESRGAN Upscaler'
)
demo.launch(server_name='0.0.0.0', server_port=7860)
"
```

## Accessing Your Service

After deployment, find your `http_pub` URL in **My Orders**:

1. Go to **My Orders** page
2. Click on your order
3. Find the `http_pub` URL (e.g., `abc123.clorecloud.net`)

Use `https://YOUR_HTTP_PUB_URL` instead of `localhost` in examples below.

## CLI Usage

### Installation

```bash
# Clone the official repo (provides the inference_realesrgan.py CLI)
git clone https://github.com/xinntao/Real-ESRGAN.git
cd Real-ESRGAN
pip install basicsr facexlib gfpgan
pip install -r requirements.txt
python setup.py develop
```

### Basic Upscaling

```bash
# Upscale a single image (results are written into the -o folder)
python inference_realesrgan.py -i input.jpg -o ./outputs -n RealESRGAN_x4plus -s 4

# Upscale a folder
python inference_realesrgan.py -i ./inputs -o ./outputs -n RealESRGAN_x4plus -s 4
```

### Options

```bash
python inference_realesrgan.py \
    -i input.jpg \
    -o ./outputs \
    -n RealESRGAN_x4plus \
    -s 4 \
    --face_enhance \
    --fp32 \
    --tile 400
```

* `-i` — input image or folder
* `-o` — output folder
* `-n` — model name
* `-s` — output scale factor
* `--face_enhance` — restore faces with GFPGAN
* `--fp32` — use FP32 precision (more VRAM; use if FP16 produces artifacts)
* `--tile 400` — tile size for large images (0 disables tiling)

## Python API

### Basic Usage

```python
from realesrgan import RealESRGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
import cv2

# Setup model
model = RRDBNet(
    num_in_ch=3,
    num_out_ch=3,
    num_feat=64,
    num_block=23,
    num_grow_ch=32,
    scale=4
)

upsampler = RealESRGANer(
    scale=4,
    model_path='RealESRGAN_x4plus.pth',
    model=model,
    tile=0,
    tile_pad=10,
    pre_pad=0,
    half=True  # Use FP16
)

# Upscale
img = cv2.imread('input.jpg', cv2.IMREAD_UNCHANGED)
output, _ = upsampler.enhance(img, outscale=4)
cv2.imwrite('output.jpg', output)
```

### With Face Enhancement

```python
from realesrgan import RealESRGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
from gfpgan import GFPGANer
import cv2

# Real-ESRGAN model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(scale=4, model_path='RealESRGAN_x4plus.pth', model=model, half=True)

# GFPGAN for faces
face_enhancer = GFPGANer(
    model_path='GFPGANv1.4.pth',
    upscale=4,
    arch='clean',
    channel_multiplier=2,
    bg_upsampler=upsampler
)

# Process
img = cv2.imread('portrait.jpg')
_, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True)
cv2.imwrite('output.jpg', output)
```

### Anime Upscaling

```python
from realesrgan import RealESRGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
import cv2

# Anime model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)

upsampler = RealESRGANer(
    scale=4,
    model_path='RealESRGAN_x4plus_anime_6B.pth',
    model=model,
    half=True
)

img = cv2.imread('anime.png')
output, _ = upsampler.enhance(img, outscale=4)
cv2.imwrite('anime_upscaled.png', output)
```

## Batch Processing

### Process Folder

```python
import os
from realesrgan import RealESRGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
import cv2
from tqdm import tqdm

# Setup
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(scale=4, model_path='RealESRGAN_x4plus.pth', model=model, half=True)

input_dir = './inputs'
output_dir = './outputs'
os.makedirs(output_dir, exist_ok=True)

# Process all images
for filename in tqdm(os.listdir(input_dir)):
    if filename.lower().endswith(('.png', '.jpg', '.jpeg', '.webp')):
        img_path = os.path.join(input_dir, filename)
        img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)

        output, _ = upsampler.enhance(img, outscale=4)

        output_path = os.path.join(output_dir, f"upscaled_{filename}")
        cv2.imwrite(output_path, output)
```

### Shell Script

```bash
#!/bin/bash
INPUT_DIR="$1"
OUTPUT_DIR="$2"

mkdir -p "$OUTPUT_DIR"

for file in "$INPUT_DIR"/*.{jpg,png,jpeg}; do
    if [ -f "$file" ]; then
        filename=$(basename "$file")
        python inference_realesrgan.py -i "$file" -o "$OUTPUT_DIR" -n RealESRGAN_x4plus -s 4
        echo "Processed: $filename"
    fi
done
```

## Tiled Processing (Large Images)

For images that don't fit in VRAM:

```python
upsampler = RealESRGANer(
    scale=4,
    model_path='RealESRGAN_x4plus.pth',
    model=model,
    tile=400,      # Process in 400px tiles
    tile_pad=10,   # Overlap between tiles
    pre_pad=0,
    half=True
)
```

### Tile Size Recommendations

| VRAM | Max Tile Size |
| ---- | ------------- |
| 4GB  | 200           |
| 6GB  | 300           |
| 8GB  | 400           |
| 12GB | 600           |
| 24GB | 0 (no tiling) |
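The table above can be turned into a small helper that picks a tile size from available VRAM (a hypothetical heuristic, not part of Real-ESRGAN; the thresholds simply mirror the recommendations):

```python
def pick_tile_size(vram_gb):
    """Map available VRAM (in GB) to a conservative tile size.

    Returns 0 (no tiling) when there is plenty of memory.
    Thresholds mirror the recommendation table above.
    """
    if vram_gb >= 24:
        return 0      # enough VRAM to process whole images
    if vram_gb >= 12:
        return 600
    if vram_gb >= 8:
        return 400
    if vram_gb >= 6:
        return 300
    return 200

# With PyTorch, free VRAM in bytes can be queried via
# torch.cuda.mem_get_info(), which returns (free, total).
```

Pass the result as `tile=` when constructing `RealESRGANer`.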

## Video Upscaling

### Using Real-ESRGAN

```python
import cv2
from realesrgan import RealESRGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
from tqdm import tqdm

# Setup model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(scale=4, model_path='RealESRGAN_x4plus.pth', model=model, tile=400, half=True)

# Open video
cap = cv2.VideoCapture('input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) * 4
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) * 4
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Output
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('output.mp4', fourcc, fps, (width, height))

# Process frames
for _ in tqdm(range(total_frames)):
    ret, frame = cap.read()
    if not ret:
        break

    output, _ = upsampler.enhance(frame, outscale=4)
    out.write(output)

cap.release()
out.release()
```

### FFmpeg Pipeline

```bash
# Create working directories and extract frames
mkdir -p frames upscaled
ffmpeg -i input.mp4 -qscale:v 1 frames/frame_%06d.png

# Upscale frames
python inference_realesrgan.py -i frames -o upscaled -n RealESRGAN_x4plus -s 4

# Reassemble video (match -framerate to the source fps)
ffmpeg -framerate 30 -i upscaled/frame_%06d.png -c:v libx264 -pix_fmt yuv420p output.mp4

# Add audio back
ffmpeg -i output.mp4 -i input.mp4 -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 final.mp4
```

## API Server

### FastAPI Server

```python
from fastapi import FastAPI, UploadFile
from fastapi.responses import Response
from realesrgan import RealESRGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
import cv2
import numpy as np
import io

app = FastAPI()

# Load model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(scale=4, model_path='RealESRGAN_x4plus.pth', model=model, tile=400, half=True)

@app.post("/upscale")
async def upscale(file: UploadFile, scale: int = 4):
    contents = await file.read()
    nparr = np.frombuffer(contents, np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_UNCHANGED)

    output, _ = upsampler.enhance(img, outscale=scale)

    _, encoded = cv2.imencode('.png', output)
    return Response(content=encoded.tobytes(), media_type="image/png")

# Run: uvicorn server:app --host 0.0.0.0 --port 8000
```

### Usage

```bash
curl -X POST "http://localhost:8000/upscale?scale=4" \
    -F "file=@input.jpg" \
    --output upscaled.png
```

## Model Comparison

| Model             | Quality | Speed  | VRAM | Best For                     |
| ----------------- | ------- | ------ | ---- | ---------------------------- |
| x4plus            | Best    | Medium | 4GB+ | Photos                       |
| x4plus\_anime\_6B | Best    | Fast   | 3GB+ | Anime                        |
| x2plus            | Good    | Fast   | 2GB+ | Quick 2x                     |
| RealESRNet        | Good    | Medium | 4GB+ | Smooth, artifact-free output |

## Performance

| Image Size | GPU      | 4x Upscale Time |
| ---------- | -------- | --------------- |
| 512x512    | RTX 3090 | \~0.5s          |
| 1024x1024  | RTX 3090 | \~1.5s          |
| 2048x2048  | RTX 3090 | \~5s            |
| 512x512    | RTX 4090 | \~0.3s          |

## Troubleshooting

### CUDA Out of Memory

```python
# Use tiling
upsampler = RealESRGANer(..., tile=200, ...)

# Or reduce the output scale
output, _ = upsampler.enhance(img, outscale=2)  # instead of 4
```
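When tuning by hand is inconvenient, the retry can be automated. A minimal sketch (the helper name and the factory-callable pattern are illustrative, not part of the Real-ESRGAN API):

```python
def upscale_with_tile_fallback(make_upsampler, img, outscale=4,
                               tiles=(0, 400, 200, 100)):
    """Try progressively smaller tile sizes until enhance() fits in VRAM.

    make_upsampler: callable that builds an upsampler for a given tile size.
    Re-raises any RuntimeError that is not a CUDA out-of-memory error.
    """
    last_err = None
    for tile in tiles:
        upsampler = make_upsampler(tile)
        try:
            output, _ = upsampler.enhance(img, outscale=outscale)
            return output
        except RuntimeError as err:
            if 'out of memory' not in str(err).lower():
                raise
            last_err = err
            # In practice, also call torch.cuda.empty_cache() here
            # before retrying with a smaller tile.
    raise last_err
```

Call it as `upscale_with_tile_fallback(lambda t: RealESRGANer(scale=4, model_path='RealESRGAN_x4plus.pth', model=model, tile=t, half=True), img)`.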

### Artifacts in Output

* Use smaller tile size with more overlap
* Try different model (anime vs photo)
* Check input image quality

### Slow Processing

* Enable FP16: `half=True`
* Increase tile size if VRAM allows
* Use faster model: RealESRNet

## Download Results

```bash
scp -P <port> -r root@<proxy>:/workspace/outputs/ ./upscaled_images/
```

## Cost Estimate

Typical CLORE.AI marketplace rates (as of 2024):

| GPU       | Hourly Rate | Daily Rate | 4-Hour Session |
| --------- | ----------- | ---------- | -------------- |
| RTX 3060  | \~$0.03     | \~$0.70    | \~$0.12        |
| RTX 3090  | \~$0.06     | \~$1.50    | \~$0.25        |
| RTX 4090  | \~$0.10     | \~$2.30    | \~$0.40        |
| A100 40GB | \~$0.17     | \~$4.00    | \~$0.70        |
| A100 80GB | \~$0.25     | \~$6.00    | \~$1.00        |

*Prices vary by provider and demand. Check* [*CLORE.AI Marketplace*](https://clore.ai/marketplace) *for current rates.*
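For quick budgeting, the per-session figures follow directly from the hourly rate (a throwaway calculation; the rates are the approximate 2024 figures from the table above):

```python
def session_cost(hourly_rate_usd, hours):
    """Estimated cost of a rental session at a fixed hourly rate."""
    return round(hourly_rate_usd * hours, 2)

# e.g. an RTX 3090 at ~$0.06/h for a 4-hour upscaling session
print(session_cost(0.06, 4))  # in line with the table's ~$0.25
```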

**Save money:**

* Use **Spot** market for flexible workloads (often 30-50% cheaper)
* Pay with **CLORE** tokens
* Compare prices across different providers

## Next Steps

* [GFPGAN Face Restoration](/guides/image-processing/gfpgan-face-restore.md)
* [AI Video Generation](/guides/video-generation/ai-video-generation.md)
* Stable Diffusion WebUI


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.clore.ai/guides/image-processing/real-esrgan-upscaling.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
