# RIFE Interpolation

Increase video frame rate with RIFE AI interpolation.

{% hint style="success" %}
All examples can be run on GPU servers rented through [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Renting on CLORE.AI

1. Visit [CLORE.AI Marketplace](https://clore.ai/marketplace)
2. Filter by GPU type, VRAM, and price
3. Choose **On-Demand** (fixed rate) or **Spot** (bid price)
4. Configure your order:
   * Select Docker image
   * Set ports (TCP for SSH, HTTP for web UIs)
   * Add environment variables if needed
   * Enter startup command
5. Select payment: **CLORE**, **BTC**, or **USDT/USDC**
6. Create order and wait for deployment

### Access Your Server

* Find connection details in **My Orders**
* Web interfaces: Use the HTTP port URL
* SSH: `ssh -p <port> root@<proxy-address>`

## What is RIFE?

RIFE (Real-Time Intermediate Flow Estimation) can:

* Increase FPS (24→60, 30→120)
* Create smooth slow motion
* Fix choppy footage
* Run fast enough for real-time processing on modern GPUs
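
Under the hood, RIFE synthesizes new frames between each pair of existing frames, so the output frame count and FPS follow directly from the multiplier. A quick sanity check (a hypothetical helper, mirroring the frame-writing pattern used by the scripts later in this guide):

```python
def interpolated_counts(n_frames: int, fps: float, multiplier: int):
    """Output frame count and FPS after inserting (multiplier - 1)
    intermediate frames into each gap between consecutive frames."""
    # N input frames have N - 1 gaps; each gap gains (multiplier - 1) frames
    out_frames = n_frames + (n_frames - 1) * (multiplier - 1)
    return out_frames, fps * multiplier

# 10 s of 30 fps video (300 frames), doubled to 60 fps
print(interpolated_counts(300, 30.0, 2))  # (599, 60.0)
```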

## Quick Deploy

**Docker Image:**

```
pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
```

**Ports:**

```
22/tcp
```

**Command:**

```bash
# torch/torchvision ship with the PyTorch base image; install only RIFE deps
git clone https://github.com/megvii-research/ECCV2022-RIFE.git && \
cd ECCV2022-RIFE && \
pip install -r requirements.txt
```

## Installation

### Option 1: Python Package

```bash
pip install rife-ncnn-vulkan-python
```

### Option 2: From Source

```bash
git clone https://github.com/megvii-research/ECCV2022-RIFE.git
cd ECCV2022-RIFE
pip install -r requirements.txt
# Download the pretrained weights into ./train_log (links in the repo README)
```

## Basic Usage

### Double Frame Rate

```bash
python inference_video.py --exp=1 --video=input.mp4

# Output: 2x FPS
```

### 4x Frame Rate

```bash
python inference_video.py --exp=2 --video=input.mp4

# Output: 4x FPS
```

### 8x Frame Rate

```bash
python inference_video.py --exp=3 --video=input.mp4

# Output: 8x FPS
```
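
Note that `--exp` is an exponent, not a multiplier: each step doubles the frame rate, so the effective multiplier is 2^exp. A tiny check:

```python
def multiplier(exp: int) -> int:
    """--exp doubles the frame rate per step: exp=1 -> 2x, exp=3 -> 8x."""
    return 2 ** exp

for exp in (1, 2, 3):
    print(f"--exp={exp} -> {multiplier(exp)}x FPS")
```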

## Python API

### Load Model

```python
import torch
from model.RIFE import Model

device = torch.device("cuda")
model = Model()
model.load_model('./train_log', -1)
model.eval()
model.device()
```

### Interpolate Single Frame

```python
import cv2
import numpy as np
import torch
from torch.nn import functional as F

def interpolate_frames(frame1, frame2, model):
    """Interpolate a middle frame between two images"""
    h, w, _ = frame1.shape
    # RIFE expects dimensions divisible by 32: pad, then crop back
    ph = ((h - 1) // 32 + 1) * 32
    pw = ((w - 1) // 32 + 1) * 32
    padding = (0, pw - w, 0, ph - h)

    # Prepare tensors
    img0 = torch.from_numpy(frame1).permute(2, 0, 1).float() / 255.0
    img1 = torch.from_numpy(frame2).permute(2, 0, 1).float() / 255.0

    img0 = F.pad(img0.unsqueeze(0), padding).cuda()
    img1 = F.pad(img1.unsqueeze(0), padding).cuda()

    # Interpolate
    with torch.no_grad():
        middle = model.inference(img0, img1)

    # Convert back and crop off the padding
    middle = (middle[0] * 255).byte().cpu().numpy().transpose(1, 2, 0)
    return middle[:h, :w]

# Load frames
frame1 = cv2.imread('frame1.png')
frame2 = cv2.imread('frame2.png')

# Get interpolated frame
middle_frame = interpolate_frames(frame1, frame2, model)
cv2.imwrite('interpolated.png', middle_frame)
```

### Process Video

```python
import cv2
import torch
from torch.nn import functional as F
from model.RIFE import Model

model = Model()
model.load_model('./train_log', -1)
model.eval()
model.device()

def process_video(input_path, output_path, multiplier=2):
    cap = cv2.VideoCapture(input_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    out = cv2.VideoWriter(
        output_path,
        cv2.VideoWriter_fourcc(*'mp4v'),
        fps * multiplier,
        (width, height)
    )

    ret, prev_frame = cap.read()
    if not ret:
        return

    out.write(prev_frame)

    while True:
        ret, curr_frame = cap.read()
        if not ret:
            break

        # Pad to a multiple of 32 as RIFE requires, then crop after inference
        h, w, _ = prev_frame.shape
        ph = ((h - 1) // 32 + 1) * 32
        pw = ((w - 1) // 32 + 1) * 32
        padding = (0, pw - w, 0, ph - h)

        img0 = torch.from_numpy(prev_frame).permute(2, 0, 1).float() / 255.0
        img1 = torch.from_numpy(curr_frame).permute(2, 0, 1).float() / 255.0

        img0 = F.pad(img0.unsqueeze(0), padding).cuda()
        img1 = F.pad(img1.unsqueeze(0), padding).cuda()

        # Generate intermediate frames
        for i in range(multiplier - 1):
            t = (i + 1) / multiplier
            with torch.no_grad():
                middle = model.inference(img0, img1, timestep=t)
            middle = (middle[0] * 255).byte().cpu().numpy().transpose(1, 2, 0)
            out.write(middle[:h, :w])

        out.write(curr_frame)
        prev_frame = curr_frame

    cap.release()
    out.release()

process_video('input.mp4', 'output_60fps.mp4', multiplier=2)
```

## Using rife-ncnn-vulkan

Faster NCNN implementation:

```python
from PIL import Image
from rife_ncnn_vulkan import Rife

rife = Rife(gpu_id=0)

# Interpolate
frame1 = Image.open('frame1.png')
frame2 = Image.open('frame2.png')
middle = rife.process(frame1, frame2)
middle.save('interpolated.png')
```

### Video Processing

```python
from rife_ncnn_vulkan import Rife
import cv2
import numpy as np
from PIL import Image

rife = Rife(gpu_id=0, num_threads=4)

def interpolate_video(input_path, output_path, factor=2):
    cap = cv2.VideoCapture(input_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    out = cv2.VideoWriter(
        output_path,
        cv2.VideoWriter_fourcc(*'mp4v'),
        fps * factor,
        (width, height)
    )

    ret, prev = cap.read()
    if not ret:
        return
    out.write(prev)

    while True:
        ret, curr = cap.read()
        if not ret:
            break

        # Convert to PIL
        prev_pil = Image.fromarray(cv2.cvtColor(prev, cv2.COLOR_BGR2RGB))
        curr_pil = Image.fromarray(cv2.cvtColor(curr, cv2.COLOR_BGR2RGB))

        # Interpolate
        for i in range(factor - 1):
            t = (i + 1) / factor
            mid = rife.process(prev_pil, curr_pil, timestep=t)
            mid_cv = cv2.cvtColor(np.array(mid), cv2.COLOR_RGB2BGR)
            out.write(mid_cv)

        out.write(curr)
        prev = curr

    cap.release()
    out.release()
```

## Slow Motion

Create smooth slow motion:

```bash
# Original: 30 fps, 10 seconds = 300 frames
# 8x interpolation (--exp=3): 2400 frames
# Played back at 30 fps: 80 seconds (8x slower)
python inference_video.py --exp=3 --video=input.mp4
```

### Slow Motion Script

```python
import os

import cv2

def create_slow_motion(input_path, output_path, slowdown_factor=4):
    """Create a slow motion video by interpolating extra frames,
    then re-encoding them at the original FPS."""
    cap = cv2.VideoCapture(input_path)
    original_fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()

    # Interpolate to get slowdown_factor times more frames
    interpolate_video(input_path, 'temp_interpolated.mp4', factor=slowdown_factor)

    # Re-encode at the original FPS so playback stretches out
    cap2 = cv2.VideoCapture('temp_interpolated.mp4')
    width = int(cap2.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap2.get(cv2.CAP_PROP_FRAME_HEIGHT))

    out = cv2.VideoWriter(
        output_path,
        cv2.VideoWriter_fourcc(*'mp4v'),
        original_fps,  # Keep original FPS
        (width, height)
    )

    while True:
        ret, frame = cap2.read()
        if not ret:
            break
        out.write(frame)

    cap2.release()
    out.release()
    os.remove('temp_interpolated.mp4')
```

## Batch Processing

```python
import os

def process_single(input_path, output_dir, factor=2):
    filename = os.path.basename(input_path)
    output_path = os.path.join(output_dir, f"interpolated_{filename}")
    interpolate_video(input_path, output_path, factor)
    return output_path

input_dir = './videos'
output_dir = './interpolated'
os.makedirs(output_dir, exist_ok=True)

videos = [os.path.join(input_dir, f) for f in os.listdir(input_dir)
          if f.endswith(('.mp4', '.mkv', '.avi'))]

for video in videos:
    result = process_single(video, output_dir, factor=2)
    print(f"Completed: {result}")
```

## Quality Settings

### Model Versions

| Model     | Quality | Speed   |
| --------- | ------- | ------- |
| RIFE v4.6 | Best    | Slow    |
| RIFE v4.0 | Great   | Medium  |
| RIFE-NCNN | Good    | Fastest |

### UHD Mode

For 4K+ videos:

```bash
python inference_video.py --exp=1 --video=input.mp4 --UHD
```

## Memory Optimization

### For Limited VRAM

```python
# Lower the flow-estimation scale to reduce VRAM use
from model.RIFE import Model

model = Model()
model.load_model('./train_log', -1)
model.eval()
model.device()

# A smaller inference scale trades some quality for a lower memory
# footprint; the CLI exposes this as --scale (0.5 is typical for 4K)
```

### Reduce Memory

```bash

# Use NCNN version (more memory efficient)
pip install rife-ncnn-vulkan-python
```

## Performance

| Resolution | GPU      | 2x Interp FPS |
| ---------- | -------- | ------------- |
| 1080p      | RTX 3090 | \~60 fps      |
| 1080p      | RTX 4090 | \~100 fps     |
| 4K         | RTX 3090 | \~15 fps      |
| 4K         | RTX 4090 | \~30 fps      |

## Troubleshooting

### Artifacts/Ghosting

* Use scene detection to skip cuts
* Reduce interpolation factor
* Check for fast motion

### Out of Memory

* Use NCNN version
* Process at lower resolution, upscale after
* Reduce batch size

### Slow Processing

* Use NCNN-Vulkan version
* Enable GPU acceleration
* Use smaller model

## Scene Detection

Skip interpolation across scene cuts:

```python
from scenedetect import detect, ContentDetector

scenes = detect('input.mp4', ContentDetector())

# Don't interpolate between scenes
for scene in scenes:
    print(f"Scene: {scene[0].get_frames()} - {scene[1].get_frames()}")
```
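
One way to wire this in (a hypothetical sketch: `scene_starts` is the set of frame indices where a new scene begins, derived from the `detect` output above) is to duplicate frames across a cut instead of interpolating:

```python
def should_interpolate(frame_idx, scene_starts):
    """Interpolate between frame_idx and frame_idx + 1 only if
    the next frame does not start a new scene."""
    return (frame_idx + 1) not in scene_starts

# Cuts at frames 0, 120, and 300
scene_starts = {0, 120, 300}
print(should_interpolate(50, scene_starts))   # True: mid-scene
print(should_interpolate(119, scene_starts))  # False: frame 120 starts a new scene
```

In a processing loop, a `False` result means writing `prev_frame` again (or skipping the intermediate frames) rather than blending across the cut, which is what causes ghosting.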

## Cost Estimate

Typical CLORE.AI marketplace rates (as of 2024):

| GPU       | Hourly Rate | Daily Rate | 4-Hour Session |
| --------- | ----------- | ---------- | -------------- |
| RTX 3060  | \~$0.03     | \~$0.70    | \~$0.12        |
| RTX 3090  | \~$0.06     | \~$1.50    | \~$0.25        |
| RTX 4090  | \~$0.10     | \~$2.30    | \~$0.40        |
| A100 40GB | \~$0.17     | \~$4.00    | \~$0.70        |
| A100 80GB | \~$0.25     | \~$6.00    | \~$1.00        |

*Prices vary by provider and demand. Check* [*CLORE.AI Marketplace*](https://clore.ai/marketplace) *for current rates.*

**Save money:**

* Use **Spot** market for flexible workloads (often 30-50% cheaper)
* Pay with **CLORE** tokens
* Compare prices across different providers
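
The session figures above are just rate × hours; a small helper for budgeting a job (rates are the approximate ones from the table, not guaranteed):

```python
def session_cost(hourly_rate, hours):
    """Estimated rental cost in USD for a session."""
    return round(hourly_rate * hours, 2)

# RTX 4090 at ~$0.10/hr for a 4-hour interpolation session
print(session_cost(0.10, 4))  # 0.4
```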

## Next Steps

* [FFmpeg NVENC](https://docs.clore.ai/guides/video-processing/ffmpeg-nvenc) - Encode output
* [Real-ESRGAN](https://docs.clore.ai/guides/image-processing/real-esrgan-upscaling) - Upscale video
* [AI Video Generation](https://docs.clore.ai/guides/video-generation/ai-video-generation) - Generate videos
