# Blender Rendering

Render 3D scenes and animations using Blender on CLORE.AI GPUs.

{% hint style="success" %}
All examples can be run on GPU servers rented through [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Renting on CLORE.AI

1. Visit [CLORE.AI Marketplace](https://clore.ai/marketplace)
2. Filter by GPU type, VRAM, and price
3. Choose **On-Demand** (fixed rate) or **Spot** (bid price)
4. Configure your order:
   * Select Docker image
   * Set ports (TCP for SSH, HTTP for web UIs)
   * Add environment variables if needed
   * Enter startup command
5. Select payment: **CLORE**, **BTC**, or **USDT/USDC**
6. Create order and wait for deployment

### Access Your Server

* Find connection details in **My Orders**
* Web interfaces: Use the HTTP port URL
* SSH: `ssh -p <port> root@<proxy-address>`

## Why Rent GPUs for Blender?

* Render complex scenes 10-50x faster than CPU
* Multiple GPUs for even faster rendering
* No need to invest in expensive hardware
* Pay only for render time

## Requirements

| Scene Complexity | Recommended GPU | VRAM    |
| ---------------- | --------------- | ------- |
| Simple           | RTX 3070        | 8GB     |
| Medium           | RTX 3090        | 24GB    |
| Complex          | RTX 4090        | 24GB    |
| Production       | A100            | 40-80GB |
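
If you orchestrate rentals from a script, the table above can also be kept as data; this lookup is a sketch that simply mirrors the table (the dict and function names are illustrative, not part of any CLORE.AI API):

```python
# Recommended GPU per scene complexity, mirroring the table above.
GPU_RECOMMENDATIONS = {
    "simple":     {"gpu": "RTX 3070", "vram_gb": 8},
    "medium":     {"gpu": "RTX 3090", "vram_gb": 24},
    "complex":    {"gpu": "RTX 4090", "vram_gb": 24},
    "production": {"gpu": "A100", "vram_gb": 40},  # 40-80GB variants exist
}

def recommend_gpu(complexity: str) -> dict:
    """Return the recommended GPU entry for a scene complexity tier."""
    try:
        return GPU_RECOMMENDATIONS[complexity.lower()]
    except KeyError:
        raise ValueError(f"unknown complexity: {complexity!r}")
```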

## Quick Deploy

**Docker Image:**

```
linuxserver/blender
```

Or, for headless rendering:

```
nytimes/blender:3.6-gpu-ubuntu22.04
```

**Ports:**

```
22/tcp
3000/http
```

## Headless Rendering Setup

**Image:**

```
nvidia/cuda:12.1.0-devel-ubuntu22.04
```

**Command:**

```bash
apt-get update && \
apt-get install -y wget xz-utils libxi6 libxxf86vm1 libxfixes3 libxrender1 libgl1 libxkbcommon0 && \
wget https://download.blender.org/release/Blender4.0/blender-4.0.2-linux-x64.tar.xz && \
tar -xf blender-4.0.2-linux-x64.tar.xz && \
mv blender-4.0.2-linux-x64 /opt/blender && \
sleep infinity
```

`libxkbcommon0` is required by Blender 3.x+ Linux builds even in headless mode, and the final `sleep infinity` keeps the container alive after setup so you can SSH in and start renders.

## Accessing Your Service

After deployment, find your `http_pub` URL in **My Orders**:

1. Go to **My Orders** page
2. Click on your order
3. Find the `http_pub` URL (e.g., `abc123.clorecloud.net`)

Use `https://YOUR_HTTP_PUB_URL` instead of `localhost` in examples below.

## Upload Your Project

### Via SCP

```bash
# Upload .blend file
scp -P <port> myproject.blend root@<proxy>:/workspace/

# Upload project folder
scp -P <port> -r ./project/ root@<proxy>:/workspace/
```

### Via rsync (large projects)

```bash
rsync -avz --progress -e "ssh -p <port>" ./project/ root@<proxy>:/workspace/project/
```

## Render Commands

### Single Frame

```bash
/opt/blender/blender -b /workspace/myproject.blend -o /workspace/output/frame_### -f 1 -- --cycles-device CUDA
```

### Animation (Frame Range)

```bash
/opt/blender/blender -b /workspace/myproject.blend -o /workspace/output/frame_### -s 1 -e 250 -a -- --cycles-device CUDA
```

### Specific Frames

```bash
/opt/blender/blender -b /workspace/myproject.blend -o /workspace/output/frame_### -f 1,50,100,150,200 -- --cycles-device CUDA
```
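
If you drive renders from Python, the animation command above can be assembled programmatically; a minimal sketch using the Blender path from this guide (`build_render_cmd` is an illustrative helper, not a Blender API):

```python
import subprocess

def build_render_cmd(blend_file, output_pattern, start, end,
                     blender="/opt/blender/blender", device="CUDA"):
    """Assemble the headless Cycles animation command shown above."""
    return [
        blender, "-b", blend_file,
        "-o", output_pattern,
        "-s", str(start), "-e", str(end),  # frame range must precede -a
        "-a",
        "--", "--cycles-device", device,   # args after -- go to Cycles
    ]

cmd = build_render_cmd("/workspace/myproject.blend",
                       "/workspace/output/frame_###", 1, 250)
# subprocess.run(cmd, check=True)  # uncomment on the server
```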

## Render Options

### Resolution

```bash
# Override resolution
blender -b file.blend -o //output/frame_### -x 1920 -y 1080 -a -- --cycles-device CUDA
```

### Use Python Script

```bash
blender -b file.blend --python render_setup.py -a
```

**render\_setup.py:**

```python
import bpy

# Use Cycles and render on the GPU (without setting the scene's
# device to 'GPU', Cycles falls back to CPU even with GPUs enabled)
bpy.context.scene.render.engine = 'CYCLES'
bpy.context.scene.cycles.device = 'GPU'

# Set the compute backend and refresh the device list
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()

# Enable all GPUs
for device in prefs.devices:
    device.use = True

# Set samples
bpy.context.scene.cycles.samples = 128

# Set resolution
bpy.context.scene.render.resolution_x = 1920
bpy.context.scene.render.resolution_y = 1080

# Output settings
bpy.context.scene.render.image_settings.file_format = 'PNG'
```

## Multi-GPU Rendering

For servers with multiple GPUs:

```python
import bpy

# Enable CUDA and refresh the device list
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()

# Enable all CUDA GPUs
for device in prefs.devices:
    if device.type == 'CUDA':
        device.use = True
        print(f"Enabled: {device.name}")

# Render on GPU rather than CPU
bpy.context.scene.cycles.device = 'GPU'
```

## Render Farm Style (Multiple Servers)

Rent multiple servers and split frames:

**Server 1:**

```bash
blender -b project.blend -o //output/frame_### -s 1 -e 100 -a
```

**Server 2:**

```bash
blender -b project.blend -o //output/frame_### -s 101 -e 200 -a
```

**Server 3:**

```bash
blender -b project.blend -o //output/frame_### -s 201 -e 300 -a
```

Then combine renders locally.
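
The frame split above (1-100, 101-200, 201-300) can be computed for any frame count and server count; a small sketch:

```python
def split_frames(start, end, n_servers):
    """Split an inclusive frame range into n contiguous chunks,
    one per server, as evenly as possible."""
    total = end - start + 1
    base, extra = divmod(total, n_servers)
    chunks, cursor = [], start
    for i in range(n_servers):
        size = base + (1 if i < extra else 0)
        chunks.append((cursor, cursor + size - 1))
        cursor += size
    return chunks

# Frames 1-300 across 3 servers -> same split as the commands above
print(split_frames(1, 300, 3))  # [(1, 100), (101, 200), (201, 300)]
```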

## Eevee Rendering (Faster)

Eevee is a rasterization engine: much faster than Cycles, at real-time (viewport) quality:

```bash
blender -b file.blend -E BLENDER_EEVEE -o //output/frame_### -a
```

## OptiX Support (RTX GPUs)

To use the dedicated ray-tracing cores on RTX GPUs, select the OptiX backend instead of CUDA:

```python
import bpy
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPTIX'  # Instead of CUDA
```

## Automated Render Script

**render.sh:**

```bash
#!/bin/bash
set -e

BLEND_FILE=$1
START_FRAME=$2
END_FRAME=$3
OUTPUT_DIR=/workspace/output

mkdir -p "$OUTPUT_DIR"

/opt/blender/blender -b "$BLEND_FILE" \
    -o "${OUTPUT_DIR}/frame_###" \
    -s "$START_FRAME" \
    -e "$END_FRAME" \
    -a \
    -- --cycles-device CUDA

echo "Rendering complete!"
ls -la "$OUTPUT_DIR"
```

Usage:

```bash
chmod +x render.sh
./render.sh /workspace/myproject.blend 1 250
```

## Monitoring Render Progress

### Watch Output Folder

```bash
watch -n 5 'ls -la /workspace/output/ | tail -20'
```

### Blender Output

Blender prints frame progress to stdout:

```
Fra:1 Mem:1234.56M (Peak 1500.00M) | Time:00:01.23 | Remaining:04:32.10 | Mem:567.89M, Peak:890.12M | Scene, View Layer | Sample 64/128
```
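
That progress line can be parsed to drive your own monitoring; a sketch using the sample line above (the regex assumes this exact field layout, which may vary between Blender versions):

```python
import re

# Extract frame number, remaining time, and sample progress
# from a Cycles progress line.
PROGRESS_RE = re.compile(
    r"Fra:(?P<frame>\d+).*?"
    r"Remaining:(?P<remaining>[\d:.]+).*?"
    r"Sample (?P<sample>\d+)/(?P<total>\d+)"
)

line = ("Fra:1 Mem:1234.56M (Peak 1500.00M) | Time:00:01.23 | "
        "Remaining:04:32.10 | Mem:567.89M, Peak:890.12M | "
        "Scene, View Layer | Sample 64/128")

m = PROGRESS_RE.search(line)
print(m.group("frame"), m.group("remaining"), m.group("sample"))  # 1 04:32.10 64
```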

## Download Rendered Frames

```bash
# Download all frames
scp -P <port> -r root@<proxy>:/workspace/output/ ./renders/

# Download specific frames
scp -P <port> root@<proxy>:/workspace/output/frame_001.png ./

# Sync with rsync
rsync -avz --progress -e "ssh -p <port>" root@<proxy>:/workspace/output/ ./renders/
```

## Video Encoding

After rendering frames, encode to video:

```bash
# Install ffmpeg
apt-get install -y ffmpeg

# Encode to MP4
ffmpeg -framerate 24 -i /workspace/output/frame_%03d.png -c:v libx264 -pix_fmt yuv420p output.mp4

# Encode to ProRes (high quality)
ffmpeg -framerate 24 -i /workspace/output/frame_%03d.png -c:v prores_ks -profile:v 3 output.mov
```
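
ffmpeg's image-sequence input stops at the first gap in frame numbers by default, so it is worth checking the sequence is complete before encoding; a sketch (`missing_frames` is an illustrative helper):

```python
import re

def missing_frames(filenames, start, end):
    """Return frame numbers absent from a rendered sequence, so gaps
    can be re-rendered before encoding."""
    pattern = re.compile(r"frame_(\d+)\.png$")
    present = {int(m.group(1)) for f in filenames
               if (m := pattern.search(f))}
    return [n for n in range(start, end + 1) if n not in present]

files = ["frame_001.png", "frame_002.png", "frame_004.png"]
print(missing_frames(files, 1, 4))  # [3]
```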

## Performance Tips

### Optimize for Speed

```python
# Reduce samples for preview
bpy.context.scene.cycles.samples = 64

# Use adaptive sampling
bpy.context.scene.cycles.use_adaptive_sampling = True
bpy.context.scene.cycles.adaptive_threshold = 0.01

# Limit bounces
bpy.context.scene.cycles.max_bounces = 8
```

### Memory Optimization

```python
# Keep scene data in memory between frames
# (speeds up animations at the cost of extra RAM/VRAM)
bpy.context.scene.render.use_persistent_data = True

# Set tile size
bpy.context.scene.cycles.tile_size = 256  # For GPU
```

## Render Time Estimates

| Scene      | GPU      | Resolution | Samples | Time/Frame |
| ---------- | -------- | ---------- | ------- | ---------- |
| Simple     | RTX 3090 | 1080p      | 128     | \~30s      |
| Medium     | RTX 3090 | 1080p      | 256     | \~2min     |
| Complex    | RTX 4090 | 4K         | 512     | \~10min    |
| Production | A100     | 4K         | 1024    | \~20min    |

## Cost Calculation

**Example: 250 frame animation**

```
GPU: RTX 4090
Time per frame: 2 minutes
Total render time: 500 minutes = 8.3 hours
Hourly rate: $0.04
Total cost: ~$0.33
```
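
The same arithmetic as a reusable helper (the rates and timings are examples, not price quotes):

```python
def render_cost(frames, minutes_per_frame, hourly_rate_usd):
    """Estimated total render time (hours) and cost (USD) for an
    animation; matches the worked example above."""
    hours = frames * minutes_per_frame / 60
    return hours, round(hours * hourly_rate_usd, 2)

hours, cost = render_cost(250, 2, 0.04)
print(f"{hours:.1f} h, ${cost}")  # 8.3 h, $0.33
```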

## Troubleshooting

### "CUDA device not found"

```python
# Check available devices
import bpy
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()
for d in prefs.devices:
    print(d.name, d.type)
```

### Out of memory

* Reduce texture resolution
* Use smaller tile size
* Disable "persistent data" (it keeps scene data in memory between frames)
* Use simpler shaders

### Slow rendering

* Check the GPU is actually being used (`nvidia-smi`)
* Optimize scene geometry
* Use denoising with fewer samples

## Next Steps

* Run Jupyter for Post-Processing
* [AI Video Generation](https://docs.clore.ai/guides/video-generation/ai-video-generation)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.clore.ai/guides/other-workloads/blender-rendering.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
