# 3D Gaussian Splatting

**3D Gaussian Splatting** is a revolutionary real-time 3D scene reconstruction technique with over **15,000 GitHub stars**. Unlike NeRF-based methods, Gaussian Splatting represents scenes as millions of tiny 3D Gaussians that can be rendered at **real-time frame rates** (100+ FPS) while achieving photorealistic quality. Deploy it on Clore.ai's GPU cloud to reconstruct and explore 3D scenes from your own photos.

***

## What is 3D Gaussian Splatting?

Traditional NeRF methods implicitly encode a scene in a neural network, requiring per-pixel ray marching at render time. Gaussian Splatting takes a fundamentally different approach:

1. **Initialization:** Start from a sparse point cloud (from COLMAP)
2. **Representation:** Expand each point into a 3D Gaussian with position, scale, rotation, opacity, and spherical harmonics color
3. **Optimization:** Differentiably render Gaussians and optimize against training images
4. **Rendering:** Project Gaussians onto the image plane via alpha-compositing (extremely fast)
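The rendering step is just front-to-back alpha compositing of depth-sorted Gaussians. A minimal single-pixel sketch (illustration only — the real renderer is a tile-based CUDA rasterizer):

```python
# Simplified front-to-back alpha compositing for a single pixel.
# `gaussians` is a depth-sorted list of (rgb, alpha) contributions.
def composite(gaussians):
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet absorbed
    for rgb, alpha in gaussians:
        weight = alpha * transmittance
        for c in range(3):
            color[c] += rgb[c] * weight
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early termination, as in the real rasterizer
            break
    return color

# A mostly opaque red splat in front of a blue one:
# red contributes 0.6, blue only 0.8 * 0.4 = 0.32
print(composite([((1, 0, 0), 0.6), ((0, 0, 1), 0.8)]))
```

Because each Gaussian touches only the pixels it overlaps and there is no ray marching, this compositing is what makes 100+ FPS rendering possible.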

**Key advantages over NeRF:**

* Real-time rendering (100+ FPS at 1080p)
* Better fine detail reconstruction
* Explicit 3D representation (editable, exportable)
* Faster training (30–60 min vs hours)
* Works on consumer GPUs

***

## Prerequisites

| Requirement | Minimum       | Recommended     |
| ----------- | ------------- | --------------- |
| GPU VRAM    | 12 GB         | 24 GB           |
| GPU         | RTX 3080 12GB | RTX 4090 / A100 |
| RAM         | 16 GB         | 32 GB           |
| Storage     | 30 GB         | 60 GB           |
| CUDA        | 11.7+         | 12.1+           |

{% hint style="warning" %}
Gaussian Splatting has strict CUDA requirements: the system CUDA version must match the one the `diff-gaussian-rasterization` extension was compiled against. Using the provided Dockerfile eliminates compatibility issues.
{% endhint %}

***

## Step 1 — Rent a GPU on Clore.ai

1. Log in to [clore.ai](https://clore.ai).
2. Click **Marketplace** and filter by VRAM ≥ 16 GB.
3. Select a server — RTX 4090 offers the best price/performance.
4. Set Docker image to your custom image (see Step 2).
5. Set open ports: `22` (SSH) and `8080` (web viewer).
6. Click **Rent**.

***

## Step 2 — Dockerfile

Build a custom Docker image with all dependencies:

```dockerfile
FROM pytorch/pytorch:2.1.2-cuda12.1-cudnn8-devel

ENV DEBIAN_FRONTEND=noninteractive
ENV TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0;7.5;8.0;8.6;8.9;9.0+PTX"

RUN apt-get update && apt-get install -y \
    git wget curl cmake build-essential \
    libboost-program-options-dev libboost-filesystem-dev \
    libboost-graph-dev libboost-system-dev libboost-test-dev \
    libeigen3-dev libflann-dev libfreeimage-dev \
    libmetis-dev libgoogle-glog-dev libgflags-dev \
    libsqlite3-dev libglew-dev qtbase5-dev libqt5opengl5-dev \
    libcgal-dev libceres-dev \
    ffmpeg libgl1 libglib2.0-0 \
    openssh-server \
    python3-pip python3-dev \
    && rm -rf /var/lib/apt/lists/*

# Install COLMAP
RUN apt-get update && apt-get install -y colmap && rm -rf /var/lib/apt/lists/*

# Configure SSH
RUN mkdir /var/run/sshd && \
    echo 'root:clore123' | chpasswd && \
    sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

WORKDIR /workspace

# Clone original 3DGS repo
RUN git clone https://github.com/graphdeco-inria/gaussian-splatting /workspace/gaussian-splatting \
    --recursive

# Install Python dependencies
# Install Python dependencies (torch already ships with the base image;
# the repo's environment.yml lists plyfile and tqdm)
RUN pip install plyfile tqdm

# Build CUDA extensions
RUN cd /workspace/gaussian-splatting && \
    pip install submodules/diff-gaussian-rasterization && \
    pip install submodules/simple-knn

# Install web viewer dependencies
RUN pip install viser==0.1.29 nerfview==0.0.4 trimesh

EXPOSE 22 8080

CMD service ssh start && tail -f /dev/null
```

### Build and Push

Build the image and push it to your own Docker Hub account (replace `YOUR_DOCKERHUB_USERNAME` with your actual username):

```bash
docker build -t YOUR_DOCKERHUB_USERNAME/gaussian-splatting:latest .
docker push YOUR_DOCKERHUB_USERNAME/gaussian-splatting:latest
```

{% hint style="info" %}
There is no official pre-built Docker image for 3D Gaussian Splatting on Docker Hub. The official repository at [graphdeco-inria/gaussian-splatting](https://github.com/graphdeco-inria/gaussian-splatting) does not provide one — build from the Dockerfile above. The image must be built with the correct CUDA architecture flags matching your target GPU.
{% endhint %}

Use `YOUR_DOCKERHUB_USERNAME/gaussian-splatting:latest` in your Clore.ai configuration.

***

## Step 3 — Connect via SSH

```bash
ssh root@<clore-host> -p <assigned-ssh-port>
```

Verify the build:

```bash
cd /workspace/gaussian-splatting
python -c "from diff_gaussian_rasterization import GaussianRasterizationSettings; print('CUDA extension OK')"
```

***

## Step 4 — Prepare Your Dataset

### Option A: Use the Tanks and Temples Dataset

Classic benchmark dataset for quick testing:

```bash
mkdir -p /workspace/data && cd /workspace/data

# Download the T&T + Deep Blending test scenes (~650 MB)
wget https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip
unzip tandt_db.zip
```

### Option B: Process Your Own Photos

```bash
# Upload photos
scp -P <port> -r ./my_photos/ root@<clore-host>:/workspace/data/

# Run COLMAP processing script (provided with 3DGS)
cd /workspace/gaussian-splatting

python convert.py \
    -s /workspace/data/my_photos \
    --no_gpu   # optional: if COLMAP GPU solver conflicts
```
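A failed COLMAP run wastes GPU time, so it is worth sanity-checking the folder layout first (a small helper assuming the standard `input/` convention; the path below is an example):

```python
import os

def check_scene_dir(scene_dir):
    """Verify the layout convert.py expects: <scene_dir>/input/*.jpg|png."""
    input_dir = os.path.join(scene_dir, "input")
    if not os.path.isdir(input_dir):
        return False, "missing input/ subfolder"
    images = [f for f in os.listdir(input_dir)
              if f.lower().endswith((".jpg", ".jpeg", ".png"))]
    if len(images) < 20:
        return False, f"only {len(images)} images — COLMAP needs more overlap"
    return True, f"{len(images)} images found"

print(check_scene_dir("/workspace/data/my_photos"))
```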

{% hint style="info" %}
The `convert.py` script runs the full COLMAP pipeline: feature extraction, matching, sparse reconstruction, and undistortion. This takes 5–30 minutes depending on image count.
{% endhint %}

### Option C: Process from Video

```bash
# Extract frames from video at 2fps
ffmpeg -i /workspace/data/my_video.mp4 \
    -vf fps=2 \
    /workspace/data/frames/frame_%04d.jpg

# Then run COLMAP processing on the frames
python convert.py -s /workspace/data/frames
```
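A rule of thumb is to target roughly 150–300 frames in total, so derive the fps value from the clip length (a small helper, not part of the 3DGS tooling):

```python
def extraction_fps(video_seconds, target_frames=200):
    """Pick an ffmpeg -vf fps= value that yields ~target_frames frames."""
    fps = target_frames / video_seconds
    # Clamp to a sane range: below ~0.5 fps neighboring frames lose overlap,
    # above 5 fps frames are near-duplicates that only slow COLMAP down.
    return round(max(0.5, min(5.0, fps)), 2)

print(extraction_fps(100))   # 100 s clip -> 2.0 fps
print(extraction_fps(600))   # 10 min clip -> 0.5 fps (clamped)
```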

***

## Step 5 — Train a Gaussian Splat

### Standard Training

```bash
cd /workspace/gaussian-splatting

python train.py \
    -s /workspace/data/my_photos \
    -m /workspace/output/my_scene \
    --iterations 30000 \
    --eval
```

### Training on the Tanks and Temples Dataset

```bash
python train.py \
    -s /workspace/data/tandt/truck \
    -m /workspace/output/truck \
    --iterations 30000 \
    --eval
```

### Fast Training (Quick Preview)

```bash
python train.py \
    -s /workspace/data/my_photos \
    -m /workspace/output/my_scene_fast \
    --iterations 7000
```

{% hint style="info" %}
Training to 7,000 iterations takes \~10 minutes on an RTX 4090 and gives a good quality preview. Full 30,000 iterations takes \~30–40 minutes and produces final quality.
{% endhint %}

### Training Progress

Monitor training output — you'll see metrics like:

```
[ITER 1000] Evaluating train: L1 0.04, PSNR 26.12 dB
[ITER 7000] Evaluating train: L1 0.02, PSNR 29.45 dB
[ITER 30000] Evaluating train: L1 0.01, PSNR 32.80 dB
```

PSNR above 30 dB indicates high-quality reconstruction.
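For reference, PSNR is derived from the mean squared error between the render and the ground-truth image. A quick numpy check for images normalized to [0, 1]:

```python
import numpy as np

def psnr(rendered, ground_truth):
    """Peak signal-to-noise ratio in dB for images in [0, 1]."""
    mse = np.mean((rendered - ground_truth) ** 2)
    return 10.0 * np.log10(1.0 / mse)

# Synthetic check: Gaussian noise with sigma = 0.03 lands near the 30 dB mark
gt = np.random.default_rng(0).random((64, 64, 3))
noisy = np.clip(gt + np.random.default_rng(1).normal(0, 0.03, gt.shape), 0, 1)
print(f"{psnr(noisy, gt):.1f} dB")
```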

***

## Step 6 — Render and Visualize

### Render from Trained Model

```bash
python render.py \
    -m /workspace/output/my_scene \
    --skip_train
```

Renders are saved to `/workspace/output/my_scene/test/ours_30000/renders/`.

### Create a Flythrough Video

```bash
# Convert rendered frames to video
ffmpeg -framerate 24 \
    -pattern_type glob \
    -i '/workspace/output/my_scene/test/ours_30000/renders/*.png' \
    -c:v libx264 \
    -pix_fmt yuv420p \
    /workspace/output/flythrough.mp4
```

### Evaluate Metrics

```bash
python metrics.py -m /workspace/output/my_scene
```

Expected output:

```
SSIM : 0.8324
PSNR : 32.81
LPIPS: 0.1893
```

***

## Step 7 — Interactive Web Viewer

To explore the trained scene interactively:

### Using nerfview/viser

```python
# /workspace/view_splat.py
import viser
import numpy as np
from plyfile import PlyData
import torch

server = viser.ViserServer(host="0.0.0.0", port=8080)
print("Viewer running at http://0.0.0.0:8080")

# Load PLY file
ply_path = "/workspace/output/my_scene/point_cloud/iteration_30000/point_cloud.ply"
plydata = PlyData.read(ply_path)

xyz = np.stack([
    plydata['vertex']['x'],
    plydata['vertex']['y'],
    plydata['vertex']['z'],
], axis=-1)

# Add point cloud to viewer
server.add_point_cloud(
    name="/splat",
    points=xyz,
    colors=np.ones((len(xyz), 3)) * 0.7,
    point_size=0.003,
)

import time
while True:
    time.sleep(0.01)
```
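The flat gray used above can be replaced with each splat's real base color: 3DGS PLY files store the DC spherical-harmonics coefficients as `f_dc_0..f_dc_2`, which map to RGB through the zeroth-order SH basis constant:

```python
import numpy as np

SH_C0 = 0.28209479177387814  # zeroth-order spherical harmonics basis, 1/(2*sqrt(pi))

def sh_dc_to_rgb(f_dc):
    """Convert (N, 3) f_dc_* coefficients from a 3DGS PLY to RGB in [0, 1]."""
    return np.clip(0.5 + SH_C0 * f_dc, 0.0, 1.0)

# In the viewer script above, this could replace the constant gray, e.g.:
# colors = sh_dc_to_rgb(np.stack([plydata['vertex'][f'f_dc_{i}']
#                                 for i in range(3)], axis=-1))
print(sh_dc_to_rgb(np.zeros((1, 3))))  # zero coefficients -> mid-gray [[0.5 0.5 0.5]]
```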

```bash
python /workspace/view_splat.py &
```

Then open: `http://<clore-host>:<public-port-8080>`

### Alternative: Use SuperSplat (Browser-Based Viewer)

Download the `.ply` file and open it in [SuperSplat](https://playcanvas.com/super-splat):

```bash
# Download from your local machine
scp -P <port> root@<clore-host>:/workspace/output/my_scene/point_cloud/iteration_30000/point_cloud.ply ./
```

Then drag and drop the `.ply` file into the SuperSplat editor in your browser.

***

## Advanced Options

### Control Number of Gaussians

```bash
# Higher densification for more detailed scenes
python train.py \
    -s /workspace/data/my_photos \
    -m /workspace/output/my_scene \
    --densify_until_iter 15000 \
    --densify_grad_threshold 0.0002
```
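What those flags control: during densification, Gaussians whose view-space positional gradient exceeds `--densify_grad_threshold` get duplicated — small ones are cloned in place, over-stretched ones are split (the `percent_dense` parameter, default 0.01, draws the size boundary). A schematic of the decision rule, not the repo's code:

```python
import numpy as np

def densify_decisions(grads, scales, grad_threshold=0.0002,
                      percent_dense=0.01, scene_extent=10.0):
    """Return boolean masks: which Gaussians to clone vs. split."""
    over = grads >= grad_threshold                 # under-reconstructed region
    small = scales <= percent_dense * scene_extent # size boundary
    clone = over & small    # small Gaussian: duplicate it in place
    split = over & ~small   # over-stretched Gaussian: split into smaller ones
    return clone, split

grads = np.array([0.0001, 0.0005, 0.0005])
scales = np.array([0.05, 0.05, 0.5])
clone, split = densify_decisions(grads, scales)
print(clone, split)  # only the second is cloned, only the third is split
```

Raising the threshold therefore produces fewer, coarser Gaussians; lowering it produces more detail at the cost of VRAM.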

### White Background (for Objects)

```bash
python train.py \
    -s /workspace/data/my_object \
    -m /workspace/output/my_object \
    --white_background
```

### Large Scale Scenes

```bash
# Increase opacity reset interval for outdoor scenes
python train.py \
    -s /workspace/data/outdoor \
    -m /workspace/output/outdoor \
    --opacity_reset_interval 5000 \
    --iterations 50000
```

***

## Alternative: Gaussian Splatting with gsplat

`gsplat` is a faster, memory-efficient implementation:

```bash
pip install gsplat

# Training with gsplat
python examples/simple_trainer.py \
    --data_dir /workspace/data/my_photos \
    --result_dir /workspace/gsplat_output
```

***

## Troubleshooting

### CUDA Extension Build Fails

```
error: no kernel image is available for execution on the device
```

**Solution:** Rebuild for your specific GPU architecture:

```bash
export TORCH_CUDA_ARCH_LIST="8.6"  # RTX 3090 (use "8.9" for an RTX 4090)
cd /workspace/gaussian-splatting
pip install submodules/diff-gaussian-rasterization --force-reinstall
```
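The right value is the GPU's compute capability, which differs per generation (values below are NVIDIA's published compute capabilities):

```python
# Compute capability per common Clore.ai GPU, for TORCH_CUDA_ARCH_LIST
CUDA_ARCH = {
    "RTX 3080": "8.6",   # Ampere
    "RTX 3090": "8.6",   # Ampere
    "RTX 4090": "8.9",   # Ada Lovelace
    "A100":     "8.0",   # Ampere (data center)
    "H100":     "9.0",   # Hopper
}

def arch_flag(gpu_name):
    # Fallback builds for both common consumer arches plus PTX forward-compat
    return CUDA_ARCH.get(gpu_name, "8.6;8.9+PTX")

print(arch_flag("RTX 4090"))  # 8.9
```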

### COLMAP Fails to Reconstruct

**Solutions:**

* Ensure ≥ 50% overlap between neighboring images
* Use more photos (100+ recommended)
* For video frames, use sequential matching: `convert.py` runs COLMAP's exhaustive matcher, so for long frame sequences run `colmap sequential_matcher` manually instead

### Out of Memory During Training

```bash
# Keep training images in system RAM and train at half resolution
python train.py \
    -s /workspace/data/my_photos \
    -m /workspace/output/my_scene \
    --data_device cpu \
    --resolution 2
```

A higher `--densify_grad_threshold` also caps how many Gaussians densification creates.

### Floaters in the Scene

Floating artifacts from Gaussian initialization:

* Increase `--densify_grad_threshold` to be more selective
* Use `--prune_opacity_threshold 0.005` to remove low-opacity Gaussians earlier
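Conceptually, pruning is just a mask over the opacity array: after a reset, any Gaussian that fails to recover above a small threshold is dropped (a schematic sketch; the repo prunes below 0.005 opacity):

```python
import numpy as np

def prune_floaters(positions, opacities, min_opacity=0.005):
    """Keep only Gaussians whose opacity is above the pruning threshold."""
    keep = opacities >= min_opacity
    return positions[keep], opacities[keep]

pos = np.random.default_rng(0).random((5, 3))
opa = np.array([0.9, 0.001, 0.5, 0.003, 0.2])
pos2, opa2 = prune_floaters(pos, opa)
print(len(pos2))  # 3 Gaussians survive
```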

***

## Clore.ai GPU Recommendations

Gaussian Splatting training is GPU-compute intensive with frequent CUDA kernel calls. VRAM determines max scene complexity (number of Gaussians); compute determines training speed.

| GPU           | VRAM  | Clore.ai Price | 30K iter training | Max Gaussians  |
| ------------- | ----- | -------------- | ----------------- | -------------- |
| RTX 3090      | 24 GB | \~$0.12/hr     | \~45–55 min       | \~6M           |
| RTX 4090      | 24 GB | \~$0.70/hr     | \~30–35 min       | \~6M           |
| A100 40GB     | 40 GB | \~$1.20/hr     | \~12–18 min       | \~10M+         |
| RTX 3080 12GB | 12 GB | \~$0.08/hr     | \~70 min          | \~3M (limited) |

{% hint style="info" %}
**RTX 3090 at \~$0.12/hr is the best choice** for Gaussian Splatting. A full 30K iteration training run costs \~$0.09–0.11 in GPU time. For multiple scenes in a session, the cost is negligible.

For quick experiments: train to 7,000 iterations first (\~15 min on RTX 3090, \~$0.03). Check quality in the web viewer. Only run full 30K iterations for final output.
{% endhint %}
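The dollar figures above are just hourly rate × training time; a tiny helper for budgeting runs (rates are the approximate marketplace estimates from the table and fluctuate):

```python
def training_cost(price_per_hour, minutes):
    """GPU cost of one training run at a given hourly rate, in dollars."""
    return round(price_per_hour * minutes / 60, 3)

print(training_cost(0.12, 50))  # RTX 3090, full 30K run: ~$0.10
print(training_cost(0.12, 15))  # RTX 3090, 7K preview:   ~$0.03
```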

**COLMAP preprocessing note:** COLMAP (Structure from Motion) can use the GPU for feature extraction and matching, but the sparse reconstruction (bundle adjustment) is CPU-bound. Most Clore.ai servers have adequate CPUs for scenes under 200 images. For 500+ image datasets, look for servers with 16+ CPU cores.

***

## Useful Resources

* [3D Gaussian Splatting GitHub](https://github.com/graphdeco-inria/gaussian-splatting)
* [Original Paper (SIGGRAPH 2023)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/)
* [gsplat — Fast Implementation](https://github.com/nerfstudio-project/gsplat)
* [SuperSplat — Browser Viewer](https://playcanvas.com/super-splat)
* [Gaussian Splatting Community (Reddit)](https://www.reddit.com/r/gaussiansplatting/)
* [Awesome Gaussian Splatting](https://github.com/MrNeRF/awesome-3D-gaussian-splatting)

