Nerfstudio

Nerfstudio is a modular, researcher-friendly framework for training and rendering Neural Radiance Fields (NeRF) — a technique that reconstructs photorealistic 3D scenes from 2D images. With over 10,000 GitHub stars, it is the de facto standard for NeRF research and production applications. Run it on Clore.ai's GPU cloud to reconstruct 3D scenes from your own photos or videos.


What is Nerfstudio?

NeRF (Neural Radiance Field) represents a 3D scene as a neural network that, given a camera position and direction, outputs the color and density at that point. By training on dozens of photographs taken from different angles, NeRF learns a complete 3D representation that can be rendered from any viewpoint.

Nerfstudio provides:

  • Multiple NeRF methods: Nerfacto, Instant-NGP, Splatfacto, TensoRF, and more

  • CLI and Python API

  • Interactive web viewer (Viser) at port 7007

  • Export to point clouds, meshes, and video flythroughs

  • Support for custom datasets via COLMAP integration

Use cases:

  • 3D scene reconstruction from drone footage

  • Product visualization from photos

  • Virtual tours from smartphone captures

  • Research into novel view synthesis


Prerequisites

| Requirement | Minimum | Recommended |
| --- | --- | --- |
| GPU VRAM | 8 GB | 16–24 GB |
| GPU | RTX 3080 | RTX 4090 / A100 |
| RAM | 16 GB | 32 GB |
| Storage | 20 GB | 50+ GB |
| CUDA | 11.8+ | 12.1+ |

Note: Training time scales with scene complexity; a typical outdoor scene from 100 photos trains in 10–30 minutes on an RTX 4090. The interactive viewer updates in real time during training.


Step 1 — Rent a GPU on Clore.ai

  1. Click Marketplace and filter by VRAM ≥ 16 GB.

  2. Select a server — RTX 4090 is ideal for Nerfstudio.

  3. Set Docker image: dromni/nerfstudio:latest

  4. Set open ports: 22 (SSH) and 7007 (Viser web viewer).

  5. Click Rent and wait for the instance to initialize.

Note: dromni/nerfstudio is the community-maintained image recommended by the Nerfstudio project and ships with all dependencies pre-installed (CUDA, tiny-cuda-nn, COLMAP, ffmpeg).


Step 2 — Connect via SSH

Note: The dromni/nerfstudio image runs as the user account (not root) by default; use sudo for administrative tasks.
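Clore.ai shows the SSH host and port on the instance card; a typical session looks like this (the IP and port below are placeholders):

```shell
# Substitute the IP and mapped SSH port from your Clore.ai dashboard
ssh -p 22000 user@203.0.113.10
```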

Verify the installation:
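A quick sanity check, assuming the dromni/nerfstudio image (ns-train is Nerfstudio's CLI entry point):

```shell
ns-train --help   # lists the available training methods if Nerfstudio is installed
nvidia-smi        # confirms the GPU and CUDA driver are visible
```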


Step 3 — Prepare Your Dataset

Option A: Use the Provided Example Dataset

Nerfstudio includes built-in datasets to test immediately:
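For example, the poster capture used throughout Nerfstudio's own documentation:

```shell
# Downloads to data/nerfstudio/poster by default
ns-download-data nerfstudio --capture-name=poster
```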

Option B: Process Your Own Images

If you have photos or video of your scene:

From images (COLMAP pipeline):
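Assuming your photos sit in ./my_images (a placeholder path), ns-process-data runs COLMAP and writes a Nerfstudio-ready dataset:

```shell
# Estimates camera poses with COLMAP and copies resized images alongside them
ns-process-data images --data ./my_images --output-dir ./processed
```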

From video:
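ns-process-data can also extract frames from a video before running COLMAP (paths are placeholders; --num-frames-target controls how many frames are sampled):

```shell
ns-process-data video --data ./capture.mp4 --output-dir ./processed \
    --num-frames-target 300
```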

Note: For best results, use 100–300 photos with significant overlap (>60% between adjacent frames). Walk around the object or scene in a systematic pattern: circles, grids, or figure-eights work well.


Step 4 — Train a NeRF

Nerfacto is Nerfstudio's flagship method, balancing quality and speed:
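Assuming ./processed is the output directory from Step 3:

```shell
ns-train nerfacto --data ./processed
```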

Training with Instant-NGP (Fastest)
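Same invocation, different method name (./processed is the placeholder dataset path):

```shell
ns-train instant-ngp --data ./processed
```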

Training with the Provided Poster Dataset
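The path below matches ns-download-data's default download location:

```shell
ns-train nerfacto --data data/nerfstudio/poster
```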


Step 5 — Access the Interactive Viewer

Open your browser and navigate to http://<SERVER_IP>:7007, substituting your instance's public IP from the Clore.ai dashboard (and the mapped port, if Clore.ai remapped 7007).

You will see a 3D viewer powered by Viser that shows:

  • Live training progress

  • Current NeRF rendering quality

  • Interactive camera controls

  • Training loss curves

Note: The viewer updates every few seconds during training. You can rotate, pan, and zoom to inspect scene quality as training progresses.


Available Training Methods

| Method | Speed | Quality | VRAM | Notes |
| --- | --- | --- | --- | --- |
| nerfacto | Medium | High | 8 GB | Best all-around |
| instant-ngp | Fast | Medium | 6 GB | Fastest training |
| splatfacto | Fast | High | 8 GB | Gaussian splatting |
| tensorf | Medium | High | 12 GB | Good for objects |
| mipnerf360 | Slow | Very High | 24 GB | Best quality |
| vanilla-nerf | Very Slow | High | 16 GB | Research baseline |

Training with Splatfacto (Gaussian Splatting)
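As with the other methods, only the method name changes (./processed is a placeholder):

```shell
ns-train splatfacto --data ./processed
```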


Step 6 — Evaluate and Render

Check Training Metrics
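ns-eval computes metrics such as PSNR, SSIM, and LPIPS on held-out views. The config path is printed when training starts; the one below is illustrative:

```shell
ns-eval --load-config outputs/processed/nerfacto/<timestamp>/config.yml \
    --output-path eval.json
```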

Render a Video Flythrough
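First design a camera path in the viewer's Render tab and export it as JSON, then (paths are placeholders):

```shell
ns-render camera-path \
    --load-config outputs/processed/nerfacto/<timestamp>/config.yml \
    --camera-path-filename camera_path.json \
    --output-path renders/flythrough.mp4
```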

Render Interpolated Spiral
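Recent Nerfstudio releases also ship a spiral renderer that needs no hand-made camera path (treat the subcommand name as an assumption and check ns-render --help on your version):

```shell
ns-render spiral \
    --load-config outputs/processed/nerfacto/<timestamp>/config.yml \
    --output-path renders/spiral.mp4
```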


Step 7 — Export 3D Geometry

Export Point Cloud
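A sketch with commonly used flags (paths are placeholders):

```shell
ns-export pointcloud \
    --load-config outputs/processed/nerfacto/<timestamp>/config.yml \
    --output-dir exports/pcd/ --num-points 1000000
```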

Export Mesh
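Nerfstudio's mesh exporter uses Poisson surface reconstruction:

```shell
ns-export poisson \
    --load-config outputs/processed/nerfacto/<timestamp>/config.yml \
    --output-dir exports/mesh/
```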

Export Gaussian Splats (PLY)
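This exporter only applies to splatfacto-trained models:

```shell
ns-export gaussian-splat \
    --load-config outputs/processed/splatfacto/<timestamp>/config.yml \
    --output-dir exports/splat/
```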


Python API

For programmatic training and evaluation:
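The Python API changes between releases; the sketch below follows the pattern used by Nerfstudio's own train.py (look up a stock method config, point it at your data, then setup() and train()). Treat the attribute names as assumptions and verify them against your installed version:

```python
from pathlib import Path

from nerfstudio.configs.method_configs import method_configs

# Start from the stock nerfacto config and point it at a processed dataset
config = method_configs["nerfacto"]
config.pipeline.datamanager.data = Path("processed")  # placeholder path
config.max_num_iterations = 30_000

# setup() instantiates the Trainer; train() runs the optimization loop
trainer = config.setup(local_rank=0, world_size=1)
trainer.setup()
trainer.train()
```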


Custom Dataset Tips

Camera Capture Best Practices

| Setting | Recommendation |
| --- | --- |
| Overlap | ≥ 60% between frames |
| Images | 100–300 (outdoors), 50–150 (objects) |
| Motion | Slow, steady movement |
| Lighting | Consistent, avoid harsh shadows |
| Focus | Sharp throughout |

Improving COLMAP Results
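One ns-process-data flag that often rescues difficult captures (paths are placeholders):

```shell
# Exhaustive matching compares every image pair: slower, but far more robust
ns-process-data images --data ./my_images --output-dir ./processed \
    --matching-method exhaustive
```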


Troubleshooting

COLMAP Fails to Find Camera Poses

Solutions:

  • Ensure images have sufficient overlap

  • Verify images are sharp (no motion blur)

  • Try exhaustive matching: --matching-method exhaustive

  • Reduce --num-frames-target for video to select better frames

Viewer Not Accessible

Solution: Ensure port 7007 is forwarded in Clore.ai. Test connectivity:
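From your local machine (substitute the instance's public IP and the mapped port):

```shell
curl -I http://203.0.113.10:7007   # any HTTP response means the viewer is reachable
```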

Training Loss Not Decreasing

Solutions:

  • Check COLMAP succeeded (look for transforms.json in output dir)

  • Lower the learning rate (run ns-train nerfacto --help to list the optimizer flags)

  • Check for dominant sky (use --pipeline.model.background-color white)

Out of Memory
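A common remedy for nerfacto-family methods is shrinking the ray batch (the default is 4096):

```shell
ns-train nerfacto --data ./processed \
    --pipeline.datamanager.train-num-rays-per-batch 1024
```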


Download Outputs

After training, download your renders and exports:
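From your local machine with scp, assuming renders/ and exports/ live in the remote home directory (IP and port are placeholders matching the SSH step):

```shell
scp -P 22000 -r user@203.0.113.10:~/renders ./renders
scp -P 22000 -r user@203.0.113.10:~/exports ./exports
```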


Cost Estimation

| GPU | VRAM | Est. Price | 100-image scene |
| --- | --- | --- | --- |
| RTX 3080 | 10 GB | ~$0.10/hr | ~30–45 min |
| RTX 4090 | 24 GB | ~$0.35/hr | ~10–15 min |
| A100 40GB | 40 GB | ~$0.80/hr | ~5–8 min |

Note: Start with Instant-NGP for fast previews, then switch to Nerfacto or MipNeRF360 for final quality. This workflow saves significant compute cost.


Useful Resources

  • Official documentation: https://docs.nerf.studio

  • Source code and issue tracker: https://github.com/nerfstudio-project/nerfstudio


Clore.ai GPU Recommendations

| Use Case | Recommended GPU | Est. Cost on Clore.ai |
| --- | --- | --- |
| Development/Testing | RTX 3090 (24GB) | ~$0.12/gpu/hr |
| Production | RTX 4090 (24GB) | ~$0.70/gpu/hr |
| Large Scale / High-res Scenes | A100 80GB | ~$1.20/gpu/hr |

💡 All examples in this guide can be deployed on Clore.ai GPU servers. Browse available GPUs and rent by the hour — no commitments, full root access.
