# Open Interpreter

**Open Interpreter** lets language models run code, browse the web, and edit files on your machine through a natural language chat interface. With 57K+ GitHub stars, it's the leading open-source alternative to ChatGPT's Code Interpreter — but without sandbox limits.

{% hint style="success" %}
All examples can be run on GPU servers rented through [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

***

## What is Open Interpreter?

Open Interpreter brings the power of an AI coding assistant directly to your terminal. Instead of copy-pasting between ChatGPT and your shell, you chat naturally and the model executes code in real time:

* **Run Python, JavaScript, shell, R, and AppleScript (macOS only)** — directly on your server
* **Browse the web** — fetch pages, fill forms, extract data
* **Edit files** — create, modify, and manage any file on disk
* **Persistent state** — variables, imports, and results survive across messages
* **Multiple LLM backends** — OpenAI, Anthropic, local models via Ollama/LlamaCpp
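
Beyond the interactive CLI, the same engine is importable as a Python library. A minimal sketch following the documented Python API (the model name and prompt here are just placeholders):

```python
from interpreter import interpreter

# Optional: choose a backend model (any LiteLLM-style identifier)
interpreter.llm.model = "gpt-4o"

# One-shot message: the model writes and executes code; variables and
# imports persist across further .chat() calls in the same process
interpreter.chat("Plot the sizes of the files in the current directory.")
```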

{% hint style="info" %}
Open Interpreter is designed for developers and researchers who want a conversational interface to their entire compute environment. On a Clore.ai GPU server, you get a powerful machine with full internet access and no execution limits.
{% endhint %}

***

## Server Requirements

| Component | Minimum                  | Recommended                    |
| --------- | ------------------------ | ------------------------------ |
| GPU       | Any (CPU mode available) | RTX 3090 / A100 for local LLMs |
| VRAM      | —                        | 24 GB+ for local 13B models    |
| RAM       | 8 GB                     | 16 GB+                         |
| CPU       | 4 cores                  | 8+ cores                       |
| Storage   | 20 GB                    | 50 GB+                         |
| OS        | Ubuntu 20.04+            | Ubuntu 22.04                   |
| Python    | 3.10+                    | 3.11                           |
| Network   | Required                 | High-speed for web browsing    |

***

## Ports

| Port | Service                 | Notes                              |
| ---- | ----------------------- | ---------------------------------- |
| 22   | SSH                     | Terminal access, tunnel for web UI |
| 8000 | Open Interpreter Server | REST API & optional web UI         |

***

## Quick Start with Docker

Open Interpreter doesn't have an official Docker image, so we build a clean one. This approach gives you a reproducible, isolated environment on any Clore.ai server.

### Dockerfile

```dockerfile
FROM python:3.11-slim

# System dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    wget \
    build-essential \
    nodejs \
    npm \
    chromium \
    chromium-driver \
    && rm -rf /var/lib/apt/lists/*

# Install Open Interpreter (the [local] extra covers local-model support)
RUN pip install --no-cache-dir \
    'open-interpreter[local]' \
    playwright \
    jupyter

# Install Playwright's Chromium along with its system dependencies
RUN playwright install --with-deps chromium

WORKDIR /workspace

# Expose server port
EXPOSE 8000

# Default: interactive terminal
CMD ["interpreter"]
```

### Build & Run

```bash
# Build the image
docker build -t open-interpreter:latest .

# Run interactive mode (terminal chat)
docker run -it --rm \
  -e OPENAI_API_KEY=sk-... \
  -v $(pwd)/workspace:/workspace \
  open-interpreter:latest

# Run as REST API server
docker run -d \
  --name open-interpreter \
  -p 8000:8000 \
  -e OPENAI_API_KEY=sk-... \
  -v $(pwd)/workspace:/workspace \
  open-interpreter:latest \
  interpreter --server --port 8000 --host 0.0.0.0
```

***

## Installation on Clore.ai (Bare Metal)

If you prefer to run directly on a Clore.ai server without Docker:

### Step 1 — Rent a Server

1. Go to [Clore.ai Marketplace](https://clore.ai/marketplace)
2. Filter by **RAM ≥ 16 GB**, **GPU** (optional but useful for local models)
3. Choose a server with a **PyTorch** or **Ubuntu** base image
4. Open **SSH port 22** and optionally **8000** in your order

### Step 2 — Connect via SSH

```bash
ssh root@<server-ip> -p <ssh-port>
```

### Step 3 — Install Dependencies

```bash
# Update system
apt-get update && apt-get upgrade -y

# Install Python 3.11 (stock Ubuntu 20.04/22.04 typically needs the deadsnakes PPA)
apt-get install -y software-properties-common
add-apt-repository -y ppa:deadsnakes/ppa && apt-get update
apt-get install -y python3.11 python3.11-venv python3-pip nodejs npm curl

# Create virtual environment
python3.11 -m venv /opt/open-interpreter
source /opt/open-interpreter/bin/activate
```

### Step 4 — Install Open Interpreter

```bash
# Basic install
pip install open-interpreter

# With local LLM support (Ollama, LlamaCpp)
pip install 'open-interpreter[local]'

# With browser/web support
pip install playwright
playwright install chromium
```

### Step 5 — Configure API Key

```bash
# Set environment variable (add to ~/.bashrc for persistence)
export OPENAI_API_KEY=sk-your-key-here
export ANTHROPIC_API_KEY=sk-ant-your-key-here

# Or use a .env file
cat > /workspace/.env << 'EOF'
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
EOF
```
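
Open Interpreter may not read a `.env` file automatically; if you drive it from Python, you can load the file yourself with `python-dotenv` (assumes `pip install python-dotenv`):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

# Export the keys from /workspace/.env into the process environment
load_dotenv("/workspace/.env")
assert "OPENAI_API_KEY" in os.environ
```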

### Step 6 — First Run

```bash
# Activate environment
source /opt/open-interpreter/bin/activate

# Start interactive chat
interpreter

# Or with specific model
interpreter --model gpt-4o
interpreter --model claude-3-5-sonnet-20241022
```

***

## Using Local LLMs (No API Key Required)

One of Open Interpreter's killer features on Clore.ai GPU servers is the ability to run entirely local models, with no API key and no per-token cost:

### Option A: Ollama Backend

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a code-capable model
ollama pull codellama:13b
ollama pull deepseek-coder:6.7b
ollama pull mistral:7b

# Run Open Interpreter with Ollama
interpreter --model ollama/codellama:13b
```
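
Before pointing Open Interpreter at Ollama, it can help to confirm the daemon is actually serving. A quick check against Ollama's local REST API (it listens on port 11434 by default):

```python
import requests

# Ollama serves a REST API on localhost:11434 by default
r = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "codellama:13b", "prompt": "print hello in Python", "stream": False},
    timeout=120,
)
print(r.json()["response"])
```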

### Option B: LlamaCpp Backend

```bash
# Install llama-cpp-python with CUDA support
pip install 'llama-cpp-python[server]' --extra-index-url https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu121

# Download a GGUF model
wget https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/codellama-13b.Q4_K_M.gguf

# Run with the local model
interpreter --local --model /path/to/codellama-13b.Q4_K_M.gguf
```
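
To verify the downloaded GGUF loads and offloads to the GPU, you can exercise llama-cpp-python directly; a minimal sketch (adjust `model_path` to wherever you saved the file, `n_gpu_layers=-1` offloads every layer):

```python
from llama_cpp import Llama

# Load the GGUF downloaded above; -1 offloads all layers to the GPU
llm = Llama(model_path="codellama-13b.Q4_K_M.gguf", n_gpu_layers=-1, n_ctx=4096)

out = llm("Write a Python one-liner that prints the current date:", max_tokens=64)
print(out["choices"][0]["text"])
```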

***

## Running as a Server (REST API)

Open Interpreter 0.2+ includes a built-in HTTP server for programmatic access:

```bash
# Start the server
interpreter --server --port 8000 --host 0.0.0.0

# In another terminal or client, send requests:
curl -X POST http://<server-ip>:8000/run \
  -H "Content-Type: application/json" \
  -d '{"language": "python", "code": "import os; print(os.listdir(\".\"))"}'

# Chat endpoint
curl -X POST http://<server-ip>:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "List all Python files in /workspace and count lines of code"}'
```
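
The same calls from Python, mirroring the endpoints above with `requests` (the endpoint paths and payload shapes are taken from the curl examples, so adjust them if your Open Interpreter version differs):

```python
import requests

BASE = "http://<server-ip>:8000"  # or http://localhost:8000 via an SSH tunnel

# Execute a code snippet directly
r = requests.post(f"{BASE}/run", json={"language": "python", "code": "print(2 + 2)"})
print(r.json())

# Send a natural-language task
r = requests.post(f"{BASE}/chat", json={"message": "How much free disk space is left?"})
print(r.json())
```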

### SSH Tunnel for Local Access

If port 8000 is not publicly exposed, use SSH tunneling:

```bash
# On your local machine
ssh -L 8000:localhost:8000 root@<server-ip> -p <ssh-port> -N
# Then open http://localhost:8000
```

***

## Practical Examples

### Example 1: Data Analysis Pipeline

```
User: Download the MNIST dataset, train a simple CNN, and plot accuracy curves. Save the plot as mnist_results.png

Open Interpreter: Sure! I'll do this step by step...
[executes Python code in real time]
```

```python
# Open Interpreter generates and runs code along these lines:
import tensorflow as tf
import matplotlib.pyplot as plt

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(), tf.keras.layers.Dense(10, activation="softmax")])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
history = model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
for key in ("accuracy", "val_accuracy"):
    plt.plot(history.history[key], label=key)
plt.xlabel("epoch"); plt.legend(); plt.savefig("mnist_results.png")
```

### Example 2: Web Scraping

```
User: Scrape the top 10 trending GitHub repositories today and save them to a CSV with name, stars, and description.
```
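
The code it produces for this will vary run to run; a rough sketch of the usual approach with `requests` and BeautifulSoup (the CSS selectors are assumptions about GitHub's current trending-page markup and may need adjusting):

```python
import csv

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = requests.get("https://github.com/trending", timeout=30).text
soup = BeautifulSoup(html, "html.parser")

rows = []
for article in soup.select("article.Box-row")[:10]:  # selector is an assumption
    name = article.h2.a["href"].strip("/")
    desc = article.p.get_text(strip=True) if article.p else ""
    stars = article.select_one('a[href$="/stargazers"]').get_text(strip=True)
    rows.append([name, stars, desc])

with open("trending.csv", "w", newline="") as f:
    csv.writer(f).writerows([["name", "stars", "description"], *rows])
```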

### Example 3: File Management

```
User: Find all .log files in /var/log older than 7 days and compress them into a tarball at /backup/logs-$(date).tar.gz
```
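
For a task like this the model typically reaches for the standard library; a self-contained sketch of what that generated code might look like (paths mirror the prompt):

```python
import os
import tarfile
import time
from datetime import date

cutoff = time.time() - 7 * 86400  # anything older than 7 days
archive = f"/backup/logs-{date.today()}.tar.gz"

os.makedirs("/backup", exist_ok=True)
with tarfile.open(archive, "w:gz") as tar:
    for root, _dirs, files in os.walk("/var/log"):
        for name in files:
            path = os.path.join(root, name)
            if name.endswith(".log") and os.path.getmtime(path) < cutoff:
                tar.add(path)
```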

### Example 4: System Monitoring Script

```
User: Write me a Python script that monitors GPU memory usage every 5 seconds and alerts if it exceeds 90%. Run it in the background.
```
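
The kind of script it might hand back, as a minimal sketch (assumes an NVIDIA driver so `nvidia-smi` is on the PATH; the 90% threshold and 5-second interval come from the prompt):

```python
import subprocess
import time

THRESHOLD = 0.90  # alert above 90% of VRAM

while True:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    for idx, line in enumerate(out.strip().splitlines()):
        used, total = (int(x) for x in line.split(","))
        if used / total > THRESHOLD:
            print(f"ALERT: GPU {idx} memory at {used}/{total} MiB")
    time.sleep(5)
```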

***

## Configuration File

Create `~/.interpreter/config.yaml` to set defaults:

```yaml
model: gpt-4o
temperature: 0
system_message: |
  You are a helpful AI assistant running on a Clore.ai GPU server.
  Always prefer efficient, production-ready code.
  Save important outputs to /workspace/.
safe_mode: false
auto_run: true
verbose: false
```
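
When embedding Open Interpreter in your own Python code, the same defaults can be set programmatically; a sketch following the 0.2.x attribute names (check your installed version if these differ):

```python
from interpreter import interpreter

interpreter.llm.model = "gpt-4o"
interpreter.llm.temperature = 0
interpreter.auto_run = True  # execute generated code without confirmation
interpreter.system_message += "\nSave important outputs to /workspace/."

interpreter.chat("Summarize the CSV files in /workspace.")
```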

***

## Running with systemd (Persistent Service)

```ini
# /etc/systemd/system/open-interpreter.service
[Unit]
Description=Open Interpreter Server
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/workspace
Environment=OPENAI_API_KEY=sk-your-key
ExecStart=/opt/open-interpreter/bin/interpreter --server --port 8000 --host 0.0.0.0
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

```bash
systemctl daemon-reload
systemctl enable open-interpreter
systemctl start open-interpreter
systemctl status open-interpreter
```

***

## Troubleshooting

### `interpreter` command not found

```bash
# Make sure the venv is activated
source /opt/open-interpreter/bin/activate

# Or install globally
pip install open-interpreter

# Check PATH
which interpreter
echo $PATH
```

### Code execution is blocked / safety mode

```bash
# Disable safe mode (values: off, ask, auto — use with caution on trusted servers)
interpreter --safe_mode off

# Skip the per-block y/n confirmation prompt
interpreter -y

# Or in config.yaml:
# safe_mode: false
# auto_run: true
```

### Playwright / browser errors

```bash
# Install system deps for Chromium
apt-get install -y \
    libnss3 libatk1.0-0 libatk-bridge2.0-0 \
    libcups2 libxcomposite1 libxdamage1 \
    libxrandr2 libgbm1 libxkbcommon0

playwright install chromium
playwright install-deps
```

### Out of memory with local LLMs

```bash
# Use a smaller quantized model
ollama pull codellama:7b  # instead of 13b

# Or reduce context window
interpreter --model ollama/codellama:7b --context_window 4096
```

### Connection refused on port 8000

```bash
# Check if server is running
ss -tlnp | grep 8000

# Check firewall
ufw allow 8000/tcp

# Restart service
systemctl restart open-interpreter
```

### API rate limits

```bash
# Switch to Anthropic Claude for higher limits
export ANTHROPIC_API_KEY=sk-ant-...
interpreter --model claude-3-5-sonnet-20241022

# Or use a local model to avoid API limits entirely
interpreter --model ollama/codellama:13b
```

***

## Security Considerations

{% hint style="warning" %}
Open Interpreter executes code directly on your server. Always:

* Run in a Docker container or VM for production use
* Never expose port 8000 publicly without authentication
* Use SSH tunneling for remote access
* Audit the code before enabling `auto_run: true` for untrusted inputs
{% endhint %}

***

## Clore.ai GPU Recommendations

Open Interpreter itself is lightweight — the GPU need is driven by whichever **local model** you run as the backend.

| GPU       | VRAM  | Clore.ai Price | Local Model Recommendation                                          |
| --------- | ----- | -------------- | ------------------------------------------------------------------- |
| RTX 3090  | 24 GB | \~$0.12/hr     | CodeLlama 13B Q8, Llama 3 8B, Mistral 7B — solid coding quality     |
| RTX 4090  | 24 GB | \~$0.70/hr     | CodeLlama 34B Q4, DeepSeek Coder 33B Q4 — near GPT-4 coding quality |
| A100 40GB | 40 GB | \~$1.20/hr     | Llama 3 70B Q4 — production-grade autonomous coding agent           |
| CPU-only  | —     | \~$0.02/hr     | Any model via OpenAI/Anthropic API — no local GPU needed            |

{% hint style="info" %}
**If you're using OpenAI/Anthropic API:** You only need a CPU instance (\~$0.02/hr) — the GPU is irrelevant since inference runs in the cloud. Choose GPU instances only when running **local models** to avoid per-token API costs.

**Best local model setup:** RTX 3090 + Ollama running `codellama:13b` gives you a fully autonomous, privacy-preserving coding agent with no API costs for \~$0.12/hr.
{% endhint %}

***

## Useful Links

* **GitHub**: <https://github.com/OpenInterpreter/open-interpreter> ⭐ 57K+
* **Documentation**: <https://docs.openinterpreter.com>
* **Discord**: <https://discord.gg/Hvz9Axh84z>
* **Clore.ai Marketplace**: <https://clore.ai/marketplace>
* **Ollama Models**: <https://ollama.ai/library>
* **HuggingFace GGUF Models**: <https://huggingface.co/TheBloke>
