# OpenClaw on Clore

## Overview

[OpenClaw](https://openclaw.ai) is an open-source AI agent platform that connects to Claude, GPT, Gemini, and local models — acting as a personal AI assistant across Telegram, Discord, WhatsApp, and more. Running it on a Clore.ai server gives you:

* **24/7 uptime** — no laptop sleep, no disconnections
* **GPU acceleration** — local LLM inference (Ollama, vLLM), Whisper STT, TTS, image generation
* **Low cost** — rent exactly the hardware you need, pay by the hour
* **Full control** — root access, Docker support, any software stack

### Why Clore + OpenClaw?

| Feature             | Laptop  | Traditional VPS | Clore.ai Server |
| ------------------- | ------- | --------------- | --------------- |
| Always-on           | ❌       | ✅               | ✅               |
| GPU available       | Limited | ❌ or $$$        | ✅ from $0.10/hr |
| Local LLM inference | Slow    | CPU only        | Full GPU speed  |
| Voice (Whisper/TTS) | ✅       | Slow (CPU)      | ✅ Real-time     |
| Root + Docker       | ✅       | ✅               | ✅               |
| Hourly billing      | N/A     | Monthly         | ✅ Per-hour      |

### Recommended Hardware

| Use Case                                         | GPU             | VRAM     | RAM    | Est. Cost     |
| ------------------------------------------------ | --------------- | -------- | ------ | ------------- |
| **Basic assistant** (API-only, no local models)  | Any / CPU-only  | —        | 8 GB+  | $0.05–0.15/hr |
| **Local 7–8B LLM** (Ollama + Llama 3.1 8B)       | RTX 3060/3070   | 12 GB    | 16 GB+ | $0.10–0.25/hr |
| **Local 70B LLM** (vLLM + Llama 3.1 70B)         | RTX 4090 / A100 | 24–80 GB | 64 GB+ | $0.30–1.00/hr |
| **Full stack** (LLM + Whisper + TTS + image gen) | RTX 4090        | 24 GB    | 32 GB+ | $0.25–0.50/hr |

> **Tip:** If you only need OpenClaw as a cloud-based assistant using API models (Claude, GPT), you don't need a GPU at all — a cheap CPU server works fine. Add a GPU when you want local inference.
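
The VRAM figures above follow a rough rule of thumb: weight memory ≈ parameters (billions) × bits-per-weight ÷ 8, plus roughly 20% for KV cache and runtime overhead (the 20% is a loose assumption; long contexts need more). A quick sketch:

```bash
# Rough VRAM estimate: params (billions) × bits-per-weight / 8, +20% overhead.
# Example: an 8B model at 4-bit quantization.
params_b=8
bits=4
awk -v p="$params_b" -v b="$bits" 'BEGIN { printf "%.1f GB\n", p * b / 8 * 1.2 }'
```

8 × 4 ÷ 8 × 1.2 ≈ 4.8 GB, comfortably inside the 12 GB row above; the same formula puts a 70B model at 4-bit around 42 GB, consistent with the 48 GB+ noted for 70B models later in this guide.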

***

## Step 1: Rent a Server on Clore.ai

### 1.1 Browse the Marketplace

Go to [clore.ai/marketplace](https://clore.ai/marketplace) and filter by your requirements:

* **For basic assistant**: Sort by price, pick any cheap Ubuntu server
* **For local LLM**: Filter by GPU (e.g., RTX 4090), ensure ≥24 GB VRAM
* **OS**: Choose **Ubuntu 22.04** or **Ubuntu 24.04** (best compatibility)

### 1.2 Create an Order

1. Select the server → **Rent**
2. Choose **On-demand** (hourly) or **Spot** (cheaper but can be outbid)
3. Select the Docker image: **`ubuntu:22.04`** or **`nvidia/cuda:12.4.0-runtime-ubuntu22.04`** (if you need GPU)
4. Set SSH public key (or use password — SSH key recommended)
5. Confirm the order

### 1.3 Connect via SSH

Once the server is running, find the SSH connection details in your [Orders](https://clore.ai/my-orders) page:

```bash
ssh root@<server-ip> -p <port>
```

> **Note:** Clore servers use Docker containers, so you get root access inside the container. The SSH port may be non-standard (e.g., 50022) — check your order details.
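
To avoid retyping the IP and non-standard port, you can add a host alias to `~/.ssh/config` (the alias name `clore` and the key path are arbitrary; substitute your order's details):

```
Host clore
    HostName <server-ip>
    Port <port>
    User root
    IdentityFile ~/.ssh/id_ed25519
    ServerAliveInterval 60
```

After this, `ssh clore` connects directly, and `ServerAliveInterval` keeps idle sessions from being dropped.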

***

## Step 2: Install OpenClaw

### 2.1 Install Node.js 22+

```bash
# Update system packages
apt update && apt upgrade -y

# Install Node.js 22 via NodeSource
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt install -y nodejs

# Verify
node --version   # Should show v22.x.x
npm --version
```

### 2.2 Install OpenClaw

**Option A: Installer script (recommended)**

```bash
curl -fsSL https://openclaw.ai/install.sh | bash
```

The script installs the CLI, runs onboarding, and starts the gateway.

**Option B: Manual npm install**

```bash
npm install -g openclaw@latest
openclaw onboard --install-daemon
```

### 2.3 Run the Onboarding Wizard

If you used the installer script, onboarding runs automatically. Otherwise:

```bash
openclaw onboard --install-daemon
```

The wizard will ask you to:

1. **Set up auth** — paste your Anthropic API key or connect via OAuth
2. **Choose a channel** — Telegram bot token, Discord, WhatsApp, etc.
3. **Configure the gateway** — port, binding, security

> **For Telegram:** Create a bot via [@BotFather](https://t.me/BotFather), copy the token, and paste it during onboarding.

***

## Step 3: Configure for Always-On Operation

### 3.1 Start the Gateway as a Service

```bash
# Check if the gateway is running
openclaw gateway status

# Start it (if not already running)
openclaw gateway start

# Verify it's healthy
openclaw status
```

### 3.2 Keep It Running with systemd (Recommended)

If OpenClaw didn't auto-install the systemd service:

```bash
# Create a systemd service file
cat > /etc/systemd/system/openclaw.service << 'EOF'
[Unit]
Description=OpenClaw Gateway
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/openclaw gateway --port 18789
Restart=always
RestartSec=10
Environment=NODE_ENV=production
WorkingDirectory=/root

[Install]
WantedBy=multi-user.target
EOF

# Enable and start
systemctl daemon-reload
systemctl enable openclaw
systemctl start openclaw

# Check status
systemctl status openclaw
```

### 3.3 Alternative: Screen/tmux (Quick & Simple)

```bash
# Install screen
apt install -y screen

# Start OpenClaw in a detached screen session
screen -dmS openclaw openclaw gateway --port 18789

# Reattach later
screen -r openclaw
```

***

## Step 4: GPU Setup (Optional — For Local Models)

Skip this section if you only use API-based models (Claude, GPT, etc.).

### 4.1 Verify GPU Access

```bash
# Check if NVIDIA drivers are available
nvidia-smi
```

If `nvidia-smi` works, your GPU is ready. Most Clore CUDA images come pre-configured.

### 4.2 Install Ollama (Local LLM Inference)

```bash
curl -fsSL https://ollama.com/install.sh | sh

# Start Ollama (on Linux the installer usually registers a systemd service;
# start it manually only if it isn't already running)
ollama serve &

# Pull a model
ollama pull llama3.1:8b        # 8B — fits in 12GB VRAM
# ollama pull llama3.1:70b     # 70B — needs 48GB+ VRAM
# ollama pull qwen2.5:32b      # 32B — needs 24GB VRAM
```

Configure OpenClaw to use Ollama as a provider — see the [Ollama guide](https://docs.clore.ai/guides/language-models/ollama) for details.
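
As a rough illustration only — the authoritative schema is in the Ollama guide linked above, and the field names here are assumptions — a provider entry in `~/.openclaw/config.json5` looks something like:

```json5
{
  models: {
    // Hypothetical sketch — check the Ollama guide for the real schema
    providers: {
      ollama: {
        baseUrl: "http://127.0.0.1:11434",  // Ollama's default API port
        models: ["llama3.1:8b"]
      }
    }
  }
}
```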

### 4.3 Install Whisper (Voice Transcription)

For GPU-accelerated speech-to-text:

```bash
# The base image may not ship pip — install it first
apt install -y python3-pip

pip install faster-whisper

# Or use WhisperX for better accuracy
pip install whisperx
```

See the [WhisperX guide](https://docs.clore.ai/guides/audio-and-voice/whisperx) for full setup.

***

## Step 5: Security & Remote Access

### 5.1 Secure the Gateway

By default, the gateway binds to loopback (127.0.0.1). For remote access:

**Option A: SSH tunnel (most secure)**

From your laptop:

```bash
ssh -N -L 18789:127.0.0.1:18789 root@<server-ip> -p <port>
```

Then open `http://127.0.0.1:18789/` in your browser.

**Option B: Token-protected direct access**

Edit `~/.openclaw/config.json5`:

```json5
{
  gateway: {
    bind: "lan",       // Listen on all interfaces
    port: 18789,
    auth: {
      token: "your-secret-token-here"  // Required for remote access!
    }
  }
}
```

> ⚠️ **Always set a token** if binding to `lan`. Without it, anyone can access your gateway.
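
Any long random string works as a token; a quick way to generate a strong one:

```bash
# 32 random bytes, hex-encoded → a 64-character token
openssl rand -hex 32
```

Paste the output into the `auth.token` field above, and supply the same value when connecting remotely.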

### 5.2 Firewall Setup

```bash
# Install UFW
apt install -y ufw

# Allow SSH (use your Clore SSH port)
ufw allow <ssh-port>/tcp

# Allow OpenClaw gateway (only if using direct access)
ufw allow 18789/tcp

# Enable the firewall — double-check the SSH rule above first,
# or you can lock yourself out of the server
ufw enable
```

***

## Step 6: Persistence & Backups

### 6.1 Important Directories

| Path                     | Contents                              |
| ------------------------ | ------------------------------------- |
| `~/.openclaw/`           | Config, auth, state, agent profiles   |
| `~/.openclaw/workspace/` | MEMORY.md, daily notes, skills, tools |
| `~/.openclaw/agents/`    | Multi-agent configs (if using teams)  |

### 6.2 Backup Script

Create a simple backup to keep your config safe:

```bash
cat > /root/backup-openclaw.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/root/openclaw-backups"
mkdir -p "$BACKUP_DIR"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
tar czf "$BACKUP_DIR/openclaw-$TIMESTAMP.tar.gz" \
  ~/.openclaw/config.json5 \
  ~/.openclaw/workspace/ \
  ~/.openclaw/agents/ \
  ~/.openclaw/identity/
echo "Backup saved: $BACKUP_DIR/openclaw-$TIMESTAMP.tar.gz"
EOF
chmod +x /root/backup-openclaw.sh

# Run daily via cron
(crontab -l 2>/dev/null; echo "0 4 * * * /root/backup-openclaw.sh") | crontab -
```
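
A backup you have never inspected may not restore cleanly; listing the newest archive's contents is a cheap sanity check (this sketch assumes the backup directory used by the script above):

```bash
# Show what the most recent backup actually contains
latest=$(ls -t /root/openclaw-backups/openclaw-*.tar.gz 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  tar tzf "$latest" | head -n 20
else
  echo "no backups yet"
fi
```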

### 6.3 Migrating Between Servers

If you need to switch to a different Clore server:

```bash
# On old server — export
tar czf /tmp/openclaw-migration.tar.gz ~/.openclaw/

# Transfer to new server
scp -P <port> /tmp/openclaw-migration.tar.gz root@<new-server-ip>:/tmp/

# On new server — import
tar xzf /tmp/openclaw-migration.tar.gz -C /
openclaw gateway start
```

***

## Example Configurations

### Basic Telegram Bot (No GPU)

Cheapest setup — just an API-powered assistant:

```
Server: Any Ubuntu, no GPU needed
Cost: ~$0.05–0.15/hr ($3–10/month)
Config: Anthropic API key + Telegram bot token
```

### AI Workstation (GPU)

Full-featured with local models:

```
Server: RTX 4090, 24GB VRAM, 32GB RAM
Cost: ~$0.25–0.50/hr
Stack: OpenClaw + Ollama (Llama 3.1 70B) + WhisperX + Coqui TTS
```

### Multi-Agent Team

Run a team of specialized AI agents:

```
Server: RTX 4090 or dual GPU
Cost: ~$0.30–0.60/hr
Stack: OpenClaw multi-agent (5+ agents) + Ollama + shared skills
```

***

## Troubleshooting

### Gateway won't start

```bash
# Check logs
openclaw gateway status
journalctl -u openclaw -n 50

# Common fix: port already in use
lsof -i :18789
kill <pid>
openclaw gateway start
```
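
The lookup-and-kill step can be collapsed into one line (assumes `lsof` is installed; `-t` prints only PIDs):

```bash
# Kill whatever holds the gateway port, or report that it's already free
kill "$(lsof -t -i :18789)" 2>/dev/null || echo "port 18789 already free"
```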

### GPU not detected

```bash
# Check NVIDIA drivers
nvidia-smi

# If not found, you may need the CUDA Docker image
# Re-create order with nvidia/cuda:12.4.0-runtime-ubuntu22.04
```

### Connection drops on server restart

Clore spot instances can be reclaimed. For persistent operation:

* Use **on-demand** pricing (not spot)
* Set up the systemd service (auto-restart)
* Keep backups (the backup script above)
* Consider a dedicated/reserved server for critical workloads

### Node.js version issues

```bash
# Check version
node --version

# If below v22, reinstall
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt install -y nodejs
```

***

## Tips & Best Practices

1. **Start cheap** — Use a basic CPU server first. Add GPU when you need local inference.
2. **Use on-demand for production** — Spot is cheaper but can be interrupted. On-demand guarantees uptime.
3. **Back up regularly** — Your `~/.openclaw/workspace/` contains all memory and configs.
4. **Monitor costs** — Check your Clore dashboard regularly. Set spending alerts if available.
5. **Use the Control UI** — Access via SSH tunnel at `http://127.0.0.1:18789/` for web-based management.
6. **Combine with API models** — Even with a GPU server, use Claude/GPT via API for the main agent and local models for specific tasks (embeddings, transcription).

***

## Further Reading

* [OpenClaw Getting Started](https://docs.openclaw.ai/start/getting-started)
* [OpenClaw VPS Hosting Guide](https://docs.openclaw.ai/install/vps)
* [OpenClaw Docker Setup](https://docs.openclaw.ai/install/docker)
* [Ollama on Clore](https://docs.clore.ai/guides/language-models/ollama)
* [vLLM on Clore](https://docs.clore.ai/guides/language-models/vllm)
* [GPU Comparison & Pricing](https://docs.clore.ai/guides/getting-started/gpu-comparison)
