OpenClaw on Clore

Deploy OpenClaw AI assistant on a Clore.ai GPU server — always-on, GPU-accelerated, with Telegram/Discord integration, local LLM inference, and voice capabilities.

Overview

OpenClaw is an open-source AI agent platform that connects to Claude, GPT, Gemini, and local models — acting as a personal AI assistant across Telegram, Discord, WhatsApp, and more. Running it on a Clore.ai server gives you:

  • 24/7 uptime — no laptop sleep, no disconnections

  • GPU acceleration — local LLM inference (Ollama, vLLM), Whisper STT, TTS, image generation

  • Low cost — rent exactly the hardware you need, pay by the hour

  • Full control — root access, Docker support, any software stack

Why Clore + OpenClaw?

| Feature | Laptop | Traditional VPS | Clore.ai Server |
| --- | --- | --- | --- |
| Always-on | ❌ | ✅ | ✅ |
| GPU available | Limited | ❌ or $$$ | ✅ from $0.10/hr |
| Local LLM inference | Slow | CPU only | Full GPU speed |
| Voice (Whisper/TTS) | Slow (CPU) | Slow (CPU) | ✅ Real-time |
| Root + Docker | ✅ | ✅ | ✅ |
| Hourly billing | N/A | Monthly | ✅ Per-hour |

| Use Case | GPU | VRAM | RAM | Est. Cost |
| --- | --- | --- | --- | --- |
| Basic assistant (API-only, no local models) | Any / CPU-only | N/A | 8 GB+ | $0.05–0.15/hr |
| Local 7–8B LLM (Ollama + Llama 3.1 8B) | RTX 3060/3070 | 12 GB | 16 GB+ | $0.10–0.25/hr |
| Local 70B LLM (vLLM + Llama 3.1 70B) | RTX 4090 / A100 | 24–80 GB | 64 GB+ | $0.30–1.00/hr |
| Full stack (LLM + Whisper + TTS + image gen) | RTX 4090 | 24 GB | 32 GB+ | $0.25–0.50/hr |

Tip: If you only need OpenClaw as a cloud-based assistant using API models (Claude, GPT), you don't need a GPU at all — a cheap CPU server works fine. Add a GPU when you want local inference.


Step 1: Rent a Server on Clore.ai

1.1 Browse the Marketplace

Go to clore.ai/marketplace and filter by your requirements:

  • For basic assistant: Sort by price, pick any cheap Ubuntu server

  • For local LLM: Filter by GPU (e.g., RTX 4090), ensure ≥24 GB VRAM

  • OS: Choose Ubuntu 22.04 or Ubuntu 24.04 (best compatibility)

1.2 Create an Order

  1. Select the server → Rent

  2. Choose On-demand (hourly) or Spot (cheaper but can be outbid)

  3. Select the Docker image: ubuntu:22.04 or nvidia/cuda:12.4.0-runtime-ubuntu22.04 (if you need GPU)

  4. Set SSH public key (or use password — SSH key recommended)

  5. Confirm the order

1.3 Connect via SSH

Once the server is running, find the SSH connection details in your Orders page:
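
A typical connection command looks like this; the IP, port, and user come from your order details (the values below are placeholders):

```bash
# Substitute the host and port shown on your Orders page
ssh -p 50022 root@<server-ip>
```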

Note: Clore servers use Docker containers, so you get root access inside the container. The SSH port may be non-standard (e.g., 50022) — check your order details.


Step 2: Install OpenClaw

2.1 Install Node.js 22+
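
One straightforward route is the NodeSource setup script (nvm works just as well; see Troubleshooting for that variant):

```bash
# NodeSource repository for Node.js 22.x on Debian/Ubuntu (you are root inside the container)
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs

# Verify
node --version    # should print v22.x
npm --version
```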

2.2 Install OpenClaw

Option A: Installer script (recommended)

The script installs the CLI, runs onboarding, and starts the gateway.
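
The exact URL is published in the OpenClaw README; the line below is only the general pattern, with a placeholder instead of a verified address:

```bash
# Replace <installer-url> with the official address from the OpenClaw README
curl -fsSL <installer-url> | bash
```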

Option B: Manual npm install
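
Assuming the CLI is published on npm under the name `openclaw` (confirm the actual package name in the project README):

```bash
# Package name is an assumption -- check the OpenClaw README
npm install -g openclaw
openclaw --version
```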

2.3 Run the Onboarding Wizard

If you used the installer script, onboarding runs automatically. Otherwise:
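
The subcommand name below is a guess; run the CLI's built-in help to find the real entry point:

```bash
# Hypothetical subcommand -- check `openclaw --help`
openclaw onboard
```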

The wizard will ask you to:

  1. Set up auth — paste your Anthropic API key or connect via OAuth

  2. Choose a channel — Telegram bot token, Discord, WhatsApp, etc.

  3. Configure the gateway — port, binding, security

For Telegram: Create a bot via @BotFather, copy the token, and paste it during onboarding.


Step 3: Configure for Always-On Operation

3.1 Start the Gateway as a Service

If OpenClaw didn't auto-install the systemd service:
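
A minimal unit file, assuming the gateway is launched with an `openclaw gateway` subcommand and that your image runs systemd; adjust ExecStart to whatever command your install actually uses:

```bash
cat > /etc/systemd/system/openclaw.service <<'EOF'
[Unit]
Description=OpenClaw gateway
After=network-online.target

[Service]
# ExecStart is an assumption -- match it to your installed CLI
ExecStart=/usr/bin/env openclaw gateway
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now openclaw
```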

3.2 Alternative: Screen/tmux (Quick & Simple)
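
If your container doesn't run systemd, a detached tmux session keeps the gateway alive after you disconnect (same `openclaw gateway` assumption as above):

```bash
# Start the gateway in a detached session named "openclaw"
tmux new-session -d -s openclaw 'openclaw gateway'

# Re-attach later to check on it
tmux attach -t openclaw
```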


Step 4: GPU Setup (Optional — For Local Models)

Skip this section if you only use API-based models (Claude, GPT, etc.).

4.1 Verify GPU Access
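
Run nvidia-smi inside the container:

```bash
# Lists GPUs, driver version, and CUDA runtime visible to the container
nvidia-smi
```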

If nvidia-smi works, your GPU is ready. Most Clore CUDA images come pre-configured.

4.2 Install Ollama (Local LLM Inference)
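
The official install script plus a model pull; the model tag below matches the 8B example from the sizing table:

```bash
# Official install script
curl -fsSL https://ollama.com/install.sh | sh

# If the container has no systemd, start the server manually first:
# ollama serve &

# Pull a model that fits in ~12 GB of VRAM and smoke-test it
ollama pull llama3.1:8b
ollama run llama3.1:8b "Say hello"
```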

Configure OpenClaw to use Ollama as a provider — see the Ollama guide for details.

4.3 Install Whisper (Voice Transcription)

For GPU-accelerated speech-to-text:
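
WhisperX installs from PyPI and runs on the GPU via faster-whisper; a minimal sketch (the model and flags are common defaults, so check `whisperx --help` against your version):

```bash
pip install whisperx

# Transcribe a sample file on the GPU
whisperx sample.wav --model large-v2 --device cuda
```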

See the WhisperX guide for full setup.


Step 5: Security & Remote Access

5.1 Secure the Gateway

By default, the gateway binds to loopback (127.0.0.1). For remote access:

Option A: SSH tunnel (most secure)

From your laptop:
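
Assuming the default gateway port 18789 and the SSH port from Step 1.3:

```bash
# Forward local port 18789 to the gateway running on the server
ssh -p <ssh-port> -N -L 18789:127.0.0.1:18789 root@<server-ip>
```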

Then open http://127.0.0.1:18789/ in your browser.

Option B: Token-protected direct access

Edit ~/.openclaw/config.json5:
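
The key names below are assumptions based on the bind/token options this section refers to; compare them against the file the onboarding wizard generated and merge by hand rather than pasting blindly:

```bash
# Back up the existing file first -- this heredoc overwrites it outright
cp ~/.openclaw/config.json5 ~/.openclaw/config.json5.bak
cat > ~/.openclaw/config.json5 <<'EOF'
{
  gateway: {
    bind: "lan",                                // listen beyond loopback (key name is an assumption)
    port: 18789,
    token: "replace-with-a-long-random-secret"  // required when binding to lan
  }
}
EOF
```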

⚠️ Always set a token if binding to lan. Without it, anyone can access your gateway.

5.2 Firewall Setup
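
If ufw is available in your image, allow only SSH and (if you chose Option B above) the gateway port:

```bash
ufw allow 50022/tcp      # your SSH port from the order details
ufw allow 18789/tcp      # only if using token-protected direct access (Option B)
ufw enable
ufw status
```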


Step 6: Persistence & Backups

6.1 Important Directories

| Path | Contents |
| --- | --- |
| ~/.openclaw/ | Config, auth, state, agent profiles |
| ~/.openclaw/workspace/ | MEMORY.md, daily notes, skills, tools |
| ~/.openclaw/agents/ | Multi-agent configs (if using teams) |

6.2 Backup Script

Create a simple backup to keep your config safe:
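
A minimal sketch that tars up the directories from the table above and copies the archive off the server (the destination is a placeholder to replace with your own host):

```bash
# Archive config, workspace, and agent profiles with a timestamp
STAMP=$(date +%Y%m%d-%H%M%S)
tar czf "/root/openclaw-backup-$STAMP.tar.gz" -C "$HOME" .openclaw

# Copy the archive off the server (placeholder destination)
scp "/root/openclaw-backup-$STAMP.tar.gz" you@your-backup-host:backups/
```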

6.3 Migrating Between Servers

If you need to switch to a different Clore server:
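
The core of a migration is moving ~/.openclaw/ to the new machine and reinstalling the CLI; a sketch using the backup archive from the previous step (hosts and ports are placeholders):

```bash
# 1. On the old server: create a backup archive (see the script above)
# 2. On your machine: pull it down, then push it to the new server
scp -P <old-ssh-port> root@<old-server>:/root/openclaw-backup-*.tar.gz .
scp -P <new-ssh-port> openclaw-backup-*.tar.gz root@<new-server>:/root/

# 3. On the new server: repeat Step 2 (Node.js + OpenClaw), then restore state
tar xzf /root/openclaw-backup-*.tar.gz -C /root
```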


Example Configurations

Basic Telegram Bot (No GPU)

Cheapest setup — just an API-powered assistant:
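
A sketch of the whole setup on a cheap CPU-only server, reusing the commands from Steps 2 and 3 (the `openclaw` package and subcommand names remain the same assumptions as above):

```bash
# Cheap CPU-only Ubuntu server; API models only, no GPU needed
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
npm install -g openclaw               # package name is an assumption (Step 2.2)
openclaw onboard                      # hypothetical subcommand: API key + Telegram bot token
tmux new-session -d -s openclaw 'openclaw gateway'
```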

AI Workstation (GPU)

Full-featured with local models:
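
On a 24 GB GPU server such as the RTX 4090 row in the sizing table, the base install from Step 2 plus local inference and voice (the OpenClaw provider wiring itself is covered in the Ollama guide):

```bash
# Base install (Step 2), then local LLM + speech-to-text on the GPU
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1:8b               # or a larger model that fits your VRAM
pip install whisperx                  # GPU speech-to-text (Step 4.3)
# Then point OpenClaw at the local Ollama endpoint -- see the Ollama guide
```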

Multi-Agent Team

Run a team of specialized AI agents:
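
Multi-agent profiles live under ~/.openclaw/agents/ (see the table in Step 6.1); the profile format below is entirely hypothetical and only illustrates where per-agent configs go, so check the OpenClaw docs for the real schema:

```bash
mkdir -p ~/.openclaw/agents

# Hypothetical profile files -- the real schema comes from the OpenClaw docs
cat > ~/.openclaw/agents/researcher.json5 <<'EOF'
{ name: "researcher", model: "claude-api", role: "web research and summaries" }
EOF
cat > ~/.openclaw/agents/coder.json5 <<'EOF'
{ name: "coder", model: "ollama/llama3.1:8b", role: "local coding tasks" }
EOF
```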


Troubleshooting

Gateway won't start
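
Start with the service logs (assuming the systemd unit from Step 3; with tmux, re-attach to the session instead) and the usual suspects:

```bash
systemctl status openclaw
journalctl -u openclaw -n 50 --no-pager

# Common culprits: port already in use, or Node.js too old
ss -ltnp | grep 18789
node --version
```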

GPU not detected
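
Quick checks; if nvidia-smi is missing entirely, re-rent the server with a CUDA image (Step 1.2):

```bash
nvidia-smi                        # should list the GPU(s) mapped into the container
echo "$NVIDIA_VISIBLE_DEVICES"    # set by the NVIDIA container runtime in GPU-enabled images
```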

Connection drops on server restart

Clore spot instances can be reclaimed. For persistent operation:

  • Use on-demand pricing (not spot)

  • Set up the systemd service (auto-restart)

  • Keep backups (the backup script above)

  • Consider a dedicated/reserved server for critical workloads

Node.js version issues
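
OpenClaw needs Node 22+ (Step 2.1); if an older system Node is shadowing it, nvm is a quick fix:

```bash
node --version     # must be v22 or newer

# Install and switch with nvm if the system Node is too old
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
export NVM_DIR="$HOME/.nvm" && . "$NVM_DIR/nvm.sh"
nvm install 22
nvm use 22
```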


Tips & Best Practices

  1. Start cheap — Use a basic CPU server first. Add GPU when you need local inference.

  2. Use on-demand for production — Spot is cheaper but can be interrupted. On-demand guarantees uptime.

  3. Back up regularly — ~/.openclaw/ holds your configs, and ~/.openclaw/workspace/ holds all memory and notes (see Step 6).

  4. Monitor costs — Check your Clore dashboard regularly. Set spending alerts if available.

  5. Use the Control UI — Access via SSH tunnel at http://127.0.0.1:18789/ for web-based management.

  6. Combine with API models — Even with a GPU server, use Claude/GPT via API for the main agent and local models for specific tasks (embeddings, transcription).


