OpenHands AI Developer

Deploy OpenHands (formerly OpenDevin) on Clore.ai — run a fully autonomous AI software engineer on affordable GPU cloud servers for coding, debugging, and GitHub issue resolution.

Overview

OpenHands (formerly OpenDevin) is an open-source platform for autonomous AI software development agents. With 65K+ GitHub stars, it has become one of the most popular tools for delegating real programming tasks to AI — writing code, fixing bugs, resolving GitHub issues, running shell commands, browsing the web, and interacting with your codebase end-to-end.

Unlike typical code-completion tools, OpenHands runs an agentic loop: it receives a task, plans, writes code, executes it, observes the output, and iterates — all without human intervention. It supports dozens of LLM backends including OpenAI, Anthropic Claude, Google Gemini, and locally-hosted models via Ollama or vLLM.

Why Clore.ai for OpenHands?

  • OpenHands itself is CPU-based and does not require a GPU

  • However, pairing it with a local LLM (Ollama, vLLM) on the same server eliminates API costs and latency

  • Clore.ai's affordable GPU servers let you run both OpenHands and a local model for as little as $0.20–$0.35/hr

  • You get persistent workspace storage, Docker-in-Docker support, and full root access

  • Ideal for long-running autonomous tasks that would be expensive via cloud LLM APIs

Typical use cases on Clore.ai:

  • Autonomous code generation from a spec or issue description

  • Bulk refactoring large codebases

  • Running OpenHands + Ollama together for 100% offline agentic development

  • CI/CD task automation without API costs


Requirements

OpenHands requires Docker socket access and runs a sandboxed runtime container internally. The following table covers recommended configurations on Clore.ai:

| Configuration | GPU | VRAM | RAM | Storage | Est. Price |
|---|---|---|---|---|---|
| API-only (no local LLM) | Any / CPU-only | N/A | 8 GB | 20 GB | ~$0.05–0.10/hr |
| + Ollama (Llama 3.1 8B) | RTX 3090 | 24 GB | 16 GB | 40 GB | ~$0.20/hr |
| + Ollama (Qwen2.5 32B) | RTX 4090 | 24 GB | 32 GB | 60 GB | ~$0.35/hr |
| + vLLM (Llama 3.1 70B) | A100 80GB | 80 GB | 64 GB | 100 GB | ~$1.10/hr |
| + vLLM (Llama 3.3 70B INT4) | RTX 4090 | 24 GB | 32 GB | 80 GB | ~$0.35/hr |

Note: If you only use OpenAI/Anthropic/Gemini APIs, any server with ≥8 GB RAM works. GPU is only needed if you want to run a local LLM on the same machine. See the GPU Comparison Guide for more details.

Software requirements on the Clore.ai server:

  • Docker Engine (pre-installed on all Clore.ai images)

  • NVIDIA Container Toolkit (pre-installed on GPU images)

  • Docker socket accessible at /var/run/docker.sock

  • Outbound internet access for pulling GHCR images


Quick Start

Step 1: Select and Connect to a Clore.ai Server

In the Clore.ai marketplace, filter servers by:

  • RAM ≥ 16 GB (for local LLM combo)

  • Docker: ✓ enabled

  • Choose your preferred GPU if using a local model

Connect via SSH once the server is provisioned:
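For example (the address and port are placeholders; use the values shown on your Clore.ai server details page):

```shell
# Replace <server-ip> and <ssh-port> with the values from the Clore.ai dashboard
ssh root@<server-ip> -p <ssh-port>
```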

Step 2: Verify Docker Is Running
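
Two quick checks with the plain Docker CLI:

```shell
docker --version             # is the Docker client installed?
docker run --rm hello-world  # can we reach the daemon via /var/run/docker.sock?
```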

Both commands should succeed. If the Docker socket is missing, contact Clore.ai support or choose a different image.

Step 3: Pull and Run OpenHands
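
The commands below are modeled on the upstream OpenHands quick start. The image names and tags are assumptions — releases move quickly, so check the OpenHands installation docs for the tags current at install time (the main tag used here tracks the latest build):

```shell
docker pull ghcr.io/all-hands-ai/openhands:main

# Port 3000 serves the web UI; the Docker socket mount lets OpenHands
# spawn its sandboxed runtime container on the host daemon
docker run -it --rm \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=ghcr.io/all-hands-ai/runtime:main \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    ghcr.io/all-hands-ai/openhands:main
```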

Step 4: Access the Web UI

The UI is available at http://<server-ip>:3000

Clore.ai Port Forwarding: In the Clore.ai dashboard, make sure port 3000 is forwarded/exposed in your server configuration. Some templates restrict external ports — check the "Ports" section in your server details.

On first launch, OpenHands will prompt you to configure an LLM provider.

Step 5: Configure Your LLM

In the web UI settings:

  • Provider: Select OpenAI, Anthropic, Google, or Custom

  • API Key: Enter your API key

  • Model: e.g., gpt-4o, claude-3-5-sonnet-20241022, or ollama/llama3.1

For local Ollama (see GPU Acceleration section below), use:

  • Provider: ollama

  • Base URL: http://host.docker.internal:11434

  • Model: ollama/llama3.1:8b


Configuration

Environment Variables

OpenHands can be configured entirely via environment variables passed to docker run:

| Variable | Description | Default |
|---|---|---|
| LLM_MODEL | Model identifier (e.g. gpt-4o, claude-3-5-sonnet-20241022) | Set in UI |
| LLM_API_KEY | API key for the LLM provider | Set in UI |
| LLM_BASE_URL | Custom base URL (for Ollama, vLLM, LiteLLM) | Provider default |
| SANDBOX_TIMEOUT | Agent sandbox timeout in seconds | 120 |
| MAX_ITERATIONS | Max agentic loop iterations per task | 100 |
| SANDBOX_USER_ID | UID to run sandbox as (use $(id -u)) | 0 |
| LOG_ALL_EVENTS | Enable verbose event logging (true/false) | false |

Persistent Configuration File

You can persist settings by mounting a config directory:
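
A sketch of the pattern: mount a state directory so settings entered in the UI survive container restarts. The ~/.openhands-state path and its container-side mount point follow the upstream docs, but verify them against your OpenHands version:

```shell
docker run -it --rm \
    -v ~/.openhands-state:/.openhands-state \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    ghcr.io/all-hands-ai/openhands:main
```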

Running in Background (Detached Mode)

For long-running sessions on Clore.ai:
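
A sketch (image tag as in the Quick Start, an assumption to verify): drop --rm, add -d and a restart policy so the UI keeps running after your SSH session ends.

```shell
docker run -d --restart unless-stopped \
    -v ~/.openhands-state:/.openhands-state \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    ghcr.io/all-hands-ai/openhands:main

docker logs -f openhands-app   # follow agent output; Ctrl+C detaches from the log
```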


GPU Acceleration (Local LLM Integration)

While OpenHands itself doesn't use the GPU, combining it with a local LLM running on Clore.ai's GPU gives you a powerful, cost-effective, API-free autonomous agent.

Option A: OpenHands + Ollama (Simple Setup)

Run Ollama first, then point OpenHands at it:
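
A minimal sketch using the official ollama/ollama image; the model choice is an example (see the model table below for alternatives):

```shell
# GPU-enabled Ollama on its default port 11434, with models persisted in a volume
docker run -d --gpus all \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    --name ollama \
    ollama/ollama

# Pull a model for OpenHands to use
docker exec ollama ollama pull llama3.1:8b
```

In the OpenHands UI, set the Base URL to http://localhost:11434 and the model to ollama/llama3.1:8b, as described in Step 5.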

See the full Ollama guide for model selection, performance tuning, and GPU configuration.

Option B: OpenHands + vLLM (High Performance)

For maximum throughput with larger models:
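
A sketch with the vllm/vllm-openai image. The model here is an assumption: an AWQ-quantized 32B coder that fits in 24 GB VRAM; unquantized 70B-class models need 80+ GB or multi-GPU tensor parallelism.

```shell
docker run -d --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    --name vllm \
    vllm/vllm-openai \
    --model Qwen/Qwen2.5-Coder-32B-Instruct-AWQ \
    --max-model-len 16384
```

vLLM exposes an OpenAI-compatible API, so in OpenHands select a custom provider with Base URL http://localhost:8000/v1.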

See the vLLM guide for full setup, quantization options, and multi-GPU configurations.

| Model | Size | Min VRAM | Quality |
|---|---|---|---|
| qwen2.5-coder:7b | 7B | 8 GB | ★★★☆☆ |
| deepseek-coder-v2:16b | 16B | 12 GB | ★★★★☆ |
| qwen2.5-coder:32b | 32B | 24 GB | ★★★★☆ |
| llama3.1:70b | 70B | 48 GB | ★★★★★ |


Tips & Best Practices

1. Use Workspace Mounts Wisely

Mount your actual project directory as the workspace so OpenHands can directly edit your files:
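
One pattern from earlier OpenHands releases (the WORKSPACE_BASE variable and the /opt/workspace_base mount point have changed across versions, so treat this as a sketch and check the current docs; the project path is a placeholder):

```shell
export WORKSPACE_BASE=$HOME/my-project   # hypothetical project path

docker run -it --rm \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    ghcr.io/all-hands-ai/openhands:main
```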

2. Task Prompting for Best Results

OpenHands works best with specific, actionable prompts:
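
For instance (the file and test names are made up for illustration):

```
Weak:    "Improve my code."
Better:  "test_empty_password in tests/test_auth.py fails with a 500.
          Add input validation to the login handler in app/auth.py so an
          empty password returns 400, then run the test suite to confirm."
```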

3. Monitor Resource Usage
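
While an agent is working, standard Linux and NVIDIA tooling is enough to keep an eye on the box:

```shell
docker stats --no-stream   # per-container CPU and RAM usage
nvidia-smi -l 5            # GPU utilization and VRAM, refreshed every 5 s
df -h /                    # disk usage; sandbox images accumulate over time
```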

4. Set Iteration Limits

Prevent runaway agents from consuming too many API tokens:
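
Pass the limits from the Environment Variables table on your docker run line; the values below are arbitrary examples to tune per task:

```shell
# Add to the docker run command from the Quick Start:
    -e MAX_ITERATIONS=30 \     # hard cap on agent loop iterations
    -e SANDBOX_TIMEOUT=180 \   # kill stuck sandbox commands after 3 minutes
```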

5. GitHub Integration

OpenHands can resolve GitHub issues directly. Configure in the UI:

  • GitHub Token: Your personal access token with repo scope

  • OpenHands will clone the repo, fix the issue, and create a PR

6. Cost Estimation

For API-based LLMs, estimate cost per task:

  • Simple bug fix: ~$0.05–0.15 (Claude Haiku/GPT-4o-mini)

  • Complex feature: ~$0.50–2.00 (Claude Sonnet/GPT-4o)

  • For 100+ tasks/day, local LLM on Clore.ai pays for itself
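
The break-even is easy to sanity-check using the rough figures above (quoted estimates, not measurements):

```shell
# 100 simple tasks/day at ~$0.10 each via API, vs. a $0.20/hr server rented 24h
awk 'BEGIN{printf "API: $%.2f/day  local GPU: $%.2f/day\n", 100*0.10, 0.20*24}'
# → API: $10.00/day  local GPU: $4.80/day
```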


Troubleshooting

Docker Socket Permission Denied
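
Usually the shell user is not in the docker group (rare on Clore.ai root shells, common with custom users):

```shell
sudo usermod -aG docker $USER   # add the current user to the docker group
newgrp docker                   # pick up the new group without re-logging in
docker ps                       # should now succeed
```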

Sandbox Container Fails to Start

Port 3000 Not Accessible
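
Check whether the UI is actually listening and published before blaming the network:

```shell
docker port openhands-app   # is container port 3000 published to the host?
ss -tlnp | grep 3000        # is anything listening on the host side?
```

If both look right, the Clore.ai template is likely mapping a different external port to internal 3000; use the external port listed in the dashboard's Ports section.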

LLM Connection Errors with Ollama
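
Verify Ollama is reachable from both the host and inside the OpenHands container (the /api/tags endpoint lists installed models):

```shell
curl -s http://localhost:11434/api/tags   # from the host: is Ollama up?

# from inside the container; requires the container to have been started
# with --add-host host.docker.internal:host-gateway
docker exec openhands-app curl -s http://host.docker.internal:11434/api/tags
```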

Agent Loops Indefinitely

Out of Memory (OOM)
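
First confirm whether system RAM or VRAM ran out, then switch to a smaller or more aggressively quantized model:

```shell
dmesg | grep -i -e oom -e killed   # kernel OOM kills point to system RAM
free -h                            # current RAM headroom
nvidia-smi                         # VRAM pressure from the local LLM
```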


Further Reading
