AutoGPT Autonomous Agent

Deploy AutoGPT on Clore.ai — run the original autonomous AI agent platform with web browsing, code execution, and long-horizon task automation.

Overview

AutoGPT is the pioneering open-source autonomous AI agent platform, with 175K+ GitHub stars — one of the most starred repositories on GitHub. Originally a Python CLI tool that went viral in 2023, AutoGPT has evolved into a full-featured platform with a web frontend, visual workflow builder, multi-agent orchestration, and a built-in agent benchmark suite.

The current AutoGPT Platform consists of:

  • Frontend — Next.js visual agent builder (port 3000)

  • Backend / API — FastAPI service handling agent execution (port 8000)

  • Agent executor — Python workers running autonomous task loops

  • Postgres — persistent storage for agent state and runs

  • Redis — job queue and pub/sub

  • Minio — S3-compatible object storage for agent artifacts

On Clore.ai, AutoGPT runs entirely on CPU (it delegates LLM calls to cloud APIs), making it affordable at $0.05–0.20/hr. You can optionally integrate local models via its OpenAI-compatible provider support.

Key capabilities:

  • 🤖 Autonomous agents — agents decompose tasks into sub-goals and execute them iteratively

  • 🌐 Web browsing — agents can search the web, scrape pages, and synthesize information

  • 💻 Code execution — sandboxed Python execution environment for coding agents

  • 📁 File operations — read, write, and manage files as part of task execution

  • 🔗 Multi-agent — spawn specialized sub-agents and orchestrate them hierarchically

  • 🧠 Long-term memory — vector-backed memory persisted across sessions

  • 📈 Agent benchmarking — built-in AgentBenchmark suite for evaluating agent performance


Requirements

AutoGPT's compute requirements depend on whether you use cloud LLM APIs (default) or local models. The platform itself is lightweight.

| Configuration | GPU | VRAM | System RAM | Disk | Clore.ai Price |
| --- | --- | --- | --- | --- | --- |
| Minimal (cloud APIs) | None / CPU | N/A | 4 GB | 20 GB | ~$0.05/hr (CPU) |
| Standard | None / CPU | N/A | 8 GB | 40 GB | ~$0.08/hr |
| Recommended | None / CPU | N/A | 16 GB | 60 GB | ~$0.12/hr |
| + Local LLM (Ollama) | RTX 3090 | 24 GB | 16 GB | 80 GB | ~$0.20/hr |
| + Large Local LLM | A100 40 GB | 40 GB | 32 GB | 100 GB | ~$0.80/hr |

Note: AutoGPT uses API-based LLMs by default (OpenAI GPT-4, Anthropic Claude, etc.). A GPU is only useful if you configure a local model endpoint via Ollama or another OpenAI-compatible server.

API Keys Required

You'll need at least one of:

  • OpenAI API Key (GPT-4o recommended for best agent performance)

  • Anthropic API Key (Claude 3.5 Sonnet is excellent for agents)

  • Google AI Key (Gemini models supported)


Quick Start

1. Rent a Clore.ai server

Log into clore.ai and launch a server with:

  • 2+ CPU cores, 8 GB RAM minimum

  • Exposed ports 8000 (backend API) and 3000 (frontend)

  • SSH access enabled

  • 20+ GB disk space

2. Connect to the server
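
A typical connection looks like the following — the IP and SSH port are placeholders; use the values shown in your Clore.ai dashboard:

```shell
# Replace <clore-server-ip> and <ssh-port> with the values
# from the Clore.ai dashboard for your rented instance
ssh -p <ssh-port> root@<clore-server-ip>
```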

3. Clone and configure AutoGPT
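
One way to fetch the platform and prepare its environment file — the directory layout below follows the current AutoGPT repository, but verify the paths against the repo before running:

```shell
# Clone the official repository and enter the platform directory
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd AutoGPT/autogpt_platform

# Copy the example environment file so you can add your API keys
cp .env.example .env
```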

4. Set required environment variables
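
A representative `.env` fragment — the variable names here are assumptions based on common provider conventions; check your `.env.example` for the exact keys your AutoGPT version expects:

```shell
# LLM provider keys — set at least one (values below are placeholders)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```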

5. Build and launch
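
Assuming the stack ships with a Docker Compose file (as the current AutoGPT Platform does), a typical build-and-launch looks like:

```shell
# Build the images and start all services in the background
docker compose up -d --build
```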

6. Verify services are healthy
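
A quick sketch of verifying the stack — the health-check path is an assumption; the Swagger UI at `/docs` lists the actual routes:

```shell
# All services should report a running / healthy status
docker compose ps

# The backend should respond on port 8000 (Swagger UI serves HTML here)
curl -s http://localhost:8000/docs | head -n 5
```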

7. Access AutoGPT

Open your browser:

  • Frontend: http://<clore-server-ip>:3000

  • Backend API: http://<clore-server-ip>:8000

  • API Docs (Swagger): http://<clore-server-ip>:8000/docs

Create an account on the frontend, configure your LLM provider in Settings, and start building agents.


Configuration

Full .env reference

Customizing agent capabilities

Managing agent executor scaling


GPU Acceleration

AutoGPT delegates all LLM inference to external providers by default. To use local GPU-accelerated models:

Connect to Ollama on the same server
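
A minimal sketch of installing Ollama alongside AutoGPT on the same server and pulling a model suited to agent work:

```shell
# Install Ollama via its official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model for agent use; Ollama then serves an
# OpenAI-compatible API on port 11434
ollama pull llama3.1:8b
```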

In .env, point AutoGPT at Ollama:
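
A representative `.env` fragment — the variable names are assumptions and vary by AutoGPT version, so check your `.env.example` for the exact keys:

```shell
# Point the OpenAI-compatible provider at the local Ollama server
# (Ollama exposes an OpenAI-compatible API under /v1)
OLLAMA_HOST=http://localhost:11434
OPENAI_API_BASE=http://localhost:11434/v1
```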

Performance note: Autonomous agents make many sequential LLM calls. Local models on RTX 3090 (~30 tok/s) work, but A100 80GB enables faster iteration. See GPU Comparison.

Local model recommendations for agents

| Model | Agent Quality | VRAM | Clore GPU |
| --- | --- | --- | --- |
| Llama 3 8B | Fair | 8 GB | RTX 3080 |
| Llama 3.1 8B Instruct | Good | 8 GB | RTX 3080 |
| Llama 3.1 70B | Excellent | 40 GB | A100 40GB |
| Mixtral 8x7B | Good | 24 GB | RTX 3090 |
| Qwen 2.5 72B | Excellent | 40 GB | A100 40GB |


Tips & Best Practices

Cost management on Clore.ai

Updating AutoGPT

Monitoring agent runs

Security hardening

Optimizing build times


Troubleshooting

Build fails with out-of-memory error

Backend returns 500 / "Database not ready"

Frontend shows "Failed to connect to backend"

Agent executor crashes / gets OOM killed

Redis connection refused

Agent stuck in loop


Further Reading
