# AutoGPT Autonomous Agent

## Overview

[AutoGPT](https://github.com/Significant-Gravitas/AutoGPT) is the pioneering open-source autonomous AI agent platform, with **175K+ GitHub stars** — one of the most starred repositories on GitHub. Originally a Python CLI tool that went viral in 2023, AutoGPT has evolved into a full-featured platform with a web frontend, visual workflow builder, multi-agent orchestration, and an agent benchmarking suite.

The current AutoGPT Platform consists of:

* **Frontend** — Next.js visual agent builder (port 3000)
* **Backend / API** — FastAPI service handling agent execution (port 8000)
* **Agent executor** — Python workers running autonomous task loops
* **Postgres** — persistent storage for agent state and runs
* **Redis** — job queue and pub/sub
* **Minio** — S3-compatible object storage for agent artifacts

On **Clore.ai**, AutoGPT runs entirely on CPU (it delegates LLM calls to cloud APIs), making it affordable at **$0.05–0.20/hr**. You can optionally integrate local models via its OpenAI-compatible provider support.

**Key capabilities:**

* 🤖 **Autonomous agents** — agents decompose tasks into sub-goals and execute them iteratively
* 🌐 **Web browsing** — agents can search the web, scrape pages, and synthesize information
* 💻 **Code execution** — sandboxed Python execution environment for coding agents
* 📁 **File operations** — read, write, and manage files as part of task execution
* 🔗 **Multi-agent** — spawn specialized sub-agents and orchestrate them hierarchically
* 🧠 **Long-term memory** — vector-backed memory persisted across sessions
* 📈 **Agent benchmarking** — built-in benchmark suite (`agbenchmark`) for evaluating agent performance

***

## Requirements

AutoGPT's compute requirements depend on whether you use cloud LLM APIs (default) or local models. The platform itself is lightweight.

| Configuration            | GPU        | VRAM  | System RAM | Disk   | Clore.ai Price   |
| ------------------------ | ---------- | ----- | ---------- | ------ | ---------------- |
| **Minimal** (cloud APIs) | None / CPU | —     | 4 GB       | 20 GB  | \~$0.05/hr (CPU) |
| **Standard**             | None / CPU | —     | 8 GB       | 40 GB  | \~$0.08/hr       |
| **Recommended**          | None / CPU | —     | 16 GB      | 60 GB  | \~$0.12/hr       |
| **+ Local LLM (Ollama)** | RTX 3090   | 24 GB | 16 GB      | 80 GB  | \~$0.20/hr       |
| **+ Large Local LLM**    | A100 40 GB | 40 GB | 32 GB      | 100 GB | \~$0.80/hr       |

> **Note:** AutoGPT uses API-based LLMs by default (OpenAI GPT-4, Anthropic Claude, etc.). A GPU is only useful if you configure a local model endpoint via Ollama or another OpenAI-compatible server.

### API Keys Required

You'll need at least one of:

* **OpenAI API Key** (GPT-4o recommended for best agent performance)
* **Anthropic API Key** (Claude 3.5 Sonnet is excellent for agents)
* **Google AI Key** (Gemini models supported)
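Before writing keys into `.env`, it can help to confirm which ones you actually have on hand. A minimal sketch (the variable names match the `.env` examples used throughout this guide; `GOOGLE_API_KEY` is an assumed name for the Google AI key):

```shell
# Report which LLM provider keys are set in the current shell
for key in OPENAI_API_KEY ANTHROPIC_API_KEY GOOGLE_API_KEY; do
  if [ -n "$(printenv "$key")" ]; then
    echo "$key: set"
  else
    echo "$key: MISSING"
  fi
done
```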

***

## Quick Start

### 1. Rent a Clore.ai server

Log into [clore.ai](https://clore.ai) and launch a server with:

* **2+ CPU cores, 8 GB RAM** minimum
* Exposed ports **8000** (backend API) and **3000** (frontend)
* SSH access enabled
* **20+ GB disk space**

### 2. Connect to the server

```bash
ssh root@<clore-server-ip> -p <ssh-port>

# Update packages
apt-get update && apt-get upgrade -y

# Verify Docker and Compose
docker --version
docker compose version   # Must be v2.x
```

### 3. Clone and configure AutoGPT

```bash
# Clone the repository
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd AutoGPT/autogpt_platform

# Copy environment template
cp .env.example .env

# Edit the environment file
nano .env
```

### 4. Set required environment variables

```bash
# Minimum required in .env:

# ── LLM Provider (pick at least one) ────────────────────────────────────────
OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...

# ── Secret keys (generate with `openssl rand -hex 32` and paste the values; ─
# .env files are read literally, so $(...) substitution will NOT run here ───
APP_SECRET_KEY=<paste-64-char-hex-value>
JWT_SECRET_KEY=<paste-64-char-hex-value>

# ── Database ─────────────────────────────────────────────────────────────────
DATABASE_URL=postgresql://postgres:password@db:5432/autogpt

# ── Backend URL (set to your Clore server IP) ────────────────────────────────
NEXT_PUBLIC_AGPT_SERVER_URL=http://<clore-server-ip>:8000/api
```
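Since `.env` files are not shell scripts, generate the secrets in your shell and write the literal values into the file. A small sketch, assuming you are in `autogpt_platform/` and the two keys already exist as placeholder lines in `.env`:

```shell
# Generate two random secrets and substitute them into existing .env lines
APP_SECRET=$(openssl rand -hex 32)
JWT_SECRET=$(openssl rand -hex 32)
sed -i "s|^APP_SECRET_KEY=.*|APP_SECRET_KEY=${APP_SECRET}|" .env
sed -i "s|^JWT_SECRET_KEY=.*|JWT_SECRET_KEY=${JWT_SECRET}|" .env

# Verify: both lines should now hold 64-character hex values
grep -E '^(APP|JWT)_SECRET_KEY=' .env
```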

### 5. Build and launch

```bash
# Build images and start all services (first run takes 5-10 minutes)
docker compose up -d --build

# Watch build and startup progress
docker compose logs -f

# After startup, verify all containers are running
docker compose ps
```

### 6. Verify services are healthy

```bash
# Check each service
docker compose ps

# Expected running services:
# autogpt_platform-db-1        Up (healthy)
# autogpt_platform-redis-1     Up
# autogpt_platform-minio-1     Up
# autogpt_platform-backend-1   Up
# autogpt_platform-frontend-1  Up
# autogpt_platform-executor-1  Up

# Test backend API
curl http://localhost:8000/api/health
# Expected: {"status": "ok"}
```
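If you script the health check, a small retry helper avoids racing the slow first startup. A sketch (the endpoint is the one shown above; 24 attempts at 5-second intervals is an arbitrary two-minute budget):

```shell
# Retry a command until it succeeds or attempts run out
retry() {
  attempts=$1; shift
  i=1
  while ! "$@"; do
    [ "$i" -ge "$attempts" ] && return 1
    echo "attempt $i failed; retrying in 5s..." >&2
    sleep 5
    i=$((i + 1))
  done
}

retry 24 curl -sf http://localhost:8000/api/health
```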

### 7. Access AutoGPT

Open your browser:

* **Frontend:** `http://<clore-server-ip>:3000`
* **Backend API:** `http://<clore-server-ip>:8000`
* **API Docs (Swagger):** `http://<clore-server-ip>:8000/docs`

Create an account on the frontend, configure your LLM provider in Settings, and start building agents.

***

## Configuration

### Full `.env` reference

```bash
# ── Application ──────────────────────────────────────────────────────────────
ENVIRONMENT=production
APP_SECRET_KEY=<generate-with-openssl-rand-hex-32>
JWT_SECRET_KEY=<generate-with-openssl-rand-hex-32>
ALLOWED_ORIGINS=http://<clore-server-ip>:3000

# ── LLM Providers ────────────────────────────────────────────────────────────
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GROQ_API_KEY=gsk_...

# ── OpenAI-compatible (for local models via Ollama/vLLM) ────────────────────
# These override the cloud OpenAI settings above; enable one set or the
# other, never both (a duplicate OPENAI_API_KEY line would silently win)
# OPENAI_API_BASE=http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
# OPENAI_API_KEY=ollama                       # Dummy key; Ollama ignores it

# ── Database ─────────────────────────────────────────────────────────────────
DATABASE_URL=postgresql://postgres:password@db:5432/autogpt
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password    # Change in production!
POSTGRES_DB=autogpt

# ── Redis (job queue) ────────────────────────────────────────────────────────
REDIS_URL=redis://redis:6379/0

# ── Minio (object storage) ──────────────────────────────────────────────────
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin   # Change in production!
MINIO_URL=http://minio:9000

# ── Frontend ─────────────────────────────────────────────────────────────────
NEXT_PUBLIC_AGPT_SERVER_URL=http://<clore-server-ip>:8000/api
NEXTAUTH_SECRET=<generate-with-openssl-rand-hex-32>
NEXTAUTH_URL=http://<clore-server-ip>:3000
```
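Before launching, it is worth catching placeholders left over from the template. A minimal sketch (the required-variable list is an assumption drawn from the reference above; extend it to taste):

```shell
# Flag required .env variables that are unset or still contain a <placeholder>
required="APP_SECRET_KEY JWT_SECRET_KEY DATABASE_URL NEXT_PUBLIC_AGPT_SERVER_URL"
for var in $required; do
  val=$(grep -E "^${var}=" .env | head -n1 | cut -d= -f2-)
  case "$val" in
    ""|*"<"*) echo "CHECK: $var is unset or still a placeholder" ;;
  esac
done
```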

### Customizing agent capabilities

```bash
# Enable web browsing capability
WEB_BROWSER_ENABLED=true
SELENIUM_CHROME_DRIVER_URL=http://selenium:4444/wd/hub

# Add Selenium to docker-compose for web browsing:
# services:
#   selenium:
#     image: selenium/standalone-chrome:latest
#     shm_size: 2gb
#     ports:
#       - "4444:4444"

# File system access (agent workspace)
WORKSPACE_PATH=/workspace
RESTRICT_TO_WORKSPACE=true
```

### Managing agent executor scaling

```bash
# Scale executor workers for parallel agent runs
docker compose up -d --scale executor=4

# Monitor executor logs
docker compose logs -f executor
```

***

## GPU Acceleration

AutoGPT delegates all LLM inference to external providers by default. To use local GPU-accelerated models:

### Connect to Ollama on the same server

```bash
# Install Ollama on the Clore server
curl -fsSL https://ollama.com/install.sh | sh

# Pull a capable model (Llama 3 70B needs A100, 8B works on RTX 3090)
ollama pull llama3:8b
# For best agent performance:
ollama pull llama3.1:70b   # Requires A100 40GB+

# Make Ollama listen on all interfaces so Docker containers can reach it
# (if the installer registered a systemd service, stop it first:
#  systemctl stop ollama)
OLLAMA_HOST=0.0.0.0 ollama serve &

# Test the OpenAI-compatible endpoint
curl http://localhost:11434/v1/models
```

In `.env`, point AutoGPT at Ollama. On Linux hosts, `host.docker.internal` only resolves inside containers if you add `extra_hosts: ["host.docker.internal:host-gateway"]` to the backend service in `docker-compose.yaml` (alternatively, use the Docker bridge IP, typically `172.17.0.1`):

```bash
OPENAI_API_BASE=http://host.docker.internal:11434/v1
OPENAI_API_KEY=ollama
OPENAI_DEFAULT_MODEL=llama3.1:70b
```

> **Performance note:** Autonomous agents make many sequential LLM calls, so generation speed compounds across a run. Local models on an RTX 3090 (\~30 tok/s) work, but an A100 enables much faster iteration. See [GPU Comparison](https://docs.clore.ai/guides/getting-started/gpu-comparison).
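To see why throughput compounds, here is a back-of-the-envelope estimate. The cycle count and tokens per call are illustrative assumptions; the 30 tok/s figure is from the note above:

```shell
# Rough generation-time estimate for one agent run at local-model speeds
CYCLES=50              # assumed agent iterations per run
TOKENS_PER_CYCLE=500   # assumed completion tokens per LLM call
TOK_PER_SEC=30         # ~RTX 3090 throughput
TOTAL=$((CYCLES * TOKENS_PER_CYCLE))
echo "${TOTAL} tokens ~= $((TOTAL / TOK_PER_SEC / 60)) min of pure generation"
```

At these numbers a single run spends roughly 13 minutes just generating tokens, before tool calls and web requests; halving latency roughly halves wall-clock time per run.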

### Local model recommendations for agents

| Model                 | Agent Quality | VRAM  | Clore GPU |
| --------------------- | ------------- | ----- | --------- |
| Llama 3 8B            | Fair          | 8 GB  | RTX 3080  |
| Llama 3.1 8B Instruct | Good          | 8 GB  | RTX 3080  |
| Llama 3.1 70B         | Excellent     | 40 GB | A100 40GB |
| Mixtral 8x7B          | Good          | 24 GB | RTX 3090  |
| Qwen 2.5 72B          | Excellent     | 40 GB | A100 40GB |

***

## Tips & Best Practices

### Cost management on Clore.ai

```bash
# AutoGPT's biggest cost is often LLM API calls, not compute
# Set token budget limits in agent configuration:
# MAX_TOKENS_PER_RUN=100000
# MAX_COST_PER_RUN=1.00   # USD

# Backup agent state before stopping the Clore instance
docker exec autogpt_platform-db-1 \
  pg_dump -U postgres autogpt | gzip > autogpt-backup-$(date +%Y%m%d).sql.gz

# Copy to local machine
scp -P <port> root@<ip>:autogpt-backup-*.sql.gz ./
```

### Updating AutoGPT

```bash
cd AutoGPT

# Pull latest changes
git pull origin master

cd autogpt_platform

# Rebuild with new code
docker compose down
docker compose up -d --build

# Apply any pending database migrations if the release notes call for one
# (check the repository README for the exact command in your version)
```

### Monitoring agent runs

```bash
# Watch backend logs for agent activity
docker compose logs -f backend executor

# Monitor system resources during agent runs
htop
# or
docker stats

# View agent run history via API
curl http://localhost:8000/api/v1/runs | python3 -m json.tool
```

### Security hardening

```bash
# NEVER expose ports 8000/3000 directly to the internet without auth
# Use Nginx or Caddy as a reverse proxy with HTTPS:

# Caddyfile:
# autogpt.yourdomain.com {
#     reverse_proxy localhost:3000
# }
# api.autogpt.yourdomain.com {
#     reverse_proxy localhost:8000
# }

# Restrict agent file system access
RESTRICT_TO_WORKSPACE=true
WORKSPACE_PATH=/agent-workspace

# Disable user registration after initial setup
ALLOW_SIGNUP=false
```

### Optimizing build times

```bash
# First build is slow (~10 min); subsequent builds use cache
# Use BuildKit for faster builds:
DOCKER_BUILDKIT=1 docker compose up -d --build

# Pre-pull base images to speed up builds
docker pull python:3.11-slim
docker pull node:20-alpine
```

***

## Troubleshooting

### Build fails with out-of-memory error

```bash
# Docker build needs sufficient memory (4GB+)
# Add swap space if needed:
fallocate -l 8G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Make permanent
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

### Backend returns 500 / "Database not ready"

```bash
# Check DB health
docker compose ps db
docker compose logs db

# Apply pending database migrations manually (see the repository README
# for the migration command used by your version)

# If migrations fail, check DATABASE_URL in .env
docker compose exec backend printenv DATABASE_URL
```

### Frontend shows "Failed to connect to backend"

```bash
# Verify NEXT_PUBLIC_AGPT_SERVER_URL is set correctly
grep NEXT_PUBLIC_AGPT_SERVER_URL .env

# Must be the public IP, not localhost (browser makes this request)
NEXT_PUBLIC_AGPT_SERVER_URL=http://<clore-server-ip>:8000/api

# Rebuild frontend after changing this env var
docker compose up -d --build frontend
```

### Agent executor crashes / gets OOM killed

```bash
# Check memory usage
docker stats autogpt_platform-executor-1

# Limit executor memory and add restart policy
# In docker-compose.yaml:
#   executor:
#     mem_limit: 2g
#     restart: unless-stopped

# Or reduce concurrent agent runs
MAX_CONCURRENT_RUNS=2
```

### Redis connection refused

```bash
# Check Redis is running
docker compose ps redis
docker compose logs redis

# Test connectivity using the CLI bundled in the Redis container
docker compose exec redis redis-cli ping
# Expected: PONG

# If Redis requires auth, set:
REDIS_URL=redis://:password@redis:6379/0
```

### Agent stuck in loop

```bash
# AutoGPT agents can sometimes enter infinite loops
# Set a maximum cycle limit:
MAX_AGENT_CYCLES=50

# Or interrupt a running agent via the API:
curl -X POST http://localhost:8000/api/v1/runs/<run-id>/stop
```

***

## Further Reading

* [AutoGPT Official Documentation](https://docs.agpt.co)
* [AutoGPT GitHub Repository](https://github.com/Significant-Gravitas/AutoGPT)
* [AutoGPT Platform Setup Guide](https://docs.agpt.co/platform/getting-started)
* [Running Ollama on Clore.ai](https://docs.clore.ai/guides/language-models/ollama)
* [Running vLLM on Clore.ai](https://docs.clore.ai/guides/language-models/vllm)
* [Clore.ai GPU Comparison](https://docs.clore.ai/guides/getting-started/gpu-comparison)
* [AutoGPT Discord Community](https://discord.gg/autogpt)
