# SuperAGI Agent Framework

## Overview

[SuperAGI](https://github.com/TransformerOptimus/SuperAGI) is an open-source, developer-first autonomous AI agent framework with 15K+ GitHub stars. Unlike simple chatbots, SuperAGI runs **autonomous agents** — AI systems that independently plan, execute multi-step tasks, use tools, and iterate toward a goal without constant human input.

**Why run SuperAGI on Clore.ai?**

* **GPU-optional with powerful local LLM support** — Run agents backed by local models (Llama, Mistral, etc.) on Clore.ai GPUs for fully private, cost-controlled autonomous AI.
* **Concurrent agent execution** — Run multiple agents in parallel on the same server, each working on different tasks simultaneously.
* **Persistent agent memory** — Agents maintain context, learn from tool outputs, and store long-term memory in vector databases between runs.
* **Tool marketplace** — Pre-built integrations for Google Search, GitHub, Email, Jira, Notion, and more.
* **Clore.ai economics** — At \~$0.20/hr for an RTX 3090, you can run capable autonomous agents at a fraction of cloud AI service costs.

### Key Features

| Feature             | Description                                         |
| ------------------- | --------------------------------------------------- |
| Agent provisioning  | Create, configure, and deploy agents via GUI        |
| Tool marketplace    | 30+ built-in tools (search, code, files, APIs)      |
| Multi-model support | OpenAI, Anthropic, local LLMs via custom endpoint   |
| Concurrent agents   | Run multiple agents simultaneously                  |
| Agent memory        | Short-term (context window) + long-term (vector DB) |
| GUI dashboard       | Full web interface for agent management             |
| Resource manager    | Track token usage and costs per agent               |
| Workflow templates  | Pre-built agent templates for common tasks          |

### Architecture

```
┌────────────────────────────────────────────────────┐
│                   SuperAGI Stack                   │
│                                                    │
│  ┌──────────────────────┐   ┌───────────────────┐  │
│  │ Frontend (Port 3000) │   │  API (Port 8001)  │  │
│  │      Next.js UI      │   │  FastAPI Backend  │  │
│  └──────────┬───────────┘   └─────────┬─────────┘  │
│             └───────────┬─────────────┘            │
│                         ▼                          │
│  ┌──────────────────────────────────────────────┐  │
│  │                Agent Executor                │  │
│  │   ┌─────────┐   ┌─────────┐   ┌─────────┐    │  │
│  │   │ Agent 1 │   │ Agent 2 │   │ Agent N │    │  │
│  │   └─────────┘   └─────────┘   └─────────┘    │  │
│  └──────┬──────────────┬──────────────┬─────────┘  │
│         ▼              ▼              ▼            │
│  ┌────────────┐  ┌───────────┐  ┌───────────┐      │
│  │ PostgreSQL │  │   Redis   │  │  Vector   │      │
│  │  (State)   │  │  (Queue)  │  │    DB     │      │
│  └────────────┘  └───────────┘  └───────────┘      │
└─────────────────────────┬──────────────────────────┘
                          │
                    ┌─────┴─────┐
                    ▼           ▼
                 OpenAI     Local LLM
                Anthropic  (Ollama/vLLM)
```

***

## Requirements

### Server Specifications

| Component   | Minimum         | Recommended           | Notes                                  |
| ----------- | --------------- | --------------------- | -------------------------------------- |
| **GPU**     | None (API mode) | RTX 3090 (local LLMs) | GPU needed for local model inference   |
| **VRAM**    | —               | 24 GB                 | For running 13B+ local models          |
| **CPU**     | 4 vCPU          | 8 vCPU                | Agent execution is CPU-intensive       |
| **RAM**     | 8 GB            | 16 GB                 | Multiple concurrent agents need memory |
| **Storage** | 20 GB           | 100+ GB               | Agent logs, vector DB, model storage   |

### Clore.ai Pricing Reference

| Server Type         | Approx. Cost    | Use Case                                   |
| ------------------- | --------------- | ------------------------------------------ |
| CPU (8 vCPU, 16 GB) | \~$0.10–0.20/hr | SuperAGI + external API (OpenAI/Anthropic) |
| RTX 3090 (24 GB)    | \~$0.20/hr      | SuperAGI + Ollama 13B local model          |
| RTX 4090 (24 GB)    | \~$0.35/hr      | SuperAGI + Ollama, faster inference        |
| 2× RTX 3090         | \~$0.40/hr      | SuperAGI + 70B model (Q4 quantized)        |
| A100 80 GB          | \~$1.10/hr      | SuperAGI + large models, high concurrency  |
| H100 80 GB          | \~$2.50/hr      | Production-grade autonomous agent systems  |

> 💡 **Cost tip:** For development and testing, use OpenAI or Anthropic APIs (no GPU needed). Switch to a GPU instance only when you need local LLM inference for privacy or cost reasons. See [GPU Comparison Guide](https://docs.clore.ai/guides/getting-started/gpu-comparison).
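To see where the break-even sits for your own workload, the table's rates can be plugged into a quick estimate. A sketch in Python; the GPU rate mirrors the table above, while the run counts, token volumes, and API price are hypothetical placeholders:

```python
# Back-of-the-envelope comparison: dedicated GPU instance vs. hosted-API cost
# for a month of agent runs. All usage numbers below are illustrative.
GPU_RATE_PER_HR = 0.20    # RTX 3090 on Clore.ai (approx., from the table)
HOURS_PER_DAY = 8         # agents active 8 h/day
DAYS = 30

gpu_monthly = GPU_RATE_PER_HR * HOURS_PER_DAY * DAYS

RUNS_PER_DAY = 50           # hypothetical agent runs per day
TOKENS_PER_RUN = 200_000    # hypothetical tokens consumed per run
API_PRICE_PER_MTOK = 2.00   # assumed blended $/1M tokens

api_monthly = RUNS_PER_DAY * TOKENS_PER_RUN / 1_000_000 * API_PRICE_PER_MTOK * DAYS

print(f"GPU instance: ${gpu_monthly:.2f}/month")
print(f"Hosted API:   ${api_monthly:.2f}/month")
```

Under these (made-up) numbers the GPU instance comes out well ahead; light or bursty workloads often favor the API instead, since you pay nothing while idle.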

### Prerequisites

* Clore.ai server with SSH access
* Docker + Docker Compose (pre-installed on Clore.ai)
* Git (pre-installed)
* 4+ vCPU, 8+ GB RAM (16 GB recommended for concurrent agents)
* OpenAI API key **or** local LLM endpoint (Ollama/vLLM)

***

## Quick Start

### Method 1: Docker Compose (Official — Recommended)

SuperAGI's official deployment uses Docker Compose to manage all services.

**Step 1: Connect to your Clore.ai server**

```bash
ssh root@<your-clore-server-ip> -p <ssh-port>
```

**Step 2: Clone and configure**

```bash
git clone https://github.com/TransformerOptimus/SuperAGI.git
cd SuperAGI
cp config_template.yaml config.yaml
```

**Step 3: Edit `config.yaml`**

```bash
nano config.yaml
```

Minimum required configuration:

```yaml
# config.yaml
OPENAI_API_KEY: "sk-your-openai-key-here"

# Database (leave as default for Docker Compose)
POSTGRES_DB: "super_agi"
POSTGRES_USER: "super_agi"
POSTGRES_PASSWORD: "password"

# Vector Database
VECTOR_STORE: "Redis"  # or "Pinecone", "Qdrant", "Weaviate"
REDIS_URL: "redis://super__agi-redis-1:6379/0"

# App settings
ENV: "PROD"
ALLOW_LISTS_CREATION: "true"

# Optional: restrict access
# AUTH_SECRET_KEY: "your-random-secret"
```

**Step 4: Start the stack**

```bash
docker compose up -d --build
```

The build process downloads dependencies and compiles the frontend (\~5–10 minutes on first run).

**Step 5: Monitor startup**

```bash
# Watch all services come up
docker compose ps

# Follow logs
docker compose logs -f

# Wait for "Application startup complete" in the backend logs
docker compose logs superagi-backend --tail 30
```
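Rather than eyeballing the logs, a small helper can poll the API until it answers. A sketch, assuming `curl` is installed and the default ports above:

```shell
# Poll an HTTP endpoint until it responds with success, or give up.
# Usage: wait_for_url <url> <max-tries>
wait_for_url() {
  url=$1; tries=$2; i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf --max-time 3 "$url" > /dev/null; then
      echo "up"; return 0
    fi
    i=$((i + 1)); sleep 1
  done
  echo "down"; return 1
}

# After `docker compose up -d`, wait up to ~2 minutes for the backend:
# wait_for_url "http://localhost:8001/docs" 120
```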

**Step 6: Access the dashboard**

```
http://<your-clore-server-ip>:3000
```

The API is available at:

```
http://<your-clore-server-ip>:8001
```

API documentation:

```
http://<your-clore-server-ip>:8001/docs
```

***

### Method 2: Quick Start with Pre-built Images

For faster startup using pre-built images (skip the build step):

```bash
git clone https://github.com/TransformerOptimus/SuperAGI.git
cd SuperAGI
cp config_template.yaml config.yaml

# Edit config with your API keys
nano config.yaml

# Use pre-built images if available
docker compose -f docker-compose.yaml pull
docker compose up -d
```

***

### Method 3: Minimal Single-Model Setup

A streamlined setup for testing with just OpenAI:

```bash
git clone https://github.com/TransformerOptimus/SuperAGI.git
cd SuperAGI

# Create minimal config
cat > config.yaml << 'EOF'
OPENAI_API_KEY: "sk-your-key-here"
POSTGRES_DB: "super_agi"
POSTGRES_USER: "super_agi"
POSTGRES_PASSWORD: "superagi_password_123"
REDIS_URL: "redis://super__agi-redis-1:6379/0"
ENV: "PROD"
EOF

docker compose up -d --build

# Monitor build progress
docker compose logs superagi-frontend --tail 5 -f &
docker compose logs superagi-backend --tail 5 -f
```

***

## Configuration

### `config.yaml` Reference

```yaml
# ============================================================
# LLM Providers
# ============================================================
OPENAI_API_KEY: "sk-..."                    # OpenAI GPT models
ANTHROPIC_API_KEY: "sk-ant-..."            # Claude models

# For local models (Ollama or any OpenAI-compatible API).
# These can also be set in the UI: Settings → Models → Custom Model
OPENAI_API_BASE: "http://172.17.0.1:11434/v1"  # Ollama on the same host
OPENAI_MODEL: "llama3.1:8b"

# ============================================================
# Database
# ============================================================
POSTGRES_DB: "super_agi"
POSTGRES_USER: "super_agi"
POSTGRES_PASSWORD: "your-strong-password"

# ============================================================
# Vector Database (agent long-term memory)
# ============================================================
VECTOR_STORE: "Redis"       # Redis (default, built-in)
# Or use external:
# VECTOR_STORE: "Pinecone"
# PINECONE_API_KEY: "your-key"
# PINECONE_ENVIRONMENT: "us-east-1-aws"

# VECTOR_STORE: "Weaviate"
# WEAVIATE_URL: "http://weaviate:8080"

# ============================================================
# Tool API Keys (optional, for specific tools)
# ============================================================
GOOGLE_API_KEY: "your-google-key"
GOOGLE_CUSTOM_SEARCH_ENGINE_ID: "your-cx-id"
GITHUB_TOKEN: "ghp_your-token"
JIRA_EMAIL: "your@email.com"
JIRA_API_TOKEN: "your-jira-token"
JIRA_SERVER_URL: "https://your-org.atlassian.net"

# ============================================================
# Storage
# ============================================================
STORAGE_TYPE: "File"        # Local file storage
# STORAGE_TYPE: "S3"        # S3-compatible (MinIO, AWS)
# BUCKET_NAME: "superagi"
# AWS_ACCESS_KEY_ID: "..."
# AWS_SECRET_ACCESS_KEY: "..."

# ============================================================
# Security
# ============================================================
JWT_SECRET_KEY: "your-random-secret-key"
```
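Before starting the stack, a quick pre-flight check can catch a missing or empty key. A minimal sketch in Python (standard library only): it matches simple `KEY: "value"` lines rather than pulling in PyYAML, which is enough for this flat file:

```python
import re

# Keys the stack needs at startup (per the reference above).
REQUIRED = ["OPENAI_API_KEY", "POSTGRES_DB", "POSTGRES_USER", "POSTGRES_PASSWORD"]

def missing_keys(config_text):
    """Return REQUIRED keys that are absent or empty in a flat KEY: "value" file."""
    pairs = dict(re.findall(r'^(\w+):\s*"?([^"\n#]*)"?\s*(?:#.*)?$', config_text, re.M))
    return [k for k in REQUIRED if not pairs.get(k, "").strip()]

sample = 'OPENAI_API_KEY: "sk-test"\nPOSTGRES_DB: "super_agi"\n'
print(missing_keys(sample))  # ['POSTGRES_USER', 'POSTGRES_PASSWORD']
```

Run it against your real `config.yaml` (e.g. `missing_keys(open("config.yaml").read())`) before `docker compose up`.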

### Connecting SuperAGI to Tools

Tools are configured through the GUI at **Settings → Toolkit**. Each tool can be enabled/disabled per agent.

**Built-in tools:**

| Tool             | Purpose                    | API Key Needed     |
| ---------------- | -------------------------- | ------------------ |
| Google Search    | Web search                 | Yes (Google API)   |
| DuckDuckGo       | Web search                 | No                 |
| GitHub           | Code repository access     | Yes (GitHub token) |
| Email            | Send/read email            | Yes (SMTP config)  |
| Code Writer      | Write and execute code     | No                 |
| File Manager     | Read/write local files     | No                 |
| Browser          | Headless web browsing      | No                 |
| Jira             | Issue tracking             | Yes                |
| Notion           | Knowledge base             | Yes                |
| Image Generation | DALL-E 3, Stable Diffusion | Yes (OpenAI key)   |

### Creating Your First Agent

Via the GUI (Settings → Agents → Create Agent):

1. **Name** — Give your agent a descriptive name
2. **Description** — What this agent does
3. **Goals** — List the objectives (one per line)
4. **Instructions** — System prompt for behavior
5. **Model** — Select LLM (GPT-4, Claude, or local)
6. **Tools** — Enable relevant tools
7. **Max Iterations** — Safety limit (10–50 typical)

Via the REST API:

```bash
# Create an agent via API
curl -X POST "http://localhost:8001/v1/agent" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Research Agent",
    "description": "Researches topics and writes summaries",
    "goal": [
      "Research the topic provided",
      "Write a comprehensive summary",
      "Save the summary to a file"
    ],
    "agent_type": "Task Queue",
    "constraints": [],
    "tools": ["DuckDuckGoSearch", "WriteFileTool", "ReadFileTool"],
    "exit_criterion": "No exit criterion",
    "max_iterations": 25,
    "user_timezone": "UTC",
    "llm_model_config": {
      "model_name": "gpt-4o-mini",
      "temperature": 0.5,
      "max_new_tokens": 2000
    }
  }'
```
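The same request body can be assembled programmatically, which helps when templating many similar agents. A sketch that only builds and prints the JSON locally (field names follow the `curl` example above; posting it is left to your HTTP client of choice):

```python
import json

def agent_payload(name, description, goals, tools,
                  model="gpt-4o-mini", max_iterations=25):
    """Build a SuperAGI agent-creation body (fields as in the curl example)."""
    return {
        "name": name,
        "description": description,
        "goal": goals,
        "agent_type": "Task Queue",
        "constraints": [],
        "tools": tools,
        "exit_criterion": "No exit criterion",
        "max_iterations": max_iterations,
        "user_timezone": "UTC",
        "llm_model_config": {
            "model_name": model,
            "temperature": 0.5,
            "max_new_tokens": 2000,
        },
    }

body = agent_payload(
    "Research Agent",
    "Researches topics and writes summaries",
    goals=["Research the topic provided", "Write a comprehensive summary"],
    tools=["DuckDuckGoSearch", "WriteFileTool"],
)
print(json.dumps(body, indent=2))
```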

***

## GPU Acceleration

SuperAGI supports local LLM inference via any OpenAI-compatible endpoint, making it ideal for GPU-backed Clore.ai deployments.

### Setting Up Ollama as Agent LLM Backend

See [Ollama Guide](https://docs.clore.ai/guides/language-models/ollama) for full Ollama setup. Integration with SuperAGI:

**Step 1: Start Ollama on the same Clore.ai server**

```bash
docker run -d \
  --name ollama \
  --gpus all \
  --restart unless-stopped \
  -p 11434:11434 \
  -v ollama_models:/root/.ollama \
  ollama/ollama

# Pull a capable model for agent use (needs good reasoning)
docker exec ollama ollama pull llama3.1:8b        # Fast, good reasoning
docker exec ollama ollama pull mistral:7b-instruct  # Code tasks
docker exec ollama ollama pull deepseek-coder:6.7b # Code-heavy agents
```

**Step 2: Configure SuperAGI to use Ollama**

In `config.yaml`:

```yaml
# Point to Ollama (running on the Docker host)
OPENAI_API_BASE: "http://172.17.0.1:11434/v1"
```

Or configure in the SuperAGI UI:

* **Settings → Models → Add Custom Model**
* Provider: OpenAI-compatible
* Base URL: `http://172.17.0.1:11434/v1`
* API Key: `ollama` (any string)
* Model name: `llama3.1:8b`
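After wiring this up, it is worth confirming the endpoint answers before pointing agents at it. A quick check; `172.17.0.1` is the default Docker bridge gateway, so adjust if `docker network inspect bridge` reports something different:

```shell
# Sanity-check the Ollama OpenAI-compatible endpoint before agents use it.
OLLAMA_URL="http://172.17.0.1:11434/v1"
if curl -sf --max-time 5 "$OLLAMA_URL/models" > /dev/null; then
  echo "Ollama reachable at $OLLAMA_URL"
else
  echo "Ollama NOT reachable at $OLLAMA_URL"
fi
```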

### Setting Up vLLM for High-Throughput Agents

For production deployments with many concurrent agents (see [vLLM Guide](https://docs.clore.ai/guides/language-models/vllm)):

```bash
# Start vLLM on GPU server
docker run -d \
  --name vllm \
  --gpus all \
  --restart unless-stopped \
  -p 8000:8000 \
  -v hf_cache:/root/.cache/huggingface \
  -e HF_TOKEN=hf_your-token \
  vllm/vllm-openai:latest \
  --model mistralai/Mistral-7B-Instruct-v0.3 \
  --served-model-name mistral-7b \
  --max-model-len 8192 \
  --enable-prefix-caching

# In config.yaml:
# OPENAI_API_BASE: "http://172.17.0.1:8000/v1"
```

### GPU Sizing for Agent Workloads

| Use Case        | Model               | GPU         | VRAM  | Concurrent Agents        |
| --------------- | ------------------- | ----------- | ----- | ------------------------ |
| Testing         | GPT-4o-mini (API)   | None        | —     | Limited only by API rate |
| Light agents    | Llama 3.1 8B        | RTX 3090    | 8 GB  | 2–4                      |
| Reasoning tasks | Mistral 7B Instruct | RTX 3090    | 6 GB  | 3–5                      |
| Complex agents  | Llama 3.1 70B Q4    | 2× RTX 3090 | 48 GB | 1–2                      |
| Production      | Llama 3.1 70B 8-bit | A100 80GB   | 80 GB | 3–6                      |
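The VRAM column can be sanity-checked with a standard rule of thumb: weights ≈ parameter count × bytes per parameter, plus roughly 20% overhead for KV cache and activations. The overhead factor is an assumption; real usage varies with context length and batch size:

```python
def vram_gb(params_b, bits, overhead=1.2):
    """Estimate VRAM in GB: params (billions) x bytes/param x overhead factor."""
    return params_b * (bits / 8) * overhead

# FP16 = 16 bits/param; Q4 quantization ~= 4.5 bits/param in practice
print(f"Llama 3.1 8B  FP16:  {vram_gb(8, 16):.1f} GB")
print(f"Llama 3.1 8B  Q4:    {vram_gb(8, 4.5):.1f} GB")
print(f"Llama 3.1 70B Q4:    {vram_gb(70, 4.5):.1f} GB")   # fits 2x RTX 3090
print(f"Llama 3.1 70B 8-bit: {vram_gb(70, 8):.1f} GB")     # tight on one 80 GB card
```

The ~47 GB estimate for 70B Q4 matches the 2× RTX 3090 (48 GB) row above; serving stacks with paged KV caches can squeeze the 8-bit case onto a single 80 GB card.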

***

## Tips & Best Practices

### Agent Design

* **Be specific with goals** — Vague goals like "do research" cause agents to loop. Use "Research X and write a 500-word summary to file output.txt."
* **Set iteration limits** — Always set `max_iterations` (20–50). Unlimited agents can consume tokens rapidly.
* **Use task queue mode** — For multi-step pipelines, "Task Queue" agents are more reliable than "Don't Limit" mode.
* **Test with cheap models first** — Validate agent logic with GPT-4o-mini or a local 7B model before using expensive models.
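Putting the first two tips together, a well-scoped agent definition (hypothetical values) looks like:

```
Name:           Benchmark Research Agent
Goals:          1. Research recent benchmarks for 7B open-weight LLMs
                2. Write a ~500-word summary of the top findings
                3. Save the summary to output.txt
Max Iterations: 25
```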

### Cost Management on Clore.ai

```yaml
# Monitor token usage in real time from the SuperAGI dashboard:
#   Settings → Resources → Token Usage

# Set organization-level limits in config.yaml:
MAX_BUDGET_TOKENS: 100000  # Soft limit per session
```

Since Clore.ai charges hourly:

```bash
# Save agent configurations before stopping instance
docker compose exec superagi-backend \
  python -c "import json; from superagi.models import Agent; ..."

# Backup PostgreSQL database
docker compose exec super__agi-db-1 \
  pg_dump -U super_agi super_agi | gzip > superagi-db-$(date +%Y%m%d).sql.gz

# Copy backup off-server
scp -P <ssh-port> root@<server-ip>:~/SuperAGI/superagi-db-*.sql.gz ./
```

### Securing SuperAGI

```yaml
# Enable authentication in config.yaml:
AUTH_SECRET_KEY: "your-strong-random-secret"

# Restrict the API to localhost (then access via SSH tunnels).
# In docker-compose.yml, bind ports to the loopback interface only:
# ports:
#   - "127.0.0.1:8001:8001"  # API local only
#   - "127.0.0.1:3000:3000"  # UI local only

# Then access via an SSH tunnel:
# ssh -L 3000:localhost:3000 -L 8001:localhost:8001 root@<server-ip> -p <port>
```

### Persistent Storage Between Clore.ai Sessions

```bash
# Create full backup script
cat > /root/backup-superagi.sh << 'EOF'
#!/bin/bash
cd ~/SuperAGI

# Backup database
docker compose exec -T super__agi-db-1 \
  pg_dump -U super_agi super_agi | \
  gzip > ~/backups/superagi-db-$(date +%Y%m%d-%H%M).sql.gz

# Backup config and workspaces
tar -czf ~/backups/superagi-files-$(date +%Y%m%d-%H%M).tar.gz \
  config.yaml \
  workspace/ \
  .env 2>/dev/null || true

echo "Backup complete: $(ls -lh ~/backups/ | tail -2)"
EOF

chmod +x /root/backup-superagi.sh
mkdir -p ~/backups
```

### Updating SuperAGI

```bash
cd ~/SuperAGI

# Save current config
cp config.yaml config.yaml.backup

# Pull latest changes
git pull origin main

# Rebuild and restart
docker compose down
docker compose up -d --build

# Verify all services are healthy
docker compose ps
docker compose logs superagi-backend --tail 20
```

***

## Troubleshooting

### Build fails during `docker compose up --build`

```bash
# Check build logs in detail
docker compose build superagi-backend --no-cache 2>&1 | tail -50

# Common fix: free up disk space
docker system prune -f
df -h  # Ensure at least 10 GB free

# If npm build fails for frontend
docker compose build superagi-frontend --no-cache

# Check Node.js memory during build
docker compose build superagi-frontend \
  --build-arg NODE_OPTIONS="--max-old-space-size=4096"
```

### Backend crashes on startup

```bash
# Check backend logs
docker compose logs superagi-backend --tail 50

# Common causes:
# 1. Invalid config.yaml syntax
python3 -c "import yaml; yaml.safe_load(open('config.yaml'))" && echo "YAML OK"

# 2. Database not ready
docker compose restart superagi-backend  # Wait for DB to start first

# 3. Missing API key
grep OPENAI_API_KEY config.yaml  # Ensure it's set and not empty
```

### Frontend not loading (port 3000)

```bash
# Check frontend container
docker compose ps superagi-frontend
docker compose logs superagi-frontend --tail 30

# Verify port mapping
ss -tlnp | grep 3000

# Check if API backend is reachable from frontend
docker compose exec superagi-frontend \
  curl -s http://superagi-backend:8001/health
```

### Agents loop indefinitely

```bash
# Check agent logs in the SuperAGI UI:
# Dashboard → Agent → View Logs

# Force-stop a running agent via API
curl -X POST "http://localhost:8001/v1/agent/<agent-id>/stop" \
  -H "Content-Type: application/json"

# Or stop all agents via DB
docker compose exec super__agi-db-1 \
  psql -U super_agi -c "UPDATE agent_executions SET status='COMPLETED' WHERE status='RUNNING';"
```

### Redis connection errors

```bash
# Check Redis status
docker compose ps super__agi-redis-1
docker compose logs super__agi-redis-1

# Test Redis connection
docker compose exec superagi-backend \
  python3 -c "import redis; r=redis.from_url('redis://super__agi-redis-1:6379/0'); print(r.ping())"

# Restart Redis
docker compose restart super__agi-redis-1
```

### Ollama not reachable from SuperAGI container

```bash
# Find Docker bridge IP
docker network inspect bridge | grep Gateway

# Test from backend container
docker compose exec superagi-backend \
  curl -s http://172.17.0.1:11434/v1/models

# Alternative: run Ollama with host networking (note: --network host is
# awkward to combine with a Docker Compose stack; prefer the bridge IP above)
# docker run -d --network host ...

# Alternative: add Ollama to the same compose network
# Add to docker-compose.yml services:
# ollama:
#   image: ollama/ollama
#   deploy:
#     resources:
#       reservations:
#         devices:
#           - driver: nvidia
#             count: all
#             capabilities: [gpu]
```

### Database connection pool exhausted

```bash
# Increase PostgreSQL max connections: in docker-compose.yml, under the
# db service, add:
#   command: postgres -c max_connections=200

# Restart database
docker compose restart super__agi-db-1
docker compose restart superagi-backend
```

***

## Further Reading

* [SuperAGI Documentation](https://superagi.com/docs) — official guides, API reference
* [SuperAGI GitHub](https://github.com/TransformerOptimus/SuperAGI) — source code, issues, community
* [Running Ollama on Clore.ai](https://docs.clore.ai/guides/language-models/ollama) — local LLM backend for agents
* [Running vLLM on Clore.ai](https://docs.clore.ai/guides/language-models/vllm) — high-throughput inference for concurrent agents
* [GPU Comparison Guide](https://docs.clore.ai/guides/getting-started/gpu-comparison) — choosing the right Clore.ai tier
* [SuperAGI Tool Marketplace](https://superagi.com/marketplace/) — community-built agent tools
* [SuperAGI Discord](https://discord.gg/dXbRe5BHJC) — community support and discussions
* [FastAPI Docs (SuperAGI API)](http://localhost:8001/docs) — interactive API documentation on your instance
