SuperAGI Agent Framework

Deploy SuperAGI on Clore.ai — a developer-first autonomous AI agent framework with GUI dashboard, tool marketplace, concurrent agents, and optional local LLM support on powerful GPU cloud servers.

Overview

SuperAGI is an open-source, developer-first autonomous AI agent framework with 15K+ GitHub stars. Unlike simple chatbots, SuperAGI runs autonomous agents — AI systems that independently plan, execute multi-step tasks, use tools, and iterate toward a goal without constant human input.

Why run SuperAGI on Clore.ai?

  • GPU-optional with powerful local LLM support — Run agents backed by local models (Llama, Mistral, etc.) on Clore.ai GPUs for fully private, cost-controlled autonomous AI.

  • Concurrent agent execution — Run multiple agents in parallel on the same server, each working on different tasks simultaneously.

  • Persistent agent memory — Agents maintain context, learn from tool outputs, and store long-term memory in vector databases between runs.

  • Tool marketplace — Pre-built integrations for Google Search, GitHub, Email, Jira, Notion, and more.

  • Clore.ai economics — At ~$0.20/hr for an RTX 3090, you can run capable autonomous agents at a fraction of cloud AI service costs.

Key Features

| Feature | Description |
| --- | --- |
| Agent provisioning | Create, configure, and deploy agents via GUI |
| Tool marketplace | 30+ built-in tools (search, code, files, APIs) |
| Multi-model support | OpenAI, Anthropic, local LLMs via custom endpoint |
| Concurrent agents | Run multiple agents simultaneously |
| Agent memory | Short-term (context window) + long-term (vector DB) |
| GUI dashboard | Full web interface for agent management |
| Resource manager | Track token usage and costs per agent |
| Workflow templates | Pre-built agent templates for common tasks |

Architecture
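
At a high level, the stack is a web GUI in front of an API backend, with background workers doing the actual agent execution. The sketch below reflects SuperAGI's standard Docker Compose setup (FastAPI backend, Celery workers, Next.js GUI, PostgreSQL, Redis); exact service wiring may vary between releases:

```
GUI dashboard (Next.js, port 3000)
        │
Backend API (FastAPI) ──► Celery workers (agent execution)
        │                        │
PostgreSQL (agents, runs)   Redis (task queue)
                                 │
                  LLM backend (OpenAI/Anthropic API, or local Ollama/vLLM)
                  Vector DB (long-term agent memory)
```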


Requirements

Server Specifications

| Component | Minimum | Recommended | Notes |
| --- | --- | --- | --- |
| GPU | None (API mode) | RTX 3090 (local LLMs) | GPU needed for local model inference |
| VRAM | — (API mode) | 24 GB | For running 13B+ local models |
| CPU | 4 vCPU | 8 vCPU | Agent execution is CPU-intensive |
| RAM | 8 GB | 16 GB | Multiple concurrent agents need memory |
| Storage | 20 GB | 100+ GB | Agent logs, vector DB, model storage |

Clore.ai Pricing Reference

| Server Type | Approx. Cost | Use Case |
| --- | --- | --- |
| CPU (8 vCPU, 16 GB) | ~$0.10–0.20/hr | SuperAGI + external API (OpenAI/Anthropic) |
| RTX 3090 (24 GB) | ~$0.20/hr | SuperAGI + Ollama 13B local model |
| RTX 4090 (24 GB) | ~$0.35/hr | SuperAGI + Ollama, faster inference |
| 2× RTX 3090 | ~$0.40/hr | SuperAGI + 70B model (Q4 quantized) |
| A100 80 GB | ~$1.10/hr | SuperAGI + large models, high concurrency |
| H100 80 GB | ~$2.50/hr | Production-grade autonomous agent systems |

💡 Cost tip: For development and testing, use OpenAI or Anthropic APIs (no GPU needed). Switch to a GPU instance only when you need local LLM inference for privacy or cost reasons. See GPU Comparison Guide.

Prerequisites

  • Clore.ai server with SSH access

  • Docker + Docker Compose (pre-installed on Clore.ai)

  • Git (pre-installed)

  • 4+ vCPU, 8+ GB RAM (16 GB recommended for concurrent agents)

  • OpenAI API key or local LLM endpoint (Ollama/vLLM)


Quick Start

SuperAGI's official deployment uses Docker Compose to manage all services.

Step 1: Connect to your Clore.ai server
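
Use the connection details shown on your Clore.ai server card (host and SSH port vary per rental):

```bash
ssh root@<server-ip> -p <ssh-port>
```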

Step 2: Clone and configure
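
Clone the official repository and create your editable config from the bundled template:

```bash
git clone https://github.com/TransformerOptimus/SuperAGI.git
cd SuperAGI
cp config_template.yaml config.yaml
```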

Step 3: Edit config.yaml

Minimum required configuration:
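
```yaml
# config.yaml — the one key you must set for an API-backed deployment
OPENAI_API_KEY: "sk-..."   # other template keys can stay at their defaults
```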

Step 4: Start the stack
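
From the repository root:

```bash
docker compose up --build -d   # build images and start all services in the background
```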

The build process downloads dependencies and compiles the frontend (~5–10 minutes on first run).

Step 5: Monitor startup
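
```bash
docker compose ps         # all services should reach the "running" state
docker compose logs -f    # follow logs; Ctrl+C detaches without stopping services
```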

Step 6: Access the dashboard
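
Open the dashboard in a browser (make sure port 3000 is exposed in your Clore.ai port mappings):

```
http://<server-ip>:3000
```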

The API is available at:
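
Assuming the standard compose mapping, the backend listens on port 8001 (adjust if your compose file maps it differently):

```
http://<server-ip>:8001
```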

API documentation:
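
The backend is FastAPI, so interactive OpenAPI docs are served at the /docs path:

```
http://<server-ip>:8001/docs
```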


Method 2: Quick Start with Pre-built Images

For faster startup using pre-built images (skip the build step):
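
Whether this works depends on the release: check docker-compose.yaml for `image:` entries that reference a published registry. If they are present, you can pull instead of building (a sketch, not a guaranteed path for every SuperAGI version):

```bash
docker compose pull   # fetch published images where the compose file references them
docker compose up -d  # start without --build
```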


Method 3: Minimal Single-Model Setup

A streamlined setup for testing with just OpenAI:
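
A minimal sketch: run the standard stack but set only the OpenAI key, skipping vector DB and tool credentials; agents then rely on context-window memory only:

```bash
cp config_template.yaml config.yaml
# edit config.yaml: set OPENAI_API_KEY, leave everything else at defaults
docker compose up --build -d
# in the GUI, pick gpt-4o-mini as the agent model for cheap testing
```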


Configuration

config.yaml Reference
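
A partial sketch of commonly used keys. `config_template.yaml` in the repository is the authoritative reference, and exact key names vary between releases:

```yaml
# LLM access
OPENAI_API_KEY: "sk-..."          # required for OpenAI-backed agents

# Long-term memory (optional; without it, agents use context-window memory only)
PINECONE_API_KEY: ""              # vector DB for persistent agent memory
PINECONE_ENVIRONMENT: ""

# Tool credentials (only needed for tools you enable)
GOOGLE_API_KEY: ""                # Google Search tool
SEARCH_ENGINE_ID: ""
GITHUB_ACCESS_TOKEN: ""           # GitHub tool
```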

Connecting SuperAGI to Tools

Tools are configured through the GUI at Settings → Toolkit. Each tool can be enabled/disabled per agent.

Built-in tools:

| Tool | Purpose | API Key Needed |
| --- | --- | --- |
| Google Search | Web search | Yes (Google API) |
| DuckDuckGo | Web search | No |
| GitHub | Code repository access | Yes (GitHub token) |
| Email | Send/read email | Yes (SMTP config) |
| Code Writer | Write and execute code | No |
| File Manager | Read/write local files | No |
| Browser | Headless web browsing | No |
| Jira | Issue tracking | Yes |
| Notion | Knowledge base | Yes |
| Image Generation | DALL-E 3, Stable Diffusion | Yes (OpenAI key) |

Creating Your First Agent

Via the GUI (Settings → Agents → Create Agent):

  1. Name — Give your agent a descriptive name

  2. Description — What this agent does

  3. Goals — List the objectives (one per line)

  4. Instructions — System prompt for behavior

  5. Model — Select LLM (GPT-4, Claude, or local)

  6. Tools — Enable relevant tools

  7. Max Iterations — Safety limit (10–50 typical)

Via the REST API:
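
A hypothetical sketch of creating an agent over HTTP. The endpoint path, auth header, and payload field names differ between SuperAGI versions, so confirm everything against the live OpenAPI docs at /docs before relying on it:

```bash
curl -X POST "http://<server-ip>:8001/v1/agent" \
  -H "Content-Type: application/json" \
  -H "X-api-key: <your-superagi-api-key>" \
  -d '{
    "name": "Research Agent",
    "description": "Researches a topic and writes a summary",
    "goal": ["Research X and write a 500-word summary to file output.txt"],
    "instruction": ["Cite a source for every claim"],
    "tools": [{"name": "DuckDuckGo"}, {"name": "File Manager"}],
    "model": "gpt-4o-mini",
    "max_iterations": 25
  }'
```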


GPU Acceleration

SuperAGI supports local LLM inference via any OpenAI-compatible endpoint, making it ideal for GPU-backed Clore.ai deployments.

Setting Up Ollama as Agent LLM Backend

See Ollama Guide for full Ollama setup. Integration with SuperAGI:

Step 1: Start Ollama on the same Clore.ai server
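
If Ollama is not already running, a typical Docker setup looks like this (see the Ollama Guide for details):

```bash
docker run -d --gpus=all -p 11434:11434 \
  -v ollama:/root/.ollama --name ollama ollama/ollama
docker exec ollama ollama pull llama3.1:8b   # fetch the model your agents will use
```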

Step 2: Configure SuperAGI to use Ollama

In config.yaml:
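
A sketch, assuming your SuperAGI version supports overriding the OpenAI base URL in config.yaml (recent releases configure this in the UI instead, as shown next):

```yaml
OPENAI_API_BASE: "http://172.17.0.1:11434/v1"   # Docker bridge IP, not localhost
OPENAI_API_KEY: "ollama"                        # any non-empty string works
```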

Or configure in the SuperAGI UI:

  • Settings → Models → Add Custom Model

  • Provider: OpenAI-compatible

  • Base URL: http://172.17.0.1:11434/v1

  • API Key: ollama (any string)

  • Model name: llama3.1:8b

Setting Up vLLM for High-Throughput Agents

For production deployments with many concurrent agents (see vLLM Guide):
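
A sketch using vLLM's OpenAI-compatible server in Docker; the model name is an example, so pick one that fits your VRAM per the sizing table below (gated models also need `-e HUGGING_FACE_HUB_TOKEN=...`):

```bash
docker run -d --gpus all -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model meta-llama/Llama-3.1-8B-Instruct
# then point SuperAGI's custom model at http://172.17.0.1:8000/v1
```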

GPU Sizing for Agent Workloads

| Use Case | Model | GPU | VRAM | Concurrent Agents |
| --- | --- | --- | --- | --- |
| Testing | GPT-4o-mini (API) | None | — | Unlimited (rate-limited) |
| Light agents | Llama 3.1 8B | RTX 3090 | 8 GB | 2–4 |
| Reasoning tasks | Mistral 7B Instruct | RTX 3090 | 6 GB | 3–5 |
| Complex agents | Llama 3.1 70B Q4 | 2× RTX 3090 | 48 GB | 1–2 |
| Production | Llama 3.1 70B FP16 | A100 80 GB | 80 GB | 3–6 |


Tips & Best Practices

Agent Design

  • Be specific with goals — Vague goals like "do research" cause agents to loop. Use "Research X and write a 500-word summary to file output.txt."

  • Set iteration limits — Always set max_iterations (20–50). Unlimited agents can consume tokens rapidly.

  • Use task queue mode — For multi-step pipelines, "Task Queue" agents are more reliable than "Don't Limit" mode.

  • Test with cheap models first — Validate agent logic with GPT-4o-mini or a local 7B model before using expensive models.

Cost Management on Clore.ai

Since Clore.ai charges hourly:
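
  • Develop and test agent logic on a cheap CPU instance with external APIs; rent a GPU only when you need local inference (see the cost tip above).

  • Back up agent state and release the server when you're done for the day; an idle rental still bills (see Persistent Storage below).

  • Stop the stack cleanly before releasing the rental:

```bash
docker compose down   # stops all services; named volumes survive restarts, but not rental release
```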

Securing SuperAGI
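
The self-hosted dashboard typically runs without enforced authentication, so treat ports 3000 and 8001 as private. A simple approach on Clore.ai: don't map those ports publicly, and reach the GUI through an SSH tunnel instead (a sketch):

```bash
# Forward the dashboard to your local machine, then browse http://localhost:3000
ssh -L 3000:localhost:3000 root@<server-ip> -p <ssh-port>
```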

Persistent Storage Between Clore.ai Sessions
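
Clore.ai rentals are ephemeral, so anything you want to keep must leave the box. Agent state lives in PostgreSQL; a sketch of a dump/restore cycle, where the service, user, and database names (`super__postgres`, `superagi`, `super_agi_main`) are assumptions taken from SuperAGI's standard compose file — verify yours with `docker compose ps` and docker-compose.yaml:

```bash
# Back up before releasing the server
docker compose exec super__postgres pg_dump -U superagi super_agi_main > superagi.sql
# copy superagi.sql and config.yaml off the server (scp, rclone, etc.)

# Restore on a fresh server after cloning and starting the stack
cat superagi.sql | docker compose exec -T super__postgres psql -U superagi super_agi_main
```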

Updating SuperAGI
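
The usual flow is pull-and-rebuild; data in named Docker volumes (Postgres, Redis) survives the rebuild:

```bash
cd ~/SuperAGI        # path assumes you cloned into your home directory
git pull
docker compose down
docker compose up --build -d
```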


Troubleshooting

Build fails during docker compose up --build
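
Most build failures on rented servers come down to disk or memory pressure during the frontend build. Check both, reclaim Docker cache, and retry:

```bash
df -h /                   # the build needs several GB free for images and cache
free -h
docker system prune -f    # remove dangling images and build layers
docker compose up --build -d
```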

Backend crashes on startup
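
This is usually a malformed or incomplete config.yaml (for example, a missing key the backend expects). Read the backend logs for the failing setting (`backend` is the service name in the standard compose file):

```bash
docker compose logs backend | tail -50
```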

Frontend not loading (port 3000)
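
The frontend compiles on first start and can take several minutes after the containers report "up". Verify container state and confirm port 3000 is actually mapped in your Clore.ai port settings:

```bash
docker compose ps
docker compose logs gui | tail -20   # 'gui' is the service name in the standard compose file
```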

Agents loop indefinitely
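
This is almost always a goal problem rather than a bug: tighten the goal wording, lower max_iterations, and prefer Task Queue mode, as described under Tips & Best Practices above.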

Redis connection errors
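
Check that the Redis container is healthy and restart it if not (`super__redis` is the service name in the standard compose file; verify with `docker compose ps`):

```bash
docker compose ps super__redis
docker compose restart super__redis
```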

Ollama not reachable from SuperAGI container
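
Inside a container, `localhost` refers to the container itself, so SuperAGI must reach Ollama via the Docker bridge IP (172.17.0.1, as configured above). Verify reachability from inside the stack:

```bash
docker compose exec backend curl -s http://172.17.0.1:11434/api/tags   # lists pulled models
```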

Database connection pool exhausted
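
Too many concurrent agents can exhaust PostgreSQL connections. Reduce the number of agents running in parallel, or raise Postgres's `max_connections`. A sketch for checking current usage (service, user, and database names assumed from the standard compose file):

```bash
docker compose exec super__postgres psql -U superagi -d super_agi_main \
  -c "SELECT count(*) FROM pg_stat_activity;"
```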


Further Reading

  • SuperAGI on GitHub: https://github.com/TransformerOptimus/SuperAGI

  • Ollama Guide — local model serving on Clore.ai

  • vLLM Guide — high-throughput inference for many concurrent agents

  • GPU Comparison Guide — choosing a Clore.ai GPU
