# Flowise AI Agent Builder

## Overview

[Flowise](https://github.com/FlowiseAI/Flowise) is an open-source, drag-and-drop tool for building LLM-powered applications without writing code. With 35K+ GitHub stars and over **5 million Docker Hub pulls**, Flowise has become one of the most deployed self-hosted AI tools in the ecosystem. It enables teams to create chatbots, RAG systems, AI agents, and automated workflows through an intuitive visual interface — and deploy them as REST API endpoints in minutes.

Flowise is built on LangChain.js and provides a node-based canvas where you connect components: LLMs, vector databases, document loaders, memory stores, tools, and agents. Every flow automatically generates an embeddable chat widget and API endpoint that you can integrate into any application.

**Key capabilities:**

* **Drag-and-drop flow builder** — Visual LLM orchestration with 100+ pre-built nodes
* **Chatbot creation** — Embeddable chat widgets for websites and apps
* **RAG pipelines** — Connect document loaders, embedders, and vector stores visually
* **Multi-agent support** — Build agent hierarchies with tool use and delegation
* **Instant API** — Every flow generates a `/api/v1/prediction/<flowId>` endpoint
* **LangChain nodes** — Full access to the LangChain.js ecosystem
* **Credential manager** — Centrally manage API keys, database connections
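
For example, the instant-API endpoint can be called with plain `curl` once a flow is saved (a sketch; `<server-ip>` and `<flowId>` are placeholders for your own deployment):

```shell
# POST a question to a deployed chatflow and receive the LLM's answer as JSON
curl -s http://<server-ip>:3000/api/v1/prediction/<flowId> \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"question": "What can you help me with?"}'
```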

**Why Clore.ai for Flowise?**

Flowise is a lightweight Node.js server — it handles orchestration, not compute. Pairing it with Clore.ai enables:

* **Local model inference** — Run Ollama or vLLM on the same GPU server, eliminating API costs
* **Private document processing** — RAG pipelines that never send data to external services
* **Persistent deployment** — Always-on chatbot and API hosting at GPU server prices
* **Cost-effective at scale** — Build multi-tenant chatbot platforms without per-call API fees
* **Full-stack AI hosting** — Flowise + Ollama + Qdrant/Chroma all on one affordable server

***

## Requirements

Flowise itself is a Node.js application with minimal resource requirements. GPU is only needed if you add a local LLM backend.

| Configuration                          | GPU       | VRAM  | RAM    | Storage | Est. Price      |
| -------------------------------------- | --------- | ----- | ------ | ------- | --------------- |
| **Flowise only (external APIs)**       | None      | —     | 2–4 GB | 10 GB   | \~$0.03–0.08/hr |
| **+ Ollama (Llama 3.1 8B)**            | RTX 3090  | 24 GB | 16 GB  | 40 GB   | \~$0.20/hr      |
| **+ Ollama (Mistral 7B + embeddings)** | RTX 3090  | 24 GB | 16 GB  | 30 GB   | \~$0.20/hr      |
| **+ Ollama (Qwen2.5 32B)**             | RTX 4090  | 24 GB | 32 GB  | 60 GB   | \~$0.35/hr      |
| **+ vLLM (production)**                | A100 80GB | 80 GB | 64 GB  | 100 GB  | \~$1.10/hr      |

> **Note:** Flowise runs comfortably on any Clore.ai server. GPU is needed only if you want free local inference. See the [GPU Comparison Guide](https://docs.clore.ai/guides/getting-started/gpu-comparison).

**Clore.ai server requirements:**

* Docker Engine (pre-installed on all Clore.ai images)
* NVIDIA Container Toolkit (for GPU/Ollama only)
* Port 3000 accessible (or mapped in Clore.ai dashboard)
* Minimum 2 GB free RAM, 10 GB disk space

***

## Quick Start

### Step 1: Book a Server on Clore.ai

In the [Clore.ai marketplace](https://clore.ai):

* For API-only usage: Any server, filter by RAM ≥ 4 GB
* For local LLM: Filter GPU ≥ 24 GB VRAM
* Ensure Docker is enabled in the template

Connect via SSH:

```bash
ssh root@<server-ip> -p <ssh-port>
```

### Step 2: Run Flowise (Single Command)

```bash
docker run -d \
  --name flowise \
  --restart unless-stopped \
  -p 3000:3000 \
  flowiseai/flowise
```

That's it. Flowise will be available at `http://<server-ip>:3000` within 20–30 seconds.

### Step 3: Verify It's Running

```bash
# Check container status
docker ps | grep flowise

# Check logs
docker logs flowise --tail 20

# Test the API
curl http://localhost:3000/api/v1/chatflows
```

### Step 4: Open the UI

Navigate to `http://<server-ip>:3000` in your browser.

> **Clore.ai Port Mapping:** Ensure port 3000 is forwarded in your Clore.ai server configuration. Go to your server details → Ports → confirm `3000:3000` is mapped. Some templates only expose SSH by default.

***

## Configuration

### Persistent Storage

Mount volumes so your flows, credentials, and uploads survive container restarts:

```bash
mkdir -p /opt/flowise/{data,uploads}  # logs are written under data/logs via LOG_PATH

docker run -d \
  --name flowise \
  --restart unless-stopped \
  -p 3000:3000 \
  -v /opt/flowise/data:/root/.flowise \
  -v /opt/flowise/uploads:/app/uploads \
  -e DATABASE_PATH=/root/.flowise \
  -e APIKEY_PATH=/root/.flowise \
  -e SECRETKEY_PATH=/root/.flowise \
  -e LOG_PATH=/root/.flowise/logs \
  flowiseai/flowise
```

### Authentication

Protect your Flowise instance with username/password:

```bash
# Generate credentials once and record them. The secret key must stay stable
# across container re-creates, or stored credentials cannot be decrypted.
FLOWISE_PASS=$(openssl rand -base64 16)
FLOWISE_SECRET=$(openssl rand -hex 32)
echo "Flowise admin password: $FLOWISE_PASS"
echo "Flowise secret key:     $FLOWISE_SECRET"

docker run -d \
  --name flowise \
  --restart unless-stopped \
  -p 3000:3000 \
  -v /opt/flowise/data:/root/.flowise \
  -e FLOWISE_USERNAME=admin \
  -e FLOWISE_PASSWORD=$FLOWISE_PASS \
  -e FLOWISE_SECRETKEY_OVERWRITE=$FLOWISE_SECRET \
  flowiseai/flowise
```

> **Security note:** Always set credentials when exposing Flowise publicly on Clore.ai. Without authentication, anyone with your server IP can access your flows and API keys.

### Full Environment Variables Reference

```bash
docker run -d \
  --name flowise \
  --restart unless-stopped \
  -p 3000:3000 \
  -v /opt/flowise/data:/root/.flowise \
  -e PORT=3000 \
  -e FLOWISE_USERNAME=admin \
  -e FLOWISE_PASSWORD=your-secure-password \
  -e FLOWISE_SECRETKEY_OVERWRITE=your-secret-key \
  -e DATABASE_TYPE=sqlite \
  -e DATABASE_PATH=/root/.flowise \
  -e APIKEY_PATH=/root/.flowise \
  -e SECRETKEY_PATH=/root/.flowise \
  -e LOG_PATH=/root/.flowise/logs \
  -e LOG_LEVEL=info \
  -e TOOL_FUNCTION_BUILTIN_DEP=crypto,fs \
  -e TOOL_FUNCTION_EXTERNAL_DEP=moment,lodash \
  -e CORS_ORIGINS=* \
  -e IFRAME_ORIGINS=* \
  flowiseai/flowise
```

| Variable                      | Description                            | Default          |
| ----------------------------- | -------------------------------------- | ---------------- |
| `PORT`                        | Web server port                        | `3000`           |
| `FLOWISE_USERNAME`            | Admin username (enables auth)          | — (no auth)      |
| `FLOWISE_PASSWORD`            | Admin password                         | —                |
| `FLOWISE_SECRETKEY_OVERWRITE` | Encryption key for credentials         | Auto-generated   |
| `DATABASE_TYPE`               | `sqlite` or `mysql` or `postgres`      | `sqlite`         |
| `DATABASE_PATH`               | SQLite storage path                    | `/root/.flowise` |
| `LOG_LEVEL`                   | `error`, `warn`, `info`, `debug`       | `info`           |
| `TOOL_FUNCTION_BUILTIN_DEP`   | Allowed Node.js builtins in code nodes | —                |
| `TOOL_FUNCTION_EXTERNAL_DEP`  | Allowed npm packages in code nodes     | —                |
| `CORS_ORIGINS`                | Allowed CORS origins for API           | `*`              |
| `IFRAME_ORIGINS`              | Allowed iframe embedding origins       | `*`              |

### Docker Compose (Recommended)

The official Flowise repo includes a Docker Compose configuration. This is the recommended approach for Clore.ai:

```bash
# Download the official docker-compose.yml
curl -o /opt/flowise/docker-compose.yml \
  https://raw.githubusercontent.com/FlowiseAI/Flowise/main/docker/docker-compose.yml

cd /opt/flowise
```

Or create your own with PostgreSQL:

```yaml
# /opt/flowise/docker-compose.yml

services:
  flowise:
    image: flowiseai/flowise:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - PORT=3000
      - FLOWISE_USERNAME=${FLOWISE_USERNAME:-admin}
      - FLOWISE_PASSWORD=${FLOWISE_PASSWORD:-changeme}
      - FLOWISE_SECRETKEY_OVERWRITE=${SECRET_KEY}
      - DATABASE_TYPE=postgres
      - DATABASE_HOST=db
      - DATABASE_PORT=5432
      - DATABASE_USER=flowise
      - DATABASE_PASSWORD=flowise-secret
      - DATABASE_NAME=flowise
    volumes:
      - flowise-data:/root/.flowise
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=flowise
      - POSTGRES_PASSWORD=flowise-secret
      - POSTGRES_DB=flowise
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U flowise"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  flowise-data:
  pgdata:
```

```bash
SECRET_KEY=$(openssl rand -hex 32) \
FLOWISE_PASSWORD=yourpassword \
docker compose -f /opt/flowise/docker-compose.yml up -d
```

***

## GPU Acceleration (Local LLM Integration)

Flowise orchestrates — the GPU does the heavy lifting in connected services.

### Flowise + Ollama (Recommended)

Run Ollama on the same Clore.ai server and connect Flowise to it:

```bash
# Step 1: Start Ollama with GPU access
docker run -d \
  --name ollama \
  --gpus all \
  --restart unless-stopped \
  -p 11434:11434 \
  -v ollama-models:/root/.ollama \
  ollama/ollama:latest

# Step 2: Pull models
docker exec ollama ollama pull llama3.1:8b          # Chat/agent model
docker exec ollama ollama pull nomic-embed-text     # Embeddings for RAG
docker exec ollama ollama pull mistral:7b           # Alternative chat model

# Step 3: Start Flowise with host network access
docker run -d \
  --name flowise \
  --restart unless-stopped \
  -p 3000:3000 \
  -v /opt/flowise/data:/root/.flowise \
  --add-host host.docker.internal:host-gateway \
  -e FLOWISE_USERNAME=admin \
  -e FLOWISE_PASSWORD=changeme \
  flowiseai/flowise
```

**In the Flowise UI:**

1. Create a new Chatflow
2. Add **Ollama** node (under Chat Models)
   * Base URL: `http://host.docker.internal:11434`
   * Model Name: `llama3.1:8b`
3. Add **OllamaEmbeddings** node (for RAG)
   * Base URL: `http://host.docker.internal:11434`
   * Model Name: `nomic-embed-text`
4. Connect to your Vector Store (Chroma, FAISS, Qdrant)

> See the complete [Ollama guide](https://docs.clore.ai/guides/language-models/ollama) for model download and GPU setup.
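
Before wiring the node in the UI, it can help to confirm the pulled model actually generates (a quick sanity check from the host; adjust the model name if you pulled something else):

```shell
# One-shot, non-streaming generation against the local Ollama API
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "Say hello in one word.", "stream": false}'
```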

### Flowise + vLLM (Production Scale)

For OpenAI-compatible high-throughput serving:

```bash
# Start vLLM
docker run -d \
  --name vllm \
  --gpus all \
  --restart unless-stopped \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model mistralai/Mistral-7B-Instruct-v0.3 \
  --gpu-memory-utilization 0.85

# In Flowise, use the ChatOpenAI node with custom base URL:
# Base URL: http://host.docker.internal:8000/v1
# OpenAI API Key: (any value)
# Model Name: mistralai/Mistral-7B-Instruct-v0.3
```

> See the [vLLM guide](https://docs.clore.ai/guides/language-models/vllm) for quantization and multi-GPU configurations.
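
The endpoint can be smoke-tested the same way Flowise's ChatOpenAI node will call it (a sketch; vLLM accepts any bearer token unless an API key was configured at startup):

```shell
# OpenAI-compatible chat completion against the local vLLM server
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer dummy-key" \
  -d '{
        "model": "mistralai/Mistral-7B-Instruct-v0.3",
        "messages": [{"role": "user", "content": "Reply with OK."}]
      }'
```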

### Building a Local-Only RAG Chatbot

Complete Flowise flow with zero external API calls on Clore.ai:

| Node | Component                   | Settings                    |
| ---- | --------------------------- | --------------------------- |
| 1    | PDF File Loader             | Upload document             |
| 2    | Recursive Text Splitter     | Chunk: 1000, Overlap: 200   |
| 3    | Ollama Embeddings           | Model: `nomic-embed-text`   |
| 4    | In-Memory Vector Store      | (or Chroma for persistence) |
| 5    | Ollama Chat                 | Model: `llama3.1:8b`        |
| 6    | Conversational Retrieval QA | Chain type: Stuff           |
| 7    | Buffer Memory               | Session-based memory        |

Export this as an API and embed the chat widget on any website.
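
Once exported, the flow can be queried like any other chatflow; passing `overrideConfig.sessionId` keeps the Buffer Memory separate per user (a sketch with placeholder values for `<server-ip>` and `<flowId>`):

```shell
# Ask the RAG flow a question; sessionId scopes the conversation memory
curl -s http://<server-ip>:3000/api/v1/prediction/<flowId> \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
        "question": "Summarize the uploaded document in three bullet points.",
        "overrideConfig": {"sessionId": "user-123"}
      }'
```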

***

## Tips & Best Practices

### 1. Export Flows Regularly

Before stopping or switching Clore.ai servers:

```bash
# Export all flows via API
curl http://localhost:3000/api/v1/chatflows \
  -H "Authorization: Basic $(echo -n admin:password | base64)" \
  > /opt/flowise/backup-flows.json

# Or use the UI: Chatflows → Export All
```

### 2. Use the Embed Widget

Every Flowise chatflow generates a production-ready chat widget:

1. Open your chatflow → Click **\</>** (Embed) button
2. Copy the script snippet
3. Paste into any HTML page — instant customer support bot

### 3. Manage API Keys Securely

Store all LLM API keys in Flowise's **Credentials** panel (not hardcoded in flows):

* Menu → Credentials → Add Credential
* Keys are encrypted with `FLOWISE_SECRETKEY_OVERWRITE`

### 4. Rate Limiting

For public-facing deployments, add rate limiting via Nginx or Caddy in front of Flowise:

```bash
# Simple nginx reverse proxy (uses a shared network; the legacy --link flag is deprecated)
docker network create flowise-net
docker network connect flowise-net flowise

docker run -d \
  --name nginx \
  --restart unless-stopped \
  --network flowise-net \
  -p 80:80 \
  -v /opt/flowise/nginx.conf:/etc/nginx/conf.d/default.conf:ro \
  nginx:alpine
```
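
The mounted `nginx.conf` could then look like the following sketch, which caps each client IP at roughly 10 requests/second with a small burst allowance (tune `rate` and `burst` to your traffic; the `flowise` hostname resolves over the container network):

```nginx
# /opt/flowise/nginx.conf (illustrative rate-limited proxy)
limit_req_zone $binary_remote_addr zone=flowise:10m rate=10r/s;

server {
    listen 80;

    location / {
        limit_req zone=flowise burst=20 nodelay;
        proxy_pass http://flowise:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Keep connections open for streamed chat responses
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
    }
}
```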

### 5. Monitor Performance

```bash
# Real-time resource monitoring
watch -n 3 'docker stats flowise ollama --no-stream'

# Check Flowise logs for errors
docker logs flowise --tail 50 -f

# Check if flows are being called
docker logs flowise 2>&1 | grep "Prediction"
```

### 6. Back Up the SQLite Database

```bash
# Create a timestamped backup
cp /opt/flowise/data/database.sqlite \
   /opt/flowise/backup-$(date +%Y%m%d-%H%M%S).sqlite

# Or automate with cron (note: `crontab -` replaces any existing crontab)
echo "0 */6 * * * cp /opt/flowise/data/database.sqlite /opt/flowise/backup-\$(date +\%Y\%m\%d-\%H\%M\%S).sqlite" | crontab -
```

***

## Troubleshooting

### Container Exits Immediately

```bash
# Check exit reason
docker logs flowise

# Common causes:
# 1. Port 3000 already in use
lsof -i :3000
# Fix: use a different port
docker run ... -p 3001:3000 ...

# 2. Volume permission error
ls -la /opt/flowise/
chown -R 1000:1000 /opt/flowise/data
```

### UI Shows "Connection Failed"

```bash
# Verify Flowise is actually running
docker ps -a | grep flowise
docker stats flowise --no-stream

# Check it's binding to all interfaces
docker logs flowise | grep "listening"
# Should show: Server is listening at port 3000

# Test locally first
curl -s http://localhost:3000/api/v1/chatflows | head -20
```

### Flows Fail with LLM Errors

```bash
# Test Ollama connectivity from Flowise container
docker exec flowise wget -qO- http://host.docker.internal:11434/api/tags

# If that fails, verify --add-host was included:
docker inspect flowise | grep -A5 ExtraHosts

# Test Ollama is running
curl http://localhost:11434/api/tags
```

### Database Migration Errors on Update

```bash
# When upgrading Flowise version, back up first
cp -r /opt/flowise/data /opt/flowise/data-backup-$(date +%Y%m%d)

# Pull new image
docker pull flowiseai/flowise:latest

# Restart with same volume (migrations run automatically)
docker stop flowise && docker rm flowise
docker run -d --name flowise ... flowiseai/flowise:latest
docker logs flowise -f  # Watch migration output
```

### Credential Decryption Errors After Restart

```bash
# If you didn't set FLOWISE_SECRETKEY_OVERWRITE, a new key is generated on restart
# Always set it explicitly:
-e FLOWISE_SECRETKEY_OVERWRITE=your-stable-32-char-secret

# To recover: re-enter credentials in the UI after setting a stable key
```

### Chat Widget CORS Errors

```bash
# Allow specific origins (replace * with your domain for production)
-e CORS_ORIGINS=https://yourdomain.com,http://localhost:3000
-e IFRAME_ORIGINS=https://yourdomain.com
```

***

## Further Reading

* [Flowise GitHub Repository](https://github.com/FlowiseAI/Flowise) — Source code, releases, official docker-compose
* [Flowise Documentation](https://docs.flowiseai.com) — Node reference, API docs, deployment guides
* [Flowise Discord](https://discord.gg/jn8n7yb9N) — Community templates, flow sharing, support
* [Docker Hub: flowiseai/flowise](https://hub.docker.com/r/flowiseai/flowise) — 5M+ pulls, available tags
* [Ollama on Clore.ai](https://docs.clore.ai/guides/language-models/ollama) — Run local LLMs for free Flowise inference
* [vLLM on Clore.ai](https://docs.clore.ai/guides/language-models/vllm) — Production-scale LLM serving for Flowise
* [GPU Comparison Guide](https://docs.clore.ai/guides/getting-started/gpu-comparison) — Select the right GPU for your stack
* [LangChain.js Documentation](https://js.langchain.com/docs/) — Underlying framework reference
