Flowise AI Agent Builder

Deploy Flowise on Clore.ai — build and host visual LLM chatbots, AI agents, and RAG pipelines on affordable GPU cloud servers with a drag-and-drop no-code interface and instant API endpoints.

Overview

Flowise is an open-source, drag-and-drop tool for building LLM-powered applications without writing code. With 35K+ GitHub stars and over 5 million Docker Hub pulls, Flowise has become one of the most deployed self-hosted AI tools in the ecosystem. It enables teams to create chatbots, RAG systems, AI agents, and automated workflows through an intuitive visual interface — and deploy them as REST API endpoints in minutes.

Flowise is built on LangChain.js and provides a node-based canvas where you connect components: LLMs, vector databases, document loaders, memory stores, tools, and agents. Every flow automatically generates an embeddable chat widget and API endpoint that you can integrate into any application.

Key capabilities:

  • Drag-and-drop flow builder — Visual LLM orchestration with 100+ pre-built nodes

  • Chatbot creation — Embeddable chat widgets for websites and apps

  • RAG pipelines — Connect document loaders, embedders, and vector stores visually

  • Multi-agent support — Build agent hierarchies with tool use and delegation

  • Instant API — Every flow generates a /api/v1/prediction/<flowId> endpoint

  • LangChain nodes — Full access to the LangChain.js ecosystem

  • Credential manager — Centrally manage API keys and database connections

Why Clore.ai for Flowise?

Flowise is a lightweight Node.js server — it handles orchestration, not compute. Pairing it with Clore.ai enables:

  • Local model inference — Run Ollama or vLLM on the same GPU server, eliminating API costs

  • Private document processing — RAG pipelines that never send data to external services

  • Persistent deployment — Always-on chatbot and API hosting at GPU server prices

  • Cost-effective at scale — Build multi-tenant chatbot platforms without per-call API fees

  • Full-stack AI hosting — Flowise + Ollama + Qdrant/Chroma all on one affordable server


Requirements

Flowise itself is a Node.js application with minimal resource requirements. GPU is only needed if you add a local LLM backend.

| Configuration | GPU | VRAM | RAM | Storage | Est. Price |
| --- | --- | --- | --- | --- | --- |
| Flowise only (external APIs) | None | — | 2–4 GB | 10 GB | ~$0.03–0.08/hr |
| + Ollama (Llama 3.1 8B) | RTX 3090 | 24 GB | 16 GB | 40 GB | ~$0.20/hr |
| + Ollama (Mistral 7B + embeddings) | RTX 3090 | 24 GB | 16 GB | 30 GB | ~$0.20/hr |
| + Ollama (Qwen2.5 32B) | RTX 4090 | 24 GB | 32 GB | 60 GB | ~$0.35/hr |
| + vLLM (production) | A100 80GB | 80 GB | 64 GB | 100 GB | ~$1.10/hr |

Note: Flowise runs comfortably on any Clore.ai server. GPU is needed only if you want free local inference. See the GPU Comparison Guide.

Clore.ai server requirements:

  • Docker Engine (pre-installed on all Clore.ai images)

  • NVIDIA Container Toolkit (for GPU/Ollama only)

  • Port 3000 accessible (or mapped in Clore.ai dashboard)

  • Minimum 2 GB free RAM, 10 GB disk space


Quick Start

Step 1: Book a Server on Clore.ai

In the Clore.ai marketplace:

  • For API-only usage: Any server, filter by RAM ≥ 4 GB

  • For local LLM: Filter GPU ≥ 24 GB VRAM

  • Ensure Docker is enabled in the template

Connect via SSH:
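A typical connection, using the command and port shown in your Clore.ai server details (IP and port are placeholders):

```bash
# Copy the exact command from your Clore.ai dashboard
ssh root@<server-ip> -p <ssh-port>
```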

Step 2: Run Flowise (Single Command)

That's it. Flowise will be available at http://<server-ip>:3000 within 20–30 seconds.

Step 3: Verify It's Running
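Assuming the container name used above, a few quick checks from the server:

```bash
# The container should be listed as "Up"
docker ps --filter name=flowise

# The UI should respond once startup finishes
curl -I http://localhost:3000

# Tail the logs if anything looks wrong
docker logs -f flowise
```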

Step 4: Open the UI

Navigate to http://<server-ip>:3000 in your browser.

Clore.ai Port Mapping: Ensure port 3000 is forwarded in your Clore.ai server configuration. Go to your server details → Ports → confirm 3000:3000 is mapped. Some templates only expose SSH by default.


Configuration

Persistent Storage

Mount volumes so your flows, credentials, and uploads survive container restarts:
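A sketch of the same docker run with a host volume; /root/.flowise matches the default DATABASE_PATH from the reference table below, and the host-side path is your choice:

```bash
# Keep all Flowise state in ~/.flowise on the host
docker run -d \
  --name flowise \
  -p 3000:3000 \
  -v ~/.flowise:/root/.flowise \
  flowiseai/flowise:latest
```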

Authentication

Protect your Flowise instance with username/password:
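Basic auth is enabled by setting both variables (values here are placeholders; pick a strong password):

```bash
docker run -d \
  --name flowise \
  -p 3000:3000 \
  -v ~/.flowise:/root/.flowise \
  -e FLOWISE_USERNAME=admin \
  -e FLOWISE_PASSWORD='<strong-password>' \
  flowiseai/flowise:latest
```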

Security note: Always set credentials when exposing Flowise publicly on Clore.ai. Without authentication, anyone with your server IP can access your flows and API keys.

Full Environment Variables Reference

| Variable | Description | Default |
| --- | --- | --- |
| PORT | Web server port | 3000 |
| FLOWISE_USERNAME | Admin username (enables auth) | — (no auth) |
| FLOWISE_PASSWORD | Admin password | — (no auth) |
| FLOWISE_SECRETKEY_OVERWRITE | Encryption key for credentials | Auto-generated |
| DATABASE_TYPE | sqlite, mysql, or postgres | sqlite |
| DATABASE_PATH | SQLite storage path | /root/.flowise |
| LOG_LEVEL | error, warn, info, or debug | info |
| TOOL_FUNCTION_BUILTIN_DEP | Allowed Node.js builtins in code nodes | — |
| TOOL_FUNCTION_EXTERNAL_DEP | Allowed npm packages in code nodes | — |
| CORS_ORIGINS | Allowed CORS origins for API | * |
| IFRAME_ORIGINS | Allowed iframe embedding origins | * |

Docker Compose Deployment

The official Flowise repo includes a Docker Compose configuration; this is the recommended approach for Clore.ai:
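The layout below reflects recent releases of the repo (paths may shift between versions):

```bash
git clone https://github.com/FlowiseAI/Flowise.git
cd Flowise/docker

# Review ports, auth, and database settings before starting
cp .env.example .env

docker compose up -d
```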

Or create your own with PostgreSQL:
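A minimal sketch of a two-service stack; service names, passwords, and volume paths are illustrative, and the DATABASE_* connection variables are Flowise's standard Postgres settings as documented upstream:

```bash
cat > docker-compose.yml <<'EOF'
services:
  flowise:
    image: flowiseai/flowise:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - DATABASE_TYPE=postgres
      - DATABASE_HOST=postgres
      - DATABASE_PORT=5432
      - DATABASE_NAME=flowise
      - DATABASE_USER=flowise
      - DATABASE_PASSWORD=change-me
      - FLOWISE_USERNAME=admin
      - FLOWISE_PASSWORD=change-me
    volumes:
      - ./flowise-data:/root/.flowise
    depends_on:
      - postgres

  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      - POSTGRES_DB=flowise
      - POSTGRES_USER=flowise
      - POSTGRES_PASSWORD=change-me
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
EOF

docker compose up -d
```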


GPU Acceleration (Local LLM Integration)

Flowise orchestrates — the GPU does the heavy lifting in connected services.

Flowise + Ollama

Run Ollama on the same Clore.ai server and connect Flowise to it:
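A sketch of running both containers side by side. Note that on Linux, host.docker.internal only resolves inside the Flowise container when it is started with the host-gateway alias:

```bash
# Start Ollama with GPU access
docker run -d \
  --name ollama \
  --gpus all \
  -p 11434:11434 \
  -v ollama:/root/.ollama \
  ollama/ollama

# Pull a chat model and an embedding model for RAG
docker exec ollama ollama pull llama3.1:8b
docker exec ollama ollama pull nomic-embed-text

# Restart Flowise with the host-gateway alias so
# http://host.docker.internal:11434 works on Linux
docker run -d \
  --name flowise \
  -p 3000:3000 \
  -v ~/.flowise:/root/.flowise \
  --add-host=host.docker.internal:host-gateway \
  flowiseai/flowise:latest
```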

In the Flowise UI:

  1. Create a new Chatflow

  2. Add Ollama node (under Chat Models)

    • Base URL: http://host.docker.internal:11434

    • Model Name: llama3.1:8b

  3. Add OllamaEmbeddings node (for RAG)

    • Base URL: http://host.docker.internal:11434

    • Model Name: nomic-embed-text

  4. Connect to your Vector Store (Chroma, FAISS, Qdrant)

See the complete Ollama guide for model download and GPU setup.

Flowise + vLLM (Production Scale)

For OpenAI-compatible high-throughput serving:
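A sketch using the official vllm/vllm-openai image; the model is an example, so pick one that fits your VRAM:

```bash
docker run -d \
  --name vllm \
  --gpus all \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  vllm/vllm-openai:latest \
  --model meta-llama/Llama-3.1-8B-Instruct
```

In Flowise, point an OpenAI-compatible chat node at http://host.docker.internal:8000/v1 (vLLM accepts any placeholder key unless you set one with --api-key).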

See the vLLM guide for quantization and multi-GPU configurations.

Building a Local-Only RAG Chatbot

Complete Flowise flow with zero external API calls on Clore.ai:

| Node | Component | Settings |
| --- | --- | --- |
| 1 | PDF File Loader | Upload document |
| 2 | Recursive Text Splitter | Chunk: 1000, Overlap: 200 |
| 3 | Ollama Embeddings | Model: nomic-embed-text |
| 4 | In-Memory Vector Store | (or Chroma for persistence) |
| 5 | Ollama Chat | Model: llama3.1:8b |
| 6 | Conversational Retrieval QA | Chain type: Stuff |
| 7 | Buffer Memory | Session-based memory |
Export this as an API and embed the chat widget on any website.
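Calling the generated endpoint is a single POST; the flow ID comes from the chatflow's API dialog, and the Authorization header is only needed if you created an API key in the UI:

```bash
curl -X POST http://<server-ip>:3000/api/v1/prediction/<flowId> \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-key>" \
  -d '{"question": "What does the document say about pricing?"}'
```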


Tips & Best Practices

1. Export Flows Regularly

Before stopping or switching Clore.ai servers:
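Flows can be exported as JSON from the chatflow menu in the UI; from the shell, archiving the data directory captures everything at once:

```bash
# Snapshot flows, credentials, and uploads in one archive
tar czf flowise-backup-$(date +%F).tar.gz -C ~ .flowise

# From your local machine, pull the archive off the server
scp -P <ssh-port> "root@<server-ip>:flowise-backup-*.tar.gz" .
```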

2. Use the Embed Widget

Every Flowise chatflow generates a production-ready chat widget:

  1. Open your chatflow → Click </> (Embed) button

  2. Copy the script snippet (a typical example is shown after this list)

  3. Paste into any HTML page — instant customer support bot
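The copied snippet typically looks like the following (the chatflow ID and host are placeholders); saving it into a test page is enough to try it out:

```bash
cat > chat-demo.html <<'EOF'
<script type="module">
  import Chatbot from "https://cdn.jsdelivr.net/npm/flowise-embed/dist/web.js"
  Chatbot.init({
    chatflowid: "<your-chatflow-id>",
    apiHost: "http://<server-ip>:3000",
  })
</script>
EOF
```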

3. Manage API Keys Securely

Store all LLM API keys in Flowise's Credentials panel (not hardcoded in flows):

  • Menu → Credentials → Add Credential

  • Keys are encrypted with FLOWISE_SECRETKEY_OVERWRITE

4. Rate Limiting

For public-facing deployments, add rate limiting via Nginx or Caddy in front of Flowise:
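A minimal Nginx sketch (zone name, rate, and paths are illustrative): 10 requests per second per client IP with a small burst, proxying to Flowise on localhost:

```bash
cat > /etc/nginx/conf.d/flowise.conf <<'EOF'
limit_req_zone $binary_remote_addr zone=flowise:10m rate=10r/s;

server {
    listen 80;

    location / {
        limit_req zone=flowise burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        # Keep streaming chat responses working
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 300s;
    }
}
EOF

nginx -t && nginx -s reload
```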

5. Monitor Performance
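A few one-liners cover the basics (container names as used above):

```bash
# CPU and RAM usage of the Flowise container
docker stats --no-stream flowise

# Recent application logs (set LOG_LEVEL=debug for more detail)
docker logs --tail 100 flowise

# GPU utilization, if Ollama or vLLM runs alongside
nvidia-smi
```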

6. Back Up the SQLite Database
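With the default SQLite backend, the database lives in the mounted data directory. A simple stop-copy-start keeps the copy consistent (file name as of recent releases):

```bash
docker stop flowise
cp ~/.flowise/database.sqlite ~/flowise-db-$(date +%F).sqlite
docker start flowise
```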


Troubleshooting

Container Exits Immediately
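The logs usually name the cause, most often a port conflict or a corrupted data directory:

```bash
docker logs flowise

# Is something else already bound to port 3000?
ss -tlnp | grep 3000
```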

UI Shows "Connection Failed"
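First confirm Flowise answers locally; if it does, the problem is almost always the Clore.ai port mapping described in Quick Start:

```bash
curl -I http://localhost:3000   # responds locally -> check port mapping
docker port flowise             # should show 3000/tcp -> 0.0.0.0:3000
```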

Flows Fail with LLM Errors
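Most often the model backend is unreachable from inside the Flowise container. Check that Ollama responds on the host and that host.docker.internal resolves in-container (on Linux this requires the --add-host flag shown earlier):

```bash
# Ollama reachable from the host?
curl http://localhost:11434/api/tags

# Does host.docker.internal resolve inside the container?
# (uses node, which is always present in the Flowise image)
docker exec flowise node -e \
  "require('dns').lookup('host.docker.internal', (e, a) => console.log(e ? e.code : a))"
```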

Database Migration Errors on Update
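Migrations run automatically on startup. If an update fails, restore your backup and pin the image tag you were running (tag and archive name are placeholders):

```bash
docker stop flowise && docker rm flowise
tar xzf flowise-backup-<date>.tar.gz -C ~
docker run -d --name flowise -p 3000:3000 \
  -v ~/.flowise:/root/.flowise \
  flowiseai/flowise:<previous-version>
```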

Credential Decryption Errors After Restart
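Saved credentials can only be decrypted with the key they were encrypted under. If the container is recreated with a fresh auto-generated key (or without the data volume), decryption fails; pin the key explicitly and keep it somewhere safe:

```bash
docker run -d --name flowise -p 3000:3000 \
  -v ~/.flowise:/root/.flowise \
  -e FLOWISE_SECRETKEY_OVERWRITE='<one-long-random-string-kept-forever>' \
  flowiseai/flowise:latest
```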

Chat Widget CORS Errors
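Set the allowed origins explicitly instead of relying on the wildcard default (the origin is a placeholder):

```bash
docker run -d --name flowise -p 3000:3000 \
  -v ~/.flowise:/root/.flowise \
  -e CORS_ORIGINS='https://your-site.example' \
  -e IFRAME_ORIGINS='https://your-site.example' \
  flowiseai/flowise:latest
```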


Further Reading

  • Flowise documentation and GitHub repository

  • Ollama guide: model download and GPU setup

  • vLLM guide: quantization and multi-GPU configurations

  • GPU Comparison Guide: choosing a Clore.ai server
