# Langflow Visual AI Builder

## Overview

[Langflow](https://github.com/langflow-ai/langflow) is an open-source, low-code platform for building AI applications using a visual drag-and-drop interface. With 55K+ GitHub stars and a rapidly growing community, it has become one of the go-to tools for prototyping and deploying LLM-powered workflows without writing complex boilerplate code.

Langflow is built on top of LangChain and provides a graphical editor where you can connect components — LLMs, vector stores, document loaders, retrievers, agents, tools — by drawing lines between nodes. The resulting pipeline can be exported as an API endpoint, shared as a template, or embedded into your application.

**Key capabilities:**

* **Visual RAG builder** — Connect document loaders → embeddings → vector stores → retrievers in minutes
* **Multi-agent workflows** — Chain multiple AI agents with memory, tools, and decision logic
* **LangChain integration** — Access the full LangChain ecosystem via UI nodes
* **Component marketplace** — Community-contributed components for dozens of services
* **API-first** — Every flow auto-generates a REST API endpoint
* **Memory and state** — Built-in conversation memory, session management

**Why Clore.ai for Langflow?**

Langflow's compute requirements are minimal — it's a Python web server handling workflow orchestration. However, Clore.ai opens up powerful use cases:

* **Self-hosted embeddings** — Run local embedding models (nomic-embed, BGE) on GPU for fast, free vector generation
* **Local LLM backends** — Connect Langflow to Ollama or vLLM running on the same server
* **Private data pipelines** — Process sensitive documents without sending data to external APIs
* **Cost optimization** — Replace expensive OpenAI embedding calls with free local inference
* **Persistent workflows** — Long-running flows on dedicated servers (vs. ephemeral cloud functions)

***

## Requirements

Langflow itself is lightweight and CPU-based. GPU is optional but enables free local LLM/embedding inference.

| Configuration                        | GPU         | VRAM  | RAM   | Storage | Est. Price      |
| ------------------------------------ | ----------- | ----- | ----- | ------- | --------------- |
| **Langflow only (API backends)**     | None needed | —     | 4 GB  | 10 GB   | \~$0.03–0.08/hr |
| **+ Local embeddings (nomic-embed)** | RTX 3090    | 24 GB | 8 GB  | 20 GB   | \~$0.20/hr      |
| **+ Ollama (Llama 3.1 8B)**          | RTX 3090    | 24 GB | 16 GB | 40 GB   | \~$0.20/hr      |
| **+ Ollama (Qwen2.5 32B)**           | RTX 4090    | 24 GB | 32 GB | 60 GB   | \~$0.35/hr      |
| **+ vLLM (production RAG)**          | A100 80GB   | 80 GB | 64 GB | 100 GB  | \~$1.10/hr      |

> For a comparison of GPU options on Clore.ai, see the [GPU Comparison Guide](https://docs.clore.ai/guides/getting-started/gpu-comparison).

**Software requirements on the Clore.ai server:**

* Docker Engine (pre-installed on all Clore.ai images)
* NVIDIA Container Toolkit (pre-installed on GPU images, required only for local LLM)
* 10+ GB free disk space for the Langflow image and flow data
* Outbound internet access (for pulling Docker images and reaching external APIs)

***

## Quick Start

### Step 1: Connect to Your Clore.ai Server

Book a server on [Clore.ai marketplace](https://clore.ai). For Langflow-only usage, any server with ≥4 GB RAM works. Connect via SSH:

```bash
ssh root@<server-ip> -p <ssh-port>
```

### Step 2: Run Langflow with Docker

The simplest deployment — single command:

```bash
docker run -d \
  --name langflow \
  --restart unless-stopped \
  -p 7860:7860 \
  langflowai/langflow:latest
```

Wait \~30–60 seconds for startup, then open `http://<server-ip>:7860` in your browser.

### Step 3: Expose Port 7860 on Clore.ai

In the Clore.ai dashboard, navigate to your server → Ports section → ensure `7860` is mapped. If using a custom template, add `7860:7860` to your port configuration before starting the server.

### Step 4: First Launch

On first visit, Langflow will:

1. Show a welcome screen and ask to create an account (or skip)
2. Offer a set of starter templates (RAG, chatbot, agent)
3. Open the visual canvas editor

You're ready to build your first flow!

***

## Configuration

### Persistent Data Storage

By default, Langflow stores flows and data inside the container. Mount a volume to persist across restarts:

```bash
mkdir -p /opt/langflow/data

docker run -d \
  --name langflow \
  --restart unless-stopped \
  -p 7860:7860 \
  -v /opt/langflow/data:/app/langflow \
  -e LANGFLOW_DATABASE_URL=sqlite:////app/langflow/langflow.db \
  langflowai/langflow:latest
```

### Environment Variables Reference

```bash
docker run -d \
  --name langflow \
  --restart unless-stopped \
  -p 7860:7860 \
  -v /opt/langflow/data:/app/langflow \
  -e LANGFLOW_HOST=0.0.0.0 \
  -e LANGFLOW_PORT=7860 \
  -e LANGFLOW_DATABASE_URL=sqlite:////app/langflow/langflow.db \
  -e LANGFLOW_SECRET_KEY=your-secret-key-here \
  -e LANGFLOW_AUTO_LOGIN=false \
  -e LANGFLOW_SUPERUSER=admin \
  -e LANGFLOW_SUPERUSER_PASSWORD=your-password \
  -e LANGFLOW_WORKERS=2 \
  -e LANGFLOW_LOG_LEVEL=info \
  langflowai/langflow:latest
```

| Variable                      | Description                         | Default          |
| ----------------------------- | ----------------------------------- | ---------------- |
| `LANGFLOW_HOST`               | Bind address                        | `0.0.0.0`        |
| `LANGFLOW_PORT`               | Web server port                     | `7860`           |
| `LANGFLOW_DATABASE_URL`       | Database connection string          | SQLite in memory |
| `LANGFLOW_SECRET_KEY`         | Session secret (set for production) | Random           |
| `LANGFLOW_AUTO_LOGIN`         | Skip login screen                   | `true`           |
| `LANGFLOW_SUPERUSER`          | Admin username                      | `admin`          |
| `LANGFLOW_SUPERUSER_PASSWORD` | Admin password                      | —                |
| `LANGFLOW_WORKERS`            | Number of API workers               | `1`              |
| `LANGFLOW_LOG_LEVEL`          | Logging verbosity                   | `critical`       |
| `OPENAI_API_KEY`              | Pre-load OpenAI key                 | —                |

### Using PostgreSQL (Production)

For multi-user or production deployments, use PostgreSQL instead of SQLite:

```bash
# Create a user-defined network so the containers can reach each other by name
# (the legacy --link flag is deprecated)
docker network create langflow-net

# Start PostgreSQL
docker run -d \
  --name langflow-db \
  --network langflow-net \
  --restart unless-stopped \
  -e POSTGRES_USER=langflow \
  -e POSTGRES_PASSWORD=langflow-secret \
  -e POSTGRES_DB=langflow \
  -v langflow-pgdata:/var/lib/postgresql/data \
  postgres:16-alpine

# Start Langflow with the PostgreSQL backend
docker run -d \
  --name langflow \
  --network langflow-net \
  --restart unless-stopped \
  -p 7860:7860 \
  -v /opt/langflow/data:/app/langflow \
  -e LANGFLOW_DATABASE_URL=postgresql://langflow:langflow-secret@langflow-db:5432/langflow \
  -e LANGFLOW_SECRET_KEY=$(openssl rand -hex 32) \
  -e LANGFLOW_AUTO_LOGIN=false \
  -e LANGFLOW_SUPERUSER=admin \
  -e LANGFLOW_SUPERUSER_PASSWORD=changeme \
  langflowai/langflow:latest
```

### Docker Compose (Full Stack)

For a complete setup with PostgreSQL and Nginx reverse proxy:

```yaml
# /opt/langflow/docker-compose.yml

services:
  langflow:
    image: langflowai/langflow:latest
    restart: unless-stopped
    ports:
      - "7860:7860"
    environment:
      - LANGFLOW_DATABASE_URL=postgresql://langflow:secret@db:5432/langflow
      - LANGFLOW_SECRET_KEY=${SECRET_KEY:-changeme}
      - LANGFLOW_AUTO_LOGIN=false
      - LANGFLOW_SUPERUSER=admin
      - LANGFLOW_SUPERUSER_PASSWORD=${ADMIN_PASSWORD:-changeme}
      - LANGFLOW_WORKERS=2
    volumes:
      - langflow-data:/app/langflow
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=langflow
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=langflow
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U langflow"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  langflow-data:
  pgdata:
```

```bash
cd /opt/langflow
SECRET_KEY=$(openssl rand -hex 32) ADMIN_PASSWORD=yourpassword docker compose up -d
```

### Specific Version Pinning

For reproducible deployments, pin a specific version:

```bash
# List available versions: https://hub.docker.com/r/langflowai/langflow/tags
docker run -d \
  --name langflow \
  -p 7860:7860 \
  langflowai/langflow:1.1.4
```

***

## GPU Acceleration (Local Model Integration)

Langflow itself runs on CPU, but connecting it to local GPU-powered services on the same Clore.ai server unlocks free, private inference.

### Connect Langflow to Ollama

```bash
# Step 1: Start Ollama with GPU
docker run -d \
  --name ollama \
  --gpus all \
  --restart unless-stopped \
  -p 11434:11434 \
  -v ollama-models:/root/.ollama \
  ollama/ollama:latest

# Step 2: Pull models for different use cases
docker exec ollama ollama pull llama3.1:8b         # General chat
docker exec ollama ollama pull nomic-embed-text    # Embeddings for RAG
docker exec ollama ollama pull qwen2.5-coder:7b    # Code generation

# Step 3: Start Langflow with Ollama network access
docker run -d \
  --name langflow \
  --restart unless-stopped \
  -p 7860:7860 \
  -v /opt/langflow/data:/app/langflow \
  --add-host host.docker.internal:host-gateway \
  langflowai/langflow:latest
```

In Langflow UI, use the **Ollama** component with:

* Base URL: `http://host.docker.internal:11434`
* Model: `llama3.1:8b`

For embeddings, use the **OllamaEmbeddings** component with:

* Base URL: `http://host.docker.internal:11434`
* Model: `nomic-embed-text`

> Full Ollama configuration: see the [Ollama guide](https://docs.clore.ai/guides/language-models/ollama)

### Connect Langflow to vLLM (OpenAI-compatible)

```bash
# Start vLLM with OpenAI-compatible API
docker run -d \
  --name vllm \
  --gpus all \
  --restart unless-stopped \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model mistralai/Mistral-7B-Instruct-v0.3 \
  --gpu-memory-utilization 0.85

# In Langflow, use the OpenAI component with custom base URL:
# Base URL: http://host.docker.internal:8000/v1
# API Key: (any value, e.g. "none")
# Model: mistralai/Mistral-7B-Instruct-v0.3
```

> Full vLLM configuration: see the [vLLM guide](https://docs.clore.ai/guides/language-models/vllm)

### Building a Local RAG Pipeline

Example RAG flow using only local models (zero API cost):

1. **File Loader** node → Load PDF/text documents
2. **Text Splitter** node → Chunk documents (size: 512, overlap: 50)
3. **OllamaEmbeddings** node → Generate embeddings (model: `nomic-embed-text`)
4. **Chroma** or **FAISS** node → Store vectors locally
5. **OllamaEmbeddings** node → Embed the user's query
6. **Retriever** node → Find top-k similar chunks
7. **Ollama** node → Generate answer (model: `llama3.1:8b`)
8. **Chat Output** node → Return response

This entire pipeline runs on your Clore.ai server with zero external API calls.
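Under the hood, steps 2 and 6 boil down to fixed-size chunking with overlap and cosine-similarity top-k search. A dependency-free Python sketch of just those two pieces (the toy 3-dimensional "embeddings" stand in for real `nomic-embed-text` vectors, which have far more dimensions):

```python
import math

def chunk_text(text: str, size: int = 512, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks with overlap (the Text Splitter step)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def top_k(query_vec: list[float], chunk_vecs: list[list[float]], k: int = 2) -> list[int]:
    """Rank stored chunk vectors by similarity to the query (the Retriever step)."""
    scored = sorted(enumerate(chunk_vecs), key=lambda p: cosine(query_vec, p[1]), reverse=True)
    return [i for i, _ in scored[:k]]

chunks = chunk_text("some long document " * 100)
# Toy 3-d vectors standing in for real embedding output:
vecs = [[1.0, 0.0, 0.1], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]
print(top_k([1.0, 0.0, 0.0], vecs, k=2))  # → [0, 1]
```

The vector-store nodes (Chroma, FAISS) do the same ranking, just with indexed data structures instead of a linear scan.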

***

## Tips & Best Practices

### 1. Export Flows as Backups

Before stopping your Clore.ai server, export your flows:

* In the UI: Flows → Select all → Export → Download JSON
* Or via API: `curl http://localhost:7860/api/v1/flows/` (add an API-key header if authentication is enabled)

Store them in a persistent volume or download to your local machine.
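To script backups, fetch the flow list from the API and write it to a timestamped file. A small Python helper sketch; the backup directory is an arbitrary example, and the flow list is whatever `GET /api/v1/flows/` returned after JSON parsing:

```python
import json
import pathlib
import time

def save_flow_backup(flows: list, backup_dir: str = "/tmp/langflow-backups") -> pathlib.Path:
    """Write an exported flow list to a timestamped JSON file and return its path."""
    out_dir = pathlib.Path(backup_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    out = out_dir / f"flows-{time.strftime('%Y%m%d-%H%M%S')}.json"
    out.write_text(json.dumps(flows, indent=2))
    return out

# Usage (hypothetical): flows = requests.get("http://localhost:7860/api/v1/flows/").json()
#                       save_flow_backup(flows)
```

Run it from cron before scheduled server shutdowns so the latest flows are always on disk.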

### 2. Use the API for Automation

Every Langflow flow generates an API endpoint. Trigger flows programmatically:

```bash
# Get your flow ID from the UI (shown in the URL)
FLOW_ID="your-flow-id-here"

curl -X POST \
  "http://<server-ip>:7860/api/v1/run/$FLOW_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "input_value": "Summarize the latest AI research papers",
    "input_type": "chat",
    "output_type": "chat"
  }'
```
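The same call can be made from Python using only the standard library. The host, port, and payload shape follow the curl example above; if your instance requires authentication, attach your Langflow API key as a header as well:

```python
import json
import urllib.request

def build_run_request(host: str, flow_id: str, message: str,
                      port: int = 7860) -> urllib.request.Request:
    """Build a POST request for Langflow's /api/v1/run/<flow_id> endpoint."""
    payload = {
        "input_value": message,
        "input_type": "chat",
        "output_type": "chat",
    }
    return urllib.request.Request(
        url=f"http://{host}:{port}/api/v1/run/{flow_id}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# 203.0.113.10 is a placeholder documentation IP
req = build_run_request("203.0.113.10", "your-flow-id-here",
                        "Summarize the latest AI research papers")
# To actually send it: response = urllib.request.urlopen(req)
```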

### 3. Secure Your Instance

For anything beyond local testing:

```bash
# Always set authentication
-e LANGFLOW_AUTO_LOGIN=false \
-e LANGFLOW_SUPERUSER=admin \
-e LANGFLOW_SUPERUSER_PASSWORD=$(openssl rand -base64 16)

# Use a strong secret key
-e LANGFLOW_SECRET_KEY=$(openssl rand -hex 32)
```

### 4. Monitor Memory Usage

Langflow can accumulate memory over time with many active flows:

```bash
docker stats langflow
# If memory grows unbounded, restart periodically:
docker restart langflow
```

### 5. Use Starter Templates

Langflow ships with production-ready templates:

* **Basic RAG** — Document Q\&A with vector store
* **Memory Chatbot** — Conversational agent with history
* **Research Assistant** — Web search + LLM synthesis
* Access via: New Flow → Starter Projects

### 6. Component Caching

Enable caching to speed up repeated flow runs:

* In flow settings: Enable "Cache" on expensive nodes (embeddings, LLM calls)
* Particularly useful for RAG retrieval during development

***

## Troubleshooting

### Container Fails to Start

```bash
# Check logs for errors
docker logs langflow --tail 50

# Common issue: port already in use
lsof -i :7860
# Kill the conflicting process or change the port:
docker run ... -p 7861:7860 ...
```

### UI Loads but Flows Don't Run

```bash
# Check worker process status
docker exec langflow ps aux | grep langflow

# Check for Python package errors
docker logs langflow 2>&1 | grep -i error

# Restart the container
docker restart langflow
```

### Can't Connect to Ollama

```bash
# Test connectivity from inside the Langflow container
docker exec langflow curl http://host.docker.internal:11434/api/tags

# If --add-host flag is missing, recreate the container with:
--add-host host.docker.internal:host-gateway

# Verify Ollama is running
docker ps | grep ollama
curl http://localhost:11434/api/tags
```

### Database Errors on Restart

```bash
# If SQLite reports lock errors, stop the container first, then clear the
# stale WAL/SHM files. Note: this can discard writes that were not yet
# checkpointed into the main database file.
docker stop langflow
ls -la /opt/langflow/data/
rm -f /opt/langflow/data/langflow.db-wal
rm -f /opt/langflow/data/langflow.db-shm
docker start langflow
```

### Slow Flow Execution

```bash
# Increase workers for parallel processing
-e LANGFLOW_WORKERS=4

# For embedding-heavy workloads, ensure GPU is used by Ollama:
docker exec ollama nvidia-smi
# Should show GPU utilization when embedding
```

### Reset Admin Password

```bash
# Reset the password using the Langflow CLI inside the container
docker exec -it langflow langflow superuser --username admin --password newpassword
```

***

## Further Reading

* [Langflow GitHub Repository](https://github.com/langflow-ai/langflow) — Source code, issues, changelog
* [Langflow Documentation](https://docs.langflow.org) — Official docs, component reference, API docs
* [Langflow Discord](https://discord.com/invite/EqksyE2EX9) — Community support and flow sharing
* [Ollama on Clore.ai](https://docs.clore.ai/guides/language-models/ollama) — Set up local LLM backend for Langflow
* [vLLM on Clore.ai](https://docs.clore.ai/guides/language-models/vllm) — High-throughput LLM serving for production flows
* [GPU Comparison Guide](https://docs.clore.ai/guides/getting-started/gpu-comparison) — Choose the right Clore.ai GPU for your workload
* [LangChain Documentation](https://python.langchain.com/docs/) — Underlying framework reference
* [Docker Hub: langflowai/langflow](https://hub.docker.com/r/langflowai/langflow) — Available image tags and versions
