# n8n AI Workflows

## Overview

[n8n](https://github.com/n8n-io/n8n) is a fair-code workflow automation platform with **55K+ GitHub stars**. Unlike fully closed-source alternatives (Zapier, Make), n8n lets you self-host the entire stack — with complete data control — while offering native AI agent capabilities, a JavaScript/Python code node, and a growing library of 400+ integrations.

On **Clore.ai**, n8n itself runs on CPU (no GPU required), but pairs powerfully with GPU-accelerated services like Ollama or vLLM running on the same server, giving you a completely local AI automation stack. You can have a production n8n instance running for about **$0.05/hr** on its own, or roughly **$0.20/hr** with a local LLM alongside it.

**Key capabilities:**

* 🔗 **400+ integrations** — Slack, GitHub, Google Sheets, Postgres, HTTP Request, webhooks, and much more
* 🤖 **AI Agent nodes** — built-in LangChain-powered agents with tool use and memory
* 💻 **Code nodes** — run arbitrary JavaScript or Python inline in workflows
* 🔄 **Trigger variety** — webhooks, cron schedules, database polling, email, queue events
* 📊 **Sub-workflows** — modular, reusable workflow components
* 🔐 **Credential vault** — encrypted storage for API keys and OAuth tokens
* 🏠 **Self-hosted** — your data never leaves your server

***

## Requirements

n8n is a Node.js application packaged as a Docker image. It is **CPU-only** — no GPU needed for the automation engine itself. A GPU becomes useful only if you're running a local LLM alongside it (e.g. Ollama).

| Configuration             | GPU        | VRAM  | System RAM | Disk   | Clore.ai Price   |
| ------------------------- | ---------- | ----- | ---------- | ------ | ---------------- |
| **Minimal** (n8n only)    | None / CPU | —     | 2 GB       | 10 GB  | \~$0.03/hr (CPU) |
| **Standard**              | None / CPU | —     | 4 GB       | 20 GB  | \~$0.05/hr       |
| **+ Local LLM (Ollama)**  | RTX 3090   | 24 GB | 16 GB      | 60 GB  | \~$0.20/hr       |
| **+ High-throughput LLM** | A100 40 GB | 40 GB | 32 GB      | 100 GB | \~$0.80/hr       |
| **AI Starter Kit (full)** | RTX 4090   | 24 GB | 32 GB      | 100 GB | \~$0.35/hr       |

> **Tip:** The [n8n Self-hosted AI Starter Kit](https://github.com/n8n-io/self-hosted-ai-starter-kit) bundles n8n + Ollama + Qdrant + PostgreSQL into one Docker Compose stack. See [AI Starter Kit](#ai-starter-kit-recommended) below.

***

## Quick Start

### 1. Rent a Clore.ai server

Log in to [clore.ai](https://clore.ai) and deploy a server:

* **CPU-only instance** if you only need n8n automation
* **RTX 3090/4090** if you want local LLMs via Ollama
* Expose port **5678** in the offer's port mapping settings
* Enable SSH access

### 2. Connect to the server

```bash
ssh root@<clore-server-ip> -p <ssh-port>

# Verify Docker is installed
docker --version
docker compose version
```
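Most Clore.ai images ship with Docker preinstalled. If either command above fails, Docker's official convenience script installs the engine and the Compose plugin (this assumes a Debian/Ubuntu-based image; adjust for other distros):

```bash
# Install Docker only if it's missing -- the guard keeps this safe to
# re-run on images that already ship Docker
if ! command -v docker >/dev/null 2>&1; then
  curl -fsSL https://get.docker.com | sh
fi
```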

### 3. Option A — Minimal single-container start

The fastest way to get n8n running:

```bash
# Create a named volume for persistence
docker volume create n8n_data

# Run n8n
docker run -d \
  --name n8n \
  --restart unless-stopped \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -e N8N_HOST=<clore-server-ip> \
  -e N8N_PORT=5678 \
  -e N8N_PROTOCOL=http \
  -e WEBHOOK_URL=http://<clore-server-ip>:5678/ \
  docker.n8n.io/n8nio/n8n

# Check logs
docker logs -f n8n
```

Access the UI at `http://<clore-server-ip>:5678`
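Before opening the browser, you can confirm readiness from the shell; n8n exposes a `/healthz` endpoint for health checks:

```bash
# Poll until n8n's health endpoint answers
until curl -fsS http://<clore-server-ip>:5678/healthz >/dev/null; do
  sleep 2
done
echo "n8n is up"
```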

### 4. Option B — Docker Compose with Postgres (production)

For production use, replace the default SQLite with Postgres:

```bash
mkdir n8n-prod && cd n8n-prod
cat > docker-compose.yml << 'EOF'
version: "3.8"

services:
  postgres:
    image: postgres:15-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: n8npassword   # Change this!
      POSTGRES_DB: n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    ports:
      - "5678:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: n8npassword
      N8N_HOST: <clore-server-ip>
      N8N_PORT: 5678
      N8N_PROTOCOL: http
      WEBHOOK_URL: http://<clore-server-ip>:5678/
      N8N_ENCRYPTION_KEY: your-32-char-encryption-key-here
      EXECUTIONS_MODE: regular
      # Note: the N8N_BASIC_AUTH_* variables apply to pre-1.0 n8n only;
      # current versions use built-in user management (set up on first login)
      N8N_BASIC_AUTH_ACTIVE: "true"
      N8N_BASIC_AUTH_USER: admin
      N8N_BASIC_AUTH_PASSWORD: changeme!   # Change this!
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  postgres_data:
  n8n_data:
EOF

docker compose up -d
docker compose logs -f n8n
```
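Rather than leaving `n8npassword` and the placeholder encryption key in place, generate real secrets up front and substitute them in (the `sed` patterns below assume the exact placeholder strings used in the file above):

```bash
# 32 hex characters, matching the encryption key format n8n expects
ENC_KEY=$(openssl rand -hex 16)
DB_PASS=$(openssl rand -hex 12)

# Substitute the placeholders in the compose file
sed -i "s/your-32-char-encryption-key-here/${ENC_KEY}/" docker-compose.yml
sed -i "s/n8npassword/${DB_PASS}/g" docker-compose.yml
```

Keep a copy of `N8N_ENCRYPTION_KEY` somewhere safe: stored credentials become unreadable if it is lost.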

***

## AI Starter Kit (Recommended)

The [n8n Self-hosted AI Starter Kit](https://github.com/n8n-io/self-hosted-ai-starter-kit) is the fastest path to a full local AI automation stack. It ships:

* **n8n** — workflow automation
* **Ollama** — local LLM inference (GPU or CPU)
* **Qdrant** — vector database for RAG
* **PostgreSQL** — persistent storage

```bash
# Clone the starter kit
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit

# For GPU-enabled servers (RTX 3090, 4090, A100, etc.)
docker compose --profile gpu-nvidia up -d

# For CPU-only servers
docker compose --profile cpu up -d

# Monitor startup
docker compose logs -f

# Pull a model into Ollama (after stack starts)
docker exec ollama ollama pull llama3:8b
docker exec ollama ollama pull nomic-embed-text  # For embeddings
```

Services after startup:

| Service    | URL                          |
| ---------- | ---------------------------- |
| n8n UI     | `http://<ip>:5678`           |
| Ollama API | `http://<ip>:11434`          |
| Qdrant UI  | `http://<ip>:6333/dashboard` |

> **Note:** Ollama with GPU requires the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html), which Clore.ai servers have pre-installed.
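A quick smoke test of all three services, using the ports from the table above:

```bash
curl -fsS http://<ip>:5678/healthz       # n8n health check
curl -s   http://<ip>:11434/api/tags     # Ollama: JSON list of pulled models
curl -s   http://<ip>:6333/collections   # Qdrant: JSON list of collections
```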

***

## Configuration

### Environment variables reference

```bash
# ── Core settings ────────────────────────────────────────────────────────────
N8N_HOST=0.0.0.0                  # Bind address
N8N_PORT=5678                     # Port to listen on
N8N_PROTOCOL=http                 # http or https
WEBHOOK_URL=http://<ip>:5678/     # Public webhook base URL (critical!)

# ── Encryption ───────────────────────────────────────────────────────────────
N8N_ENCRYPTION_KEY=<32-char-key>  # Encrypts stored credentials
# Generate: openssl rand -hex 16

# ── Authentication ───────────────────────────────────────────────────────────
# Basic-auth vars apply to pre-1.0 n8n; 1.0+ uses built-in user management
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=your-secure-password

# ── Execution settings ───────────────────────────────────────────────────────
EXECUTIONS_MODE=regular           # regular (single) or queue (Redis-backed)
EXECUTIONS_DATA_SAVE_ON_SUCCESS=all
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_MAX_AGE=336       # Keep execution data for 14 days

# ── Timezone ─────────────────────────────────────────────────────────────────
GENERIC_TIMEZONE=UTC

# ── Disable telemetry (optional) ─────────────────────────────────────────────
N8N_DIAGNOSTICS_ENABLED=false
N8N_VERSION_NOTIFICATIONS_ENABLED=false
```

### Connecting n8n to Ollama for AI agents

Once Ollama is running on the same server:

1. In n8n, add a new credential: **Ollama API**
   * Base URL: `http://ollama:11434` when both containers share a Compose network; from a standalone n8n container, use `http://host.docker.internal:11434` or the server's IP (`localhost` inside the n8n container points at n8n itself, not Ollama)
2. In a workflow, add an **AI Agent** node
3. Under **Chat Model**, select Ollama and choose your model (e.g. `llama3:8b`)
4. Add tools like **HTTP Request**, **Postgres**, or **Code** nodes
5. Execute!
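Before wiring up the Agent node, it helps to confirm the model answers at all. A one-off, non-streaming request against Ollama's `/api/generate` endpoint, run from the server shell (assumes `llama3:8b` has already been pulled):

```bash
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3:8b",
  "prompt": "Reply with the single word: pong",
  "stream": false
}'
```

If this hangs or errors, fix Ollama first; the AI Agent node will not work until this call does.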

***

## Tips & Best Practices

### Cost optimization on Clore.ai

```bash
# Export all workflows before stopping your instance
# Use n8n CLI inside the container:
docker exec n8n n8n export:workflow --all --output=/home/node/.n8n/workflows-backup.json

# Copy backup to local machine
docker cp n8n:/home/node/.n8n/workflows-backup.json ./n8n-workflows-backup.json

# Import on a new instance:
docker exec n8n n8n import:workflow --input=/home/node/.n8n/workflows-backup.json
```
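Since Clore.ai instances are disposable, automating the export pays off. A minimal sketch (the backup directory and cron schedule are assumptions; adjust to taste):

```bash
#!/bin/sh
# backup-n8n.sh -- timestamped workflow export copied out of the container
STAMP=$(date +%Y%m%d-%H%M)
BACKUP_DIR=/root/n8n-backups          # hypothetical location
mkdir -p "$BACKUP_DIR"

docker exec n8n n8n export:workflow --all \
  --output="/home/node/.n8n/workflows-${STAMP}.json"
docker cp "n8n:/home/node/.n8n/workflows-${STAMP}.json" "$BACKUP_DIR/"

# Run nightly via cron, e.g.:
#   0 3 * * * /root/backup-n8n.sh
```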

### Webhook reliability on Clore.ai

Clore.ai servers have dynamic IPs. If your webhooks break after a redeploy:

```bash
# 1. Use a static domain with Caddy + Let's Encrypt
# Caddyfile:
# n8n.yourdomain.com {
#     reverse_proxy n8n:5678
# }

# 2. Or use Cloudflare Tunnel (free, no open ports needed):
docker run -d \
  --name cloudflared \
  --network n8n-prod_default \
  cloudflare/cloudflared:latest \
  tunnel --no-autoupdate run \
  --token <your-cloudflare-tunnel-token>
```

### Queue mode for high-volume workflows

```yaml
# Add Redis and run n8n in queue mode for parallel execution.
# Merge these services into docker-compose.yml:
  redis:
    image: redis:7-alpine
    restart: unless-stopped

  n8n-worker:
    image: docker.n8n.io/n8nio/n8n
    command: worker
    depends_on: [redis, postgres]
    environment:
      <<: *n8n-env   # assumes a shared anchor (e.g. "x-n8n-env: &n8n-env") defined at the top of the file
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
```
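Once the worker service exists, Compose can scale it horizontally (using the service name `n8n-worker` from the snippet above):

```bash
# Start (or resize to) three parallel workers
docker compose up -d --scale n8n-worker=3
```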

### Useful n8n CLI commands

```bash
# List all workflows
docker exec n8n n8n list:workflow

# Run a specific workflow manually
docker exec n8n n8n execute --id=<workflow-id>

# Update n8n to latest version
docker pull docker.n8n.io/n8nio/n8n
docker compose up -d n8n

# Check n8n version
docker exec n8n n8n --version
```

### Security hardening

```bash
# Run n8n behind a reverse proxy with HTTPS
# Never expose port 5678 directly in production
# Use N8N_BASIC_AUTH or configure SSO via LDAP/SAML

# Restrict which nodes are allowed (whitelist mode):
NODE_FUNCTION_ALLOW_EXTERNAL=axios,lodash
NODES_EXCLUDE='["n8n-nodes-base.executeCommand"]'  # Block shell execution
```
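If your offer gives you a full VM with `ufw` available (an assumption: some Clore.ai offers are container-based and manage ports at the platform level), closing 5678 at the host complements the reverse proxy:

```bash
ufw allow OpenSSH
ufw allow 443/tcp      # HTTPS via the reverse proxy
ufw deny 5678/tcp      # block direct access to n8n
ufw --force enable
```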

***

## Troubleshooting

### n8n container exits immediately

```bash
docker logs n8n

# Common issues:
# 1. DB connection failed — check Postgres is healthy
docker compose ps postgres

# 2. Encryption key mismatch — if you changed N8N_ENCRYPTION_KEY
#    Credentials become unreadable; re-enter them after key change

# 3. Port conflict
ss -tlnp | grep 5678
```

### Webhooks return 404

```bash
# Ensure WEBHOOK_URL matches the public address your webhook senders use
# It must include the trailing slash
WEBHOOK_URL=http://<your-public-ip>:5678/

# Restart n8n after changing this env var
docker compose restart n8n
```
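You can confirm the registered path from the shell: production webhooks live under `/webhook/<path>`, while `/webhook-test/<path>` only works while the workflow is open in test mode. `<your-path>` below is a placeholder for whatever path your Webhook node defines:

```bash
# A response body means the webhook is registered; 404 means wrong URL
# or the workflow is not active
curl -i -X POST "http://<your-public-ip>:5678/webhook/<your-path>"
```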

### AI Agent node can't reach Ollama

```bash
# Test from within the n8n container (same Compose network)
docker exec n8n wget -qO- http://ollama:11434/api/tags

# Or test Ollama directly from the host shell
curl http://localhost:11434/api/tags

# If using separate containers not in same Compose network:
# Use host.docker.internal or the server's LAN IP
```

### "ENOSPC: no space left on device"

```bash
# Check disk usage
df -h
docker system df

# Prune old execution data in n8n UI:
# Settings → Executions → Delete old executions

# Or set auto-pruning:
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=72   # 72 hours
```

### Slow workflow execution

```bash
# Check CPU usage
top -c

# Enable queue mode for parallel execution (see Tips above)
# Increase Node.js memory if needed:
NODE_OPTIONS=--max-old-space-size=4096
```

***

## Further Reading

* [n8n Official Documentation](https://docs.n8n.io)
* [n8n GitHub Repository](https://github.com/n8n-io/n8n)
* [n8n Self-hosted AI Starter Kit](https://github.com/n8n-io/self-hosted-ai-starter-kit)
* [n8n Docker Installation Guide](https://docs.n8n.io/hosting/installation/docker/)
* [Running Ollama on Clore.ai](https://docs.clore.ai/guides/language-models/ollama)
* [Clore.ai GPU Comparison](https://docs.clore.ai/guides/getting-started/gpu-comparison)
* [n8n Community Forum](https://community.n8n.io)
