n8n AI Workflows

Deploy n8n on Clore.ai — self-host a powerful AI workflow automation platform with 400+ integrations and native LLM agent support for pennies per hour.

Overview

n8n is a fair-code workflow automation platform with 55K+ GitHub stars. Unlike fully closed-source alternatives (Zapier, Make), n8n lets you self-host the entire stack — with complete data control — while offering native AI agent capabilities, a JavaScript/Python code node, and a growing library of 400+ integrations.

On Clore.ai, n8n itself runs on CPU (no GPU required), but pairs powerfully with GPU-accelerated services like Ollama or vLLM running on the same server, giving you a completely local AI automation stack. A production n8n instance costs roughly $0.05–0.20/hr, depending on whether you attach a GPU.

Key capabilities:

  • 🔗 400+ integrations — Slack, GitHub, Google Sheets, Postgres, HTTP Request, webhooks, and much more

  • 🤖 AI Agent nodes — built-in LangChain-powered agents with tool use and memory

  • 💻 Code nodes — run arbitrary JavaScript or Python inline in workflows

  • 🔄 Trigger variety — webhooks, cron schedules, database polling, email, queue events

  • 📊 Sub-workflows — modular, reusable workflow components

  • 🔐 Credential vault — encrypted storage for API keys and OAuth tokens

  • 🏠 Self-hosted — your data never leaves your server


Requirements

n8n is a Node.js application packaged as a Docker image. It is CPU-only — no GPU needed for the automation engine itself. A GPU becomes useful only if you're running a local LLM alongside it (e.g. Ollama).

| Configuration | GPU | VRAM | System RAM | Disk | Clore.ai Price |
| --- | --- | --- | --- | --- | --- |
| Minimal (n8n only) | None / CPU | N/A | 2 GB | 10 GB | ~$0.03/hr |
| Standard | None / CPU | N/A | 4 GB | 20 GB | ~$0.05/hr |
| + Local LLM (Ollama) | RTX 3090 | 24 GB | 16 GB | 60 GB | ~$0.20/hr |
| + High-throughput LLM | A100 40 GB | 40 GB | 32 GB | 100 GB | ~$0.80/hr |
| AI Starter Kit (full) | RTX 4090 | 24 GB | 32 GB | 100 GB | ~$0.35/hr |

Tip: The n8n Self-hosted AI Starter Kit bundles n8n + Ollama + Qdrant + PostgreSQL into one Docker Compose stack. See AI Starter Kit below.


Quick Start

1. Rent a Clore.ai server

Log in to clore.ai and deploy a server:

  • CPU-only instance if you only need n8n automation

  • RTX 3090/4090 if you want local LLMs via Ollama

  • Expose port 5678 in the offer's port mapping settings

  • Enable SSH access

2. Connect to the server
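Use the SSH command shown on your Clore.ai order page. A minimal sketch (host, port, and key path are placeholders):

```bash
# Connect with the SSH details Clore.ai displays for your rented server
ssh -i ~/.ssh/<your-key> -p <ssh-port> root@<clore-server-ip>
```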

3. Option A — Minimal single-container start

The fastest way to get n8n running:
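A sketch using the official n8n Docker image (the volume name and timezone are assumptions; adjust to taste):

```bash
# Persistent volume so workflows and credentials survive restarts
docker volume create n8n_data

# Run n8n on port 5678 with data persisted to the volume
docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -e GENERIC_TIMEZONE="UTC" \
  -e N8N_SECURE_COOKIE=false \
  docker.n8n.io/n8nio/n8n
```

N8N_SECURE_COOKIE=false allows login over plain HTTP; remove it once you put a TLS reverse proxy in front.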

Access the UI at http://<clore-server-ip>:5678 and create the owner account on first load.

4. Option B — Docker Compose with Postgres (production)

For production use, replace the default SQLite with Postgres:
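A minimal sketch of such a stack (service names, passwords, and volume names are illustrative; generate your own strong password and a stable N8N_ENCRYPTION_KEY):

```yaml
# docker-compose.yml: n8n backed by Postgres instead of SQLite
services:
  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: changeme          # use a strong password
      POSTGRES_DB: n8n
    volumes:
      - pg_data:/var/lib/postgresql/data

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: "5432"
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: changeme     # must match POSTGRES_PASSWORD
      N8N_ENCRYPTION_KEY: replace-me       # keep stable across redeploys
      N8N_SECURE_COOKIE: "false"           # plain HTTP; add TLS in front for production
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

volumes:
  pg_data:
  n8n_data:
```

Bring it up with docker compose up -d.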


AI Starter Kit

The n8n Self-hosted AI Starter Kit is the fastest path to a full local AI automation stack; startup commands follow the service list below. It ships:

  • n8n — workflow automation

  • Ollama — local LLM inference (GPU or CPU)

  • Qdrant — vector database for RAG

  • PostgreSQL — persistent storage
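
A clone-and-run sketch, assuming an NVIDIA GPU (the kit's README also documents cpu and gpu-amd Compose profiles):

```bash
# Fetch the starter kit and launch the NVIDIA GPU profile
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose --profile gpu-nvidia up -d
```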

Services after startup:

| Service | URL |
| --- | --- |
| n8n UI | http://<ip>:5678 |
| Ollama API | http://<ip>:11434 |
| Qdrant UI | http://<ip>:6333/dashboard |

Note: Ollama with GPU requires the NVIDIA Container Toolkit, which Clore.ai servers have pre-installed.


Configuration

Environment variables reference
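A commonly used subset, with illustrative values (all are documented n8n variables):

```bash
# Core server settings
N8N_HOST=0.0.0.0                    # interface n8n binds to
N8N_PORT=5678                       # HTTP port (match your Clore.ai port mapping)
N8N_PROTOCOL=http                   # http or https

# Public base URL so webhook nodes register reachable addresses
WEBHOOK_URL=http://<clore-server-ip>:5678/

# Persistence and security
N8N_ENCRYPTION_KEY=<random-string>  # keep stable, or stored credentials become unreadable
GENERIC_TIMEZONE=UTC                # timezone used by schedule/cron triggers

# Execution history housekeeping
EXECUTIONS_DATA_PRUNE=true          # prune old execution data automatically
EXECUTIONS_DATA_MAX_AGE=168         # hours of history to keep
```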

Connecting n8n to Ollama for AI agents

Once Ollama is running on the same server:

  1. In n8n, add a new credential: Ollama API

    • Base URL: http://ollama:11434 (if using Compose) or http://localhost:11434

  2. In a workflow, add an AI Agent node

  3. Under Chat Model, select Ollama and choose your model (e.g. llama3:8b; pull it first, as sketched after this list)

  4. Add tools like HTTP Request, Postgres, or Code nodes

  5. Execute!
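The model selected in step 3 must already exist locally. A pull-and-verify sketch, assuming the Ollama container is named ollama:

```bash
# Pull the model into Ollama so the AI Agent node can select it
docker exec -it ollama ollama pull llama3:8b

# Smoke-test the Ollama API: lists locally available models
curl http://localhost:11434/api/tags
```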


Tips & Best Practices

Cost optimization on Clore.ai
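Grounded in the pricing table above:

  • Rent a CPU-only offer (~$0.03–0.05/hr) when you only need the automation engine; add a GPU offer only for local LLM inference.

  • Stop the instance when workflows are idle; pricing is per hour, so you only pay while it runs.

  • Export your workflows (see Useful n8n CLI commands below) before tearing an instance down so you can redeploy cheaply later.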

Webhook reliability on Clore.ai

Clore.ai servers have dynamic IPs. If your webhooks break after a redeploy:
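Give n8n a stable public base URL, e.g. a dynamic-DNS hostname pointed at the server, and restart so webhook nodes re-register under it (the domain below is a placeholder):

```bash
# Recreate the container with an explicit public webhook base URL
docker stop n8n && docker rm n8n
docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -e WEBHOOK_URL="http://<your-ddns-domain>:5678/" \
  docker.n8n.io/n8nio/n8n
```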

Queue mode for high-volume workflows
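For high execution volume, n8n's documented queue mode splits work between a main instance and workers over Redis. A sketch (container names are illustrative; both n8n containers also need the shared Postgres settings from Option B and the same N8N_ENCRYPTION_KEY, omitted here for brevity):

```bash
# Redis brokers jobs between the main instance and workers
docker run -d --name redis -p 6379:6379 redis:7

# Main instance enqueues executions instead of running them inline
docker run -d --name n8n-main \
  -p 5678:5678 \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=<redis-host> \
  docker.n8n.io/n8nio/n8n

# One or more workers pull jobs off the queue
docker run -d --name n8n-worker \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=<redis-host> \
  docker.n8n.io/n8nio/n8n worker
```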

Useful n8n CLI commands
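A few commands worth knowing, run inside the container (paths are illustrative):

```bash
# Export all workflows and credentials for backup/migration
docker exec -it n8n n8n export:workflow --all --output=/home/node/.n8n/workflows.json
docker exec -it n8n n8n export:credentials --all --output=/home/node/.n8n/credentials.json

# Import them on a fresh server
docker exec -it n8n n8n import:workflow --input=/home/node/.n8n/workflows.json

# Reset user management if you lock yourself out
docker exec -it n8n n8n user-management:reset
```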

Security hardening
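At minimum: never reuse example passwords, keep N8N_ENCRYPTION_KEY secret and stable, expose only the ports you need in the Clore.ai offer settings, and terminate TLS in front of n8n before handling real credentials. A small sketch (assumes openssl is available; the firewall step applies only if you control the host firewall):

```bash
# Generate a strong encryption key once; reuse it on every redeploy
openssl rand -hex 32

# If you manage the host firewall, allow only SSH and n8n
ufw allow 22/tcp && ufw allow 5678/tcp && ufw enable
```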


Troubleshooting

n8n container exits immediately
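Check the logs first; with bind-mounted data directories, a frequent culprit is ownership, since the image runs as UID 1000:

```bash
docker logs n8n                      # look for EACCES / permission errors
sudo chown -R 1000:1000 ./n8n-data   # only if you bind-mounted a host directory
```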

Webhooks return 404
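Production webhook URLs (/webhook/...) only respond while the workflow is activated; test URLs (/webhook-test/...) only work while you are listening in the editor. Also confirm WEBHOOK_URL matches the address you are calling:

```bash
# Should return your workflow's response once the workflow is activated
curl -i http://<clore-server-ip>:5678/webhook/<path>
```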

AI Agent node can't reach Ollama
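Inside the n8n container, localhost refers to the container itself, not the host. Use the Compose service name (http://ollama:11434) or, if Ollama runs on the host, http://host.docker.internal:11434 (on Linux, add --add-host=host.docker.internal:host-gateway to the n8n container). A reachability check from n8n's network namespace, assuming the service is named ollama:

```bash
docker exec -it n8n wget -qO- http://ollama:11434/api/tags
```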

"ENOSPC: no space left on device"

Slow workflow execution
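Common causes on budget instances: SQLite under concurrent load (switch to the Postgres setup in Option B), large binary payloads held in memory, and undersized CPU offers. One documented lever is streaming binary data to disk (add to your container environment):

```bash
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
```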


Further Reading
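
  • n8n documentation: https://docs.n8n.io

  • n8n GitHub repository: https://github.com/n8n-io/n8n

  • Self-hosted AI Starter Kit: https://github.com/n8n-io/self-hosted-ai-starter-kit

  • Ollama: https://ollama.com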
