Aider AI Coding

Terminal-based AI coding with Aider on Clore.ai — Git-aware, multi-file, local models via Ollama

Aider is a terminal-based AI coding assistant with 39K+ GitHub stars. It edits files directly in your repo, creates Git commits automatically, and supports both cloud APIs (OpenAI, Anthropic) and fully local models via Ollama. On a Clore.ai GPU, you can run large coding models like DeepSeek-R1 32B or Qwen2.5-Coder-32B entirely on your own hardware — private, fast, and cost-effective.

Key Features

  • Terminal-native — works over SSH, perfect for headless Clore.ai servers

  • Git-aware — auto-commits every change with descriptive messages, easy to review and revert

  • Multi-file editing — add multiple files to context and edit them simultaneously

  • Local model support — connect to Ollama for fully private, zero-API-cost coding

  • Architect mode — use a strong reasoning model to plan, then a fast model to implement

  • Repository map — automatically indexes your codebase for context-aware edits

  • Linting and testing — run linters/tests after each edit, auto-fix failures

  • Voice input — dictate coding instructions via microphone

Requirements

| Component | Minimum | Recommended |
|---|---|---|
| GPU | RTX 3060 12 GB | RTX 4090 24 GB |
| VRAM | 12 GB | 24 GB |
| RAM | 16 GB | 32 GB |
| Disk | 30 GB | 60 GB |
| Python | 3.9 | 3.11 |

Clore.ai pricing: RTX 4090 ≈ $0.5–2/day · RTX 3090 ≈ $0.3–1/day · RTX 3060 ≈ $0.15–0.3/day

For cloud-only models (no local inference), a GPU is not required — but Clore.ai GPUs let you run Ollama models locally for full privacy.

Quick Start

1. Install Aider
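
A minimal install on a fresh server, assuming Python 3.9+ and pip are already available:

```shell
# Install Aider into the current Python environment
python -m pip install aider-chat

# Confirm the install worked
aider --version
```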

2. Set Up Ollama for Local Models
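
One way to set up Ollama, assuming a standard Ubuntu image with `curl` available:

```shell
# Install Ollama via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama server in the background (skip if systemd already runs it)
ollama serve &

# Pull a coding model sized for your GPU (a 32B model needs roughly 20 GB VRAM)
ollama pull qwen2.5-coder:32b
```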

3. Start Aider with a Local Model
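
A typical launch, assuming Ollama is serving on its default port and your project is a Git repository (the `ollama_chat/` prefix tells Aider to use Ollama's chat API):

```shell
# Point Aider at the local Ollama server
export OLLAMA_API_BASE=http://127.0.0.1:11434

# Launch Aider inside your repo with the local model
cd ~/my-project
aider --model ollama_chat/qwen2.5-coder:32b
```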

4. Start Coding

Inside the Aider REPL:
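
For example (the filenames here are illustrative):

```
> /add app.py utils.py
> add input validation to the /login endpoint and return 400 on bad payloads
```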

Aider will:

  1. Read the files and understand the codebase

  2. Propose changes as a diff

  3. Apply the changes to disk

  4. Create a Git commit with a descriptive message

Usage Examples

Architect Mode (Two-Model Setup)

Use a strong model for reasoning and a fast model for code generation:

The architect model plans the changes, and the editor model writes the actual code — combining high-quality reasoning with fast implementation.
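
A sketch of the two-model launch; the model names are examples, paired so a strong reasoner plans and a small fast model applies the edits:

```shell
# Reasoning model plans the change; editor model writes the code
aider --architect \
  --model ollama_chat/deepseek-r1:32b \
  --editor-model ollama_chat/qwen2.5-coder:7b
```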

Add Files and Edit
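
Files are added to the chat context with in-REPL commands; Aider only edits files you have added. A sample session (filenames illustrative):

```
> /add src/server.py tests/test_server.py
> refactor the request handler into smaller functions and update the tests
> /diff
> /drop tests/test_server.py
```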

Use with Cloud APIs
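
If you prefer a hosted model, set the provider's API key and name the model. The model names below are examples; run `aider --list-models <keyword>` to see what is currently available:

```shell
# OpenAI
export OPENAI_API_KEY=your-key-here
aider --model gpt-4o

# Anthropic
export ANTHROPIC_API_KEY=your-key-here
aider --model sonnet
```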

Git Integration
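
With auto-commits on (the default), every AI edit lands as its own commit, and the in-REPL Git commands keep you in control:

```
> /diff      show the changes from the last edit
> /undo      revert the last Aider commit
> /commit    commit your own manual edits with a generated message
```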

Lint and Auto-Fix
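
Aider can run a linter and your test suite after each edit and attempt to fix any failures. The lint and test commands shown are examples; substitute your project's own tools:

```shell
# Lint after every edit and auto-fix failures
aider --lint-cmd "flake8 --max-line-length 100" --auto-lint

# Also run the test suite after each change
aider --test-cmd "pytest -q" --auto-test
```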

Non-Interactive (Scripted) Mode
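
For CI jobs or batch scripts, Aider can apply a single instruction and exit without prompting (the target file here is hypothetical):

```shell
# One-shot edit: no REPL, auto-confirm prompts, commit the result
aider --message "add type hints to all public functions" --yes app.py
```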

Model Recommendations

| Model | VRAM | Speed | Quality | Best For |
|---|---|---|---|---|
| deepseek-r1:32b | ~20 GB | Medium | High | Complex refactoring |
| qwen2.5-coder:32b | ~20 GB | Medium | High | Code generation |
| qwen2.5-coder:7b | ~5 GB | Fast | Good | Quick edits, RTX 3060 |
| codellama:34b | ~20 GB | Medium | Good | Legacy code, C/C++ |
| deepseek-coder-v2:16b | ~10 GB | Fast | Good | Balanced performance |

Tips

  • Use /add selectively — only add files Aider needs to see. Too many files waste context tokens

  • Architect mode is powerful for complex changes — the reasoning model catches edge cases the editor model might miss

  • /undo reverts the last change cleanly via Git — experiment freely

  • /diff shows the proposed changes before applying — use for review

  • Set --auto-commits (default) for full Git history of every AI change

  • Use .aiderignore to exclude files from the repo map (node_modules, .venv, etc.)

  • For large repos, Aider's repo map helps the model understand code structure — let it run on first load

  • Run tests after edits: /test pytest catches regressions immediately

Troubleshooting

| Problem | Solution |
|---|---|
| Ollama model too slow | Use a smaller quantization (q4_0) or a smaller model |
| CUDA out of memory with Ollama | Pull a smaller model variant or set OLLAMA_NUM_GPU=0 for CPU |
| Git commit errors | Ensure git config user.email and user.name are set |
| Aider ignores my files | Use /add filename.py explicitly; Aider only edits added files |
| Model produces poor edits | Try a stronger model, or use architect mode |
| Connection refused (Ollama) | Ensure Ollama is running: ollama serve or systemctl start ollama |
| Context window exceeded | Remove files with /drop, keep only relevant ones |
