# Aider AI Coding

Aider is a terminal-based AI coding assistant with 39K+ GitHub stars. It edits files directly in your repo, creates Git commits automatically, and supports both cloud APIs (OpenAI, Anthropic) and fully local models via Ollama. On a Clore.ai GPU, you can run large coding models like DeepSeek-R1 32B or Qwen2.5-Coder-32B entirely on hardware you control — private, fast, and cost-effective.

{% hint style="success" %}
All examples run on GPU servers rented through the [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Key Features

* **Terminal-native** — works over SSH, perfect for headless Clore.ai servers
* **Git-aware** — auto-commits every change with descriptive messages, easy to review and revert
* **Multi-file editing** — add multiple files to context and edit them simultaneously
* **Local model support** — connect to Ollama for fully private, zero-API-cost coding
* **Architect mode** — use a strong reasoning model to plan, then a fast model to implement
* **Repository map** — automatically indexes your codebase for context-aware edits
* **Linting and testing** — run linters/tests after each edit, auto-fix failures
* **Voice input** — dictate coding instructions via microphone

## Requirements

| Component | Minimum        | Recommended    |
| --------- | -------------- | -------------- |
| GPU       | RTX 3060 12 GB | RTX 4090 24 GB |
| VRAM      | 12 GB          | 24 GB          |
| RAM       | 16 GB          | 32 GB          |
| Disk      | 30 GB          | 60 GB          |
| Python    | 3.9            | 3.11           |

**Clore.ai pricing:** RTX 4090 ≈ $0.5–2/day · RTX 3090 ≈ $0.3–1/day · RTX 3060 ≈ $0.15–0.3/day

For cloud-only models (no local inference), a GPU is not required — but Clore.ai GPUs let you run Ollama models locally for full privacy.

## Quick Start

### 1. Install Aider

```bash
pip install aider-chat
```
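
To confirm the install before starting a session, you can print the installed version and browse the available flags:

```bash
# Verify the installation
aider --version

# List all command-line options
aider --help
```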

### 2. Set Up Ollama for Local Models

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a coding model
ollama pull deepseek-r1:32b

# Or a smaller model for lower VRAM
ollama pull qwen2.5-coder:7b
```
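
Before pointing Aider at the model, it is worth confirming that Ollama can load it and that it fits in VRAM. A quick sanity check with the models pulled above:

```bash
# List the models Ollama has downloaded
ollama list

# Load the model and run a one-off prompt to confirm it responds
ollama run qwen2.5-coder:7b "Write a Python function that reverses a string."

# Watch VRAM usage while the model is loaded
nvidia-smi
```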

### 3. Start Aider with a Local Model

```bash
cd /workspace/your-project

# Use DeepSeek-R1 32B via Ollama (needs ~20 GB VRAM)
aider --model ollama/deepseek-r1:32b

# Or use a smaller model on RTX 3060
aider --model ollama/qwen2.5-coder:7b
```
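
By default Aider expects Ollama on `http://127.0.0.1:11434`. If your Ollama server listens on a different port, or runs on another Clore.ai instance, point Aider at it with the `OLLAMA_API_BASE` environment variable (the remote address below is a placeholder):

```bash
# Default local endpoint
export OLLAMA_API_BASE=http://127.0.0.1:11434

# Or a remote Ollama server (replace with your server's address)
export OLLAMA_API_BASE=http://203.0.113.10:11434

aider --model ollama/deepseek-r1:32b
```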

### 4. Start Coding

Inside the Aider REPL:

```
> /add src/main.py src/utils.py
> Add error handling to the parse_config function and write unit tests for it
```

Aider will:

1. Read the files and understand the codebase
2. Propose changes as a diff
3. Apply the changes to disk
4. Create a Git commit with a descriptive message

## Usage Examples

### Architect Mode (Two-Model Setup)

Use a strong model for reasoning and a fast model for code generation:

```bash
# Local architect (reasoning) + local editor (code writing), both served by Ollama
aider --architect --model ollama/deepseek-r1:32b --editor-model ollama/qwen2.5-coder:7b
```

The architect model plans the changes, and the editor model writes the actual code — combining high-quality reasoning with fast implementation.
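
If you use this two-model setup regularly, the flags can live in Aider's config file instead of the command line. Aider reads `.aider.conf.yml` from the repo root, with keys mirroring the CLI flags; a minimal sketch, written here with a shell heredoc:

```bash
# Create a minimal .aider.conf.yml (keys mirror the CLI flags)
cat > .aider.conf.yml << 'EOF'
architect: true
model: ollama/deepseek-r1:32b
editor-model: ollama/qwen2.5-coder:7b
auto-commits: true
EOF

# A plain `aider` now picks up the architect setup automatically
aider
```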

### Add Files and Edit

```
# Add specific files to the chat context
> /add src/api/routes.py src/models/user.py

# Ask for changes
> Refactor the user registration endpoint to use async/await and add input validation with Pydantic

# Add a whole directory
> /add src/tests/

# Run tests after the edit
> /test pytest src/tests/ -v
```

### Use with Cloud APIs

```bash
# OpenAI
export OPENAI_API_KEY=sk-...
aider --model gpt-4o

# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...
aider --model claude-sonnet-4-20250514
```
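
To avoid re-exporting keys on every SSH session, Aider also reads a `.env` file from the project directory. Keep it out of version control; a minimal sketch:

```bash
# Store keys in .env (and make sure Git ignores it)
cat > .env << 'EOF'
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
EOF
echo ".env" >> .gitignore

aider --model gpt-4o
```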

### Git Integration

```bash
# Every change creates a commit automatically
git log --oneline -5
# a1b2c3d aider: Add error handling to parse_config
# d4e5f6g aider: Write unit tests for parse_config
# h7i8j9k aider: Refactor user registration endpoint

# Undo the last aider commit: run /undo inside the Aider chat
```
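
Because every edit is an ordinary Git commit, older changes can be reviewed or rolled back with plain Git as well, not only the most recent one. Using the example hashes from the log above:

```bash
# Inspect what a specific aider commit changed
git show d4e5f6g

# Revert that single commit without touching later ones
git revert d4e5f6g
```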

### Lint and Auto-Fix

```bash
# Configure a linter
aider --lint-cmd "ruff check --fix" --auto-lint

# Aider runs the linter after each edit and auto-fixes issues
```
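
The same pattern works for tests: Aider accepts a test command and can run it after each edit, feeding any failures back to the model. A sketch using pytest, assuming your tests live under `src/tests/`:

```bash
# Run the test suite after every edit and let Aider try to fix failures
aider --model ollama/qwen2.5-coder:7b \
  --test-cmd "pytest src/tests/ -q" \
  --auto-test
```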

### Non-Interactive (Scripted) Mode

```bash
# Run a single instruction and exit
aider --model ollama/deepseek-r1:32b \
  --message "Add type hints to all functions in src/utils.py" \
  --yes  # auto-accept changes
```
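
Scripted mode combines well with shell loops for repetitive changes across many files. A minimal sketch (the file list and instruction are placeholders for your own):

```bash
# One aider run per file, applying the same instruction to each
for f in src/utils.py src/api/routes.py src/models/user.py; do
  aider --model ollama/qwen2.5-coder:7b \
    --message "Add type hints to all functions in $f" \
    --yes "$f"
done
```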

## Model Recommendations

| Model                 | VRAM    | Speed  | Quality | Best For              |
| --------------------- | ------- | ------ | ------- | --------------------- |
| deepseek-r1:32b       | \~20 GB | Medium | High    | Complex refactoring   |
| qwen2.5-coder:32b     | \~20 GB | Medium | High    | Code generation       |
| qwen2.5-coder:7b      | \~5 GB  | Fast   | Good    | Quick edits, RTX 3060 |
| codellama:34b         | \~20 GB | Medium | Good    | Legacy code, C/C++    |
| deepseek-coder-v2:16b | \~10 GB | Fast   | Good    | Balanced performance  |

## Tips

* **Use `/add` selectively** — only add files Aider needs to see. Too many files waste context tokens
* **Architect mode** is powerful for complex changes — the reasoning model catches edge cases the editor model might miss
* **`/undo`** reverts the last change cleanly via Git — experiment freely
* **`/diff`** shows the proposed changes before applying — use for review
* **Set `--auto-commits`** (default) for full Git history of every AI change
* **Use `.aiderignore`** to exclude files from the repo map (node\_modules, .venv, etc.); a sample file follows this list
* **For large repos**, Aider's repo map helps the model understand code structure — let it run on first load
* **Run tests after edits** — `/test pytest` catches regressions immediately
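
As mentioned above, a `.aiderignore` file keeps noise out of the repo map. It uses the same pattern syntax as `.gitignore`; a typical starting point for a Python project:

```bash
# Create a .aiderignore in the repo root (gitignore-style patterns)
cat > .aiderignore << 'EOF'
.venv/
node_modules/
__pycache__/
*.log
data/
EOF
```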

## Troubleshooting

| Problem                          | Solution                                                             |
| -------------------------------- | -------------------------------------------------------------------- |
| Ollama model too slow            | Use a smaller quantization (q4\_0) or a smaller model                |
| `CUDA out of memory` with Ollama | Pull a smaller model variant, or fall back to CPU by setting the model's `num_gpu` option to 0 |
| Git commit errors                | Ensure `git config user.email` and `user.name` are set               |
| Aider ignores my files           | Use `/add filename.py` explicitly — Aider only edits added files     |
| Model produces poor edits        | Try a stronger model, or use architect mode                          |
| Connection refused (Ollama)      | Ensure Ollama is running: `ollama serve` or `systemctl start ollama` |
| Context window exceeded          | Remove files with `/drop`, keep only relevant ones                   |
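
For the connection-refused case, a quick way to check whether the Ollama server is reachable at all is to query its HTTP API directly:

```bash
# Lists locally available models if the server is up
curl http://127.0.0.1:11434/api/tags

# If that fails, start the server in the foreground to see any errors
ollama serve
```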

## Resources

* [Aider GitHub](https://github.com/Aider-AI/aider)
* [Aider Documentation](https://aider.chat)
* [Ollama Model Library](https://ollama.com/library)
* [CLORE.AI Marketplace](https://clore.ai/marketplace)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.clore.ai/guides/ai-coding-tools/aider.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
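
For example, from a shell the query can be issued with `curl`; the question below is only an illustration, and `--data-urlencode` handles the URL encoding:

```bash
curl -G "https://docs.clore.ai/guides/ai-coding-tools/aider.md" \
  --data-urlencode "ask=Which Clore.ai GPU should I rent to run deepseek-r1:32b with Aider?"
```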
