# Axolotl Universal Fine-tuning

YAML-driven LLM fine-tuning with Axolotl on Clore.ai — LoRA, QLoRA, DPO, and multi-GPU training.
## Key Features
## Requirements
| Component | Minimum | Recommended |
| --- | --- | --- |
## Quick Start
### 1. Install Axolotl
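One common install path is pip with the flash-attention and DeepSpeed extras, per the Axolotl README at time of writing. The CUDA 12.1 wheel index below is an assumption — match it to the CUDA version of your Clore.ai image:

```shell
# Install a CUDA-matched PyTorch first (assumes a CUDA 12.1 image; adjust the index URL)
pip3 install torch --index-url https://download.pytorch.org/whl/cu121

# Install Axolotl with flash-attention and DeepSpeed support
pip3 install --no-build-isolation "axolotl[flash-attn,deepspeed]"

# Sanity check: the CLI should be on PATH
axolotl --help
```

Installing from a git checkout (`pip install -e '.[flash-attn,deepspeed]'`) works the same way if you need an unreleased fix.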
### 2. Create a Config File
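As a starting point, a minimal QLoRA config might look like the sketch below. The model id, dataset, and output path are placeholders, and the hyperparameters are illustrative rather than tuned; the key names follow Axolotl's config schema:

```yaml
# qlora.yml — minimal QLoRA sketch (values illustrative, not tuned)
base_model: NousResearch/Meta-Llama-3-8B   # placeholder: any HF model id
load_in_4bit: true                         # QLoRA: 4-bit quantized base weights
adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true                   # attach LoRA to all linear layers

datasets:
  - path: tatsu-lab/alpaca                 # placeholder: HF dataset id or local file
    type: alpaca
val_set_size: 0.05
output_dir: ./outputs/qlora-llama3

sequence_len: 2048
sample_packing: true                       # pack short samples to fill the context
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
optimizer: paged_adamw_8bit
lr_scheduler: cosine
bf16: auto
gradient_checkpointing: true
flash_attention: true
```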
### 3. Launch Training
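Training is then a single command pointed at the YAML; both entry points below exist, with the `accelerate` form being the older spelling:

```shell
# Newer CLI entry point
axolotl train qlora.yml

# Equivalent invocation on older Axolotl releases
accelerate launch -m axolotl.cli.train qlora.yml
```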
## Configuration Deep Dive
### Dataset Formats
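As one illustration, the alpaca format expects one JSON object per line with instruction/input/output fields, and the matching `datasets` entry tells Axolotl how to parse it (the file path is a placeholder):

```yaml
# data/train.jsonl (one record per line), e.g.:
#   {"instruction": "Summarize the text.", "input": "Axolotl is ...", "output": "A YAML-driven ..."}

datasets:
  - path: data/train.jsonl   # placeholder: local JSONL file
    ds_type: json            # tell Axolotl the on-disk format
    type: alpaca             # prompt template; completion and chat_template are other built-ins
```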
### Multi-GPU with DeepSpeed
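A minimal sketch: reference one of the ZeRO preset JSONs bundled with Axolotl from the training YAML (the `deepspeed_configs/` path matches recent repo layouts; older releases shipped them in a `deepspeed/` folder):

```yaml
# In the training YAML: enable DeepSpeed ZeRO stage 2
deepspeed: deepspeed_configs/zero2.json   # zero1/zero3 presets are also bundled
```

With this set, launching through `accelerate launch -m axolotl.cli.train config.yml` shards optimizer state across all visible GPUs; ZeRO-3 additionally shards the parameters themselves, trading speed for capacity.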
### DPO / ORPO Alignment
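A hedged DPO sketch: switch the trainer with `rl: dpo` and supply a preference dataset of chosen/rejected pairs. The dataset id and `type` below are one known combination from Axolotl's RLHF examples, not the only option:

```yaml
rl: dpo                          # ORPO is configured the same way: rl: orpo
datasets:
  - path: Intel/orca_dpo_pairs   # illustrative preference-pairs dataset
    type: chatml.intel           # built-in transform matching this dataset's schema
```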
### Full Fine-Tune (No LoRA)
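For a full-parameter run, the sketch below simply leaves `adapter` unset and disables quantization (the model id is a placeholder). Expect a large VRAM jump over LoRA; ZeRO-3 sharding is the usual escape hatch for 7B-plus models:

```yaml
base_model: NousResearch/Meta-Llama-3-8B   # placeholder model id
# no adapter key -> full-parameter training
load_in_4bit: false
load_in_8bit: false
learning_rate: 0.00002                     # full fine-tunes typically use ~10x lower LR than LoRA
deepspeed: deepspeed_configs/zero3.json
```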
## Usage Examples
### Inference After Training
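A quick way to smoke-test the trained adapter is Axolotl's built-in inference CLI (the output path is a placeholder; older releases spell the flag `--lora_model_dir`):

```shell
# Interactive chat with the base model plus the trained LoRA weights
axolotl inference qlora.yml --lora-model-dir="./outputs/qlora-llama3"

# Optional: serve a quick Gradio UI instead of a terminal prompt
axolotl inference qlora.yml --lora-model-dir="./outputs/qlora-llama3" --gradio
```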
### Merge LoRA into Base Model
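To ship the result as a standalone Hugging Face model, the adapter can be folded into the base weights (paths are placeholders):

```shell
# Fold the LoRA delta into the base weights
axolotl merge-lora qlora.yml --lora-model-dir="./outputs/qlora-llama3"
# In recent Axolotl versions the merged model lands in a merged/ subfolder of the output dir
```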
### Preprocess Dataset (Validate Before Training)
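Running the tokenization pass up front catches dataset schema and prompt-template errors cheaply, before any GPU time is spent (config filename is a placeholder):

```shell
# Tokenize and cache the dataset without launching a training run;
# --debug prints a few decoded samples so prompt templates can be eyeballed
axolotl preprocess qlora.yml --debug
```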
## VRAM Usage Reference
| Model | Method | GPUs | VRAM/GPU | Config |
| --- | --- | --- | --- | --- |
## Tips
## Troubleshooting
| Problem | Solution |
| --- | --- |
## Resources