Unsloth 2x Faster Fine-tuning
Fine-tune LLMs 2x faster with 70% less VRAM using Unsloth on Clore.ai
Key Features
Requirements
Component | Minimum | Recommended
Quick Start
1. Install Unsloth
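A minimal install, assuming a Python environment with a CUDA-enabled PyTorch already present:

```shell
# Install Unsloth from PyPI (pulls in its pinned dependencies)
pip install unsloth
```

On an up-to-date environment, `pip install --upgrade unsloth` keeps the package current between releases.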
2. Load a Model with 4-bit Quantization
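A sketch of loading a pre-quantized 4-bit checkpoint with Unsloth's `FastLanguageModel`. The model name and sequence length are illustrative; any Unsloth 4-bit checkpoint works. This requires a CUDA GPU.

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model; names/values here are examples.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # pre-quantized checkpoint
    max_seq_length=2048,  # context length used during training
    dtype=None,           # None = auto-detect (bfloat16 on Ampere+)
    load_in_4bit=True,    # NF4 quantization via bitsandbytes
)
```

Loading an already-quantized `-bnb-4bit` checkpoint avoids downloading full-precision weights and quantizing on the fly.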
3. Apply LoRA Adapters
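A sketch of attaching LoRA adapters to the loaded model. The rank, alpha, and target module list below are common starting points, not tuned values; `model` is the object returned in the previous step.

```python
# Wrap the 4-bit base model with trainable LoRA adapters.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                      # LoRA rank: higher = more capacity, more VRAM
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,             # scaling factor, often set equal to r
    lora_dropout=0,            # 0 is fastest and works well in practice
    bias="none",
    use_gradient_checkpointing="unsloth",  # Unsloth's memory-efficient variant
    random_state=3407,
)
```

Only the adapter weights are trainable; the 4-bit base weights stay frozen, which is where most of the VRAM savings come from.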
4. Prepare Data and Train
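Training data must be rendered into plain text before it reaches the trainer. The Alpaca-style template and field names below are illustrative assumptions, not a fixed Unsloth API:

```python
# Render one dataset row into a single training string.
# Template and field names are illustrative; adapt to your dataset schema.
ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_example(example: dict) -> str:
    """Fill the prompt template from a row's fields, tolerating missing keys."""
    return ALPACA_TEMPLATE.format(
        instruction=example.get("instruction", ""),
        input=example.get("input", ""),
        output=example.get("output", ""),
    )

row = {"instruction": "Translate to French.", "input": "Hello", "output": "Bonjour"}
text = format_example(row)
```

The formatted strings are then handed to a trainer such as TRL's `SFTTrainer` (e.g. via a dataset `text` column), which handles tokenization and batching.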
Exporting the Model
Save LoRA Adapter Only
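Saving only the adapter keeps the artifact small (typically tens to hundreds of MB) since the base model is not included. The output path is illustrative:

```python
# Write only the LoRA adapter weights and tokenizer files, not the base model.
model.save_pretrained("lora_model")
tokenizer.save_pretrained("lora_model")
```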
Merge and Save Full Model (float16)
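A sketch using Unsloth's merged-save helper, which folds the LoRA deltas back into the base weights and writes a standalone float16 checkpoint. The path is illustrative:

```python
# Merge LoRA adapters into the base model and save as float16 safetensors.
model.save_pretrained_merged("merged_model", tokenizer, save_method="merged_16bit")
```

The merged model loads like any ordinary Hugging Face checkpoint and no longer needs PEFT at inference time.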
Export to GGUF for Ollama / llama.cpp
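A sketch of Unsloth's GGUF export helper; `q4_k_m` is a common llama.cpp quantization choice that balances size and quality. The output directory is illustrative:

```python
# Export a llama.cpp-compatible GGUF file (merges LoRA, then quantizes).
model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method="q4_k_m")
```

The resulting `.gguf` file can be pointed at from an Ollama Modelfile (`FROM` line) or loaded directly by llama.cpp.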
Usage Examples
Fine-Tune on a Custom Chat Dataset
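Chat datasets need each conversation flattened into one training string. The ChatML-style template below is a common convention but an assumption here; verify it against your tokenizer's own chat template (e.g. `tokenizer.apply_chat_template`) before training:

```python
# Convert a list of chat turns into ChatML-style training text.
# The <|im_start|>/<|im_end|> markers are one common convention; match
# your model's actual chat template in real training runs.
def to_chatml(messages: list[dict]) -> str:
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    return "\n".join(parts)

convo = [
    {"role": "user", "content": "What is Unsloth?"},
    {"role": "assistant", "content": "A library for fast LLM fine-tuning."},
]
text = to_chatml(convo)
```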
DPO / ORPO Alignment Training
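A hedged sketch of DPO training with TRL's `DPOTrainer` on an Unsloth model. DPO expects preference pairs (rows with `prompt`, `chosen`, `rejected` fields); the hyperparameters are illustrative starting points, `preference_dataset` is a placeholder for your dataset, and the exact keyword names vary between TRL versions:

```python
from trl import DPOTrainer, DPOConfig

# Preference-alignment training on prompt/chosen/rejected pairs.
# All hyperparameters below are illustrative, not tuned values.
trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with PEFT adapters, the adapter-disabled model serves as reference
    args=DPOConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=5e-6,
        beta=0.1,            # strength of the implicit KL penalty
        max_steps=200,
        output_dir="dpo_out",
    ),
    train_dataset=preference_dataset,  # placeholder: your preference dataset
    processing_class=tokenizer,
)
trainer.train()
```

ORPO follows the same shape with TRL's `ORPOTrainer`, but needs no reference model at all.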
VRAM Usage Reference
Model | Quant | Method | VRAM | GPU
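For models not listed, a rough rule of thumb (an assumption, not an Unsloth formula) is that 4-bit QLoRA needs about 0.5 bytes per parameter for weights, plus a few GB of margin for activations, LoRA optimizer state, and CUDA overhead:

```python
# Back-of-envelope VRAM estimate for 4-bit QLoRA fine-tuning.
# 0.5 bytes/param for 4-bit weights + a fixed overhead margin (assumption).
def estimate_qlora_vram_gb(n_params_billion: float, overhead_gb: float = 3.0) -> float:
    weights_gb = n_params_billion * 0.5
    return weights_gb + overhead_gb

estimate_qlora_vram_gb(7)   # ~6.5 GB for a 7B model
estimate_qlora_vram_gb(70)  # ~38 GB for a 70B model
```

Longer sequence lengths and larger batch sizes push activation memory well past this estimate, so treat it as a lower bound when picking a GPU.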
Tips
Troubleshooting
Problem | Solution
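The most common failure is CUDA out-of-memory during training. A standard mitigation (general practice, not an Unsloth-specific fix) is to shrink the per-step footprint while preserving the effective batch size:

```python
# OOM mitigation: trade per-step memory for more accumulation steps.
# Effective batch size stays 1 * 8 = 8; values are illustrative.
training_overrides = dict(
    per_device_train_batch_size=1,  # smallest possible micro-batch
    gradient_accumulation_steps=8,  # recover the effective batch size
    max_seq_length=1024,            # shorter sequences = less activation memory
)
```

If OOM persists, lowering the LoRA rank `r` or enabling Unsloth's gradient checkpointing (`use_gradient_checkpointing="unsloth"`) reduces memory further.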
Resources