# Overview

Train and fine-tune AI models on CLORE.AI GPUs.

## Available Guides

| Guide                                                                            | Use Case                           | Difficulty |
| -------------------------------------------------------------------------------- | ---------------------------------- | ---------- |
| [Jupyter ML Training](https://docs.clore.ai/guides/training/jupyter-ml-training) | Interactive training               | Easy       |
| [DreamBooth](https://docs.clore.ai/guides/training/dreambooth)                   | Custom SD subjects                 | Medium     |
| [Kohya Training](https://docs.clore.ai/guides/training/kohya-training)           | LoRA/LyCORIS training              | Medium     |
| [Fine-tune LLM](https://docs.clore.ai/guides/training/finetune-llm)              | LLM fine-tuning                    | Advanced   |
| [DeepSpeed](https://docs.clore.ai/guides/training/deepspeed-training)            | Distributed training               | Advanced   |
| [HuggingFace](https://docs.clore.ai/guides/training/huggingface-transformers)    | Transformers training              | Medium     |
| [Unsloth](https://docs.clore.ai/guides/training/unsloth-finetune)                | Fast LLM fine-tuning               | Medium     |
| [Axolotl](https://docs.clore.ai/guides/training/axolotl-training)                | YAML-first fine-tuning             | Medium     |
| [LLaMA-Factory](https://docs.clore.ai/guides/training/llama-factory)             | Easy LLM training UI               | Easy       |
| [TRL](https://docs.clore.ai/guides/training/trl)                                 | Transformer reinforcement learning | Advanced   |
| [LitGPT](https://docs.clore.ai/guides/training/litgpt)                           | Lightning-based training           | Medium     |
| [Mergekit](https://docs.clore.ai/guides/training/mergekit)                       | Model merging toolkit              | Easy       |

## GPU Recommendations

| Task                | Minimum  | Recommended |
| ------------------- | -------- | ----------- |
| LoRA (SD)           | RTX 3060 | RTX 3090    |
| DreamBooth          | RTX 3090 | RTX 4090    |
| LLM fine-tune (7B)  | RTX 3090 | A100 40GB   |
| LLM fine-tune (70B) | 4x A100  | 8x A100     |
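To see why the table jumps from a single consumer GPU to multi-A100 setups, a back-of-the-envelope VRAM estimate helps. The bytes-per-parameter heuristics below are common rules of thumb (fp16 weights plus fp32 Adam states for full fine-tuning, 4-bit base weights for QLoRA), not CLORE-specific benchmarks:

```python
# Rough VRAM estimate for fine-tuning, using common rules of thumb
# (assumed heuristics, not measured figures):
#   full fine-tune (fp16 weights + fp32 Adam states) ~ 16 bytes/param
#   LoRA on an fp16 base                             ~  2 bytes/param
#   QLoRA on a 4-bit base                            ~ 0.5 bytes/param

def estimate_vram_gb(params_billions: float, method: str) -> float:
    bytes_per_param = {"full": 16.0, "lora": 2.0, "qlora": 0.5}[method]
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes -> GB
    overhead_gb = 2.0  # activations, CUDA context, etc. (rough constant)
    return weights_gb + overhead_gb

print(f"7B full fine-tune: ~{estimate_vram_gb(7, 'full'):.0f} GB")
print(f"7B QLoRA:          ~{estimate_vram_gb(7, 'qlora'):.1f} GB")
```

The ~114 GB full fine-tune figure for a 7B model is why the table recommends A100-class hardware, while QLoRA fits the same model on a single RTX 3090.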

## Training Types

### Image Models

* **LoRA** - Lightweight adapter, fast training
* **DreamBooth** - Full fine-tuning for concepts
* **Textual Inversion** - Learn new tokens

### Language Models

* **LoRA/QLoRA** - Memory-efficient fine-tuning
* **Full fine-tune** - Best quality, needs more VRAM
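The LoRA and QLoRA entries above all come down to the same trick: freeze the base weight matrix `W` and learn a low-rank update `B @ A` scaled by `alpha / r`. A pure-Python toy of the merge step (dimensions and values here are illustrative only; real trainers such as peft or Kohya do this per layer on GPU tensors):

```python
# Toy LoRA: effective weight is W + (alpha / r) * B @ A, where A is
# (r x in) and B is (out x r). Only A and B are trained; W stays frozen.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    delta = matmul(B, A)  # (out x r) @ (r x in) -> (out x in)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight (2x2)
A = [[0.5, 0.5]]              # rank-1 down-projection (1x2)
B = [[2.0], [0.0]]            # rank-1 up-projection (2x1)
W_eff = lora_effective_weight(W, A, B, alpha=1, r=1)
print(W_eff)  # -> [[2.0, 1.0], [0.0, 1.0]]
```

Because only `A` and `B` carry gradients and optimizer state, memory cost scales with the rank `r` rather than with the full weight matrix, which is what makes the adapter "lightweight".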

## Tips

* Use **Spot** orders for long training runs (lower cost, but Spot capacity is typically interruptible, so checkpoint often)
* Enable gradient checkpointing to save VRAM
* Monitor training with TensorBoard
* Save checkpoints frequently
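The Spot and checkpoint tips go together: if the machine can be reclaimed, the run must be resumable from disk at any point. A stdlib-only sketch of rotating checkpoints, keeping just the last few to bound disk usage (pickle stands in for `torch.save` or safetensors here):

```python
# Sketch of "save checkpoints frequently": write a checkpoint every N
# steps and prune old ones, so an interruption costs at most one interval.
import os
import pickle
import tempfile

def save_checkpoint(state: dict, step: int, ckpt_dir: str, keep_last: int = 3) -> str:
    os.makedirs(ckpt_dir, exist_ok=True)
    path = os.path.join(ckpt_dir, f"ckpt-{step:08d}.pkl")
    with open(path, "wb") as f:
        pickle.dump(state, f)
    # Prune everything except the newest `keep_last` checkpoints
    ckpts = sorted(f for f in os.listdir(ckpt_dir) if f.startswith("ckpt-"))
    for old in ckpts[:-keep_last]:
        os.remove(os.path.join(ckpt_dir, old))
    return path

ckpt_dir = tempfile.mkdtemp()
for step in range(0, 500, 100):  # stand-in for a training loop, saving every 100 steps
    save_checkpoint({"step": step, "loss": 1.0 / (step + 1)}, step, ckpt_dir)
print(sorted(os.listdir(ckpt_dir)))  # only the 3 most recent checkpoints remain
```

Zero-padding the step number keeps lexicographic and numeric order identical, so `sorted()` is enough to find the newest checkpoint when resuming.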

## Related Guides

* [Language Models](https://docs.clore.ai/guides/language-models/language-models)
* [Image Generation](https://docs.clore.ai/guides/image-generation/image-generation)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.clore.ai/guides/training/training.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.
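Questions must be percent-encoded before being placed in the URL. A minimal stdlib sketch of building the request URL (the endpoint path comes from this page; the sample question is illustrative):

```python
# Build the `ask` query URL with the standard library; urlencode handles
# percent-encoding of spaces and punctuation in the question.
from urllib.parse import urlencode

BASE = "https://docs.clore.ai/guides/training/training.md"

def ask_url(question: str) -> str:
    return f"{BASE}?{urlencode({'ask': question})}"

url = ask_url("Which GPUs are recommended for QLoRA on a 7B model?")
print(url)
# Performing a GET on this URL (e.g. via urllib.request.urlopen) returns the answer.
```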

Use this mechanism when:

* the answer is not explicitly present in the current page
* you need clarification or additional context
* you want to retrieve related documentation sections
