# Overview

Run large language models (LLMs) on CLORE.AI GPUs for inference and chat applications.

## Popular Tools

| Tool                                                                                        | Use Case                           | Difficulty |
| ------------------------------------------------------------------------------------------- | ---------------------------------- | ---------- |
| [Ollama](https://docs.clore.ai/guides/language-models/ollama)                               | Easiest LLM setup                  | Beginner   |
| [Open WebUI](https://docs.clore.ai/guides/language-models/open-webui)                       | ChatGPT-like interface             | Beginner   |
| [vLLM](https://docs.clore.ai/guides/language-models/vllm)                                   | High-throughput production serving | Medium     |
| [Llama.cpp Server](https://docs.clore.ai/guides/language-models/llamacpp-server)            | Efficient GGUF inference           | Beginner   |
| [Text Generation WebUI](https://docs.clore.ai/guides/language-models/text-generation-webui) | Full-featured chat UI              | Beginner   |
| [ExLlamaV2](https://docs.clore.ai/guides/language-models/exllamav2-fast)                    | Fastest EXL2 inference             | Medium     |
| [LocalAI](https://docs.clore.ai/guides/language-models/localai-openai-compatible)           | OpenAI-compatible API              | Medium     |
| [SGLang](https://docs.clore.ai/guides/language-models/sglang)                               | Fast structured generation         | Medium     |
| [Text Generation Inference (TGI)](https://docs.clore.ai/guides/language-models/tgi)         | HuggingFace serving solution       | Medium     |
| [LMDeploy](https://docs.clore.ai/guides/language-models/lmdeploy)                           | OpenMMLab serving toolkit          | Medium     |
| [Aphrodite Engine](https://docs.clore.ai/guides/language-models/aphrodite-engine)           | vLLM fork with extra features      | Medium     |
| [MLC-LLM](https://docs.clore.ai/guides/language-models/mlc-llm)                             | Machine learning compilation       | Hard       |
| [LiteLLM](https://docs.clore.ai/guides/language-models/litellm)                             | Unified API proxy                  | Medium     |
| [PowerInfer](https://docs.clore.ai/guides/language-models/powerinfer)                       | Sparse model inference             | Hard       |
| [Mistral.rs](https://docs.clore.ai/guides/language-models/mistral-rs)                       | Rust-based inference engine        | Medium     |

## Model Guides

### Latest & Best Models

| Model                                                                   | Parameters | Best For                  |
| ----------------------------------------------------------------------- | ---------- | ------------------------- |
| [DeepSeek-V3](https://docs.clore.ai/guides/language-models/deepseek-v3) | 671B MoE   | Reasoning, code, math     |
| [DeepSeek-R1](https://docs.clore.ai/guides/language-models/deepseek-r1) | 671B MoE   | Advanced reasoning        |
| [DeepSeek V4](https://docs.clore.ai/guides/language-models/deepseek-v4) | TBA        | Next-generation DeepSeek  |
| [Qwen2.5](https://docs.clore.ai/guides/language-models/qwen25)          | 0.5B-72B   | Multilingual, code        |
| [Qwen3.5](https://docs.clore.ai/guides/language-models/qwen35)          | TBA        | Latest Qwen generation    |
| [Llama 3.3](https://docs.clore.ai/guides/language-models/llama33)       | 70B        | Meta's latest 70B         |
| [Llama 4](https://docs.clore.ai/guides/language-models/llama4)          | TBA        | Scout & Maverick variants |

### Specialized Models

| Model                                                                         | Parameters | Best For                |
| ----------------------------------------------------------------------------- | ---------- | ----------------------- |
| [DeepSeek Coder](https://docs.clore.ai/guides/language-models/deepseek-coder) | 6.7B-33B   | Code generation         |
| [CodeLlama](https://docs.clore.ai/guides/language-models/codellama)           | 7B-34B     | Code completion         |
| [GLM-4.7-Flash](https://docs.clore.ai/guides/language-models/glm-47-flash)    | 4.7B       | Fast Chinese/English    |
| [GLM-5](https://docs.clore.ai/guides/language-models/glm5)                    | TBA        | Zhipu AI latest         |
| [Kimi K2.5](https://docs.clore.ai/guides/language-models/kimi-k2)             | TBA        | Moonshot AI model       |
| [Ling-2.5-1T](https://docs.clore.ai/guides/language-models/ling25)            | 1T         | Massive open-source LLM |
| [LFM2-24B](https://docs.clore.ai/guides/language-models/lfm2-24b)             | 24B        | Liquid AI model         |
| [MiMo-V2-Flash](https://docs.clore.ai/guides/language-models/mimo-v2-flash)   | TBA        | Fast inference model    |

### Efficient Models

| Model                                                                           | Parameters | Best For                  |
| ------------------------------------------------------------------------------- | ---------- | ------------------------- |
| [Gemma 2](https://docs.clore.ai/guides/language-models/gemma2)                  | 2B-27B     | Efficient inference       |
| [Gemma 3](https://docs.clore.ai/guides/language-models/gemma3)                  | TBA        | Google's latest compact   |
| [Phi-4](https://docs.clore.ai/guides/language-models/phi4)                      | 14B        | Small but capable         |
| [Mistral/Mixtral](https://docs.clore.ai/guides/language-models/mistral-mixtral) | 7B / 8x7B  | General purpose           |
| [Mistral Large 3](https://docs.clore.ai/guides/language-models/mistral-large3)  | 675B MoE   | Enterprise-grade          |
| [Mistral Small 3.1](https://docs.clore.ai/guides/language-models/mistral-small) | TBA        | Efficient Mistral variant |

## GPU Recommendations

| Model Size | Minimum GPU   | Recommended   |
| ---------- | ------------- | ------------- |
| 7B (Q4)    | RTX 3060 12GB | RTX 3090 24GB |
| 13B (Q4)   | RTX 3090 24GB | RTX 4090 24GB |
| 34B (Q4)   | 2x RTX 3090   | A100 40GB     |
| 70B (Q4)   | A100 80GB     | 2x A100 80GB  |

## Quantization Guide

| Format   | VRAM Usage | Quality   | Speed   |
| -------- | ---------- | --------- | ------- |
| Q2\_K    | Lowest     | Poor      | Fastest |
| Q4\_K\_M | Low        | Good      | Fast    |
| Q5\_K\_M | Medium     | Great     | Medium  |
| Q8\_0    | High       | Excellent | Slower  |
| FP16     | Highest    | Best      | Slowest |
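As a rule of thumb, quantized weights occupy roughly `parameters × bits ÷ 8` bytes, plus headroom for the KV cache and runtime buffers. A minimal sketch of this estimate (the 20% overhead factor is an illustrative assumption, not a CLORE.AI figure; real usage varies with context length and backend):

```python
def estimate_vram_gb(params_billions: float, bits: int, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes times an overhead factor
    for KV cache and runtime buffers (overhead factor is an assumption)."""
    weight_gb = params_billions * bits / 8  # e.g. 7B at 4-bit -> 3.5 GB of weights
    return weight_gb * overhead

# 7B at Q4 (~4 bits) fits comfortably on a 12 GB card:
print(round(estimate_vram_gb(7, 4), 1))   # ~4.2
# 70B at Q4 pushes past consumer cards, consistent with the GPU table:
print(round(estimate_vram_gb(70, 4), 1))  # ~42
```

The same function applied with `bits=16` reproduces why FP16 sits at the "Highest" end of the table: a 7B model jumps from roughly 4 GB to roughly 17 GB.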

## See Also

* [Training & Fine-tuning](https://docs.clore.ai/guides/training/training)
* [Vision-Language Models](https://docs.clore.ai/guides/vision-models/vision-models)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.clore.ai/guides/language-models/language-models.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when:

* the answer is not explicitly present on the current page,
* you need clarification or additional context, or
* you want to retrieve related documentation sections.
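For example, the request URL can be built by percent-encoding the question into the `ask` parameter. A minimal sketch using only Python's standard library (no CLORE.AI-specific client is assumed):

```python
from urllib.parse import quote

BASE = "https://docs.clore.ai/guides/language-models/language-models.md"

def ask_docs_url(question: str) -> str:
    """Build the ?ask= query URL; the question must be percent-encoded."""
    return f"{BASE}?ask={quote(question)}"

url = ask_docs_url("Which GPU do I need to run a 70B model at Q4?")
print(url)
# Perform a plain HTTP GET on `url` (e.g. with urllib.request.urlopen)
# to receive the answer plus relevant excerpts and sources.
```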
