# HuggingFace Transformers

Run natural language processing, vision, and audio tasks on GPUs with the Transformers library.

{% hint style="success" %}
All examples can be run on GPU servers rented through the [CLORE.AI Marketplace](https://clore.ai/marketplace).
{% endhint %}

## Renting on CLORE.AI

1. Go to the [CLORE.AI Marketplace](https://clore.ai/marketplace)
2. Filter by GPU type, VRAM, and price
3. Choose **On-demand** (fixed rate) or **Spot** (bid pricing)
4. Configure your order:
   * Select a Docker image
   * Set ports (TCP for SSH, HTTP for web interfaces)
   * Add environment variables if needed
   * Enter the startup command
5. Choose a payment method: **CLORE**, **BTC**, or **USDT/USDC**
6. Create the order and wait for deployment

### Accessing Your Server

* Find connection details under **My Orders**
* Web interface: use the HTTP port URL
* SSH: `ssh -p <port> root@<proxy-address>`

## What is Transformers?

Hugging Face Transformers provides:

* 100,000+ pretrained models
* Simple model loading and inference
* Fine-tuning support
* Multimodal capabilities
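
As a quick smoke test of "simple loading and inference", a one-line pipeline is enough. A minimal sketch, assuming the default checkpoint download succeeds on your node:

```python
from transformers import pipeline

# Downloads a small default model on first use and runs it on GPU 0
classifier = pipeline("sentiment-analysis", device=0)
print(classifier("Renting GPUs on CLORE.AI is easy!"))
```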

## Quick Deployment

**Docker image:**

```
pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel
```

**Ports:**

```
22/tcp
```

**Command:**

```bash
pip install transformers accelerate datasets huggingface_hub
```
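
Once the container is up, it's worth confirming that PyTorch actually sees the rented GPU before going further:

```python
import torch

# Confirm the container sees the GPU before installing anything else
print(torch.cuda.is_available())       # expected: True
print(torch.cuda.get_device_name(0))   # e.g. the rented GPU's model name
```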

## Installation

```bash
pip install transformers[torch]
pip install accelerate  # for large models
pip install datasets    # for training data
```
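
A quick import check verifies the stack installed cleanly:

```python
import transformers, accelerate, datasets

# Print versions to confirm all three packages import correctly
print(transformers.__version__, accelerate.__version__, datasets.__version__)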

## Text Generation

### Basic Generation

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

prompt = "Explain quantum computing in simple terms:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.7,
    do_sample=True
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
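
Instruction-tuned models usually respond better when the prompt is wrapped in their chat template. A sketch reusing the same `model` and `tokenizer` as above:

```python
# Wrap the prompt in the model's chat template (recommended for instruct models)
messages = [{"role": "user", "content": "Explain quantum computing in simple terms."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

outputs = model.generate(input_ids, max_new_tokens=200, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```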

### Chat Models

```python
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    torch_dtype=torch.float16,
    device_map="auto"
)

messages = [
    {"role": "user", "content": "什么是机器学习？"}
]

outputs = pipe(
    messages,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7
)

print(outputs[0]["generated_text"][-1]["content"])
```

### Streaming

```python
from transformers import TextStreamer

# Reuses model, tokenizer, and inputs from "Basic Generation" above
streamer = TextStreamer(tokenizer)

model.generate(
    **inputs,
    max_new_tokens=200,
    streamer=streamer
)
```
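
`TextStreamer` prints straight to stdout. For serving scenarios where tokens must be consumed as they arrive, `TextIteratorStreamer` runs generation in a background thread; a minimal sketch, again reusing `model`, `tokenizer`, and `inputs`:

```python
from threading import Thread
from transformers import TextIteratorStreamer

# Yields decoded text chunks from a queue instead of printing them
streamer = TextIteratorStreamer(tokenizer, skip_special_tokens=True)

generation_kwargs = dict(**inputs, max_new_tokens=200, streamer=streamer)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()

for chunk in streamer:
    print(chunk, end="", flush=True)
thread.join()
```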

## Quantization

### 4-bit Quantization

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch
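# 4-bit loading requires the bitsandbytes package: pip install bitsandbytes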

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto"
)
```
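
To see how much GPU memory the quantized weights actually take, `get_memory_footprint()` reports the model's size in bytes:

```python
# Weight memory of the loaded (quantized) model, in GB
print(f"{model.get_memory_footprint() / 1e9:.2f} GB")
```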

### 8-bit Quantization

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Passing load_in_8bit directly is deprecated; use BitsAndBytesConfig instead
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto"
)
```

## Embeddings

```python
from transformers import AutoModel, AutoTokenizer
import torch

model_name = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).cuda()

def get_embedding(text):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True).to("cuda")
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean pooling over token embeddings (for batches, mask padding tokens first)
    embedding = outputs.last_hidden_state.mean(dim=1)
    return embedding

emb = get_embedding("Hello, world!")
print(f"Embedding shape: {emb.shape}")
```
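
Embeddings are typically compared with cosine similarity. A short usage sketch on top of `get_embedding`:

```python
import torch.nn.functional as F

emb_a = get_embedding("How do I rent a GPU?")
emb_b = get_embedding("Renting a graphics card")

# Cosine similarity in [-1, 1]; higher means more semantically similar
similarity = F.cosine_similarity(emb_a, emb_b).item()
print(f"Similarity: {similarity:.4f}")
```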

## Image Classification

```python
from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="google/vit-base-patch16-224", device=0)

image = Image.open("cat.jpg")
results = classifier(image)

for result in results:
    print(f"{result['label']}: {result['score']:.4f}")
```

## Object Detection

```python
from transformers import pipeline
from PIL import Image

detector = pipeline("object-detection", model="facebook/detr-resnet-50", device=0)

image = Image.open("street.jpg")
results = detector(image)

for result in results:
    print(f"{result['label']}: {result['score']:.4f} at {result['box']}")
```

## Image Segmentation

```python
from transformers import pipeline
from PIL import Image

segmenter = pipeline("image-segmentation", model="facebook/maskformer-swin-base-ade", device=0)

image = Image.open("scene.jpg")
results = segmenter(image)

for segment in results:
    print(f"{segment['label']}: score {segment['score']:.4f}")
```

## Speech Recognition

```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    device=0
)

result = transcriber("audio.mp3")
print(result["text"])
```
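
For recordings longer than about 30 seconds, Whisper works best with chunked long-form decoding; `chunk_length_s` and `return_timestamps` are supported pipeline arguments:

```python
# Chunked long-form transcription with segment timestamps
result = transcriber(
    "audio.mp3",
    chunk_length_s=30,
    return_timestamps=True,
)
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])
```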

## Text-to-Speech

```python
from transformers import pipeline
from datasets import load_dataset
import scipy.io.wavfile
import torch

synthesizer = pipeline("text-to-speech", model="microsoft/speecht5_tts", device=0)

# SpeechT5 needs a speaker x-vector embedding to define the voice
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = synthesizer("Hello, this is a test of text to speech.",
                     forward_params={"speaker_embeddings": speaker_embedding})

scipy.io.wavfile.write("output.wav", rate=speech["sampling_rate"], data=speech["audio"])
```

## Fine-Tuning

### Prepare the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("imdb")
train_dataset = dataset["train"].select(range(1000))
eval_dataset = dataset["test"].select(range(200))
```

### Training

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
    Trainer
)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized_train = train_dataset.map(tokenize_function, batched=True)
tokenized_eval = eval_dataset.map(tokenize_function, batched=True)

training_args = TrainingArguments(
    output_dir="./results",
    eval_strategy="epoch",  # called "evaluation_strategy" in older transformers releases
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    fp16=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_eval,
)

trainer.train()
```
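
After training, the fine-tuned weights can be saved and reloaded for inference. A short sketch (the `./results/final` path is an arbitrary choice):

```python
from transformers import pipeline

# Persist the fine-tuned weights and tokenizer together
trainer.save_model("./results/final")
tokenizer.save_pretrained("./results/final")

# Reload from the local directory for inference
clf = pipeline("sentiment-analysis", model="./results/final", device=0)
print(clf("This movie was surprisingly good."))
```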

## Pipeline Tasks

| Task                 | Pipeline name                  |
| -------------------- | ------------------------------ |
| Text generation      | `text-generation`              |
| Fill mask            | `fill-mask`                    |
| Summarization        | `summarization`                |
| Translation          | `translation`                  |
| Question answering   | `question-answering`           |
| Sentiment analysis   | `sentiment-analysis`           |
| Image classification | `image-classification`         |
| Object detection     | `object-detection`             |
| Speech recognition   | `automatic-speech-recognition` |
| Text-to-speech       | `text-to-speech`               |
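
Any task in the table is instantiated the same way; for example, summarization (the model choice here is one common option, not the only one):

```python
from transformers import pipeline

# Same constructor pattern works for every task name in the table
summarizer = pipeline("summarization", model="facebook/bart-large-cnn", device=0)

article = "Long article text goes here ..."
print(summarizer(article, max_length=60, min_length=20)[0]["summary_text"])
```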

## Multi-GPU

```python
from transformers import AutoModelForCausalLM
import torch

# Automatic device placement
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    device_map="auto",
    torch_dtype=torch.float16
)

# Manual device map
device_map = {
    "model.embed_tokens": 0,
    "model.layers.0": 0,
    "model.layers.1": 0,
    # ...
    "model.layers.39": 1,
    "model.norm": 1,
    "lm_head": 1
}

model = AutoModelForCausalLM.from_pretrained(
    "model_name",
    device_map=device_map
)
```
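
After loading with `device_map="auto"`, the chosen placement can be inspected, and per-device memory can optionally be capped via `max_memory` (the limits below are assumptions to adapt to your GPUs):

```python
# Inspect which device each module was assigned to
print(model.hf_device_map)

# Optionally cap per-GPU memory when sharding
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    device_map="auto",
    max_memory={0: "38GiB", 1: "38GiB", "cpu": "64GiB"},
    torch_dtype=torch.float16
)
```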

## Memory Optimization

```python
from transformers import AutoModelForCausalLM
import torch

# Flash Attention 2 (requires the flash-attn package)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto"
)

# Gradient checkpointing (for training; trades compute for memory)
model.gradient_checkpointing_enable()
```

## Model Hub

### Downloading a Model

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Llama-2-7b-hf",
    local_dir="./llama-2-7b"
)
```
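
Llama 2 is a gated repository, so the download only works from an authenticated account; `login()` from `huggingface_hub` caches an access token locally (the token string below is a placeholder):

```python
from huggingface_hub import login

# One-time authentication; the token is cached under ~/.cache/huggingface
login(token="hf_xxx")  # placeholder: your Hugging Face access token
```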

### Uploading a Model

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./my_model",
    repo_id="username/my-model",
    repo_type="model"
)
```
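
If the destination repo does not exist yet, create it first; this reuses the `api` object from above, and `exist_ok=True` makes the call safe to repeat:

```python
# Create the destination repo first (no-op if it already exists)
api.create_repo(repo_id="username/my-model", exist_ok=True)
```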

## Performance Tips

| Tip                             | Effect                 |
| ------------------------------- | ---------------------- |
| Use `torch_dtype=torch.float16` | ~50% less memory       |
| Enable Flash Attention 2        | ~2x faster attention   |
| Use quantization                | Up to ~75% less memory |
| Batched inference               | Higher throughput      |
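
The last row maps directly to the pipeline API: passing a list of inputs together with `batch_size` lets the GPU process several sequences per forward pass. A minimal sketch:

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", device=0)

texts = ["Great GPU prices!", "The queue was slow.", "Deployment was easy."]
# batch_size controls how many inputs share one forward pass
for result in classifier(texts, batch_size=8):
    print(result)
```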

## Reproducibility Checklist

* Use a fixed seed for consistent results (see the sketch below)
* Download all required checkpoints in advance
* Verify file integrity
* Verify CUDA compatibility
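
Transformers ships a `set_seed` helper that seeds Python, NumPy, and PyTorch in one call:

```python
from transformers import set_seed

# Fix all relevant RNGs for reproducible generation
set_seed(42)
```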

## Cost Estimation

Typical hourly rates on the [CLORE.AI Marketplace](https://clore.ai/marketplace) (as of 2024) range from roughly $0.03 for an RTX 3060 to roughly $0.70 for an A100 80GB, with the A100 40GB in between.

* **Spot** pricing varies by provider and demand; check the marketplace for current rates
* Paying with **CLORE** saves on fees
* Spot orders suit flexible workloads (typically 30-50% cheaper)
## See Also

* vLLM Inference - production deployment
* [Fine-Tuning Large Language Models](https://docs.clore.ai/guides/guides_v2-zh/xun-lian/finetune-llm) - LoRA training
* [DeepSpeed Training](https://docs.clore.ai/guides/guides_v2-zh/xun-lian/deepspeed-training) - distributed training

