# OpenVoice

With OpenVoice, you can clone any voice from just a few seconds of audio.

{% hint style="success" %}
All examples can be run on a GPU server rented from the [CLORE.AI marketplace](https://clore.ai/marketplace).
{% endhint %}

## Renting on CLORE.AI

1. Visit the [CLORE.AI marketplace](https://clore.ai/marketplace)
2. Filter by GPU type, VRAM, and price
3. Choose **On-demand** (fixed rate) or **Spot** (bid price)
4. Configure your order:
   * Choose a Docker image
   * Set ports (TCP for SSH, HTTP for web interfaces)
   * Add environment variables if needed
   * Enter the startup command
5. Choose a payment method: **CLORE**, **BTC**, or **USDT/USDC**
6. Create the order and wait for deployment

### Accessing Your Server

* Find connection details under **My Orders**
* Web interface: use the HTTP port's URL
* SSH: `ssh -p <port> root@<proxy-address>`

## What Is OpenVoice?

OpenVoice from MyShell can:

* Clone a voice from ~10 seconds of audio
* Control emotion, accent, and rhythm
* Clone voices across languages
* Perform zero-shot voice conversion

## Requirements

| Task             | Min VRAM | Recommended |
| ---------------- | -------- | ----------- |
| Inference        | 4GB      | RTX 3060    |
| Batch processing | 6GB      | RTX 3070    |

## Quick Deployment

**Docker image:**

```
pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
```

**Ports:**

```
22/tcp
7860/http
```

**Command:**

```bash
pip install git+https://github.com/myshell-ai/OpenVoice.git gradio && \
python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='myshell-ai/OpenVoiceV2', local_dir='checkpoints_v2')" && \
python -c "
from openvoice import se_extractor
from openvoice.api import ToneColorConverter
import gradio as gr
import torch

ckpt_converter = 'checkpoints_v2/converter'
device = 'cuda'
tone_color_converter = ToneColorConverter(f'{ckpt_converter}/config.json', device=device)
tone_color_converter.load_ckpt(f'{ckpt_converter}/checkpoint.pth')

def clone(source_audio, reference_audio):
    source_se, _ = se_extractor.get_se(source_audio, tone_color_converter, vad=False)
    target_se, _ = se_extractor.get_se(reference_audio, tone_color_converter, vad=False)

    output_path = 'output.wav'
    tone_color_converter.convert(
        audio_src_path=source_audio,
        src_se=source_se,
        tgt_se=target_se,
        output_path=output_path
    )
    return output_path

demo = gr.Interface(
    fn=clone,
    inputs=[gr.Audio(type='filepath', label='Source'), gr.Audio(type='filepath', label='Target Voice')],
    outputs=gr.Audio(label='Cloned'),
    title='OpenVoice Clone'
)
demo.launch(server_name='0.0.0.0', server_port=7860)
"
```

## Accessing Your Service

After deployment, find your `http_pub` URL under **My Orders**:

1. Go to the **My Orders** page
2. Click your order
3. Find the `http_pub` URL (e.g., `abc123.clorecloud.net`)

In the examples below, use `https://YOUR_HTTP_PUB_URL` instead of `localhost`.

## Installation

```bash
git clone https://github.com/myshell-ai/OpenVoice.git
cd OpenVoice
pip install -e .

# Download checkpoints (V1 for base speakers, V2 for the converter)
python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='myshell-ai/OpenVoice', local_dir='checkpoints')"
python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='myshell-ai/OpenVoiceV2', local_dir='checkpoints_v2')"
```
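After downloading, it is worth verifying that the files the converter loads actually exist on disk. A minimal check, using the paths from the snippets in this guide:

```python
from pathlib import Path

def checkpoints_ready(root="checkpoints_v2/converter"):
    # The tone color converter needs both its config and its weights
    required = ["config.json", "checkpoint.pth"]
    return all((Path(root) / name).exists() for name in required)

print(checkpoints_ready())  # False until the download above has completed
```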

## Basic Voice Cloning

```python
from openvoice import se_extractor
from openvoice.api import ToneColorConverter
import torch

# Initialize the converter
device = "cuda" if torch.cuda.is_available() else "cpu"
ckpt_converter = 'checkpoints_v2/converter'

tone_color_converter = ToneColorConverter(
    f'{ckpt_converter}/config.json',
    device=device
)
tone_color_converter.load_ckpt(f'{ckpt_converter}/checkpoint.pth')

# Extract speaker embeddings
source_se, _ = se_extractor.get_se("source_audio.wav", tone_color_converter, vad=False)
target_se, _ = se_extractor.get_se("target_voice.wav", tone_color_converter, vad=False)

# Convert voice
tone_color_converter.convert(
    audio_src_path="source_audio.wav",
    src_se=source_se,
    tgt_se=target_se,
    output_path="output.wav"
)
```

## Combining with Text-to-Speech

Generate speech in any voice:

```python
from openvoice import se_extractor
from melo.api import TTS

# `device` and `tone_color_converter` are initialized in the Basic Voice Cloning section

# Initialize the base TTS (MeloTTS)
tts = TTS(language='EN', device=device)
speaker_ids = tts.hps.data.spk2id

# Generate base speech
tts.tts_to_file("Hello, this is a test.", speaker_ids['EN-US'], "base.wav")

# Clone to target voice
source_se, _ = se_extractor.get_se("base.wav", tone_color_converter, vad=False)
target_se, _ = se_extractor.get_se("target_voice.wav", tone_color_converter, vad=False)

tone_color_converter.convert(
    audio_src_path="base.wav",
    src_se=source_se,
    tgt_se=target_se,
    output_path="cloned_speech.wav"
)
```

## Multi-Language Support

```python
from melo.api import TTS
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Available languages
languages = ['EN', 'ES', 'FR', 'ZH', 'JP', 'KR']

# English
tts_en = TTS(language='EN', device=device)
tts_en.tts_to_file("Hello world", tts_en.hps.data.spk2id['EN-US'], "en.wav")

# Chinese
tts_zh = TTS(language='ZH', device=device)
tts_zh.tts_to_file("你好世界", tts_zh.hps.data.spk2id['ZH'], "zh.wav")

# Japanese
tts_jp = TTS(language='JP', device=device)
tts_jp.tts_to_file("こんにちは", tts_jp.hps.data.spk2id['JP'], "jp.wav")
```

## Emotion Control

OpenVoice supports emotion/style control through the base speaker TTS (V1 checkpoints):

```python
from openvoice.api import BaseSpeakerTTS

# Base TTS with style support; `device` as initialized earlier
ckpt_base = 'checkpoints/base_speakers/EN'
base_speaker_tts = BaseSpeakerTTS(
    f'{ckpt_base}/config.json',
    device=device
)
base_speaker_tts.load_ckpt(f'{ckpt_base}/checkpoint.pth')

# Available styles
styles = ['default', 'whispering', 'cheerful', 'terrified', 'angry', 'sad', 'friendly']

for style in styles:
    base_speaker_tts.tts(
        "This is a test sentence.",
        f"output_{style}.wav",
        speaker='default',
        language='English',
        style=style
    )
```

## Batch Processing

```python
from openvoice import se_extractor
from openvoice.api import ToneColorConverter
import os

ckpt_converter = 'checkpoints_v2/converter'
tone_color_converter = ToneColorConverter(
    f'{ckpt_converter}/config.json',
    device='cuda'
)
tone_color_converter.load_ckpt(f'{ckpt_converter}/checkpoint.pth')

# Get target voice embedding once
target_se, _ = se_extractor.get_se("target_voice.wav", tone_color_converter, vad=False)

input_dir = "./audio_files"
output_dir = "./cloned"
os.makedirs(output_dir, exist_ok=True)

for filename in os.listdir(input_dir):
    if filename.endswith(('.wav', '.mp3')):
        input_path = os.path.join(input_dir, filename)
        output_path = os.path.join(output_dir, f"cloned_{filename}")

        source_se, _ = se_extractor.get_se(input_path, tone_color_converter, vad=False)

        tone_color_converter.convert(
            audio_src_path=input_path,
            src_se=source_se,
            tgt_se=target_se,
            output_path=output_path
        )
        print(f"Cloned: {filename}")
```

## API Server

```python
from fastapi import FastAPI, UploadFile
from fastapi.responses import FileResponse
from openvoice import se_extractor
from openvoice.api import ToneColorConverter
import tempfile
import shutil

app = FastAPI()

tone_color_converter = ToneColorConverter(
    'checkpoints_v2/converter/config.json',
    device='cuda'
)
tone_color_converter.load_ckpt('checkpoints_v2/converter/checkpoint.pth')

@app.post("/clone")
async def clone_voice(source: UploadFile, target: UploadFile):
    # Save uploaded files
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as src_tmp:
        shutil.copyfileobj(source.file, src_tmp)
        src_path = src_tmp.name

    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tgt_tmp:
        shutil.copyfileobj(target.file, tgt_tmp)
        tgt_path = tgt_tmp.name

    # Extract embeddings
    source_se, _ = se_extractor.get_se(src_path, tone_color_converter, vad=False)
    target_se, _ = se_extractor.get_se(tgt_path, tone_color_converter, vad=False)

    # Convert
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as out_tmp:
        output_path = out_tmp.name
    tone_color_converter.convert(
        audio_src_path=src_path,
        src_se=source_se,
        tgt_se=target_se,
        output_path=output_path
    )

    return FileResponse(output_path, media_type="audio/wav")

# Run: uvicorn server:app --host 0.0.0.0 --port 8000
```

## Quality Tips

### Best Results

* Use 10-30 seconds of clean reference audio
* Avoid background noise
* Use a single speaker in the reference audio
* Roughly match the speaking speed
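The 10-30 second guideline above can be enforced programmatically. A small sketch using only the standard-library `wave` module; the synthetic silent clip is just for demonstration:

```python
import wave

def check_reference(path, min_s=10.0, max_s=30.0):
    # Duration in seconds = frame count / sample rate
    with wave.open(path, "rb") as wf:
        duration = wf.getnframes() / wf.getframerate()
    return duration, min_s <= duration <= max_s

# Demo: write a 15-second silent mono clip at 22050 Hz
sr = 22050
with wave.open("ref.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)          # 16-bit samples
    wf.setframerate(sr)
    wf.writeframes(b"\x00\x00" * (15 * sr))

duration, ok = check_reference("ref.wav")
print(duration, ok)  # 15.0 True
```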

### Audio Preprocessing

```python
import librosa
import soundfile as sf

def preprocess_audio(input_path, output_path, target_sr=22050):
    audio, sr = librosa.load(input_path, sr=target_sr)

    # Trim silence
    audio, _ = librosa.effects.trim(audio, top_db=20)

    # Normalize
    audio = librosa.util.normalize(audio)

    sf.write(output_path, audio, target_sr)
    return output_path

preprocess_audio("raw_reference.wav", "clean_reference.wav")
```

## Comparison with Other Tools

| Feature         | OpenVoice    | RVC      | Bark |
| --------------- | ------------ | -------- | ---- |
| Reference audio | 10-30 s      | 10+ min  | N/A  |
| Training        | Not required | Required | N/A  |
| Speed           | Fast         | Medium   | Slow |
| Quality         | Great        | Best     | Good |
| Cross-lingual   | Yes          | Limited  | Yes  |

## Performance

| Task                    | Time  |
| ----------------------- | ----- |
| Extract embedding       | ~1 s  |
| Convert 10 s of audio   | ~2 s  |
| Convert 1 min of audio  | ~8 s  |
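A rough rule of thumb can be derived from the two conversion rows above. This is a linear interpolation of the table values, not a measurement:

```python
def estimate_conversion_seconds(audio_seconds):
    # Linear fit through (10 s audio -> ~2 s) and (60 s audio -> ~8 s)
    slope = (8.0 - 2.0) / (60.0 - 10.0)   # ~0.12 s of compute per second of audio
    return 2.0 + slope * (audio_seconds - 10.0)

print(estimate_conversion_seconds(30))  # ~4.4 s for a 30-second clip
```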

## Troubleshooting

### Poor voice match

* Use a longer reference clip
* Make sure the audio is clean and clear
* Check for background noise

### Audio artifacts

* Lower the speed/emphasis settings
* Use a consistent audio format
* Check that sample rates match

### Out of memory

* Process shorter segments
* Reduce the batch size
* Clear the CUDA cache
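The cache-clearing tip can be wrapped in a small helper. The `torch` import is guarded so the sketch also runs on machines without PyTorch:

```python
import gc

def free_gpu_memory():
    # Drop unreferenced Python objects first, then release cached CUDA blocks
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass  # PyTorch not installed; nothing to clear

free_gpu_memory()
```

Call it between long jobs, for example after each file in the batch-processing loop.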

### Checkpoint errors

* Download all required checkpoints
* Verify CUDA compatibility
* Check file integrity

## Cost Estimation

Typical rates on the CLORE.AI marketplace (as of 2024):

| GPU       | Hourly  | Daily   | 4-hour session |
| --------- | ------- | ------- | -------------- |
| RTX 3060  | ~$0.03  | ~$0.70  | ~$0.12         |
| A100 40GB | ~$0.17  | ~$4.00  | ~$0.70         |
| A100 80GB | ~$0.25  | ~$6.00  | ~$1.00         |

Prices vary by provider and demand; check the [CLORE.AI marketplace](https://clore.ai/marketplace) for current rates. Use **Spot** pricing for flexible workloads (typically 30-50% cheaper) and pay with **CLORE** to save on fees.
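Budgeting from the rates above is simple arithmetic; a tiny helper (the rates are examples from the table, not live prices):

```python
def session_cost(hourly_rate_usd, hours):
    # Total rental cost at a fixed hourly rate, rounded to cents
    return round(hourly_rate_usd * hours, 2)

print(session_cost(0.03, 24))  # a full day on the cheapest tier: 0.72
```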

## Related Guides

* [Bark TTS](https://docs.clore.ai/guides/guides_v2-zh/yin-pin-yu-yu-yin/bark-tts) - Text-to-speech
* [RVC Voice Clone](https://docs.clore.ai/guides/guides_v2-zh/yin-pin-yu-yu-yin/rvc-voice-clone) - Training-based cloning
* [Whisper Transcription](https://docs.clore.ai/guides/guides_v2-zh/yin-pin-yu-yu-yin/whisper-transcription) - Speech-to-text

