# AudioCraft Music

Generate music and audio with Meta's AudioCraft (MusicGen).

{% hint style="success" %}
All examples can be run on a GPU server rented from the [CLORE.AI marketplace](https://clore.ai/marketplace).
{% endhint %}

## Renting on CLORE.AI

1. Visit the [CLORE.AI marketplace](https://clore.ai/marketplace)
2. Filter by GPU type, VRAM, and price
3. Choose **On-demand** (fixed rate) or **Spot** (bid pricing)
4. Configure your order:
   * Select a Docker image
   * Set ports (TCP for SSH, HTTP for the web UI)
   * Add environment variables if needed
   * Enter the startup command
5. Choose a payment method: **CLORE**, **BTC**, or **USDT/USDC**
6. Create the order and wait for deployment

### Accessing Your Server

* Find connection details under **My Orders**
* Web UI: use the HTTP port URL
* SSH: `ssh -p <port> root@<proxy-address>`

## What Is AudioCraft?

AudioCraft includes:

* **MusicGen** - text-to-music generation
* **AudioGen** - sound-effect generation
* **EnCodec** - neural audio compression
* **MAGNeT** - faster generation

## Model Sizes

| Model  | VRAM | Quality        | Speed  |
| ------ | ---- | -------------- | ------ |
| small  | 4GB  | Good           | Fast   |
| medium | 8GB  | Great          | Medium |
| large  | 16GB | Best           | Slow   |
| melody | 8GB  | Great + melody | Medium |
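The VRAM thresholds in the table can be turned into a small helper that picks the largest checkpoint that fits on the rented GPU. A minimal sketch; the function name and the exact 4/8/16 GB cut-offs are illustrative:

```python
def pick_musicgen_model(vram_gb: float) -> str:
    """Pick the largest MusicGen checkpoint that fits in the given VRAM,
    using the thresholds from the table above (4 / 8 / 16 GB)."""
    if vram_gb >= 16:
        return "facebook/musicgen-large"
    if vram_gb >= 8:
        return "facebook/musicgen-medium"
    if vram_gb >= 4:
        return "facebook/musicgen-small"
    raise ValueError("MusicGen needs roughly 4 GB of VRAM or more")

# On a live server, the available VRAM could come from
# torch.cuda.get_device_properties(0).total_memory / 1e9
```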

## Quick Deploy

**Docker image:**

```
pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel
```

**Ports:**

```
22/tcp
7860/http
```

**Command:**

```bash
pip install audiocraft gradio scipy && \
python -c "
from audiocraft.models import MusicGen
import scipy.io.wavfile as wav
import tempfile
import gradio as gr

model = MusicGen.get_pretrained('facebook/musicgen-medium')
model.set_generation_params(duration=10)

def generate(prompt, duration):
    model.set_generation_params(duration=duration)
    output = model.generate([prompt])
    audio = output[0].cpu().numpy().T
    with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as f:
        wav.write(f.name, 32000, audio)
        return f.name

demo = gr.Interface(
    fn=generate,
    inputs=[gr.Textbox(label='Prompt'), gr.Slider(5, 30, value=10, label='Duration (s)')],
    outputs=gr.Audio(label='Generated Music'),
    title='MusicGen'
)
demo.launch(server_name='0.0.0.0', server_port=7860)
"
```

## Accessing Your Service

After deployment, find your `http_pub` URL under **My Orders**:

1. Go to the **My Orders** page
2. Click your order
3. Look for the `http_pub` URL (e.g., `abc123.clorecloud.net`)

Use `https://YOUR_HTTP_PUB_URL` instead of `localhost` in the examples below.

## Installation

```bash
pip install audiocraft
pip install scipy torchaudio
```

## MusicGen: Text-to-Music

### Basic Generation

```python
from audiocraft.models import MusicGen
import torchaudio

# Load the model
model = MusicGen.get_pretrained('facebook/musicgen-medium')
model.set_generation_params(duration=15)  # seconds

# Generate
prompt = "upbeat electronic dance music with heavy bass"
output = model.generate([prompt])

# Save the result as a 32 kHz WAV
audio = output[0].cpu()
torchaudio.save("music.wav", audio, sample_rate=32000)
```

### Multiple Prompts

```python
prompts = [
    "relaxing piano jazz",
    "epic orchestral cinematic",
    "acoustic guitar folk song",
    "aggressive heavy metal"
]

outputs = model.generate(prompts)

for i, output in enumerate(outputs):
    torchaudio.save(f"music_{i}.wav", output.cpu(), sample_rate=32000)
```

### Melody Conditioning

Use an existing melody as a reference:

```python
from audiocraft.models import MusicGen
import torchaudio

# Load the melody model
model = MusicGen.get_pretrained('facebook/musicgen-melody')
model.set_generation_params(duration=15)

# Load the reference melody
melody, sr = torchaudio.load("reference.wav")
melody = melody.unsqueeze(0).cuda()

# Generate conditioned on the melody
output = model.generate_with_chroma(
    ["jazz piano version"],
    melody,
    sr
)

torchaudio.save("jazz_version.wav", output[0].cpu(), sample_rate=32000)
```

### Continuation

Continue from existing audio, reusing the MusicGen model loaded above:

```python
import torchaudio

# Load the audio to continue from
audio, sr = torchaudio.load("start.wav")
audio = audio.unsqueeze(0).cuda()

# Continue generation from the prompt audio
output = model.generate_continuation(
    audio,
    prompt_sample_rate=sr,
    descriptions=["more energetic with drums"],
    progress=True
)

torchaudio.save("continued.wav", output[0].cpu(), sample_rate=32000)
```

## AudioGen: Sound Effects

```python
from audiocraft.models import AudioGen
import torchaudio

# Load the model
model = AudioGen.get_pretrained('facebook/audiogen-medium')
model.set_generation_params(duration=5)

# Generate sounds
prompts = [
    "dog barking in the distance",
    "rain on a window",
    "car engine starting",
    "crowd cheering at a concert"
]

outputs = model.generate(prompts)

for i, output in enumerate(outputs):
    torchaudio.save(f"sound_{i}.wav", output.cpu(), sample_rate=16000)
```

## Generation Parameters

```python
model.set_generation_params(
    duration=30,           # Length in seconds
    top_k=250,             # Top-k sampling
    top_p=0.0,             # Nucleus sampling (0 = disabled)
    temperature=1.0,       # Randomness
    cfg_coef=3.0,          # Classifier-free guidance
    two_step_cfg=False,    # Two-step CFG
)
```

### Parameter Effects

| Parameter   | Low value            | High value               |
| ----------- | -------------------- | ------------------------ |
| temperature | Conservative         | More creative            |
| top\_k      | More focused         | More varied              |
| cfg\_coef   | Loose interpretation | Sticks closely to prompt |
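One way to manage these trade-offs is to keep a few named presets and pass them to `set_generation_params`. The preset names and values below are illustrative choices, not part of the AudioCraft API:

```python
# Hypothetical presets mapping the parameter table to generation kwargs
PRESETS = {
    "focused":  {"temperature": 0.8, "top_k": 150, "cfg_coef": 4.0},
    "balanced": {"temperature": 1.0, "top_k": 250, "cfg_coef": 3.0},
    "varied":   {"temperature": 1.2, "top_k": 500, "cfg_coef": 2.0},
}

def apply_preset(model, name: str, duration: int = 15) -> None:
    """Apply a named preset to a loaded MusicGen model."""
    model.set_generation_params(duration=duration, **PRESETS[name])
```

For example, `apply_preset(model, "varied")` before generating should yield more diverse outputs at the cost of prompt adherence.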

## Batch Processing

```python
from audiocraft.models import MusicGen
import torchaudio
import os

model = MusicGen.get_pretrained('facebook/musicgen-medium')
model.set_generation_params(duration=15)

prompts = [
    {"name": "intro", "prompt": "mysterious ambient intro, slow build"},
    {"name": "verse", "prompt": "chill lo-fi hip hop beat"},
    {"name": "chorus", "prompt": "energetic electronic pop chorus"},
    {"name": "outro", "prompt": "calm piano fade out"},
]

output_dir = "./music_parts"
os.makedirs(output_dir, exist_ok=True)

for item in prompts:
    output = model.generate([item["prompt"]])
    torchaudio.save(
        os.path.join(output_dir, f"{item['name']}.wav"),
        output[0].cpu(),
        sample_rate=32000
    )
    print(f"Generated: {item['name']}")
```

## Streaming Generation

```python
from audiocraft.models import MusicGen
import torch

model = MusicGen.get_pretrained('facebook/musicgen-small')

# Create a streaming generator (availability may vary by audiocraft version)
streamer = model.get_streaming_generator(
    "upbeat pop music",
    max_gen_len=256  # tokens
)

all_tokens = []
for tokens in streamer:
    all_tokens.append(tokens)
    # Process this chunk...

# Decode everything at once
audio = model.decode(torch.cat(all_tokens, dim=-1))
```

## Stereo Generation

```python
from audiocraft.models import MusicGen
import torchaudio

# Load the stereo model
model = MusicGen.get_pretrained('facebook/musicgen-stereo-medium')
model.set_generation_params(duration=15)

output = model.generate(["cinematic orchestral score"])

# Output shape: [batch, 2, samples] for stereo

torchaudio.save("stereo_music.wav", output[0].cpu(), sample_rate=32000)
```

## API Server

```python
from fastapi import FastAPI
from fastapi.responses import FileResponse
from audiocraft.models import MusicGen
import torchaudio
import tempfile

app = FastAPI()
model = MusicGen.get_pretrained('facebook/musicgen-medium')

@app.post("/generate")
async def generate_music(prompt: str, duration: int = 10):
    model.set_generation_params(duration=duration)
    output = model.generate([prompt])

    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
        torchaudio.save(f.name, output[0].cpu(), sample_rate=32000)
        return FileResponse(f.name, media_type="audio/wav")

@app.post("/generate_with_melody")
async def generate_with_melody(prompt: str, melody_path: str, duration: int = 15):
    melody, sr = torchaudio.load(melody_path)

    model_melody = MusicGen.get_pretrained('facebook/musicgen-melody')
    model_melody.set_generation_params(duration=duration)

    output = model_melody.generate_with_chroma([prompt], melody.unsqueeze(0).cuda(), sr)

    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
        torchaudio.save(f.name, output[0].cpu(), sample_rate=32000)
        return FileResponse(f.name, media_type="audio/wav")

# Run with: uvicorn server:app --host 0.0.0.0 --port 8000
```
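The `/generate` endpoint above can be exercised with a small stdlib client. A sketch assuming the server runs locally on port 8000; `build_generate_url` and `request_music` are illustrative helpers, not part of the server:

```python
import urllib.request
from urllib.parse import urlencode

SERVER = "http://localhost:8000"  # replace with your http_pub URL

def build_generate_url(server: str, prompt: str, duration: int = 10) -> str:
    # FastAPI reads prompt and duration from the query string
    return f"{server}/generate?" + urlencode({"prompt": prompt, "duration": duration})

def request_music(prompt: str, duration: int = 10, out_path: str = "out.wav") -> str:
    # POST to the server and save the returned WAV bytes
    url = build_generate_url(SERVER, prompt, duration)
    req = urllib.request.Request(url, method="POST")
    with urllib.request.urlopen(req, timeout=600) as resp, open(out_path, "wb") as f:
        f.write(resp.read())
    return out_path
```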

## Prompt Engineering

### Effective Prompts

```python

# Style + instruments + mood
"upbeat jazz with saxophone and piano, happy and energetic"

# Genre reference
"lo-fi hip hop beat, chill study music, vinyl crackle"

# Cinematic style
"epic orchestral trailer music, building tension, dramatic"

# Specific elements
"acoustic guitar strumming pattern, folk song, campfire vibes"
```

### Poor Prompts

```python
# Too vague
"nice music"  # not specific enough

# Lyrics
"Happy birthday to you..."  # lyrics don't work

# Artist names
"like Beatles"  # the model doesn't understand artist references
```
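The "style + instruments + mood" pattern from the effective prompts can be captured in a tiny helper; the function is illustrative, not part of any library:

```python
def build_prompt(style: str, instruments=(), mood: str = "") -> str:
    """Compose a MusicGen prompt as style + instruments + mood."""
    prompt = style
    if instruments:
        prompt += " with " + " and ".join(instruments)
    if mood:
        prompt += ", " + mood
    return prompt
```

For example, `build_prompt("upbeat jazz", ("saxophone", "piano"), "happy and energetic")` reproduces the first effective prompt above.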

## Post-Processing

### Merging Clips

```python
from pydub import AudioSegment

intro = AudioSegment.from_wav("intro.wav")
verse = AudioSegment.from_wav("verse.wav")
chorus = AudioSegment.from_wav("chorus.wav")

# Crossfade between clips (1000 ms)
song = intro.append(verse, crossfade=1000)
song = song.append(chorus, crossfade=1000)

song.export("full_song.mp3", format="mp3")
```

### Adding Effects

```python
from pydub import AudioSegment
from pydub.effects import normalize, compress_dynamic_range

audio = AudioSegment.from_wav("generated.wav")

# Normalize volume
audio = normalize(audio)

# Apply dynamic-range compression
audio = compress_dynamic_range(audio)

# Fade in over 2 s, fade out over 3 s
audio = audio.fade_in(2000).fade_out(3000)

audio.export("processed.wav", format="wav")
```

## Memory Optimization

```python
import torch
from audiocraft.models import MusicGen

# Use a smaller model
model = MusicGen.get_pretrained('facebook/musicgen-small')

# Generate with mixed precision, then move the result off the GPU
with torch.cuda.amp.autocast():
    output = model.generate(["prompt"])

output = output.cpu()
torch.cuda.empty_cache()

# Between generations, the model can be offloaded to CPU to free VRAM
model.to('cpu')
```

## Performance

Approximate time to generate a 30-second clip on an A100:

| Model  | 30 s generation |
| ------ | --------------- |
| small  | \~10 s          |
| medium | \~25 s          |
| large  | \~45 s          |
| melody | \~30 s          |

## Comparison

| Feature      | MusicGen | Stable Audio | Riffusion |
| ------------ | -------- | ------------ | --------- |
| Quality      | Great    | Great        | Good      |
| Length       | 30 s     | 90 s         | Loops     |
| Melody input | Yes      | No           | No        |
| Open source  | Yes      | No           | Yes       |

## Troubleshooting

### Out of Memory

* Use a smaller model (small instead of large)
* Reduce the duration
* Clear the cache: `torch.cuda.empty_cache()`

### Poor Quality

* Use more specific prompts
* Try the medium or large model
* Adjust the temperature (0.8-1.2)

### Repetitive Outputs

* Increase top\_k
* Lower cfg\_coef
* Try different prompts
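For reproducible outputs across runs, fixing the random seeds before generating usually helps. A sketch using standard PyTorch seeding (the `set_seed` helper is illustrative):

```python
import random
import torch

def set_seed(seed: int = 42) -> None:
    # Seed Python, PyTorch CPU, and all CUDA devices for repeatable sampling
    random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
```

Call `set_seed(...)` before each `model.generate(...)` to make repeated runs with the same prompt comparable.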

## Cost Estimates

Typical rates on the CLORE.AI marketplace (as of 2024):

| GPU       | Hourly rate | Daily rate | 4-hour session |
| --------- | ----------- | ---------- | -------------- |
| RTX 3060  | \~$0.03     | \~$0.70    | \~$0.12        |
| RTX 3090  | \~$0.06     | \~$1.50    | \~$0.25        |
| RTX 4090  | \~$0.10     | \~$2.30    | \~$0.40        |
| A100 40GB | \~$0.17     | \~$4.00    | \~$0.70        |
| A100 80GB | \~$0.25     | \~$6.00    | \~$1.00        |
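As a sanity check on any quoted rate, the total cost is just hours times the hourly rate. A trivial helper; the rates used in the examples are illustrative, not guaranteed prices:

```python
def session_cost(hourly_rate_usd: float, hours: float) -> float:
    """Estimated rental cost in USD for a session of the given length."""
    return round(hourly_rate_usd * hours, 2)

# e.g. a 4-hour session at ~$0.25/h:
# session_cost(0.25, 4) -> 1.0
```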

Prices vary by provider and demand. Check the [CLORE.AI marketplace](https://clore.ai/marketplace) for current rates.

**Save money:**

* Use **Spot** pricing for flexible workloads (often 30-50% cheaper)
* Pay with **CLORE** to reduce fees

## Related Guides

* [Bark TTS](/guides/guides_v2-zh/yin-pin-yu-yu-yin/bark-tts.md) - speech generation
* [RVC Voice Cloning](/guides/guides_v2-zh/yin-pin-yu-yu-yin/rvc-voice-clone.md) - voice conversion
* [Demucs Separation](/guides/guides_v2-zh/yin-pin-yu-yu-yin/demucs-separation.md) - stem separation


