Local Deployment Guide
Installing HeartMuLa
Run the open-source AI music generator on your own hardware
24GB VRAM · RTX 3090+ · Apache 2.0 · ~12GB
System Requirements
| Component | Minimum | Recommended |
|---|---|---|
| GPU | 16GB VRAM (with FP16 quantization) | 24GB+ VRAM |
| RAM | 32GB | 64GB |
| Storage | 20GB free space | 50GB+ SSD |
| OS | Linux / Windows 10+ | Ubuntu 22.04 / Windows 11 |
| Python | 3.10 | 3.10 - 3.11 |
| CUDA | 11.8 | 12.1+ |
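The requirements above can be pre-checked with a short script. A minimal sketch: the thresholds mirror the minimum column, and the `nvidia-smi` lookup is only a rough proxy for a working CUDA/driver install, not a VRAM check.

```python
# Pre-flight check against the requirements table above.
# Thresholds are the "Minimum" column; adjust for your setup.
import shutil
import sys

def check_requirements(min_python=(3, 10), min_free_gb=20):
    """Return a dict mapping requirement name -> bool."""
    free_gb = shutil.disk_usage(".").free / 1e9
    return {
        "python": sys.version_info[:2] >= min_python,
        "disk": free_gb >= min_free_gb,
        # nvidia-smi on PATH is a cheap proxy for an NVIDIA driver
        "nvidia_driver": shutil.which("nvidia-smi") is not None,
    }

if __name__ == "__main__":
    for name, ok in check_requirements().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

Run it inside the environment you plan to install into, since the Python version check reflects the interpreter executing the script.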
Supported GPUs
Best (24GB+ VRAM)
- NVIDIA A100 80GB (80GB VRAM)
- NVIDIA A100 40GB (40GB VRAM)
- NVIDIA H100 (80GB VRAM)
- RTX 4090 (24GB VRAM)
- RTX 3090 (24GB VRAM)
- RTX 3090 Ti (24GB VRAM)
- RTX A6000 (48GB VRAM)
- RTX A5000 (24GB VRAM)
- RTX 5000 Ada (32GB VRAM)
Recommended (16-24GB)
- RTX 4080 (16GB VRAM)
- RTX 4070 Ti Super (16GB VRAM)
- RTX A4500 (20GB VRAM)
Minimum (16GB, quantization required)
- RTX 4080 Super (16GB VRAM)
- RTX 4070 Ti (16GB VRAM)
Cloud GPU Services
No powerful GPU? Rent one from these cloud providers.
RunPod (Recommended)
GPU cloud platform with easy deployment and competitive pricing
Price: $0.39 - $1.99/hr
Features:
- RTX 4090 and A100 available
- Pre-built templates
- Persistent storage
- Serverless option
Vast.ai
Marketplace for GPU rentals with lowest prices
Price: $0.20 - $2.00/hr
Features:
- Bid-based pricing
- Wide GPU selection
- Docker support
- Community instances
Lambda Labs
ML-focused cloud with high-end GPUs
Price: $0.50 - $2.49/hr
Features:
- H100 and A100 available
- Pre-installed ML stack
- On-demand and reserved
- Enterprise support
Google Colab
Free tier available, good for testing
Price: Free - $49.99/mo
Features:
- Free T4 GPU tier
- Jupyter notebook
- Google Drive integration
- A100 on Pro+ plan
Paperspace
Developer-friendly GPU cloud platform
Price: $0.45 - $3.09/hr
Features:
- Gradient notebooks
- A100 available
- Persistent storage
- Team collaboration
Installation Methods
ComfyUI Workflow
Difficulty: Easy. Visual node-based interface for music generation.
Steps:
1. Install ComfyUI following the official guide
2. Install the HeartMuLa custom nodes from ComfyUI Manager
3. Download the HeartMuLa model from Hugging Face
4. Place the model in ComfyUI/models/heartmula/
5. Load the example workflow and start creating
Commands:
cd ComfyUI/custom_nodes
git clone https://github.com/m-a-p/ComfyUI-HeartMuLa
pip install -r ComfyUI-HeartMuLa/requirements.txt

Python Package
Difficulty: Medium. Direct Python API for programmatic access.
Steps:
1. Create a virtual environment
2. Install PyTorch with CUDA support
3. Install HeartMuLa from pip
4. Download the model weights
5. Run the inference script
Commands:
python -m venv heartmula-env
source heartmula-env/bin/activate
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install heartmula
heartmula download --model 3b

Docker Container
Difficulty: Advanced. Containerized deployment for production use.
Steps:
1. Install Docker and the NVIDIA Container Toolkit
2. Pull the official HeartMuLa Docker image
3. Run the container with GPU access
4. Access the web UI or API endpoint
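For a longer-lived deployment, the run step above can also be expressed as a Compose file. A hypothetical sketch: only the image name and port come from this guide; the GPU reservation follows the standard Docker Compose device-reservation syntax.

```yaml
# docker-compose.yml (sketch); requires the NVIDIA Container Toolkit
services:
  heartmula:
    image: heartmula/heartmula:latest
    ports:
      - "7860:7860"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```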
Commands:
docker pull heartmula/heartmula:latest
docker run --gpus all -p 7860:7860 heartmula/heartmula:latest

FAQ
Can I run HeartMuLa with less than 24GB of VRAM?
Yes. With FP16 quantization it can run on 16GB GPUs such as the RTX 4080, with a possible slight drop in quality.
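The rough arithmetic behind this answer can be sketched as follows. Assumptions: the 3B parameter count is inferred from the `--model 3b` weights in the install commands, and the 1.5x overhead factor for activations and caches is an illustrative guess, not a measured value.

```python
# Back-of-envelope weight-memory estimate per precision.
# params_billion=3 is inferred from "--model 3b"; overhead=1.5
# (activations/caches) is an assumption for illustration.
def weight_memory_gb(params_billion, bytes_per_param, overhead=1.5):
    return params_billion * bytes_per_param * overhead

for name, bpp in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    print(f"{name}: ~{weight_memory_gb(3, bpp):.1f} GB")
# FP32: ~18.0 GB, FP16: ~9.0 GB, INT8: ~4.5 GB
```

Halving the bytes per parameter is why FP16 brings the footprint within reach of 16GB cards, leaving headroom for activations.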
Does HeartMuLa support Mac (Apple Silicon)?
Not at this time. HeartMuLa requires CUDA (an NVIDIA GPU); macOS on Apple Silicon is not yet supported.
How long does it take to generate a song?
On an RTX 4090, generating a 3-minute song takes roughly 2-3 minutes. Generation time scales linearly with song length.
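Given that linear scaling, a back-of-envelope estimator is straightforward. The rate used here (2.5 minutes of compute per 3 minutes of audio) is just the midpoint of the range quoted above for an RTX 4090; other GPUs will differ.

```python
# Linear generation-time estimate; the default rate is the midpoint
# of the "2-3 minutes per 3-minute song on an RTX 4090" figure above.
def estimated_generation_minutes(song_minutes, rate=2.5 / 3):
    return song_minutes * rate

print(round(estimated_generation_minutes(5), 1))  # → 4.2
```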
Can the generated music be used commercially?
Yes. HeartMuLa is released under the Apache 2.0 license, and you fully own the copyright to the music you generate.