Install HeartMuLa
Run the open source AI music generator on your own hardware
System Requirements
| Component | Minimum | Recommended |
|---|---|---|
| GPU | 16GB VRAM (with FP16 quantization) | 24GB+ VRAM |
| RAM | 32GB | 64GB |
| Storage | 20GB free space | 50GB+ SSD |
| OS | Linux / Windows 10+ | Ubuntu 22.04 / Windows 11 |
| Python | 3.10 | 3.10 - 3.11 |
| CUDA | 11.8 | 12.1+ |
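Before installing, you can sanity-check your machine against the table above. Here is a minimal sketch using PyTorch (it assumes PyTorch is already installed; see the Python Package section below for install commands):

```python
# Quick pre-install check against the requirements table (minimal sketch;
# assumes PyTorch is already installed).
import sys
import torch

print(f"Python: {sys.version.split()[0]}")             # want 3.10 - 3.11
print(f"CUDA available: {torch.cuda.is_available()}")  # HeartMuLa needs an NVIDIA GPU
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"GPU: {props.name} ({vram_gb:.0f} GB VRAM)")  # want 16 GB+, ideally 24 GB+
    print(f"CUDA runtime: {torch.version.cuda}")         # want 11.8+
```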
Supported GPUs
- Optimal (24GB+ VRAM), e.g. RTX 4090 or A100
- Recommended (16-24GB VRAM)
- Minimum (16GB with quantization), e.g. RTX 4080
Cloud GPU Services
Don't have a powerful GPU? Rent one from these cloud providers
RunPod
GPU cloud platform with easy deployment and competitive pricing
Features:
- RTX 4090 and A100 available
- Pre-built templates
- Persistent storage
- Serverless option
Vast.ai
GPU rental marketplace with some of the lowest prices
Features:
- Bid-based pricing
- Wide GPU selection
- Docker support
- Community instances
Lambda Labs
ML-focused cloud with high-end GPUs
Features:
- H100 and A100 available
- Pre-installed ML stack
- On-demand and reserved
- Enterprise support
Google Colab
Free tier available, good for testing
Features:
- Free T4 GPU tier
- Jupyter notebook
- Google Drive integration
- A100 on Pro+ plan
Paperspace
Developer-friendly GPU cloud platform
Features:
- Gradient notebooks
- A100 available
- Persistent storage
- Team collaboration
Installation Methods
ComfyUI Workflow (Easy)
Visual node-based interface for music generation
Steps:
1. Install ComfyUI following the official guide
2. Install the HeartMuLa custom nodes from ComfyUI Manager
3. Download the HeartMuLa model from Hugging Face (a scripted download sketch follows the commands below)
4. Place the model in ComfyUI/models/heartmula/
5. Load the example workflow and start creating
Commands:
```
cd ComfyUI/custom_nodes
git clone https://github.com/m-a-p/ComfyUI-HeartMuLa
pip install -r ComfyUI-HeartMuLa/requirements.txt
```
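If you prefer to script step 3, the model can be fetched with huggingface_hub. This is only a sketch: the repo id below is an assumption, so check the actual HeartMuLa model page on Hugging Face for the correct id.

```python
# Sketch of step 3: fetch the model into ComfyUI's model folder.
# NOTE: the repo id is a placeholder assumption; use the real
# HeartMuLa repo id from Hugging Face.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="m-a-p/HeartMuLa",             # hypothetical repo id
    local_dir="ComfyUI/models/heartmula",  # target path from step 4
)
```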
Python Package (Medium)
Direct Python API for programmatic access
Steps:
1. Create a virtual environment
2. Install PyTorch with CUDA support
3. Install HeartMuLa from pip
4. Download the model weights
5. Run an inference script (see the sketch after the commands below)
Commands:
```
python -m venv heartmula-env
source heartmula-env/bin/activate
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install heartmula
heartmula download --model 3b
```
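The inference script in step 5 depends on the package's actual API, which may differ from this sketch. The HeartMuLa class name, the from_pretrained constructor, and the generate signature below are assumptions for illustration:

```python
# Hypothetical inference sketch; the real heartmula API may differ.
import torch
from heartmula import HeartMuLa  # class name is an assumption

# Load the 3B weights downloaded above in half precision on the GPU.
model = HeartMuLa.from_pretrained("3b", device="cuda", dtype=torch.float16)

audio = model.generate(
    lyrics="[verse] City lights are calling out my name...",
    tags="pop, upbeat, female vocals",  # style prompt
    duration=180,                       # seconds; generation time scales with length
)
audio.save("song.wav")
```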
Docker Container (Advanced)
Containerized deployment for production use
Steps:
1. Install Docker and the NVIDIA Container Toolkit
2. Pull the official HeartMuLa Docker image
3. Run the container with GPU access
4. Access the web UI or API endpoint (a quick reachability check follows the commands below)
Commands:
```
docker pull heartmula/heartmula:latest
docker run --gpus all -p 7860:7860 heartmula/heartmula:latest
```
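Once the container is up, the web UI should be reachable on port 7860 (mapped by the -p flag above, so http://localhost:7860 in a browser). A minimal reachability check, assuming the requests package is installed:

```python
# Minimal check that the containerized web UI is responding.
import requests

resp = requests.get("http://localhost:7860", timeout=10)
print(resp.status_code)  # 200 means the UI is up
```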
Frequently Asked Questions
Can I run HeartMuLa with less than 24GB VRAM?
Yes, with FP16 quantization you can run on 16GB VRAM GPUs like RTX 4080. Quality may be slightly reduced.
Does HeartMuLa work on Mac with Apple Silicon?
Currently no. HeartMuLa requires CUDA (NVIDIA GPU). macOS with Apple Silicon is not supported yet.
How long does it take to generate a song?
On RTX 4090, a 3-minute song takes about 2-3 minutes. Generation time scales with song duration.
Can I use the generated music commercially?
Yes! HeartMuLa is Apache 2.0 licensed. You own full rights to any music you generate.