Local Deployment Guide

Install HeartMuLa

Run the open source AI music generator on your own hardware

24GB VRAM · RTX 3090+ · Apache 2.0 · ~12GB

System Requirements

Component | Minimum                    | Recommended
GPU       | 16GB VRAM (FP16 quantized) | 24GB+ VRAM
RAM       | 32GB                       | 64GB
Storage   | 20GB free space            | 50GB+ SSD
OS        | Linux / Windows 10+        | Ubuntu 22.04 / Windows 11
Python    | 3.10                       | 3.10 - 3.11
CUDA      | 11.8                       | 12.1+
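
Before installing, you can sanity-check a machine against the table above. The sketch below uses only the Python standard library: it checks the interpreter version against the supported 3.10 - 3.11 range and looks for `nvidia-smi` as a rough proxy for a working NVIDIA driver (it does not verify VRAM or CUDA version).

```python
import shutil
import sys


def check_environment(min_python=(3, 10), max_python=(3, 11)):
    """Return a list of problems found against the requirements table."""
    problems = []
    if not (min_python <= sys.version_info[:2] <= max_python):
        problems.append(
            f"Python {sys.version_info[0]}.{sys.version_info[1]} "
            "is outside the supported 3.10 - 3.11 range"
        )
    if shutil.which("nvidia-smi") is None:
        problems.append("nvidia-smi not found: NVIDIA driver may be missing")
    return problems


if __name__ == "__main__":
    for problem in check_environment():
        print("WARNING:", problem)
```

An empty list means both checks passed; warnings are advisory only.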

Supported GPUs

Optimal (24GB+ VRAM)

  • NVIDIA A100 80GB (80GB VRAM)
  • NVIDIA A100 40GB (40GB VRAM)
  • NVIDIA H100 (80GB VRAM)
  • RTX 4090 (24GB VRAM)
  • RTX 3090 (24GB VRAM)
  • RTX 3090 Ti (24GB VRAM)
  • RTX A6000 (48GB VRAM)
  • RTX A5000 (24GB VRAM)
  • RTX 5000 Ada (32GB VRAM)

Recommended (16-24GB)

  • RTX 4080 (16GB VRAM)
  • RTX 4070 Ti Super (16GB VRAM)
  • RTX A4500 (20GB VRAM)

Minimum (16GB with quantization)

  • RTX 4080 Super (16GB VRAM)
  • RTX 4070 Ti (16GB VRAM)

Cloud GPU Services

Don't have a powerful GPU? Rent one from these cloud providers.

Recommended

RunPod

GPU cloud platform with easy deployment and competitive pricing

Price: $0.39 - $1.99/hr

Features:

  • RTX 4090 and A100 available
  • Pre-built templates
  • Persistent storage
  • Serverless option

Vast.ai

Marketplace for GPU rentals with lowest prices

Price: $0.20 - $2.00/hr

Features:

  • Bid-based pricing
  • Wide GPU selection
  • Docker support
  • Community instances

Lambda Labs

ML-focused cloud with high-end GPUs

Price: $0.50 - $2.49/hr

Features:

  • H100 and A100 available
  • Pre-installed ML stack
  • On-demand and reserved
  • Enterprise support

Google Colab

Free tier available, good for testing

Price: Free - $49.99/mo

Features:

  • Free T4 GPU tier
  • Jupyter notebook
  • Google Drive integration
  • A100 on Pro+ plan

Paperspace

Developer-friendly GPU cloud platform

Price: $0.45 - $3.09/hr

Features:

  • Gradient notebooks
  • A100 available
  • Persistent storage
  • Team collaboration

Installation Methods

ComfyUI Workflow

Easy

Visual node-based interface for music generation

Steps:

  1. Install ComfyUI following the official guide
  2. Install the HeartMuLa custom nodes from ComfyUI Manager
  3. Download the HeartMuLa model from Hugging Face
  4. Place the model in ComfyUI/models/heartmula/
  5. Load the example workflow and start creating

Commands:

cd ComfyUI/custom_nodes
git clone https://github.com/m-a-p/ComfyUI-HeartMuLa
pip install -r ComfyUI-HeartMuLa/requirements.txt

Python Package

Medium

Direct Python API for programmatic access

Steps:

  1. Create a virtual environment
  2. Install PyTorch with CUDA support
  3. Install HeartMuLa from pip
  4. Download the model weights
  5. Run an inference script

Commands:

python -m venv heartmula-env
source heartmula-env/bin/activate
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install heartmula
heartmula download --model 3b
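
A minimal inference sketch is shown below. Note that the `heartmula` Python API is not documented here: the `load_model` and `generate` calls and their arguments are assumptions, not the package's confirmed interface, so check the project's README for the real entry points. The `vram_mode` helper simply encodes the requirements table (24GB+ runs full FP16, 16GB needs quantization).

```python
def vram_mode(vram_gb: float) -> str:
    """Pick a load mode from available VRAM, per the requirements table:
    24GB+ runs full precision, 16GB+ runs quantized."""
    if vram_gb >= 24:
        return "full"
    if vram_gb >= 16:
        return "quantized"
    raise RuntimeError("HeartMuLa needs at least 16GB VRAM")


if __name__ == "__main__":
    # Hypothetical API sketch: function names and parameters below are
    # assumptions for illustration, not the documented heartmula interface.
    import heartmula  # installed via `pip install heartmula`

    model = heartmula.load_model("3b")  # weights from `heartmula download`
    audio = model.generate(lyrics="...", tags="pop, upbeat")
    audio.save("song.wav")
```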

Docker Container

Advanced

Containerized deployment for production use

Steps:

  1. Install Docker and the NVIDIA Container Toolkit
  2. Pull the official HeartMuLa Docker image
  3. Run the container with GPU access
  4. Access the web UI or API endpoint

Commands:

docker pull heartmula/heartmula:latest
docker run --gpus all -p 7860:7860 heartmula/heartmula:latest
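
The container can take a while to load the model before the web UI on port 7860 responds. A small stdlib helper like the one below can poll the port from a deployment script until it accepts connections; it only checks TCP reachability, not application health.

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll until a TCP port accepts connections (e.g. the container's
    web UI on 7860), or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(1.0)
    return False
```

Example: `wait_for_port("localhost", 7860, timeout=300)` after `docker run`, then open the UI.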

Download from Hugging Face

Get the HeartMuLa 3B model weights directly from Hugging Face. Apache 2.0 licensed for commercial use.

Frequently Asked Questions

Can I run HeartMuLa with less than 24GB VRAM?

Yes, with FP16 quantization you can run on 16GB VRAM GPUs like RTX 4080. Quality may be slightly reduced.

Does HeartMuLa work on Mac with Apple Silicon?

Currently no. HeartMuLa requires CUDA (NVIDIA GPU). macOS with Apple Silicon is not supported yet.

How long does it take to generate a song?

On RTX 4090, a 3-minute song takes about 2-3 minutes. Generation time scales with song duration.
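
As a rule of thumb based on the timing above (about 2-3 minutes of compute per 3 minutes of audio on an RTX 4090, scaling linearly), you can estimate a wall-clock range for a given song length. This is an extrapolation, not a benchmark, and other GPUs will differ.

```python
def estimated_minutes(song_minutes: float) -> tuple[float, float]:
    """Rough RTX 4090 generation-time range for a song of the given
    length, assuming 2-3 minutes of compute per 3 minutes of audio."""
    return (song_minutes * 2 / 3, song_minutes * 3 / 3)
```

For example, `estimated_minutes(6)` suggests roughly 4 to 6 minutes for a 6-minute song.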

Can I use the generated music commercially?

Yes! HeartMuLa is Apache 2.0 licensed. You own full rights to any music you generate.

Need Help Getting Started?

Try HeartMuLa online first, or browse the style tags to learn what's possible.