RunPod gives you a full Linux machine with SSH access, which means more control and cheaper hourly rates (~$1.64/hr vs Modal's ~$2.20/hr). The RunPod T2V script also includes Lightning LoRA pre-merging, which trains your character LoRA against a speed-optimized base model — producing LoRAs that generate videos in 4 steps instead of 30. The tradeoff: more manual setup, and you need to manage your own tmux sessions so training survives terminal disconnects.
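The rates and step counts above translate into concrete savings; here's a quick back-of-envelope comparison (the 10-hour run length is a hypothetical, not from the source):

```python
# Back-of-envelope comparison using the rates and step counts quoted above.
runpod_rate, modal_rate = 1.64, 2.20   # $/hr for an A100-80GB-class instance
train_hours = 10                        # hypothetical training run length

print(f"{train_hours}h run: ${runpod_rate * train_hours:.2f} on RunPod "
      f"vs ${modal_rate * train_hours:.2f} on Modal "
      f"(${(modal_rate - runpod_rate) * train_hours:.2f} saved)")

# Lightning LoRA inference: 4 denoising steps instead of 30 per video
print(f"Inference speedup: {30 / 4:.1f}x fewer denoising steps")
```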
You need: a RunPod account (runpod.io) with credits loaded, a HuggingFace token (same as Modal quickstart), and your dataset ready on your local PC.
In the RunPod console, deploy a new GPU pod: A100-80GB (required for 14B models). Set container disk to 50GB and volume disk to 200GB. Under environment variables, add HF_TOKEN with your HuggingFace token. Start the pod and wait for it to be ready.
Open Jupyter Lab from the pod's dashboard (the easiest way to upload). Navigate to /workspace/ and upload these files:
- train_runpod_t2v.py — the training script
- setup_runpod.sh — the environment setup script
- wan21-dataset-config.toml — dataset configuration
- datasets/ folder (with Images/ and Videos/ subfolders + captions)

Alternatively, use SCP from PowerShell:
scp -P PORT -i ~/.ssh/id_ed25519 train_runpod_t2v.py root@IP:/workspace/
scp -P PORT -i ~/.ssh/id_ed25519 -r datasets root@IP:/workspace/
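However you upload, it's worth confirming everything landed before kicking off setup. A minimal sketch of a sanity check, using the expected paths from the upload list above (the helper function and script name are illustrative, not part of the project):

```python
#!/usr/bin/env python3
"""Sanity-check the /workspace layout before running setup.

Illustrative sketch: the expected paths come from the upload list;
the check takes a root argument so it can be pointed anywhere.
"""
from pathlib import Path

EXPECTED = [
    "train_runpod_t2v.py",
    "setup_runpod.sh",
    "wan21-dataset-config.toml",
    "datasets/Images",
    "datasets/Videos",
]

def missing_paths(root: str) -> list[str]:
    """Return the expected paths that do not exist under root."""
    base = Path(root)
    return [p for p in EXPECTED if not (base / p).exists()]

if __name__ == "__main__":
    missing = missing_paths("/workspace")
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All files in place")
```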
Open a terminal in Jupyter (or SSH in) and run:
bash /workspace/setup_runpod.sh
This clones musubi-tuner, installs dependencies, and downloads ~35GB of model weights; expect it to take about 15 minutes on a fast connection.
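The setup script itself ships with the project, but based on the description above its three stages can be sketched as shell functions. The repo URL and model ID here are assumptions for illustration; the real script may pin different versions and paths:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of setup_runpod.sh's three stages.
set -euo pipefail

clone_trainer() {
    # Stage 1: fetch the musubi-tuner training framework
    git clone https://github.com/kohya-ss/musubi-tuner.git /workspace/musubi-tuner
}

install_deps() {
    # Stage 2: install Python dependencies into the pod's environment
    pip install -r /workspace/musubi-tuner/requirements.txt
}

download_models() {
    # Stage 3: pull ~35GB of model weights; relies on the HF_TOKEN
    # env var set on the pod (model ID is an assumption)
    huggingface-cli download Wan-AI/Wan2.1-T2V-14B \
        --local-dir /workspace/models
}

# Uncomment to run all three stages in order:
# clone_trainer && install_deps && download_models
```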