Quickstart: Wan 2.2 T2V on RunPod

Why RunPod for T2V?

RunPod gives you a full Linux machine with SSH access, which means more control and a cheaper hourly rate (~$1.64/hr vs Modal's ~$2.20/hr). The RunPod T2V script also supports Lightning LoRA pre-merging: the speed-distillation LoRA is merged into the base model before training, so your character LoRA is trained against a speed-optimized base and the resulting LoRAs generate videos in 4 steps instead of 30. The tradeoffs: more manual setup, and you need to manage your own tmux sessions so training survives terminal disconnects.

Prerequisites

You need: a RunPod account (runpod.io) with credits loaded, a HuggingFace token (same as Modal quickstart), and your dataset ready on your local PC.
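Before paying for GPU time, it can be worth confirming the token works locally. A minimal sketch, assuming the huggingface_hub CLI is installed on your PC (`pip install -U huggingface_hub`); `hf_xxx` is a placeholder for your real token:

```shell
# Optional local sanity check before renting a pod.
# hf_xxx is a placeholder; paste your actual HuggingFace token.
export HF_TOKEN=hf_xxx
huggingface-cli login --token "$HF_TOKEN"   # stores the token for the CLI
huggingface-cli whoami                      # should print your HF username
```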

Step 1: Deploy a Pod

In the RunPod console, deploy a new GPU pod with an A100-80GB (the 14B models require 80GB of VRAM). Set container disk to 50GB and volume disk to 200GB. Under environment variables, add HF_TOKEN with your HuggingFace token. Start the pod and wait for it to be ready.
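Once the pod reports ready, a quick check from its terminal confirms the deployment matches the settings above. These are standard Linux/NVIDIA commands, not RunPod-specific tooling:

```shell
# Confirm the GPU, the HF_TOKEN env var, and the volume disk size.
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader  # expect an A100 with ~80GB
echo "${HF_TOKEN:+HF_TOKEN is set}"   # prints only if the env var made it into the pod
df -h /workspace                      # volume disk should show roughly 200G
```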

Step 2: Upload Your Files

Open JupyterLab from the pod's dashboard (the easiest way to upload). Navigate to /workspace/ and upload these files:

  1. train_runpod_t2v.py — the training script
  2. setup_runpod.sh — the environment setup script
  3. wan21-dataset-config.toml — dataset configuration
  4. Your datasets/ folder (with Images/ and Videos/ subfolders + captions)

Alternatively, use SCP from PowerShell, substituting PORT and IP with your pod's SSH connection details from the RunPod console:

scp -P PORT -i ~/.ssh/id_ed25519 train_runpod_t2v.py root@IP:/workspace/
scp -P PORT -i ~/.ssh/id_ed25519 -r datasets root@IP:/workspace/

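The per-file commands above can also be batched into one loop. A sketch using the same key; PORT and IP are placeholders you copy from the pod's SSH connection details:

```shell
PORT=12345        # placeholder: your pod's exposed SSH port
IP=203.0.113.10   # placeholder: your pod's public IP
# Push all three config/script files, then the dataset folder.
for f in train_runpod_t2v.py setup_runpod.sh wan21-dataset-config.toml; do
  scp -P "$PORT" -i ~/.ssh/id_ed25519 "$f" "root@$IP:/workspace/"
done
scp -P "$PORT" -i ~/.ssh/id_ed25519 -r datasets "root@$IP:/workspace/"
```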
Step 3: Run the Setup Script

Open a terminal in Jupyter (or SSH in) and run:

bash /workspace/setup_runpod.sh

This clones musubi-tuner, installs dependencies, and downloads ~35GB of model weights. Expect it to take about 15 minutes on a fast connection.
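After it finishes, a quick check that everything landed. The musubi-tuner path follows from the clone step; where the script stores the weights is an assumption, so adjust the path if needed:

```shell
# Verify the trainer repo cloned and core dependencies import.
ls /workspace/musubi-tuner                          # repo from the clone step
python -c "import torch; print(torch.__version__)"  # key dependency importable
# Total disk use should reflect the ~35GB of downloaded weights.
du -sh /workspace
```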

Step 4: Fix Filenames (Critical!)