Claude Code is Anthropic's command-line coding agent. Timothy and Minta developed this agentic training workflow together: Timothy architected the Claude Code integration approach, and Minta did the bulk of the vibe coding to build out the reference documentation and script templates. Instead of manually editing config variables, running commands, and debugging errors yourself, you tell Claude Code what you want in plain English, and it reads, writes, and executes everything for you. This is particularly powerful for this pipeline because the training scripts have dozens of interdependent parameters, where a single mismatch (wrong precision, wrong timestep boundary, wrong task flag) silently ruins a run. Claude Code validates all of it before launching.

What You Need

On your PC (both paths):

  1. Node.js 18+ — Download from nodejs.org. Verify: node --version
  2. Claude Code — Install globally: npm install -g @anthropic-ai/claude-code
  3. Anthropic API key — From console.anthropic.com. Set it: $env:ANTHROPIC_API_KEY = "sk-ant-..."
  4. Your dataset ready: datasets/CharacterName/ with Images/ and Videos/ subfolders, every file captioned

For the Modal path, additionally: Modal CLI installed (pip install modal) and authenticated (python -m modal setup), and a HuggingFace secret created (python -m modal secret create my-huggingface-secret HF_TOKEN=hf_xxx)

For the RunPod path, additionally: a RunPod account with credits, an A100-80GB pod deployed, and SSH access configured
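Before starting a session, it helps to confirm the CLI tools above are actually on your PATH. A minimal sketch (the tool names come from the setup steps above; the function name is illustrative):

```python
import shutil

# Tools the steps above assume are installed and on PATH.
REQUIRED = ["node", "claude", "modal"]

def preflight(tools=REQUIRED):
    """Return {tool: resolved path or None} so missing installs are obvious."""
    return {t: shutil.which(t) for t in tools}

if __name__ == "__main__":
    for tool, path in preflight().items():
        print(f"{tool}: {path or 'MISSING'}")
```

If anything prints MISSING, revisit the corresponding install step before asking Claude Code to launch a run.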

The Reference Doc

The key to making Claude Code effective is giving it the right context. The WAN_LORA_CLAUDE_CODE_REFERENCE.md file (attached to this project) is the single source of truth — it contains every parameter, every platform procedure, every known bug, and the full validation checklist. When Claude Code reads this file, it knows exactly how to configure a training run correctly.

Place this file in your working directory alongside your training scripts.

How Claude Code Helps

Instead of manually editing Python config blocks and hoping you didn't introduce a bug, you have conversations like:

you: Train annika on Wan 2.2 I2V with Lightning on Modal. Use dim 24 for high noise and dim 16 for low noise.

claude: [reads reference doc, updates OUTPUT_NAME, verifies fp16 precision, confirms I2V timestep boundary is 900 not 875, checks --i2v flag on latent caching, validates --preserve_distribution_shape is present, runs upload and dispatches training]

Or for debugging:

you: Training finished but outputs look hazy. Here's my config.

claude: [checks reference doc diagnostic table, identifies low-noise expert undertrained based on epoch count, suggests extending low-noise training with --network_weights resume from best checkpoint]
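The pre-launch checks Claude Code performs can be pictured as a simple validation pass over the config. This is a hypothetical sketch: the key names (`mixed_precision`, `timestep_boundary`, etc.) and the `validate_config` helper are illustrative, not the pipeline's real identifiers; the specific values (fp16, boundary 900 for I2V, --i2v, --preserve_distribution_shape) come from the examples above.

```python
# Illustrative pre-launch checks mirroring what Claude Code verifies against
# the reference doc. Key names are hypothetical, not the pipeline's real ones.
I2V_TIMESTEP_BOUNDARY = 900  # per the example above: 900 for I2V, not 875

def validate_config(cfg: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks launchable."""
    problems = []
    if cfg.get("mixed_precision") != "fp16":
        problems.append("precision should be fp16")
    if cfg.get("task") == "i2v":
        if cfg.get("timestep_boundary") != I2V_TIMESTEP_BOUNDARY:
            problems.append("I2V timestep boundary should be 900, not "
                            f"{cfg.get('timestep_boundary')}")
        if "--i2v" not in cfg.get("latent_cache_flags", []):
            problems.append("latent caching is missing the --i2v flag")
    if "--preserve_distribution_shape" not in cfg.get("train_flags", []):
        problems.append("--preserve_distribution_shape is missing")
    return problems

cfg = {
    "task": "i2v",
    "mixed_precision": "fp16",
    "timestep_boundary": 875,  # the kind of silent mismatch described above
    "latent_cache_flags": ["--i2v"],
    "train_flags": ["--preserve_distribution_shape"],
}
print(validate_config(cfg))  # flags the 875 boundary before launch
```

The point of the design: every problem is collected and reported at once, before any GPU time is spent, rather than surfacing one at a time as failed runs.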

Path A: Modal (Serverless)

Modal is the simpler path — everything runs as Python functions dispatched to cloud GPUs. No SSH, no tmux, no manual environment setup.
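The shape of a Modal dispatch looks roughly like this. The app name, image packages, and train() body are placeholders; only the Modal API pattern (modal.App, @app.function with gpu/timeout/secrets, .remote()) is real, and the secret name matches the one created in the setup steps above.

```python
import modal

# Placeholder scaffold: names and the function body are illustrative.
app = modal.App("wan-lora-train")

image = modal.Image.debian_slim().pip_install("torch", "accelerate")

@app.function(
    image=image,
    gpu="A100-80GB",                       # same GPU class as the RunPod path
    timeout=6 * 60 * 60,                   # long-running training job
    secrets=[modal.Secret.from_name("my-huggingface-secret")],
)
def train(config: dict):
    # The actual training entrypoint would run here on the cloud GPU.
    print(f"training {config['output_name']} on a cloud A100")

@app.local_entrypoint()
def main():
    # Dispatched from your PC; no SSH, no tmux.
    train.remote({"output_name": "annika_i2v"})
```

This is what "everything runs as Python functions" means in practice: the GPU, environment, and secrets are declared in the decorator, and Claude Code only has to edit and invoke ordinary Python.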