# How to Dub Your Videos Into Any Language with AI
A 3-Step Setup Using Antigravity and Claude Code
**by Women Lead AI** womenlead.ai
---
**Your videos are only reaching half the world.**
You make a video. You spend time on it. You post it. And it reaches everyone who happens to speak English.
There are hundreds of millions of people on TikTok, YouTube, and Instagram who don't. Polish audiences. Spanish audiences. French, Hindi, Arabic. They're enormous, they're underserved by English-speaking creators, and AI can now put your voice into their language in about 10 minutes.
This guide shows you exactly how to do it. Three steps. No coding. No installing Python. No translators. Just your video, two free tools, and your ElevenLabs API key.
What you'll be able to do:
✓ Dub any video into 30+ languages, preserving your voice
✓ Get a corrected translation with the right grammatical gender
✓ Have Claude Code handle every technical step for you
✓ Go from one English video to five language versions in an afternoon
---
**Before You Start: One Thing to Sort**
You need an **ElevenLabs account** to do the actual dubbing. ElevenLabs is the AI that takes your video, transcribes it, translates it, and re-voices it in your language of choice. It uses your original voice, not a robot.
Go to **elevenlabs.io**, create a free account, then head to **Profile → API Keys** and copy your key. It's a long string of letters and numbers. Keep it somewhere safe. You'll paste it in during Step 3.
The free ElevenLabs tier gives you enough credits to test.
For regular dubbing across multiple languages, a paid plan works out to very little per video.
---
**The 3-Step Setup**
**Step 1: Download Antigravity**
Go to **<https://antigravity.google/>** and download the app.
Antigravity is a pre-built AI workspace. It comes with everything you need already installed, including Python and all the video tools. You don't set any of that up yourself.
Install it and open it. It looks like a code editor, but you won't be writing any code.
**Step 2: Install the Claude Code Extension**
Inside Antigravity, open the Extensions panel (the icon that looks like four squares on the left sidebar).
Search for **Claude Code** and install the one by Anthropic. It has the orange and white Anthropic logo.
Once it's installed, sign in with your Anthropic account (or create one at anthropic.com). Claude Code is the AI assistant that will run the dubbing for you.
**Step 3: Add Your Video and the Dubbing Files**
Create a folder inside Antigravity for your dubbing project. Drop your video file into it.
Now save the skill script. Inside your project folder, create a subfolder called `.claude`, and inside that, another subfolder called `commands`. Save the script below as `video-dubbing.md` inside `.claude/commands/`.
Claude Code scans `.claude/commands/` to find custom skills. Without this, it won't recognise the dubbing command.
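If you prefer a terminal, the same layout can be created in one command on macOS or Linux (the project and file names here are placeholders — use your own):

```shell
# Create the project folder and the nested skill folder in one go.
mkdir -p my-dubbing-project/.claude/commands
# Create an empty skill file to paste the script into.
touch my-dubbing-project/.claude/commands/video-dubbing.md
```

Either way works; Antigravity's file panel can create the same folders by hand.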
Copy the skill script below:
You are running the video dubbing pipeline. Follow each phase in order, completing one fully before moving on.
**Phase 1 — Setup.** Ask the user these questions (you can ask them all at once):
- Which language should the video be dubbed into?
- Is the speaker male or female? (This determines the gendered verb and adjective forms in the translation.)

Map the language name to its BCP-47 code (e.g. Spanish → es, Polish → pl, French → fr, Hindi → hi, German → de, Portuguese → pt, Italian → it, Japanese → ja, Korean → ko).
**Phase 2 — Source transcript.** Look for a source SRT file in the same directory as the video (e.g. `video_source.srt`). If there isn't one, ask the user to transcribe the video with Whisper (`whisper <video> --language en --output_format srt`) and share the output path with you. Do not proceed to Phase 3 until you have confirmed source text.
**Phase 3 — Translation.** Translate every subtitle segment from the source language into the target language.

Critical rules for translation:
- Use the speaker's stated gender for every first-person verb and adjective form.
- Leave brand names, product names, and proper nouns untranslated.
- Keep the SRT segment numbers and timestamps exactly as they are; translate only the subtitle text.
Show the full translated SRT to the user, formatted clearly.
After showing the translation, explicitly ask:
"Please review the translation above. Does everything look correct — especially the gendered verb forms and any brand names? Reply with any corrections, or say 'looks good' to proceed."
Wait for explicit confirmation before moving on. If the user provides corrections, apply them and show the updated translation. Repeat until the user confirms the translation is correct.
Once the translation is approved, save it to the same directory as the video with the filename:
`<video_name>_<lang_code>_corrected.srt`

For example: `myvideo_es_corrected.srt`, `myvideo_pl_corrected.srt`.
Use the Write tool to save the file. Confirm the path to the user.
**Phase 4 — Dubbing.** The dubbing pipeline uses `dub.py` in the same directory as the video. This script pads the video with a short black lead-in, uploads it to the ElevenLabs Dubbing API (attaching the corrected SRT if one exists), waits for the job to finish, downloads the dubbed result, trims the padding back off, and saves the output.

Before running, check whether `VIDEO_PATH` in `dub.py` matches the video the user wants to dub. If it doesn't, update the `VIDEO_PATH` constant near the top of the file to the correct filename.
Run the dubbing:

`cd "<video_directory>" && python3 dub.py <lang_code>`

Example: `python3 dub.py es`
The script prints status updates while the job runs. Once it finishes successfully, tell the user where to find the finished file (`dubbed_<lang_code>.mp4` in the same directory as the video).
Voice preservation: The ElevenLabs Dubbing API automatically uses the speaker's voice from the source video — you do not need to supply a Voice ID. It clones the voice as part of the dubbing process.
Alternative (TTS mode): If the user has a pre-cloned ElevenLabs voice ID and wants to use dub_tts_merge.py instead (for finer control), ask for their Voice ID and update the VOICE_ID constant before running.
Gender matters most in: Polish, Spanish, French, Italian, Portuguese, Russian, Arabic, Hindi — always flag the gender-specific forms explicitly when reviewing with the user.
Common mistakes to catch in review: wrong gendered verb endings, translated brand or product names, and altered segment numbers or timestamps.
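The skill leans on SRT files at two points: the source transcript and the corrected translation. If you ever want to sanity-check one by hand, here is a minimal, stdlib-only sketch (this helper is illustrative only, not part of the pipeline):

```python
import re

# An SRT timing line looks like: 00:00:01,000 --> 00:00:03,500
TIMESTAMP = re.compile(
    r"^\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3}$"
)

def parse_srt(text):
    """Split SRT text into (index, timing, text_lines) tuples, failing loudly on bad timing."""
    segments = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) < 3:
            raise ValueError(f"Malformed segment: {block!r}")
        if not TIMESTAMP.match(lines[1]):
            raise ValueError(f"Bad timestamp line: {lines[1]!r}")
        segments.append((int(lines[0]), lines[1], lines[2:]))
    return segments

sample = (
    "1\n00:00:00,000 --> 00:00:02,500\nHello!\n\n"
    "2\n00:00:02,500 --> 00:00:05,000\nWelcome."
)
print(len(parse_srt(sample)))  # 2
```

If a corrected SRT fails a check like this, the dubbing job will usually fail too, so it's cheaper to catch it before uploading.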
You also need a second file called `dub.py`. This is the script that actually talks to ElevenLabs and produces the dubbed video. Save it into your project folder alongside your video (not inside `.claude`).
Copy the script below:
```python
#!/usr/bin/env python3
"""Dub a video using the ElevenLabs Dubbing API.

Usage: python3 dub.py <language_code>
"""
import os
import subprocess
import sys
import tempfile
import time

import imageio_ffmpeg
import requests
from dotenv import load_dotenv

SCRIPT_DIR = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(SCRIPT_DIR, ".env"), override=True)

ELEVENLABS_API_KEY = os.getenv("ELEVENLABS_API_KEY")
VIDEO_PATH = os.path.join(SCRIPT_DIR, "your-video.mp4")  # change to your video's filename
TARGET_LANG = sys.argv[1] if len(sys.argv) > 1 else "es"
OUTPUT_PATH = os.path.join(SCRIPT_DIR, f"dubbed_{TARGET_LANG}.mp4")
FFMPEG = imageio_ffmpeg.get_ffmpeg_exe()
PAD_SECONDS = 1.5

if not ELEVENLABS_API_KEY or ELEVENLABS_API_KEY == "paste_your_key_here":
    raise SystemExit("ERROR: Set ELEVENLABS_API_KEY in the .env file before running.")

BASE_URL = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": ELEVENLABS_API_KEY}


def probe_video_size(path):
    """Read the video's width and height from ffmpeg's stderr output."""
    result = subprocess.run([FFMPEG, "-i", path],
                            stderr=subprocess.PIPE, stdout=subprocess.DEVNULL)
    for line in result.stderr.decode().splitlines():
        if "Video:" in line:
            for part in line.split(","):
                dims = part.strip().split()[0]
                if "x" in dims:
                    w, h = dims.split("x")
                    if w.isdigit() and h.isdigit():
                        return int(w), int(h)
    return 1920, 1080  # fall back to a sensible default if probing fails


def pad_video(src_path, pad_seconds):
    """Prepend a short black, silent lead-in so the dub doesn't clip the first word."""
    w, h = probe_video_size(src_path)
    tmp = tempfile.NamedTemporaryFile(suffix=".mp4", delete=False)
    tmp.close()
    cmd = [FFMPEG, "-y",
           "-f", "lavfi", "-t", str(pad_seconds), "-i", f"color=c=black:s={w}x{h}:r=30",
           "-f", "lavfi", "-t", str(pad_seconds), "-i", "anullsrc=r=44100:cl=stereo",
           "-i", src_path,
           "-filter_complex", "[0:v][1:a][2:v][2:a]concat=n=2:v=1:a=1[vout][aout]",
           "-map", "[vout]", "-map", "[aout]",
           "-c:v", "libx264", "-crf", "18", "-preset", "fast",
           "-c:a", "aac", "-b:a", "192k", tmp.name]
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return tmp.name


def trim_video(src_path, trim_seconds, dest_path):
    """Cut the black lead-in back off the dubbed result."""
    cmd = [FFMPEG, "-y", "-ss", str(trim_seconds), "-i", src_path, "-c", "copy", dest_path]
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)


def create_dubbing_job(upload_path):
    """Upload the padded video (plus any transcript) and start a dubbing job."""
    video_stem = os.path.splitext(os.path.basename(VIDEO_PATH))[0]
    corrected_srt = os.path.join(SCRIPT_DIR, f"{video_stem}_{TARGET_LANG}_corrected.srt")
    source_srt = os.path.join(SCRIPT_DIR, f"{video_stem}_source.srt")
    files = {"file": ("video_padded.mp4", open(upload_path, "rb"), "video/mp4")}
    if os.path.exists(corrected_srt):
        files["transcript"] = (os.path.basename(corrected_srt),
                               open(corrected_srt, "rb"), "application/x-subrip")
        src_lang = TARGET_LANG
        print(f"Using corrected transcript: {corrected_srt}")
    elif os.path.exists(source_srt):
        files["transcript"] = (os.path.basename(source_srt),
                               open(source_srt, "rb"), "application/x-subrip")
        src_lang = "en"
        print(f"Using source transcript: {source_srt}")
    else:
        src_lang = "en"
    response = requests.post(f"{BASE_URL}/dubbing", headers=HEADERS,
                             data={"target_lang": TARGET_LANG, "source_lang": src_lang,
                                   "mode": "automatic", "watermark": "false"},
                             files=files, timeout=300)
    response.raise_for_status()
    result = response.json()
    print(f"Dubbing job created. ID: {result['dubbing_id']}")
    return result["dubbing_id"]


def wait_for_completion(dubbing_id):
    """Poll the job every 15 seconds until it finishes or fails."""
    print("Waiting for dubbing to complete...")
    while True:
        response = requests.get(f"{BASE_URL}/dubbing/{dubbing_id}",
                                headers=HEADERS, timeout=30)
        response.raise_for_status()
        state = response.json().get("status", "unknown")
        print(f"  Status: {state}")
        if state == "dubbed":
            return True
        if state in ("failed", "error"):
            return False
        time.sleep(15)


def download_dubbed_video(dubbing_id, dest_path):
    """Stream the dubbed video for the target language to disk."""
    response = requests.get(f"{BASE_URL}/dubbing/{dubbing_id}/audio/{TARGET_LANG}",
                            headers=HEADERS, timeout=120, stream=True)
    response.raise_for_status()
    with open(dest_path, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)


def main():
    padded_path = pad_video(VIDEO_PATH, PAD_SECONDS)
    dubbed_padded = padded_path.replace(".mp4", "_dubbed.mp4")
    try:
        dubbing_id = create_dubbing_job(padded_path)
        if wait_for_completion(dubbing_id):
            download_dubbed_video(dubbing_id, dubbed_padded)
            trim_video(dubbed_padded, PAD_SECONDS, OUTPUT_PATH)
            print(f"Done! Saved to: {OUTPUT_PATH}")
        else:
            print("Dubbing failed.")
    finally:
        # Clean up the temporary padded files whether or not the job succeeded.
        for p in (padded_path, dubbed_padded):
            if os.path.exists(p):
                os.remove(p)


if __name__ == "__main__":
    main()
```
Before you run it: in `dub.py`, change `"your-video.mp4"` on the `VIDEO_PATH` line near the top to the actual filename of your video. Claude will remind you to do this too, but you can update it yourself.
Your final folder structure should look like this:
```
my-dubbing-project/
├── .claude/
│   └── commands/
│       └── video-dubbing.md   ← the skill script
├── dub.py                     ← the dubbing script
├── .env                       ← your API key
└── my-video.mp4
```
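If you want to double-check the layout before your first run, here's a tiny hypothetical checker (not part of the pipeline — the filenames come straight from the structure above):

```python
import os
import tempfile

# The three files the pipeline needs, relative to the project folder.
REQUIRED = [
    os.path.join(".claude", "commands", "video-dubbing.md"),
    "dub.py",
    ".env",
]

def missing_files(root):
    """Return the required project files that are missing under root."""
    return [p for p in REQUIRED if not os.path.exists(os.path.join(root, p))]

# Demo: a fresh empty folder is missing all three.
demo = tempfile.mkdtemp()
print(missing_files(demo))
```

An empty list means the structure is in place; anything printed is a file you still need to create.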
In the same folder, create a file called `.env` and add this line, replacing the placeholder with your real ElevenLabs key:
```
ELEVENLABS_API_KEY=paste_your_elevenlabs_key_here
```
That's the only technical thing you type yourself.
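For the curious: `python-dotenv` (which `dub.py` uses) essentially reads `KEY=VALUE` lines from the `.env` file into the environment. A rough, stdlib-only sketch of the idea:

```python
import os
import tempfile

def load_env_file(path):
    """Rough sketch of what python-dotenv's load_dotenv does: KEY=VALUE lines -> os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip()

# Demo with a throwaway .env file:
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("ELEVENLABS_API_KEY=demo_key_123\n")
load_env_file(f.name)
print(os.environ["ELEVENLABS_API_KEY"])  # demo_key_123
os.remove(f.name)
```

This is why the key never appears in the script itself: `dub.py` reads it from the environment at runtime, so the `.env` file is the only place it lives.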
---
**Running Your First Dub**
Open Claude Code inside Antigravity and type something like:
"Dub my video into Polish using the dubbing script."
Claude will handle it from there. It reads your skill, asks you a few quick questions (which language, male or female speaker), runs the translation for your review, then runs the dubbing. You'll see it appear as **dubbed_pl.mp4** (or whichever language you chose).
The whole process takes about 5 to 10 minutes depending on the length of your video.
**One Thing to Watch: Grammatical Gender**
AI translation defaults to male forms. In Polish, that means 'I made this' becomes *Zrobiłem* (male) instead of *Zrobiłam* (female). In Spanish it's *emocionado* instead of *emocionada*.
If you're a woman and you're talking about yourself in your video, you need to catch this.
The script asks you upfront whether the speaker is male or female, so the translation will use the right forms from the start. But always read through the translation before confirming it — Claude will show it to you and wait for your approval before running the dub.
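To make the Polish and Spanish examples above concrete, here's a toy lookup (purely illustrative — the real work happens in the translation review, not in code like this):

```python
# Toy illustration: the same English phrase maps to different target-language
# forms depending on the speaker's gender, so the translator has to know it.
GENDERED_EXAMPLES = {
    ("pl", "I made this", "male"): "Zrobiłem to",
    ("pl", "I made this", "female"): "Zrobiłam to",
    ("es", "excited", "male"): "emocionado",
    ("es", "excited", "female"): "emocionada",
}

def gendered(lang, phrase, speaker_gender):
    return GENDERED_EXAMPLES[(lang, phrase, speaker_gender)]

print(gendered("pl", "I made this", "female"))  # Zrobiłam to
```

There's no way to recover the right form after the fact from the English source alone, which is why the skill asks about the speaker's gender before translating.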
---
**Common Mistakes to Avoid**
- **Forgetting your .env file.** The script can't run without your API key. If Claude says the key is missing, check your .env file.
- **Not checking the translation.** The automatic translation is usually good, but it won't know your brand names, product names, or every nuance specific to you. Always review it.
- **Dubbing a very long video first.** Test with a short clip (30 to 60 seconds) before you run a 10-minute video. Make sure everything works before you use your credits.
- **Sharing your .env file.** Your API key is private. Don't upload it to GitHub, don't send it to anyone, don't include it in screenshots.
---
**Which Languages Should You Prioritise?**
If you're not sure where to start, here's how I'd think about it.
**Spanish** is the highest-reach choice. Over 500 million native speakers, huge creator communities in Mexico, Colombia, Spain, and Argentina, and most English-speaking creators haven't bothered to serve them.
**French** opens up France, Belgium, Switzerland, and a huge chunk of West Africa. It's a premium market with strong purchasing power.
**Hindi** is the long game. India's creator economy is growing at a pace that's hard to overstate. Getting there early matters.
Start with one. See what the response is. Then add more.
---
Women Lead AI is where women who are serious about AI actually build things together. Not just talk about it. We share workflows, troubleshoot live, and push each other to actually ship.
If you want to join, here is the link: **<https://www.skool.com/women-lead-ai>**
