The Prompt. Paste this into Claude with your static ad

This is the single prompt the newsletter ships. What the reader does:

  1. Open Claude (claude.ai or Claude Code).
  2. Drop in their static ad.
  3. Paste this prompt below.
  4. (Optional) Add: "My idea: [your animation idea]" on a line ABOVE the prompt. Claude will validate the idea before brainstorming.
  5. Get back: Flow 1 ideas, Flow 2 ideas, a recommendation, and the executable prompts.
  6. Pick one. Run on fal.ai's website.

Everything happens in fal.ai's web UI. No code, no API, no installation. The reader uploads images by clicking, pastes prompts by clicking, runs by clicking.


THE PROMPT (copy everything below)

You are an animation director helping a brand turn a static ad into a short video using ByteDance's Seedance 2.0 image-to-video on fal.ai. Two paths exist:

- **Single-frame (image to video):** the model animates from one static. Best for independent motion (subject sways, runs, breathes, particles drift, light pulses, UI breathes).
- **Start + end frame (image + end-image to video):** generate an end frame in Nano Banana 2 first, then animate the transition between the two. Best for state changes (text swaps, slider moves, popup dismisses, object removed/added, perspective shifts).

Your job: analyze the static, propose ideas in BOTH paths, and recommend the best one.

Write the following sections:

**1. Pre-check.** Does the static contain a clearly identifiable real human FACE (not illustrated, not AI-rendered, not stylized cartoon)?
- If YES, STOP and output: *"This static contains a real human face. Seedance 2.0 will refuse it (ByteDance content policy). Use Veo, Kling, or Runway instead. The prompting principles below still apply."*
- If NO, continue. (Hands and other body parts WITHOUT a visible face usually pass. Only identifiable faces are blocked.)

**2. Read the ad.** Cover: what's literally in the frame, what the ad is selling, the headline verbatim, what the product DOES in real life (the verbs a customer associates with using it), the aspect ratio, and the single message the motion should reinforce.

**3. Frozen list.** Bullet every text element, logo, callout, badge, price, star rating, and graphic that must remain pixel-stable. Quote each verbatim.

**4. Single-frame ideas (2 to 3).** For each: concept name, what happens, why it amplifies the headline, risk (Low / Medium / High). Avoid concepts that need: causal physics (X causes Y), new objects emerging from a specific origin, multi-step sequential actions, large camera rotations.

**5. Start + end frame ideas (2 to 3).** For each: concept name, end-state description (what the END frame looks like vs. the start), why it amplifies the headline, risk. Avoid: large rotations (model over-rotates), real humans in the end frame (content policy), or numbers changing mid-clip (Seedance distorts digits during interpolation).

**6. The pick.** Recommend ONE concept (from either pool) by strategic fit to the headline, not by safety. State the concept and which path it uses.

**7. The executable prompt.** Write the FINISHED prompt for the user, fully formed, no placeholders, ready to paste into fal.ai.

If your pick is **Single-frame**, output ONE prompt block. The user will paste it into fal.ai's Seedance image-to-video page. The block should follow this structure: Character Block (subject identity plus what it is NOT), then Action Block (one simple action, with identity properties scoped to the subject), then Environment Block, then a locked-off Camera line, then a Frozen-text reminder.

If your pick is **Start + end frame**, output TWO prompt blocks.

(a) End-frame prompt for Nano Banana 2 edit: describe the end state, enumerate every text element and logo to preserve, the only change. The user will paste this into fal.ai's Nano Banana 2 edit page.

(b) Transition prompt for Seedance image-to-video: *"Animate a smooth transition from the first image to the last image over 5 seconds. The action shown is: ... Camera locked off. All text static. Composition anchored."* The user will paste this into fal.ai's Seedance image-to-video page along with both frames.

In your output, replace every placeholder with concrete content for THIS ad. The user copies your output directly.

**8. Pitfall flag.** What about THIS specific ad will likely trip the model up? E.g., "creatine doesn't steam, don't suggest atmospheric mist," "FDA disclaimer must stay legible," "the headline word will reflow if length changes."

End of prompt. What the user does next:

  1. Read the analysis. If the pick is good, copy the executable prompt(s).
  2. If Single-frame: Go to fal.ai/models/bytedance/seedance-2.0/image-to-video. Drop your static into the Image URL field. Paste the prompt into the Prompt field. Set Resolution to 720p, Duration to 5, Generate Audio off. Click Run. Wait 1 to 3 minutes. Click Download when it's ready.
  3. If Start + end frame: Go to fal.ai's Nano Banana 2 edit page first. Upload your static, paste prompt (a) into the Prompt field, click Run, and download the generated end frame. Check it against the Frozen list: every text element and logo should match the start frame exactly. Then go to fal.ai/models/bytedance/seedance-2.0/image-to-video, upload both frames (your static first, the generated end frame second), paste prompt (b), use the same settings (720p, Duration 5, Generate Audio off), and click Run.
  4. Watch the video. If it looks glitchy: first check whether your end frame had any drift (the most common cause in the start + end flow). If the end frame looks clean, click Run again. Seedance is stochastic; 2 of 3 generations usually land.
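For readers who eventually want to script runs instead of clicking, the Run button ultimately submits a JSON payload matching the settings above. A minimal sketch in Python, assuming the parameter names `image_url`, `prompt`, `resolution`, `duration`, and `generate_audio` (these names are inferred, not confirmed; verify them on the fal.ai model page before relying on this):

```python
import json

def build_seedance_payload(image_url: str, prompt: str) -> dict:
    """Assemble the request body mirroring the web UI settings:
    720p, 5 seconds, audio off. Field names are assumptions based
    on the fal.ai Seedance image-to-video page, not a documented API."""
    return {
        "image_url": image_url,       # URL of your uploaded static ad
        "prompt": prompt,             # the executable prompt from step 7
        "resolution": "720p",
        "duration": 5,
        "generate_audio": False,
    }

payload = build_seedance_payload(
    "https://example.com/my-static-ad.png",
    "Animate a smooth transition from the first image to the last image "
    "over 5 seconds. Camera locked off. All text static.",
)
print(json.dumps(payload, indent=2))
```

This is illustrative only; the workflow the newsletter ships stays entirely in the web UI.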