How Claude Code skills chain together to go from brief to finished clips.
The idea: instead of doing each step by hand, you save each step as a reusable "skill" in Claude Code. A skill is just a set of instructions Claude follows when you tell it to. You run them one at a time, or chain them together. You stay in control of the key decisions.
A .md file is a Markdown file. It's just a plain text file with some simple formatting (headings, bold, lists) that any text editor can open. Think of it as a step between a plain .txt file and a Word document.
They matter here because Claude Code reads .md files natively. Every skill, every reference doc, every log file in this system is a .md file. They're lightweight, easy to edit, and Claude understands them perfectly.
Download these two files. They're the detailed knowledge base behind this guide.
Full technical guide: folder structure, fal.ai setup, prompt file shape, runner script, skill template, troubleshooting.
Meeting notes: the full content workflow, what's manual today, what gets automated, tools, cost tips, next steps.
Drop both files into a docs/ folder in your project:
```
your-project/
  docs/
    fal-video-pipeline-guide.md
    workflow-notes.md
```
Skills work best when they're short and focused. Instead of cramming everything into the skill file, you point the skill at these docs for the detail. Claude reads them when it needs context and ignores them when it doesn't.
Add this line to any skill that needs the technical detail:
```
Read docs/fal-video-pipeline-guide.md for API setup, prompt file format, and runner script details.
```
Or for the workflow context:
```
Read docs/workflow-notes.md for the full pipeline overview and production requirements.
```
The brain. Runs inside the Claude Desktop app. Orchestrates the pipeline, calls tools, checks quality.
Image and video generation. Hosts models including OpenAI's. You pay per generation.
Voiceover and sound effects.
Local audio transcription. Download here (set price to $0).
Each step below becomes its own skill. You run them in order, checking the output at each stage.
Claude searches the web for recent player photos. No manual downloading.
Batch generate from your brief. Each image is checked automatically and retried if it's off.
Faces, jersey numbers, and likeness verified. Copyrighted source material flagged for you.
Approved images become short 3-to-5-second clips with tailored scene descriptions.
ElevenLabs generates voiceover or sound effects. Existing audio is kept if it works.
Files named, sorted into folders, and logged automatically.
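The renaming-and-sorting step can be encoded in a small helper so every run lands in the same layout. A sketch only: the `output/<kind>/<player>/` convention and the function name are illustrative, not part of the original workflow.

```python
from pathlib import Path

def organized_path(root: str, kind: str, player: str, take: int, ext: str) -> Path:
    """Deterministic output path: <root>/<kind>/<player>/<player>_<take>.<ext>.

    Lowercases the player name and zero-pads the take number so files
    sort correctly in a folder listing.
    """
    slug = player.lower().replace(" ", "_")
    return Path(root) / kind / slug / f"{slug}_{take:02d}.{ext}"
```

Because the path is a pure function of the inputs, re-running a batch overwrites the same files instead of scattering duplicates.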
A skill is a .md file called SKILL.md saved in your project. It tells Claude exactly what to do for a specific task.
You trigger it by typing its name as a slash command:
```
/generate-images
```
Skills can reference scripts, docs, and each other. They only run when you tell them to. No surprise API calls.
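For example, a minimal SKILL.md for the image step might look like this (the frontmatter fields and step wording are illustrative; adapt them to your project):

```markdown
---
name: generate-images
description: Batch generate images from the brief, with quality checks and retries.
---

# Generate images

1. Read docs/fal-video-pipeline-guide.md for API setup and prompt file format.
2. Generate one image per prompt file.
3. Run the quality checks before approving any image, and log failures to docs/skill-logs.md.
```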
Search the web for player images and save them locally.
Batch generate with auto retries and quality checks.
Rewrite a rough idea into a detailed, cinematic description.
Turn approved images into short clips via fal.ai.
Generate voiceover or sound effects with ElevenLabs.
Find the latest prompting advice from the past month.
This is the part that stops you repeating the same mistakes.
Run a skill → check the output → log what went wrong → improve the skill in a new session → run it again.
Put this block at the end of every SKILL.md you create. It tells Claude to log failures so you can fix the skill later:
```markdown
## Logging

When any step fails or produces a result that doesn't match the brief:

1. Log what happened to docs/skill-logs.md
2. Include: which step failed, what the input was, what went wrong, and a one-line suggestion for fixing it
3. Keep log entries short. Just enough context to debug, not a wall of text.
4. Do not stop the rest of the batch for a single failure.
```
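If a skill calls out to a Python runner script, the same convention can live in code. A minimal sketch that mirrors the logging block: the function name and entry layout are illustrative, not a required API.

```python
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("docs/skill-logs.md")

def log_failure(step: str, item: str, problem: str, suggestion: str,
                log_file: Path = LOG_FILE) -> None:
    """Append one short, structured failure entry to the shared skill log."""
    log_file.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    with log_file.open("a", encoding="utf-8") as f:
        f.write(
            f"## {stamp} {step}\n"
            f"- Input: {item}\n"
            f"- Problem: {problem}\n"
            f"- Fix idea: {suggestion}\n\n"
        )
```

Appending rather than overwriting means the log accumulates across runs, which is exactly what the improvement loop needs.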
Don't try to fix a skill in the same session that broke it. Open a fresh Claude Code session, point it at the log, and say:
```
Read docs/skill-logs.md and improve the [skill name] skill based on what failed.
```
Each skill should do one thing well. If it needs detailed context, point it at a file in docs/ rather than cramming it all into the skill itself. Less context means faster, more accurate results.
Prompting techniques and model capabilities change fast. Use the 30-day search skill regularly to check for better approaches before locking in a new workflow.
3 to 5 seconds each. Combine in the edit. A 40-second video costs roughly 4x as much as a 10-second one.
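The arithmetic behind keeping clips short is worth making explicit. A toy cost model assuming flat per-second pricing; the rate shown is a placeholder, not a real fal.ai price.

```python
# Illustrative only: assumes the model bills a flat rate per second of
# output video. PRICE_PER_SECOND is a placeholder, not a real fal.ai rate.
PRICE_PER_SECOND = 0.10

def clip_cost(seconds: float, price_per_second: float = PRICE_PER_SECOND) -> float:
    """Estimated cost of one generated clip under flat per-second pricing."""
    return seconds * price_per_second
```

Under this assumption, four 10-second clips cost the same as one 40-second clip, but each short clip is individually checkable and retryable, so a single bad generation wastes far less.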
Verify likeness before committing to a full batch. One wasted run can burn 40+ minutes.
If a batch keeps failing, stop automatically rather than running up credits.
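In a runner script, "stop automatically" can be a simple circuit breaker. A sketch; the threshold of 3 consecutive failures and the function name are arbitrary choices, not part of any SDK.

```python
def run_batch(jobs, generate, max_consecutive_failures=3):
    """Run jobs in order, stopping early if failures pile up.

    `generate` is any callable that raises on failure. Failed jobs are
    recorded as None so a single bad job doesn't halt the batch, but a
    streak of failures stops it before it burns credits.
    """
    results = []
    streak = 0
    for job in jobs:
        try:
            results.append(generate(job))
            streak = 0  # a success resets the failure streak
        except Exception:
            results.append(None)
            streak += 1
            if streak >= max_consecutive_failures:
                print(f"Stopping batch: {streak} consecutive failures.")
                break
    return results
```

Counting consecutive failures (rather than total failures) distinguishes "the API is down or the prompt is broken" from the occasional one-off miss.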
1 or 2 test jobs first. Scale up once it's working.
Add this block to any image or video generation skill:
```markdown
## Quality checks

Before marking a generated image as approved:

1. Check the image matches the original brief.
2. Verify the player's face looks like the right person.
3. Check jersey numbers match the current squad list.
4. Flag any images that appear to use copyrighted source photos. Set these aside for human review.
5. If a check fails, retry up to 2 times before logging the failure and moving on.
```
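If generation runs through a script rather than through Claude directly, the same check-retry loop can be expressed in a few lines. A sketch with placeholder check functions; nothing here is a fal.ai or Claude Code API.

```python
def generate_with_checks(prompt, generate, checks, max_retries=2):
    """Generate, run every check, retry on failure, then give up gracefully.

    `generate` returns an image handle; `checks` is a list of
    (name, fn) pairs where fn returns True when the image passes.
    Returns (image, "approved") or (None, reason) for logging.
    """
    for attempt in range(max_retries + 1):
        image = generate(prompt)
        failed = [name for name, fn in checks if not fn(image)]
        if not failed:
            return image, "approved"
    return None, f"failed checks after {max_retries} retries: {failed}"
```

Returning a reason string instead of raising fits the logging block above: the batch keeps moving, and the failure lands in docs/skill-logs.md.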
Add these in the local environment editor inside the Code tab (not in any file):
```
FAL_KEY=your-fal-key-here
ELEVENLABS_API_KEY=your-elevenlabs-key-here
```
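A runner script can then fail fast if either key is missing, instead of dying mid-batch with a confusing auth error. A sketch; `require_env` is a hypothetical helper, not part of any SDK.

```python
import os

def require_env(*names):
    """Return the named environment variables, failing fast if any is unset."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return [os.environ[n] for n in names]

# Typical use at the top of a runner script:
# fal_key, eleven_key = require_env("FAL_KEY", "ELEVENLABS_API_KEY")
```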
Never paste API keys into files, chat, or Git. Environment variables only.
Get it working with 1 or 2 test jobs. Then move to the next. Once you trust a skill, relax the permissions so it runs with less hand-holding.
Do:
- Keep keys in env variables.
- Download generated files (URLs expire).
- Log every run.
- Start small.
- Improve skills from logged failures.
- Keep skills focused.

Don't:
- Guess model fields (copy from docs).
- Run big batches first.
- Skip quality checks.
- Rely on written instructions alone for repeatable work (use scripts).
- Ignore the logs.