Programmatic Video for OpenClaw
Teach your AI agent to render animations into video — camera-based intros, branded sequences, narrated explainers. All open-source, all self-hosted. No cloud rendering, no GUI editors.
Download
⬇ Download programmatic-video.skill
Install with OpenClaw:
openclaw skill install programmatic-video.skill
Or drop the file into your ~/.openclaw/skills/ directory.
What This Skill Does
This skill gives your OpenClaw agent the procedural knowledge to create videos using code-first tools. It covers:
- Tool selection — choosing between Three.js, Remotion, Motion Canvas, FFmpeg, and AI generation based on the task
- Three.js + headless Chrome capture — rendering 3D browser scenes to video via Puppeteer frame capture
- Camera composition math — edge-anchoring, FOV geometry, foreground layer visibility control, easing curves
- Remotion templates — data-driven narrated explainers, social clips, branded content
- FFmpeg patterns — encoding, subtitles, audio mixing, multi-format export, concatenation
The skill includes a reusable capture-frames.mjs script that handles the Puppeteer → FFmpeg pipeline out of the box.
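To make the Puppeteer → FFmpeg hand-off concrete, here is a minimal sketch of the two pieces that pipeline needs: zero-padded frame naming and the FFmpeg stitch arguments. The helper names are illustrative only — they are not the actual capture-frames.mjs API — and the zero-pad width and encoder flags are assumptions.

```javascript
// Zero-padded frame path: frame 42 → "frames/00042.png".
// Consistent padding is what lets FFmpeg's %05d pattern find the frames.
function framePath(dir, n, width = 5) {
  return `${dir}/${String(n).padStart(width, '0')}.png`;
}

// Argument list for stitching captured frames into an MP4.
// yuv420p keeps the output playable in browsers and QuickTime,
// which reject the pixel format FFmpeg defaults to for PNG input.
function encodeArgs(fps, dir, out) {
  return [
    '-framerate', String(fps),
    '-i', `${dir}/%05d.png`,
    '-c:v', 'libx264',
    '-pix_fmt', 'yuv420p',
    out,
  ];
}

// In the real pipeline, a Puppeteer loop steps the animation and calls
//   await page.screenshot({ path: framePath('frames', i) });
// per frame, then spawns ffmpeg with encodeArgs(...). If WebGL comes up
// blank in headless Chrome, software-rendering launch flags such as
// --use-angle=swiftshader are a common fix.
```

The frames-then-encode split is deliberate: capturing PNGs first means a failed encode never costs you a re-render.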
When Your Agent Uses It
The skill triggers automatically when you ask your agent to:
- Make a video, animate a scene, or create an intro/outro
- Render a Three.js or browser-based animation to MP4
- Solve camera framing problems (edge anchoring, breathing room, reveal timing)
- Choose between video creation approaches
- Debug blank WebGL renders in headless Chrome
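The camera-framing math referred to above boils down to one frustum identity: for a perspective camera with vertical FOV (Three.js convention, degrees), the visible half-height at distance d is d·tan(fov/2), and half-width is that times the aspect ratio. A sketch under those assumptions — not the skill's actual reference code:

```javascript
// Half the visible height at distance d from a perspective camera
// with vertical FOV in degrees (Three.js convention).
function visibleHalfHeight(fovDeg, d) {
  return d * Math.tan((fovDeg * Math.PI) / 360);
}

// Half the visible width: scale by the viewport aspect ratio.
function visibleHalfWidth(fovDeg, d, aspect) {
  return visibleHalfHeight(fovDeg, d) * aspect;
}

// X position that pins an object's right edge to the right frame
// edge: frame half-width minus the object's own half-width.
// This replaces trial-and-error nudging with one solvable equation.
function anchorRightEdge(fovDeg, d, aspect, objectHalfWidth) {
  return visibleHalfWidth(fovDeg, d, aspect) - objectHalfWidth;
}

// Standard cubic ease-in-out for camera pullbacks (t in [0, 1]).
function easeInOutCubic(t) {
  return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
}
```

Because the anchor position depends on d, re-solving it every frame keeps an edge pinned even while the camera pulls back.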
Tool Selection Guide
| You want to make… | Best tool |
|---|---|
| Branded intro with camera moves | Three.js + Puppeteer + FFmpeg |
| Narrated explainer with slides | Remotion + FFmpeg |
| Technical diagram animation | Motion Canvas + FFmpeg |
| AI-generated footage | Replicate + FFmpeg |
| Stitch, subtitle, or transcode clips | FFmpeg only |
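For the "stitch" row, FFmpeg's concat demuxer wants a list file naming each clip. A sketch of building that file's contents — clip names are hypothetical placeholders:

```javascript
// Build the contents of an FFmpeg concat-demuxer list file.
// Single quotes inside paths must be escaped in the '\'' form
// the demuxer expects.
function concatList(clips) {
  return clips
    .map((c) => `file '${c.replace(/'/g, `'\\''`)}'`)
    .join('\n') + '\n';
}

const list = concatList(['intro.mp4', 'body.mp4', 'outro.mp4']);
// Write `list` to clips.txt, then stitch without re-encoding:
//   ffmpeg -f concat -safe 0 -i clips.txt -c copy final.mp4
// `-c copy` is only safe when every clip shares the same codec,
// resolution, and timebase; otherwise re-encode instead.
```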
What's in the Package
programmatic-video/
├── SKILL.md # Core instructions + tool selection
├── references/
│ ├── threejs-capture.md # Headless Chrome WebGL pipeline
│ ├── camera-composition.md # FOV math, edge anchoring, easing
│ ├── remotion-guide.md # Template-driven video authoring
│ └── ffmpeg-patterns.md # Encoding, subtitles, multi-format
└── scripts/
└── capture-frames.mjs # Generic Puppeteer frame capture
Requirements
- OpenClaw — any recent version
- Node.js — v18+
- FFmpeg — installed and in PATH
- Google Chrome — for headless WebGL capture
- puppeteer-core — npm package (installed per-project)
Background
This skill was built from real production experience creating a branded intro video for Freedom Lab NYC. Through 11 iterations, we discovered and solved problems that no documentation covers well — like WebGL failing silently in headless Chrome, the right way to hide foreground layers during camera pullbacks, and how to anchor asset edges to frame edges using perspective geometry instead of trial-and-error nudging.
The lessons are generalized here so any agent can use them for any video project.
Freedom Tech Perspective
Every tool in this skill runs entirely on your machine. Nothing leaves your hardware unless you choose to upload the final video somewhere.
- Fully self-hosted — Three.js, Puppeteer, FFmpeg, Remotion, and Motion Canvas all run locally. No cloud rendering services, no API calls to generate your video.
- Open source or source-available — Three.js (MIT), FFmpeg (LGPL/GPL), Puppeteer (Apache 2.0), and Motion Canvas (MIT) are open source; Remotion is source-available, free for individuals and small teams, with a paid license required for larger companies.
- No accounts required — no sign-ups, no API keys, no telemetry. Install the tools and go.
- Your assets stay yours — brand files, source images, and rendered frames never leave your disk. Compare this to cloud-based video tools where your brand assets live on someone else's server.
The one optional exception: Replicate is listed as a tool for AI-generated insert shots. That's a cloud API. The skill treats it as a sidecar for generating supplementary footage — never as the primary pipeline. If you want to stay fully local, skip it entirely.