FrameLab is a lightweight multimodal AI app for cinematic analysis. Bring text, images, or video, get streamed insights, then refine in a second pass.
🌐 Use it on the web: framelab.streamlit.app
> [!TIP]
> 🔐 **Keep your API key secure:** FrameLab does not provide API credits. To run analyses (local or web), bring your own compatible API key in the sidebar, never share keys/screenshots publicly, and monitor token usage/cost on your provider account.
> [!IMPORTANT]
> **Vibecoding Disclaimer:** This project is built using an AI-assisted "vibecoding" approach (rapid iteration with human supervision). It is experimental, may contain bugs, and could break unintentionally during updates. Use it as a creative tool, but keep your expectations grounded.
- **Frame Breakdown** — structured analysis of a single cinematic still (`hb_framelabv2_case1.mp4`)
- **Shotlist Script** — turn reference images into a camera-ready shooting script (`hb_framelabv2_case2.mp4`)
- **Video-to-Screenplay** — convert video footage into a production-ready screenplay (`hb_framelabv2_case3.mp4`)
For full step-by-step walkthroughs, see docs/TUTORIAL.md.
- Text-only or multimodal workflow (images/videos are optional)
- Two-phase flow: Primary Analysis → Refinement Loop
- Live streaming output with a Thought Process panel when available
- Prompt and output editing directly in the UI
- Copy actions for both plain text and markdown
- Provider flexibility via OpenAI-compatible endpoints
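"OpenAI-compatible" means any provider that accepts the standard `/chat/completions` request shape. As a minimal sketch (the endpoint URL, key, and model name below are placeholders, not FrameLab defaults), a request to such an endpoint looks like this:

```python
import json

# Hypothetical values for illustration -- substitute your provider's details.
BASE_URL = "https://api.example.com/v1"  # any OpenAI-compatible endpoint
API_KEY = "sk-..."                       # your own key; never commit it

def build_chat_request(model: str, prompt: str) -> tuple[str, dict, bytes]:
    """Build the URL, headers, and JSON body for a chat-completions call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,  # FrameLab streams output live
    }).encode()
    return url, headers, body
```

Because only the base URL and key change between providers, switching backends is a sidebar setting rather than a code change.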
FrameLab uses uv for dependency + runtime management.
```bash
git clone https://github.com/taruma/framelab.git
cd framelab
uv sync
uv run run.py
```

Optional (recommended):
```bash
# Windows
copy .env.example .env

# macOS / Linux
cp .env.example .env
```

Then add your key to `.env` (for example `LLM_API_KEY=...`) or paste it in the sidebar at runtime.
1. **Configure in the sidebar** — choose provider/model and set API key/endpoint as needed, then load a prompt preset and edit it freely.
2. **Phase 1 — Primary Analysis** — add optional reference media + prompt/context, then click **Analyze**.
3. **Phase 2 — Refinement Loop** — add refinement notes and optional follow-up media to iterate on the result.
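Conceptually, the two phases map onto a growing chat history: the refinement pass keeps the first answer in context and appends your notes as a follow-up turn. A hypothetical sketch (function names and message shape are illustrative, not FrameLab internals):

```python
def primary_analysis(prompt: str) -> list[dict]:
    """Phase 1: start a fresh history with the analysis prompt."""
    return [{"role": "user", "content": prompt}]

def refine(history: list[dict], answer: str, notes: str) -> list[dict]:
    """Phase 2: keep the first answer in context, then append refinement notes."""
    return history + [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": notes},
    ]
```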
- Since v2.1.0: POS highlighting controls are feature-gated and hidden by default.
- To enable locally, set this in `config.toml`:

```toml
[features]
pos_highlighting = true
```

You can customize behavior without editing app code:
- `prompts/system/`
- `prompts/initial/`
- `prompts/correction/`
Optional: add `name.meta.toml` sidecars (`title`, `description`, `order`) for cleaner UI labels.
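For example, a sidecar might look like the following (the filename and values are hypothetical, and the exact schema may differ — only the `title`, `description`, and `order` fields are mentioned above):

```toml
# frame-breakdown.meta.toml — sits next to the prompt file of the same name
title = "Frame Breakdown"
description = "Structured analysis of a single cinematic still"
order = 1
```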
- `docs/INTERFACE.md` — complete map of every UI element
- `docs/TUTORIAL.md` — step-by-step walkthrough from first launch to advanced workflows
- `docs/REFERENCE.md` — technical/runtime details, advanced config, API behavior, contracts, troubleshooting
- `docs/TESTING.md` — testing strategy and commands
- `CHANGELOG.md` — release history and detailed changes
- `AGENTS.md` — AI contributor/agent rules and architecture constraints
Distributed under the MIT License. See LICENSE for details.
Built with ❤️ by Taruma Sakti · Vibecoding with Cline + GPT-5.3-Codex