Releases: taruma/framelab
v2.1.0
FrameLab v2.0.0: From Single Uploads to Full Multi‑Media Control
🎉 FrameLab v2.0.0
This is a major workflow and UX upgrade for FrameLab. v2.0 introduces flexible input modes (including text-only), true multi-media handling with editable tagging, richer editing controls, and stronger transparency/testing/documentation support.
✨ Highlights
- Text-only mode is now supported in both Primary Analysis and Refinement Loop (media is optional).
- Multi-media uploads are now supported in both phases (single or multiple image/video files).
- Per-item media tagging (`@imageN`/`@videoN`) is now editable and used in tagged payload composition.
- Added a dedicated Manage media tags dialog with full-size preview + tag editing.
- Added output editing dialogs for both phases, with edited outputs persisted into conversation context.
- Added a larger sidebar dialog editor for the System Prompt.
- Added optional session request logging with JSON export.
- Added configurable hero notices via `config.toml`.
- CI/testing improvements: offline pytest coverage + opt-in live smoke execution with `--live` and `.env` loading.
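To make the per-item media tagging concrete, here is a hypothetical Python sketch (the class and function names are invented for illustration, not FrameLab's actual code) of assigning sequential `@imageN`/`@videoN` tags to uploaded items:

```python
# Hypothetical sketch: each uploaded media item gets a sequential
# @imageN/@videoN tag that prompt text can then reference.
from dataclasses import dataclass
from itertools import count

@dataclass
class MediaItem:
    name: str
    kind: str  # "image" or "video"
    tag: str = ""

def assign_tags(items):
    # Independent counters per media kind, starting at 1.
    counters = {"image": count(1), "video": count(1)}
    for item in items:
        item.tag = f"@{item.kind}{next(counters[item.kind])}"
    return items

items = assign_tags([
    MediaItem("shot.png", "image"),
    MediaItem("clip.mp4", "video"),
    MediaItem("ref.jpg", "image"),
])
tags = [i.tag for i in items]
# tags == ["@image1", "@video1", "@image2"]
```

Since the tags are now editable, a scheme like this would only supply the defaults; user edits would override them.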
🔄 Changes
- Workflow terminology updated to:
- Primary Analysis (formerly Phase 1)
- Refinement Loop (formerly Phase 2)
- System Prompt behavior updated:
- Editable textbox is now the source of truth
- Preset dropdown now applies via explicit Load action
- Request Transparency previews improved for multi-media payloads (including media summaries + media-tag mapping).
- Copy actions now support both Copy Plain Text and Copy Markdown.
- Phase headers and reasoning sections updated with Material icon labels.
- In-app notices refined (including Xiaomi/OpenRouter update messaging).
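One way to support both copy formats (a hypothetical sketch, not FrameLab's actual implementation) is to keep the markdown output as the source of truth and derive the plain-text variant by stripping common markers:

```python
import re

def to_plain_text(markdown: str) -> str:
    # Strip emphasis, heading, and blockquote markers (simplified).
    text = re.sub(r"[*_`#>]+", "", markdown)
    # Reduce [label](url) links to their label.
    text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)
    return text.strip()

plain = to_plain_text("## Lighting\nSoft **key light** from camera left.")
# "Lighting\nSoft key light from camera left."
```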
🛠️ Fixes
- Improved media-tag persistence when media items are added/removed (signature-based mapping).
- Improved setup reliability by bundling `en_core_web_sm` via a dependency wheel and aligning docs around `uv sync`.
- Improved editing ergonomics with wider System Prompt and phase-output edit dialogs.
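The signature-based mapping could work roughly like this hypothetical sketch (function names and the default-tag scheme are invented for illustration): tags are keyed by a content hash, so an item keeps its custom tag even when neighboring items are added or removed:

```python
import hashlib

def signature(data: bytes) -> str:
    # Short content hash used as a stable identity for a media item.
    return hashlib.sha256(data).hexdigest()[:12]

def remap_tags(old_tags: dict, items: list) -> dict:
    # Carry forward tags for items whose signature still exists;
    # anything new falls back to a default tag.
    return {
        signature(d): old_tags.get(signature(d), f"@media{i + 1}")
        for i, d in enumerate(items)
    }

old = {signature(b"frame-a"): "@image1", signature(b"clip-b"): "@video1"}
# Remove clip-b, add frame-c:
new = remap_tags(old, [b"frame-a", b"frame-c"])
# frame-a keeps "@image1"; frame-c gets the default "@media2"
```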
📚 Documentation
Updated: `README.md`, `AGENTS.md`, `docs/REFERENCE.md`, `docs/TESTING.md`
What's Changed
- Add offline CI pytest suite and improve testing documentation by @taruma in #17
- Add configurable hero notices and update messaging in UI by @taruma in #18
- feat(ui): make media uploads optional in both analysis phases by @taruma in #20
- Add options for copying plain text and markdown outputs by @taruma in #21
- Refactor UI terminology and enhance phase headers with icons by @taruma in #23
- fix(deps): bundle spaCy model wheel and sync guidance by @taruma in #24
- Add editable dialogs for Phase outputs and refactor UI components by @taruma in #25
- Add tagged multi-media uploads and improve tag management by @taruma in #26
- Enhance System Prompt UI with editable dialog and improved layout by @taruma in #28
- feat: add optional session request logging and JSON export by @taruma in #29
- V2.0 by @taruma in #19
Full Changelog: v1.2.1...v2.0.0
v1.2.1
Fixed
- Fixed POS highlighting so enabling it no longer breaks markdown rendering in model outputs.
- Updated highlighter behavior to be markdown-aware by preserving fenced code blocks, inline markdown spans, and line-prefix structure markers while highlighting plain text tokens.
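A minimal sketch of the markdown-aware approach (invented for illustration, not FrameLab's actual highlighter): fenced code blocks and structural prefix lines pass through untouched, while plain-text lines are transformed:

```python
# Spell out the fence so this example's own code fence stays intact.
FENCE = "`" * 3

def highlight_markdown(text: str, highlight) -> str:
    out, in_fence = [], False
    for line in text.splitlines():
        if line.strip().startswith(FENCE):
            in_fence = not in_fence      # entering/leaving a code block
            out.append(line)
        elif in_fence or line.startswith(("#", ">", "-", "|")):
            out.append(line)             # preserve code and structure markers
        else:
            out.append(highlight(line))  # highlight only plain text
    return "\n".join(out)

sample = "\n".join(["# Title", "plain text", FENCE, "code stays", FENCE])
result = highlight_markdown(sample, str.upper)
# "# Title" and "code stays" survive unchanged; "plain text" is upper-cased
```

The real fix also preserves inline markdown spans, which this line-level sketch omits.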
Full Changelog: v1.2.0...v1.2.1
v1.2.0: Video Support & Prompt Management
This release expands FrameLab’s multimodal workflow with video support and improves prompt management for faster iteration.
✨ Highlights
- Added support for image + MP4 video inputs in both Phase 1 and Phase 2. Fix #7
- Added folder-based prompt presets with optional metadata sidecars. Fix #3
- Added a new initial preset: video technical hybrid script.
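Folder-based presets with optional sidecars could be loaded along these lines (a hypothetical sketch: the `*.json` sidecar format and function names are assumptions, not FrameLab's actual code):

```python
import json
import tempfile
from pathlib import Path

def load_presets(folder: Path) -> dict:
    presets = {}
    for prompt_file in sorted(folder.glob("*.txt")):
        # Optional metadata sidecar next to the prompt file.
        sidecar = prompt_file.with_suffix(".json")
        meta = json.loads(sidecar.read_text()) if sidecar.exists() else {}
        presets[prompt_file.stem] = {"prompt": prompt_file.read_text(),
                                     "meta": meta}
    return presets

# Demo with a throwaway preset folder.
with tempfile.TemporaryDirectory() as tmp:
    folder = Path(tmp)
    (folder / "noir.txt").write_text("Analyze lighting in a noir style.")
    (folder / "noir.json").write_text('{"author": "demo"}')
    presets = load_presets(folder)
```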
🔧 Improvements
- Removed leftover debug reasoning-effort caption from chat rendering. Fix #6
- Updated copy and docs to use broader “video” terminology (instead of MP4-only wording where appropriate).
📚 Documentation
- Expanded `AGENTS.md` guidance for video workflows and prompt preset behavior.
- Clarified optional spaCy POS-highlighting requirements. Fix #4
✅ Notes
- No breaking changes.
- Existing setup flow remains the same (`uv run run.py`).
Full Changelog: v1.1.1...v1.2.0
v1.1.1
See release v1.1 here: https://github.com/taruma/framelab/releases/tag/v1.1.0
Patch Fix
- Made Phase 1 and Phase 2 prompt fields editable in the UI so users can tune instructions per run without leaving the workflow.
- Cleaned up request transparency preview presentation for a clearer, more compact phase action experience.
v1.1.0
This release focuses on usability and configuration improvements to make FrameLab easier to set up and safer to operate during analysis runs.
✨ Highlights
- Added provider presets via `config.toml` for faster endpoint/model switching.
- Added unified API key env fallback with `LLM_API_KEY` support (plus provider-specific compatibility).
- Added a new hero landing section with branding, workflow guidance, badges, and creator credit.
- Added per-phase Request Transparency previews so users can inspect request metadata/payload before running.
- Improved request safety with UI locking during active processing to prevent duplicate submissions.
🔧 Improvements
- API Setup is now wrapped in an expander for cleaner layout.
- Transparency previews are now rendered inline with each phase action panel for clearer workflow context.
- Setup/documentation were updated to match the provider preset + key fallback flow.
- Added `requirements.txt` for pip-based installs and set Poetry to dependency-management-only (`package-mode = false`).
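For reference, the dependency-management-only setting is a real Poetry option (1.8+) declared in `pyproject.toml` (surrounding fields omitted here):

```toml
[tool.poetry]
package-mode = false
```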
🐛 Fixes
- Improved phase layout consistency by removing placeholder-based transparency rendering and placing action buttons after each inline preview.
✅ Compatibility
- No breaking changes in this release.
- Existing `uv run run.py` workflow remains unchanged.
Full Changelog: v1.0.0...v1.1.0
FrameLab v1.0.0
Lightweight Multimodal AI for Cinematic Image Analysis
We're excited to announce the first release of FrameLab! 🎉
What is FrameLab?
FrameLab is a Streamlit-powered web app that brings professional-grade cinematic image analysis to your browser. Upload a reference image, get detailed technical breakdowns on composition, lighting, and optics — then refine your analysis through a correction loop.
✨ Key Features
- Multi-Provider Support — Works with any OpenAI-compatible endpoint
- Live Streaming — Watch the analysis generate in real-time
- Two-Phase Workflow — Initial analysis + correction loop for iterative refinement
- Reasoning Display — See the model's "thought process" expand in real-time
- Copy Made Easy — One-click plain-text copy for your results
- Zero Fuss — Just `streamlit` and `openai` as dependencies
🛠️ Quick Start
```shell
uv sync
uv run run.py
```