feat: add RLCR visualization dashboard (/humanize:viz) #63
zevorn wants to merge 7 commits into humania-org:dev from
Conversation
Add a local web dashboard for real-time monitoring and historical analysis of RLCR loop sessions.

Backend (Python/Flask):
- parser.py: RLCR session data parser (YAML/Markdown, bilingual)
- app.py: Flask server with REST API, WebSocket, sanitized issue gen
- watcher.py: watchdog file observer with debounce
- analyzer.py: cross-session statistics
- exporter.py: Markdown report generation

Frontend (vanilla HTML/CSS/JS):
- Snake-path node layout with zoom/pan canvas
- Click-to-expand flyout detail panel (animates from node position)
- Session overview sidebar (AC checklist, verdict distribution)
- Chart.js analytics with empty-state handling
- Round verdict timeline visualization
- Mission Control aesthetic (Archivo + DM Sans + JetBrains Mono)

Integration:
- setup-rlcr-loop.sh: prompt to launch viz on loop start
- cancel-rlcr-loop.sh: auto-stop viz on loop cancel
- commands/viz.md: /humanize:viz start|stop|status
- skills/humanize-viz: skill definition

Shell scripts:
- viz-start.sh: venv auto-creation, port scan, tmux launch
- viz-stop.sh: tmux kill + cleanup
- viz-status.sh: health check + stale detection

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
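The debounce in watcher.py can be sketched roughly as below. This is a minimal stdlib-only illustration of the idea (coalesce a burst of file events into one refresh); the PR's actual code wires this into a watchdog `Observer`, and the `Debouncer` name here is illustrative, not from the PR.

```python
import threading

class Debouncer:
    """Coalesce bursts of events into one callback after a quiet period."""

    def __init__(self, callback, delay=0.5):
        self.callback = callback
        self.delay = delay          # quiet period in seconds (PR uses 500ms)
        self._timer = None
        self._lock = threading.Lock()

    def trigger(self):
        """Call on every file event; only the last one in a burst fires."""
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()   # a newer event resets the countdown
            self._timer = threading.Timer(self.delay, self.callback)
            self._timer.start()
```

A watchdog `FileSystemEventHandler.on_any_event` would simply call `trigger()`, so rapid YAML/Markdown writes during a round produce a single dashboard update.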
When starting an RLCR loop, if the viz dashboard is available:
- setup-rlcr-loop.sh outputs VIZ_AVAILABLE/VIZ_PROJECT markers
- start-rlcr-loop.md instructs Claude to ask the user via AskUserQuestion whether to open the dashboard
- "Yes" launches the dashboard immediately
- "No" prints a hint about /humanize:viz start for later use

Also adds viz-start.sh to allowed-tools in start-rlcr-loop.md so Claude can execute it when the user accepts.

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
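Consuming those markers might look like the sketch below. The exact marker format (`KEY=value` lines in the setup script's output) is an assumption for illustration; the commit only says the markers are emitted.

```python
def parse_viz_markers(output: str) -> dict:
    """Extract VIZ_AVAILABLE / VIZ_PROJECT markers from setup-script output.

    Assumes one `KEY=value` marker per line; any other lines are ignored.
    """
    markers = {}
    for line in output.splitlines():
        if line.startswith(("VIZ_AVAILABLE", "VIZ_PROJECT")):
            key, _, value = line.partition("=")
            markers[key.strip()] = value.strip()
    return markers
```

With this, the caller can check `"VIZ_AVAILABLE" in markers` before prompting the user to open the dashboard.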
When viewing any RLCR session, the right sidebar now has an "Upstream Feedback" section with two buttons:
- Preview Issue: opens a modal showing the sanitized issue content (title + body) in issue humania-org#62 format before submission
- Submit to GitHub: sends the sanitized report to humania-org/humanize via `gh issue create`

The flow:
1. Preview shows the taxonomy-generated content (no project data)
2. If sanitization warnings exist, content is redacted and the Submit button is hidden
3. On successful submission, shows the issue URL and disables buttons to prevent duplicates
4. If gh is not available, offers a copy button as fallback

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
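A backend helper for the submit-plus-fallback behavior could look like this sketch. The `submit_issue` name and return convention are illustrative; only the `gh issue create` invocation and the copy-button fallback are from the commit.

```python
import shutil
import subprocess

def submit_issue(title: str, body: str, repo: str = "humania-org/humanize"):
    """Create a GitHub issue via the gh CLI.

    Returns the new issue URL, or None when gh is not installed so the
    frontend can fall back to the copy button instead.
    """
    if shutil.which("gh") is None:
        return None
    result = subprocess.run(
        ["gh", "issue", "create", "--repo", repo,
         "--title", title, "--body", body],
        capture_output=True, text=True, check=True,
    )
    # gh prints the created issue's URL on stdout
    return result.stdout.strip()
```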
When clicking "Preview Issue" or "Submit to GitHub" in the session sidebar, if no methodology-analysis-report.md exists yet:
1. Frontend calls POST /api/sessions/<id>/generate-report
2. Backend collects all round summaries and review results
3. Invokes `claude -p --model sonnet` with a sanitization prompt matching methodology-analysis-prompt.md rules
4. Saves the generated report to the session directory
5. Frontend shows a spinner during generation (~30-60s)
6. Once complete, proceeds to preview or submit flow

The prompt enforces:
- Zero project-specific information (file paths, function names, branch names, business terms, code snippets)
- Issue humania-org#62 format (Context, Observations, Suggested Improvements table, Quantitative Summary table)
- Pure methodology perspective

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
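Steps 3-4 above could be sketched as follows, assuming the prompt (rules plus collected round summaries) is piped to the Claude CLI on stdin. The `generate_report` function and its timeout are illustrative, not the PR's actual code.

```python
import subprocess
from pathlib import Path

def generate_report(session_dir: Path, prompt: str, timeout: int = 120) -> Path:
    """Run the sanitization prompt through `claude -p --model sonnet`
    and save the Markdown report into the session directory."""
    result = subprocess.run(
        ["claude", "-p", "--model", "sonnet"],
        input=prompt,            # prompt arrives on stdin
        capture_output=True, text=True,
        timeout=timeout, check=True,
    )
    report = session_dir / "methodology-analysis-report.md"
    report.write_text(result.stdout)
    return report
```

The Flask endpoint would call this once, then subsequent Preview/Submit clicks find the saved report and skip regeneration.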
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6771bda603
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9aa75a59bf
38 tests covering:
- Shell script syntax validation (3)
- Python module syntax validation (5)
- Parser functionality: parse_session, canonical rounds, goal tracker,
Completed and Verified parsing, list_sessions, is_valid_session,
malformed session skip (6)
- Analyzer: empty sessions, basic statistics, verdict distribution
excludes non-reviewed rounds (3)
- Exporter: Markdown generation, bilingual {zh,en} dict handling (2)
- Integration markers: VIZ_AVAILABLE/VIZ_PROJECT in setup script,
viz-stop in cancel script, viz prompt in start command (5)
- Command & skill definitions (3)
- Static asset existence + i18n English-only check + requirements (11)
Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
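The syntax-validation tests in the first two groups can be sketched like this. The helper names are illustrative (the PR's suite is tests/test-viz.sh, a shell script); this just shows the standard checks: `bash -n` parses without executing, and `py_compile` raises on a Python syntax error.

```python
import py_compile
import subprocess

def shell_syntax_ok(path: str) -> bool:
    """`bash -n` parses a script without executing it; rc 0 means valid."""
    return subprocess.run(["bash", "-n", path]).returncode == 0

def python_syntax_ok(path: str) -> bool:
    """Byte-compile the module; PyCompileError signals a syntax error."""
    try:
        py_compile.compile(path, doraise=True)
        return True
    except py_compile.PyCompileError:
        return False
```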
When a session is active:
1. Current round node enhancements:
- Sweeping gradient bar at top (horizontal scan animation)
- Pulsing orange glow shadow
- Live blinking dot indicator next to phase tag
- Stronger border highlight
2. Ghost "next round" node:
- Dashed orange border with breathing opacity animation
- Shows "R{N+1} Next" with spinner
- "Awaiting..." status text
- Positioned as the next node in the snake path
3. Flowing edge animation:
- The connector between current round and ghost node
has animated dashes (stroke-dashoffset animation)
- Orange colored to distinguish from static connectors
All animations are CSS-only, no JavaScript timers.
Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
Multi-project support:
- Backend: /api/projects, /api/projects/switch, /api/projects/add, /api/projects/remove endpoints
- Projects saved to ~/.humanize/viz-projects.json
- Switch project dynamically (restarts watcher, clears cache)
- Home page shows project bar with current project name/path, Switch dropdown for other projects, + Add button

Restart command:
- viz-restart.sh: stop + start in one step
- /humanize:viz restart subcommand added

Analytics cleanup:
- Removed 6 Chart.js panels (Rounds/Duration/Verdicts/P-Issues/FirstComplete/BitLesson); kept stats overview + timeline + table
- Session Comparison table defaults to time descending (newest first)

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
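Persistence for the project registry could look like the sketch below. The JSON shape (a list of `{"name": ..., "path": ...}` entries) is an assumption; the commit only specifies the file location.

```python
import json
from pathlib import Path

PROJECTS_FILE = Path.home() / ".humanize" / "viz-projects.json"

def load_projects(path: Path = PROJECTS_FILE) -> list:
    """Return the saved project list, or [] on first run."""
    if not path.exists():
        return []
    return json.loads(path.read_text())

def save_projects(projects: list, path: Path = PROJECTS_FILE) -> None:
    """Write the project list, creating ~/.humanize/ if needed."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(projects, indent=2))
```

The /api/projects/add and /api/projects/remove endpoints would load, mutate, and save this list, while /api/projects/switch additionally restarts the watcher and clears the session cache.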
Force-pushed from 615fcde to 1b575fe
Summary
/humanize:viz command: a local web dashboard for real-time monitoring and historical analysis of RLCR loop sessions

Architecture
Backend (Python/Flask, auto-managed venv):
- viz/server/app.py: REST API + WebSocket + sanitized issue generation + Claude CLI report generation
- viz/server/parser.py: RLCR session data parser (YAML/Markdown, bilingual support)
- viz/server/watcher.py: watchdog file observer with 500ms debounce for live updates
- viz/server/analyzer.py: cross-session statistics
- viz/server/exporter.py: Markdown report export

Frontend (vanilla HTML/CSS/JS, zero build step):
Integration points:
- setup-rlcr-loop.sh: outputs VIZ_AVAILABLE marker, Claude asks user whether to open dashboard
- cancel-rlcr-loop.sh: auto-stops viz server on loop cancel
- commands/viz.md: /humanize:viz start|stop|status
- skills/humanize-viz/: skill registration

Test plan
- /humanize:viz start in a project with .humanize/rlcr/data/
- /humanize:viz stop → verify tmux session killed
- Automated test suite (tests/test-viz.sh)