
feat: add RLCR visualization dashboard (/humanize:viz) #63

Open

zevorn wants to merge 7 commits into humania-org:dev from zevorn:feat/viz-dashboard

Conversation

zevorn commented Apr 2, 2026

Summary

  • Add /humanize:viz command — a local web dashboard for real-time monitoring and historical analysis of RLCR loop sessions
  • Each RLCR round is rendered as an expandable node in a snake-path layout with zoom/pan support
  • Right sidebar shows session-level analysis: AC checklist, verdict distribution, session info
  • Auto-generate sanitized methodology reports via the local Claude CLI, then preview/submit them as GitHub issues to upstream (in the format of issue #62, "RLCR: Implicit two-phase structure needs explicit transition and batch review in polishing phase")
  • Integrated into RLCR lifecycle: prompt to open dashboard on loop start, auto-stop on cancel

Architecture

Backend (Python/Flask, auto-managed venv):

  • viz/server/app.py — REST API + WebSocket + sanitized issue generation + Claude CLI report generation
  • viz/server/parser.py — RLCR session data parser (YAML/Markdown, bilingual support)
  • viz/server/watcher.py — watchdog file observer with 500ms debounce for live updates
  • viz/server/analyzer.py — cross-session statistics
  • viz/server/exporter.py — Markdown report export
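
The 500ms debounce that watcher.py applies to file events can be illustrated with a small stdlib-only sketch. This is not the actual watcher.py (which uses the watchdog library); the `Debouncer` class and its API here are illustrative:

```python
import threading
import time

class Debouncer:
    """Coalesce a burst of file-change events into one callback,
    fired only after `delay_s` of quiet (illustrative sketch)."""

    def __init__(self, delay_s, callback):
        self.delay_s = delay_s
        self.callback = callback
        self._timer = None
        self._lock = threading.Lock()

    def trigger(self):
        # Each new event cancels the pending timer and restarts the clock,
        # so the callback fires once per burst, not once per event.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.delay_s, self.callback)
            self._timer.daemon = True
            self._timer.start()

# Ten rapid "file events" collapse into a single dashboard refresh.
calls = []
debouncer = Debouncer(0.05, lambda: calls.append("refresh"))
for _ in range(10):
    debouncer.trigger()
time.sleep(0.2)
```

The same coalescing behavior, at a 500ms delay, is what keeps a flurry of YAML/Markdown writes during a round from triggering a WebSocket push per write.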

Frontend (vanilla HTML/CSS/JS, zero build step):

  • Snake-path node layout with SVG connectors + mouse wheel zoom/drag pan
  • Click-to-expand flyout detail panel (animates from node position to center)
  • Chart.js analytics (6 chart types + round verdict timeline)
  • DOMPurify for XSS-safe Markdown rendering

Integration points:

  • setup-rlcr-loop.sh — outputs VIZ_AVAILABLE marker, Claude asks user whether to open dashboard
  • cancel-rlcr-loop.sh — auto-stops viz server on loop cancel
  • commands/viz.md — /humanize:viz start|stop|status
  • skills/humanize-viz/ — skill registration

Test plan

  • Run /humanize:viz start in a project with .humanize/rlcr/ data
  • Verify home page shows session cards with correct status grouping
  • Click a session → verify node graph renders with snake layout
  • Click a node → verify flyout expands from node position with summary/review
  • Verify zoom (scroll wheel) and pan (drag) on the graph area
  • Verify right sidebar shows AC checklist, verdict bars, session info
  • Click "Preview Issue" → verify Claude generates methodology report
  • Verify generated report contains no project-specific information
  • Toggle dark/light theme → verify all components update
  • Run /humanize:viz stop → verify tmux session killed
  • Start RLCR loop → verify dashboard offer prompt appears
  • All 38 viz-specific tests pass (tests/test-viz.sh)
  • All 1645+ existing tests pass (no regressions)
  • Codex P1 review items addressed (round range fix + XSS sanitization)

zevorn added 4 commits April 2, 2026 12:32
Add a local web dashboard for real-time monitoring and historical
analysis of RLCR loop sessions.

Backend (Python/Flask):
- parser.py: RLCR session data parser (YAML/Markdown, bilingual)
- app.py: Flask server with REST API, WebSocket, sanitized issue gen
- watcher.py: watchdog file observer with debounce
- analyzer.py: cross-session statistics
- exporter.py: Markdown report generation

Frontend (vanilla HTML/CSS/JS):
- Snake-path node layout with zoom/pan canvas
- Click-to-expand flyout detail panel (animates from node position)
- Session overview sidebar (AC checklist, verdict distribution)
- Chart.js analytics with empty-state handling
- Round verdict timeline visualization
- Mission Control aesthetic (Archivo + DM Sans + JetBrains Mono)

Integration:
- setup-rlcr-loop.sh: prompt to launch viz on loop start
- cancel-rlcr-loop.sh: auto-stop viz on loop cancel
- commands/viz.md: /humanize:viz start|stop|status
- skills/humanize-viz: skill definition

Shell scripts:
- viz-start.sh: venv auto-creation, port scan, tmux launch
- viz-stop.sh: tmux kill + cleanup
- viz-status.sh: health check + stale detection
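
The port-scan step in viz-start.sh can be expressed in a few lines; here is a Python equivalent for illustration (the real script is shell, and the starting port value is an assumption, not taken from the PR):

```python
import socket

def find_free_port(start=5000, attempts=50):
    """Return the first TCP port at or above `start` that can be bound
    on localhost (the start value is an assumption, not from viz-start.sh)."""
    for port in range(start, start + attempts):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port
            except OSError:
                continue  # port in use, try the next one
    raise RuntimeError(f"no free port in [{start}, {start + attempts})")
```

Binding and immediately releasing the socket is a common probe technique; there is a small race window before the server actually claims the port, which is usually acceptable for a local dashboard.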

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
When starting an RLCR loop, if the viz dashboard is available:
- setup-rlcr-loop.sh outputs VIZ_AVAILABLE/VIZ_PROJECT markers
- start-rlcr-loop.md instructs Claude to ask the user via
  AskUserQuestion whether to open the dashboard
- "Yes" launches the dashboard immediately
- "No" prints a hint about /humanize:viz start for later use

Also adds viz-start.sh to allowed-tools in start-rlcr-loop.md
so Claude can execute it when the user accepts.
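
For illustration, the marker handshake could be consumed like this on the Python side. The KEY=value line format is an assumption; the actual marker syntax is defined by setup-rlcr-loop.sh:

```python
def parse_viz_markers(output):
    """Extract VIZ_* markers from setup-rlcr-loop.sh output.
    Assumes one KEY=value marker per line (format is illustrative)."""
    markers = {}
    for line in output.splitlines():
        if line.startswith("VIZ_"):
            key, sep, value = line.partition("=")
            if sep:
                markers[key] = value
    return markers
```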

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
When viewing any RLCR session, the right sidebar now has an
"Upstream Feedback" section with two buttons:

- Preview Issue: opens a modal showing the sanitized issue
  content (title + body) in issue humania-org#62 format before submission
- Submit to GitHub: sends the sanitized report to
  humania-org/humanize via `gh issue create`

The flow:
1. Preview shows the taxonomy-generated content (no project data)
2. If sanitization warnings exist, content is redacted and
   Submit button is hidden
3. On successful submission, shows the issue URL and disables
   buttons to prevent duplicates
4. If gh is not available, offers a copy button as fallback
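
A hedged sketch of steps 3 and 4 (submission with a copy fallback) might look like this; the function name, return shape, and error handling are illustrative, while the `gh issue create` flags shown are standard:

```python
import shutil
import subprocess

def submit_issue(title, body, repo="humania-org/humanize", gh_bin="gh"):
    """Submit a sanitized report via `gh issue create`, or signal a
    copy fallback when gh is unavailable (sketch, not the PR's code)."""
    if shutil.which(gh_bin) is None:
        # Mirror the dashboard fallback: no gh, offer manual copy instead.
        return {"ok": False, "fallback": "copy", "content": f"{title}\n\n{body}"}
    result = subprocess.run(
        [gh_bin, "issue", "create", "--repo", repo,
         "--title", title, "--body", body],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return {"ok": False, "fallback": "copy", "content": f"{title}\n\n{body}"}
    # gh prints the created issue URL on stdout.
    return {"ok": True, "url": result.stdout.strip()}
```

Returning the issue URL from stdout is what lets the frontend display it and disable the buttons against duplicate submissions.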

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
When clicking "Preview Issue" or "Submit to GitHub" in the session
sidebar, if no methodology-analysis-report.md exists yet:

1. Frontend calls POST /api/sessions/<id>/generate-report
2. Backend collects all round summaries and review results
3. Invokes `claude -p --model sonnet` with a sanitization prompt
   matching methodology-analysis-prompt.md rules
4. Saves the generated report to the session directory
5. Frontend shows a spinner during generation (~30-60s)
6. Once complete, proceeds to preview or submit flow

The prompt enforces:
- Zero project-specific information (file paths, function names,
  branch names, business terms, code snippets)
- Issue humania-org#62 format (Context, Observations, Suggested Improvements
  table, Quantitative Summary table)
- Pure methodology perspective
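
A sketch of the backend side of this flow, under the assumption that round summaries live in `round-*/summary.md` (the real session layout and prompt assembly are in app.py and may differ):

```python
import subprocess
from pathlib import Path

def generate_report(session_dir, prompt, model="sonnet", claude_bin="claude"):
    """Pipe the sanitization prompt plus collected round data into the
    Claude CLI and save the result as methodology-analysis-report.md.
    File layout below is an assumption, not taken from the PR."""
    session_dir = Path(session_dir)
    rounds = sorted(session_dir.glob("round-*/summary.md"))
    material = "\n\n".join(p.read_text(encoding="utf-8") for p in rounds)
    result = subprocess.run(
        [claude_bin, "-p", "--model", model],
        input=f"{prompt}\n\n{material}",
        capture_output=True, text=True, timeout=120,
    )
    result.check_returncode()
    out = session_dir / "methodology-analysis-report.md"
    out.write_text(result.stdout, encoding="utf-8")
    return out
```

Saving the report into the session directory is what lets a later "Preview Issue" click skip regeneration and go straight to the preview modal.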

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6771bda603

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".


zevorn commented Apr 2, 2026

@codex


chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9aa75a59bf


zevorn added 3 commits April 2, 2026 17:25
38 tests covering:
- Shell script syntax validation (3)
- Python module syntax validation (5)
- Parser functionality: parse_session, canonical rounds, goal tracker,
  Completed and Verified parsing, list_sessions, is_valid_session,
  malformed session skip (6)
- Analyzer: empty sessions, basic statistics, verdict distribution
  excludes non-reviewed rounds (3)
- Exporter: Markdown generation, bilingual {zh,en} dict handling (2)
- Integration markers: VIZ_AVAILABLE/VIZ_PROJECT in setup script,
  viz-stop in cancel script, viz prompt in start command (5)
- Command & skill definitions (3)
- Static asset existence + i18n English-only check + requirements (11)

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
When a session is active:

1. Current round node enhancements:
   - Sweeping gradient bar at top (horizontal scan animation)
   - Pulsing orange glow shadow
   - Live blinking dot indicator next to phase tag
   - Stronger border highlight

2. Ghost "next round" node:
   - Dashed orange border with breathing opacity animation
   - Shows "R{N+1} Next" with spinner
   - "Awaiting..." status text
   - Positioned as the next node in the snake path

3. Flowing edge animation:
   - The connector between current round and ghost node
     has animated dashes (stroke-dashoffset animation)
   - Orange colored to distinguish from static connectors

All animations are CSS-only, no JavaScript timers.

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
Multi-project support:
- Backend: /api/projects, /api/projects/switch, /api/projects/add,
  /api/projects/remove endpoints
- Projects saved to ~/.humanize/viz-projects.json
- Switch project dynamically (restarts watcher, clears cache)
- Home page shows project bar with current project name/path,
  Switch dropdown for other projects, + Add button
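
A minimal sketch of the ~/.humanize/viz-projects.json registry handling; the entry schema (a `name`/`root` pair per project) is assumed, not taken from the PR:

```python
import json
from pathlib import Path

REGISTRY = Path.home() / ".humanize" / "viz-projects.json"

def load_projects(path=REGISTRY):
    """Read the saved project list; an absent file means no projects yet."""
    path = Path(path)
    if not path.exists():
        return []
    return json.loads(path.read_text(encoding="utf-8"))

def add_project(name, root, path=REGISTRY):
    """Append a project entry and persist it (schema is illustrative)."""
    path = Path(path)
    projects = load_projects(path)
    if any(p["root"] == root for p in projects):
        return projects  # already registered, keep the list deduplicated
    projects.append({"name": name, "root": root})
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(projects, indent=2), encoding="utf-8")
    return projects
```

Deduplicating on the project root keeps repeated "+ Add" clicks from cluttering the Switch dropdown.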

Restart command:
- viz-restart.sh: stop + start in one step
- /humanize:viz restart subcommand added

Analytics cleanup:
- Removed 6 Chart.js panels (Rounds/Duration/Verdicts/P-Issues/
  FirstComplete/BitLesson) — kept stats overview + timeline + table
- Session Comparison table defaults to time descending (newest first)

Signed-off-by: Chao Liu <chao.liu.zevorn@gmail.com>
zevorn force-pushed the feat/viz-dashboard branch from 615fcde to 1b575fe on April 2, 2026 09:27