diff --git a/CHANGELOG.md b/CHANGELOG.md index c0c504e..b45d3eb 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,6 +7,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ## [Unreleased] +### Added +- Self-learning system: detects repeated workflows and creates slash commands/skills automatically + --- ## [1.8.3] - 2026-03-22 diff --git a/CLAUDE.md b/CLAUDE.md index 1530a12..4071fc1 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -40,6 +40,8 @@ Commands with Teams Variant ship as `{name}.md` (parallel subagents) and `{name} **Working Memory**: Three shell-script hooks (`scripts/hooks/`) provide automatic session continuity. Toggleable via `devflow memory --enable/--disable/--status` or `devflow init --memory/--no-memory`. Stop hook → reads last turn from session transcript (`~/.claude/projects/{encoded-cwd}/{session_id}.jsonl`), spawns background `claude -p --model haiku` to update `.memory/WORKING-MEMORY.md` with structured sections (`## Now`, `## Progress`, `## Decisions`, `## Modified Files`, `## Context`, `## Session Log`; throttled: skips if triggered <2min ago; concurrent sessions serialize via mkdir-based lock). SessionStart hook → injects previous memory + git state as `additionalContext` on `/clear`, startup, or compact (warns if >1h stale; injects pre-compact memory snapshot when compaction happened mid-session). PreCompact hook → saves git state + WORKING-MEMORY.md snapshot + bootstraps minimal WORKING-MEMORY.md if none exists. Zero-ceremony context preservation. +**Self-Learning**: A Stop hook (`stop-update-learning`) spawns a background `claude -p --model sonnet` to detect repeated workflows and procedural knowledge from session transcripts. Observations accumulate in `.memory/learning-log.jsonl` with confidence scores, temporal decay, and daily run caps. 
When confidence thresholds are met (3 observations for workflows with 24h+ temporal spread, 2 for procedural), artifacts are auto-created as slash commands (`.claude/commands/learned/`) or skills (`.claude/skills/learned-*/`). Toggleable via `devflow learn --enable/--disable/--status` or `devflow init --learn/--no-learn`. Configurable model/throttle/caps via `devflow learn --configure`. + ## Project Structure ``` @@ -49,8 +51,8 @@ devflow/ ├── plugins/devflow-*/ # 17 plugins (8 core + 9 optional language/ecosystem) ├── docs/reference/ # Detailed reference documentation ├── scripts/ # Helper scripts (statusline, docs-helpers) -│ └── hooks/ # Working Memory + ambient hooks (stop, session-start, pre-compact, ambient-prompt) -├── src/cli/ # TypeScript CLI (init, list, uninstall, ambient) +│ └── hooks/ # Working Memory + ambient + learning hooks (stop, session-start, pre-compact, ambient-prompt, stop-update-learning, background-learning) +├── src/cli/ # TypeScript CLI (init, list, uninstall, ambient, learn) ├── .claude-plugin/ # Marketplace registry ├── .docs/ # Project docs (reviews, design) — per-project └── .memory/ # Working memory files — per-project @@ -94,6 +96,12 @@ Working memory files live in a dedicated `.memory/` directory: .memory/ ├── WORKING-MEMORY.md # Auto-maintained by Stop hook (overwritten each session) ├── backup.json # Pre-compact git state snapshot +├── learning-log.jsonl # Learning observations (JSONL, one entry per line) +├── learning.json # Project-level learning config (max runs, throttle, model) +├── .learning-runs-today # Daily run counter (date + count) +├── .learning-update.log # Background learning agent log +├── .learning-last-trigger # Throttle marker (epoch timestamp) +├── .learning-notified-at # New artifact notification marker (epoch timestamp) └── knowledge/ ├── decisions.md # Architectural decisions (ADR-NNN, append-only) └── pitfalls.md # Known pitfalls (PF-NNN, area-specific gotchas) diff --git a/README.md b/README.md index 
a04a442..dfd5235 100644 --- a/README.md +++ b/README.md @@ -189,6 +189,26 @@ Three shell hooks run behind the scenes: Working memory is **per-project** — scoped to each repo's `.memory/` directory. Multiple sessions across different repos don't interfere. +## Self-Learning + +DevFlow detects repeated workflows and procedural knowledge across your sessions and automatically creates slash commands and skills. + +A background agent runs on session stop (same as Working Memory) and analyzes your session transcript for patterns. When a pattern is observed enough times (3 for workflows with 24h+ temporal spread, 2 for procedural knowledge), it creates an artifact: + +- **Workflow patterns** become slash commands at `.claude/commands/learned/` +- **Procedural patterns** become skills at `.claude/skills/learned-*/` + +| Command | Description | +|---------|-------------| +| `devflow learn --enable` | Register the learning Stop hook | +| `devflow learn --disable` | Remove the learning hook | +| `devflow learn --status` | Show learning status and observation counts | +| `devflow learn --list` | Show all observations sorted by confidence | +| `devflow learn --configure` | Interactive configuration (model, throttle, daily cap) | +| `devflow learn --clear` | Reset all observations | + +Observations accumulate in `.memory/learning-log.jsonl` with confidence scores and temporal decay. You can edit or delete any generated artifacts — they are never overwritten. 
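For illustration, here is a hypothetical `learning-log.jsonl` entry (field names follow the schema the hooks construct; the values are made up), together with the confidence arithmetic behind the thresholds:

```shell
# Hypothetical learning-log.jsonl entry: a workflow pattern seen twice so far.
cat <<'EOF'
{"id":"obs_a1b2c3","type":"workflow","pattern":"Squash-merge PR, pull main, delete branch","confidence":0.66,"observations":2,"first_seen":"2026-03-20T10:00:00Z","last_seen":"2026-03-21T14:30:00Z","status":"observing","evidence":["squash merge the PR","now pull main and delete the branch"],"details":"1. squash-merge the PR  2. pull main  3. delete the feature branch"}
EOF

# Confidence is observation count over the required count, capped at 95%:
COUNT=2; REQUIRED=3              # workflows need 3 observations; procedural needs 2
CONF_RAW=$(( COUNT * 100 / REQUIRED ))
if [ "$CONF_RAW" -gt 95 ]; then CONF_RAW=95; fi
echo "$CONF_RAW"                 # 66 — still below the 70% artifact threshold
```

Once the count reaches the required threshold (and, for workflows, the 24h temporal spread is satisfied), the entry's status moves from `observing` to `ready` and an artifact is created on the next run.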
+ ## Documentation Structure DevFlow creates project documentation in `.docs/` and working memory in `.memory/`: @@ -201,6 +221,12 @@ DevFlow creates project documentation in `.docs/` and working memory in `.memory .memory/ ├── WORKING-MEMORY.md # Auto-maintained by Stop hook ├── backup.json # Pre-compact git state snapshot +├── learning-log.jsonl # Learning observations (JSONL) +├── learning.json # Project-level learning config +├── .learning-runs-today # Daily run counter +├── .learning-update.log # Background learning agent log +├── .learning-last-trigger # Throttle marker +├── .learning-notified-at # New artifact notification marker └── knowledge/ ├── decisions.md # Architectural decisions (ADR-NNN, append-only) └── pitfalls.md # Known pitfalls (area-specific gotchas) @@ -239,6 +265,8 @@ Session context is saved and restored automatically via Working Memory hooks — | `npx devflow-kit list` | List available plugins | | `npx devflow-kit ambient --enable` | Enable always-on ambient mode | | `npx devflow-kit ambient --disable` | Disable ambient mode | +| `npx devflow-kit learn --enable` | Enable self-learning | +| `npx devflow-kit learn --disable` | Disable self-learning | | `npx devflow-kit uninstall` | Remove DevFlow | ### Init Options @@ -250,6 +278,7 @@ Session context is saved and restored automatically via Working Memory hooks — | `--teams` / `--no-teams` | Enable/disable Agent Teams (experimental, default: off) | | `--ambient` / `--no-ambient` | Enable/disable ambient mode (default: on) | | `--memory` / `--no-memory` | Enable/disable working memory (default: on) | +| `--learn` / `--no-learn` | Enable/disable self-learning (default: on) | | `--hud` / `--no-hud` | Enable/disable HUD status line (default: on) | | `--hud-only` | Install only the HUD (no plugins, hooks, or extras) | | `--verbose` | Show detailed output | diff --git a/docs/reference/file-organization.md b/docs/reference/file-organization.md index 35b549c..b2805e7 100644 --- 
a/docs/reference/file-organization.md +++ b/docs/reference/file-organization.md @@ -42,17 +42,23 @@ devflow/ │ ├── build-hud.js # Copies dist/hud/ → scripts/hud/ │ ├── hud.sh # Thin wrapper: exec node hud/index.js │ ├── hud/ # GENERATED — compiled HUD module (gitignored) -│ └── hooks/ # Working Memory + ambient hooks +│ └── hooks/ # Working Memory + ambient + learning hooks │ ├── stop-update-memory # Stop hook: writes WORKING-MEMORY.md │ ├── session-start-memory # SessionStart hook: injects memory + git state │ ├── pre-compact-memory # PreCompact hook: saves git state backup -│ └── ambient-prompt.sh # UserPromptSubmit hook: ambient skill injection +│ ├── ambient-prompt # UserPromptSubmit hook: ambient skill injection +│ ├── stop-update-learning # Stop hook: triggers background learning +│ ├── background-learning # Background: pattern detection via Sonnet +│ ├── json-helper.cjs # Node.js jq-equivalent operations +│ └── json-parse # Shell wrapper: jq with node fallback └── src/ └── cli/ ├── commands/ │ ├── init.ts │ ├── list.ts │ ├── memory.ts + │ ├── learn.ts + │ ├── ambient.ts │ ├── hud.ts │ └── uninstall.ts ├── hud/ # HUD module (TypeScript source) @@ -137,7 +143,7 @@ Skills and agents are **not duplicated** in git. Instead: Included settings: - `statusLine` - Configurable HUD with presets (replaces legacy statusline.sh) -- `hooks` - Working Memory hooks (Stop, SessionStart, PreCompact) +- `hooks` - Working Memory hooks (Stop, SessionStart, PreCompact) + Learning Stop hook - `env.ENABLE_TOOL_SEARCH` - Deferred MCP tool loading (~85% token savings) - `env.ENABLE_LSP_TOOL` - Language Server Protocol support - `env.CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS` - Agent Teams for peer-to-peer collaboration @@ -146,7 +152,9 @@ Included settings: ## Working Memory Hooks -Three hooks in `scripts/hooks/` provide automatic session continuity. 
Toggleable via `devflow memory --enable/--disable/--status` or `devflow init --memory/--no-memory`: +Three hooks in `scripts/hooks/` provide automatic session continuity. Toggleable via `devflow memory --enable/--disable/--status` or `devflow init --memory/--no-memory`. + +A fourth hook (`stop-update-learning`) provides self-learning. Toggleable via `devflow learn --enable/--disable/--status` or `devflow init --learn/--no-learn`: | Hook | Event | File | Purpose | |------|-------|------|---------| diff --git a/scripts/hooks/ambient-prompt b/scripts/hooks/ambient-prompt index bca9f35..1a78082 100755 --- a/scripts/hooks/ambient-prompt +++ b/scripts/hooks/ambient-prompt @@ -7,17 +7,19 @@ set -e -# jq is required to parse hook input JSON — silently no-op if missing -if ! command -v jq &>/dev/null; then exit 0; fi +# JSON parsing (jq with node fallback) — silently no-op if neither available +SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" +source "$SCRIPT_DIR/json-parse" +if [ "$_JSON_AVAILABLE" = "false" ]; then exit 0; fi INPUT=$(cat) -CWD=$(echo "$INPUT" | jq -r '.cwd // ""' 2>/dev/null) +CWD=$(echo "$INPUT" | json_field "cwd" "") if [ -z "$CWD" ]; then exit 0 fi -PROMPT=$(echo "$INPUT" | jq -r '.prompt // ""' 2>/dev/null) +PROMPT=$(echo "$INPUT" | json_field "prompt" "") # Skip slash commands — they have their own orchestration if [[ "$PROMPT" == /* ]]; then @@ -42,9 +44,4 @@ fi # SYNC: must match tests/integration/helpers.ts AMBIENT_PREAMBLE PREAMBLE="AMBIENT MODE ACTIVE: Before responding, silently classify this prompt using the ambient-router skill already in your session context. If QUICK, respond normally without stating classification. If GUIDED or ORCHESTRATED, your FIRST tool calls MUST be Skill tool invocations for each selected skill — before writing ANY text about the task." 
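For context, `json_prompt_output` is expected to emit the same envelope the removed `jq -n` call built (this matches the `prompt-output` case in `json-helper.cjs`). A minimal stand-in sketch, assuming node is available:

```shell
# Hypothetical stand-in for json_prompt_output: wraps its argument in the
# UserPromptSubmit hook envelope that the old jq invocation emitted.
json_prompt_output() {
  node -e 'console.log(JSON.stringify({
    hookSpecificOutput: {
      hookEventName: "UserPromptSubmit",
      additionalContext: process.argv[1],
    },
  }))' "$1"
}

json_prompt_output "hello"
# {"hookSpecificOutput":{"hookEventName":"UserPromptSubmit","additionalContext":"hello"}}
```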
-jq -n --arg ctx "$PREAMBLE" '{ - "hookSpecificOutput": { - "hookEventName": "UserPromptSubmit", - "additionalContext": $ctx - } -}' +json_prompt_output "$PREAMBLE" diff --git a/scripts/hooks/background-learning b/scripts/hooks/background-learning new file mode 100755 index 0000000..ed4e771 --- /dev/null +++ b/scripts/hooks/background-learning @@ -0,0 +1,604 @@ +#!/bin/bash + +# Background Learning Agent +# Called by stop-update-learning as a detached background process. +# Reads user messages from the session transcript, then uses a fresh `claude -p` +# invocation with Sonnet to detect patterns and update learning-log.jsonl. +# On failure: logs error, does nothing (missing patterns are better than fake data). + +set -e + +CWD="$1" +SESSION_ID="$2" +CLAUDE_BIN="$3" + +# Source JSON parsing helpers (jq with node fallback) +SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" +source "$SCRIPT_DIR/json-parse" + +LOG_FILE="$CWD/.memory/.learning-update.log" +LOCK_DIR="$CWD/.memory/.learning.lock" +LEARNING_LOG="$CWD/.memory/learning-log.jsonl" + +# --- Logging --- + +log() { + echo "[$(date -u '+%Y-%m-%dT%H:%M:%SZ')] $1" >> "$LOG_FILE" +} + +rotate_log() { + if [ -f "$LOG_FILE" ] && [ "$(wc -l < "$LOG_FILE")" -gt 100 ]; then + tail -50 "$LOG_FILE" > "$LOG_FILE.tmp" && mv "$LOG_FILE.tmp" "$LOG_FILE" + fi +} + +# --- Stale Lock Recovery --- + +get_mtime() { + if stat --version &>/dev/null 2>&1; then + stat -c %Y "$1" + else + stat -f %m "$1" + fi +} + +STALE_THRESHOLD=300 # 5 min + +break_stale_lock() { + if [ ! -d "$LOCK_DIR" ]; then return; fi + local lock_mtime now age + lock_mtime=$(get_mtime "$LOCK_DIR") + now=$(date +%s) + age=$(( now - lock_mtime )) + if [ "$age" -gt "$STALE_THRESHOLD" ]; then + log "Breaking stale lock (age: ${age}s, threshold: ${STALE_THRESHOLD}s)" + rmdir "$LOCK_DIR" 2>/dev/null || true + fi +} + +# --- Locking (mkdir-based, POSIX-atomic) --- + +acquire_lock() { + local timeout=90 + local waited=0 + while ! 
mkdir "$LOCK_DIR" 2>/dev/null; do + if [ "$waited" -ge "$timeout" ]; then + return 1 + fi + sleep 1 + waited=$((waited + 1)) + done + return 0 +} + +cleanup() { + rmdir "$LOCK_DIR" 2>/dev/null || true +} +trap cleanup EXIT + +# --- Config Loading --- + +# SYNC: Config loading duplicated in src/cli/commands/learn.ts loadLearningConfig() +load_config() { + GLOBAL_CONFIG="$HOME/.devflow/learning.json" + PROJECT_CONFIG="$CWD/.memory/learning.json" + # Defaults + MAX_DAILY_RUNS=10 + THROTTLE_MINUTES=5 + MODEL="sonnet" + # Apply global + if [ -f "$GLOBAL_CONFIG" ]; then + MAX_DAILY_RUNS=$(json_field_file "$GLOBAL_CONFIG" "max_daily_runs" "10") + THROTTLE_MINUTES=$(json_field_file "$GLOBAL_CONFIG" "throttle_minutes" "5") + MODEL=$(json_field_file "$GLOBAL_CONFIG" "model" "sonnet") + fi + # Apply project override + if [ -f "$PROJECT_CONFIG" ]; then + MAX_DAILY_RUNS=$(json_field_file "$PROJECT_CONFIG" "max_daily_runs" "$MAX_DAILY_RUNS") + THROTTLE_MINUTES=$(json_field_file "$PROJECT_CONFIG" "throttle_minutes" "$THROTTLE_MINUTES") + MODEL=$(json_field_file "$PROJECT_CONFIG" "model" "$MODEL") + fi +} + +# --- Daily Run Cap --- + +check_daily_cap() { + COUNTER_FILE="$CWD/.memory/.learning-runs-today" + TODAY=$(date +%Y-%m-%d) + if [ -f "$COUNTER_FILE" ]; then + COUNTER_DATE=$(cut -f1 "$COUNTER_FILE") + COUNTER_COUNT=$(cut -f2 "$COUNTER_FILE") + if [ "$COUNTER_DATE" = "$TODAY" ] && [ "$COUNTER_COUNT" -ge "$MAX_DAILY_RUNS" ]; then + log "Daily cap reached ($COUNTER_COUNT/$MAX_DAILY_RUNS)" + return 1 + fi + fi + return 0 +} + +increment_daily_counter() { + TODAY=$(date +%Y-%m-%d) + COUNT=1 + if [ -f "$COUNTER_FILE" ] && [ "$(cut -f1 "$COUNTER_FILE")" = "$TODAY" ]; then + COUNT=$(( $(cut -f2 "$COUNTER_FILE") + 1 )) + fi + printf '%s\t%d\n' "$TODAY" "$COUNT" > "$COUNTER_FILE" +} + +# --- Temporal Decay --- + +decay_factor() { + case $1 in + 0) echo "100";; 1) echo "90";; 2) echo "81";; + 3) echo "73";; 4) echo "66";; 5) echo "59";; + *) echo "53";; # floor for 6+ periods + esac +} 
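The decay table encodes roughly 0.9^periods in integer percent, floored at 53% for six or more 30-day periods. A worked application (hypothetical values) of the same integer arithmetic the decay pass uses:

```shell
# Replica of the decay table for illustration: each 30-day period without a
# re-observation multiplies confidence by ~0.9, floored at 53% for 6+ periods.
decay_factor() {
  case $1 in
    0) echo "100";; 1) echo "90";; 2) echo "81";;
    3) echo "73";; 4) echo "66";; 5) echo "59";;
    *) echo "53";;
  esac
}

CONF_INT=80                                    # stored confidence 0.80, scaled x100
PERIODS=2                                      # roughly 60-89 days since last_seen
NEW_CONF_INT=$(( CONF_INT * $(decay_factor "$PERIODS") / 100 ))
echo "$NEW_CONF_INT"                           # 64 — above the prune floor of 10
```

Working on a 0-100 integer scale keeps the hook free of floating-point shell arithmetic; only the final write back to the log converts to a two-decimal fraction.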
+ +# --- Transcript Extraction --- + +extract_user_messages() { + local encoded_cwd + encoded_cwd=$(echo "$CWD" | sed 's|^/||' | tr '/' '-') + local transcript="$HOME/.claude/projects/-${encoded_cwd}/${SESSION_ID}.jsonl" + + if [ ! -f "$transcript" ]; then + log "Transcript not found at $transcript" + return 1 + fi + + # Extract ALL user text messages, skip tool_result blocks + USER_MESSAGES=$(grep '"type":"user"' "$transcript" 2>/dev/null \ + | while IFS= read -r line; do echo "$line" | json_extract_messages; done \ + | grep -v '^$' || true) # || true: an empty result must reach the check below, not trip set -e + + # Truncate to 12,000 chars + if [ ${#USER_MESSAGES} -gt 12000 ]; then + USER_MESSAGES="${USER_MESSAGES:0:12000}... [truncated]" + fi + + if [ -z "$USER_MESSAGES" ]; then + log "No user text content found in transcript" + return 1 + fi + + return 0 +} + +# --- Temporal Decay Pass --- + +apply_temporal_decay() { + if [ ! -f "$LEARNING_LOG" ]; then return; fi + + NOW_EPOCH=$(date +%s) + TEMP_FILE="$LEARNING_LOG.tmp" + > "$TEMP_FILE" + + while IFS= read -r line; do + if !
echo "$line" | json_valid; then continue; fi + + LAST_SEEN=$(echo "$line" | json_field "last_seen" "") + CONF=$(echo "$line" | json_field "confidence" "0") + + if [ -n "$LAST_SEEN" ]; then + LAST_EPOCH=$(date -j -f "%Y-%m-%dT%H:%M:%SZ" "$LAST_SEEN" +%s 2>/dev/null \ + || date -d "$LAST_SEEN" +%s 2>/dev/null \ + || echo "$NOW_EPOCH") + DAYS_ELAPSED=$(( (NOW_EPOCH - LAST_EPOCH) / 86400 )) + PERIODS=$(( DAYS_ELAPSED / 30 )) + + if [ "$PERIODS" -gt 0 ]; then + FACTOR=$(decay_factor "$PERIODS") + CONF_INT=$(echo "$CONF" | awk '{printf "%d", $1 * 100}') + NEW_CONF_INT=$(( CONF_INT * FACTOR / 100 )) + NEW_CONF=$(echo "$NEW_CONF_INT" | awk '{printf "%.2f", $1 / 100}') + + if [ "$NEW_CONF_INT" -lt 10 ]; then + log "Pruned observation (confidence: $NEW_CONF)" + continue + fi + + line=$(echo "$line" | json_update_field_json "confidence" "$NEW_CONF") + fi + fi + + echo "$line" >> "$TEMP_FILE" + done < "$LEARNING_LOG" + mv "$TEMP_FILE" "$LEARNING_LOG" +} + +# --- Entry Cap --- + +cap_entries() { + if [ ! -f "$LEARNING_LOG" ]; then return; fi + + LINE_COUNT=$(wc -l < "$LEARNING_LOG") + if [ "$LINE_COUNT" -gt 100 ]; then + TEMP_FILE="$LEARNING_LOG.tmp" + json_slurp_cap "$LEARNING_LOG" "confidence" 100 > "$TEMP_FILE" + mv "$TEMP_FILE" "$LEARNING_LOG" + fi +} + +# --- Prompt Construction --- + +build_sonnet_prompt() { + EXISTING_OBS="" + if [ -f "$LEARNING_LOG" ]; then + EXISTING_OBS=$(json_slurp_sort "$LEARNING_LOG" "confidence" 30 || echo "[]") + fi + if [ -z "$EXISTING_OBS" ]; then + EXISTING_OBS="[]" + fi + + PROMPT="You are a pattern detection agent. Analyze the user's session messages to identify repeated workflows and procedural knowledge. + +EXISTING OBSERVATIONS (for deduplication — reuse IDs for matching patterns): +$EXISTING_OBS + +USER MESSAGES FROM THIS SESSION: +$USER_MESSAGES + +Detect two types of patterns: + +1. WORKFLOW patterns: Multi-step sequences the user instructs repeatedly (e.g., \"squash merge PR, pull main, delete branch\"). These become slash commands. 
+ - Required observations for artifact creation: 3 (seen across multiple sessions) + - Temporal spread requirement: first_seen and last_seen must be 24h+ apart + +2. PROCEDURAL patterns: Knowledge about how to accomplish specific tasks (e.g., debugging hook failures, configuring specific tools). These become skills. + - Required observations for artifact creation: 2 + - No temporal spread requirement + +Rules: +- If an existing observation matches a pattern from this session, report it with the SAME id so the count can be incremented +- For new patterns, generate a new id in format obs_XXXXXX (6 random alphanumeric chars) +- Quote specific evidence from user messages that supports each observation +- Only report patterns that are clearly distinct — do not create near-duplicate observations +- If no patterns detected, return empty arrays + +Output ONLY the JSON object. No markdown fences, no explanation. + +{ + \"observations\": [ + { + \"id\": \"obs_XXXXXX\", + \"type\": \"workflow\" | \"procedural\", + \"pattern\": \"Short description of the pattern\", + \"evidence\": [\"quoted user message excerpt 1\", \"quoted user message excerpt 2\"], + \"details\": \"Step-by-step description of the workflow or knowledge\" + } + ], + \"artifacts\": [ + { + \"observation_id\": \"obs_XXXXXX\", + \"type\": \"command\" | \"skill\", + \"name\": \"kebab-case-name\", + \"description\": \"One-line description for frontmatter\", + \"content\": \"Full markdown content for the command/skill file\" + } + ] +}" +} + +# --- Sonnet Invocation --- + +run_sonnet_analysis() { + log "--- Sending to claude -p --model $MODEL ---" + + TIMEOUT=180 + RESPONSE_FILE="$CWD/.memory/.learning-response.tmp" + + DEVFLOW_BG_UPDATER=1 DEVFLOW_BG_LEARNER=1 "$CLAUDE_BIN" -p \ + --model "$MODEL" \ + --dangerously-skip-permissions \ + --output-format text \ + "$PROMPT" \ + > "$RESPONSE_FILE" 2>> "$LOG_FILE" & + CLAUDE_PID=$! 
+ + # Watchdog: kill claude if it exceeds timeout + ( sleep "$TIMEOUT" && kill "$CLAUDE_PID" 2>/dev/null ) & + WATCHDOG_PID=$! + + # Capture the real exit status: inside `if ! wait`, $? holds the negated status + EXIT_CODE=0 + wait "$CLAUDE_PID" 2>/dev/null || EXIT_CODE=$? + if [ "$EXIT_CODE" -ne 0 ]; then + if [ "$EXIT_CODE" -gt 128 ]; then + log "Analysis timed out (killed after ${TIMEOUT}s) for session $SESSION_ID" + else + log "Analysis failed for session $SESSION_ID (exit code $EXIT_CODE)" + fi + rm -f "$RESPONSE_FILE" + kill "$WATCHDOG_PID" 2>/dev/null || true + wait "$WATCHDOG_PID" 2>/dev/null || true + return 1 + fi + + # Clean up watchdog + kill "$WATCHDOG_PID" 2>/dev/null || true + wait "$WATCHDOG_PID" 2>/dev/null || true + + if [ ! -f "$RESPONSE_FILE" ]; then + log "No response file produced" + return 1 + fi + + RESPONSE=$(cat "$RESPONSE_FILE") + rm -f "$RESPONSE_FILE" + + # Strip markdown fences if present + RESPONSE=$(echo "$RESPONSE" | sed '1s/^```json$//' | sed '1s/^```$//' | sed '$s/^```$//') + + # Validate JSON + if ! echo "$RESPONSE" | json_valid; then + log "Invalid JSON response from model — skipping" + log "--- Raw response ---" + log "$RESPONSE" + log "--- End raw response ---" + return 1 + fi + + return 0 +} + +# --- Process Observations --- + +process_observations() { + log "--- Processing response ---" + + OBS_COUNT=$(echo "$RESPONSE" | json_array_length "observations") + NOW_ISO=$(date -u '+%Y-%m-%dT%H:%M:%SZ') + + for i in $(seq 0 $((OBS_COUNT - 1))); do + OBS=$(echo "$RESPONSE" | json_array_item "observations" "$i") + OBS_ID=$(echo "$OBS" | json_field "id" "") + OBS_TYPE=$(echo "$OBS" | json_field "type" "") + OBS_PATTERN=$(echo "$OBS" | json_field "pattern" "") + # Evidence needs to stay as JSON array for merging later + if [ "$_HAS_JQ" = "true" ]; then + OBS_EVIDENCE=$(echo "$OBS" | jq -c '.evidence' 2>/dev/null) + else + OBS_EVIDENCE=$(echo "$OBS" | node -e "const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8'));console.log(JSON.stringify(d.evidence||[]))" 2>/dev/null) + fi + OBS_DETAILS=$(echo "$OBS" | json_field "details" "") + + # Check
if observation already exists + EXISTING_LINE="" + if [ -f "$LEARNING_LOG" ]; then + EXISTING_LINE=$(grep -F "\"id\":\"$OBS_ID\"" "$LEARNING_LOG" 2>/dev/null | head -1) + fi + + if [ -n "$EXISTING_LINE" ]; then + # Update existing: increment count, update last_seen, merge evidence + OLD_COUNT=$(echo "$EXISTING_LINE" | json_field "observations" "0") + NEW_COUNT=$((OLD_COUNT + 1)) + FIRST_SEEN=$(echo "$EXISTING_LINE" | json_field "first_seen" "") + if [ "$_HAS_JQ" = "true" ]; then + OLD_EVIDENCE=$(echo "$EXISTING_LINE" | jq -c '.evidence' 2>/dev/null) + else + OLD_EVIDENCE=$(echo "$EXISTING_LINE" | node -e "const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8'));console.log(JSON.stringify(d.evidence||[]))" 2>/dev/null) + fi + MERGED_EVIDENCE=$(echo "[$OLD_EVIDENCE, $OBS_EVIDENCE]" | json_merge_evidence) + + # Calculate confidence + if [ "$OBS_TYPE" = "workflow" ]; then + REQUIRED=3 + else + REQUIRED=2 + fi + CONF_RAW=$((NEW_COUNT * 100 / REQUIRED)) + if [ "$CONF_RAW" -gt 95 ]; then CONF_RAW=95; fi + CONF=$(echo "$CONF_RAW" | awk '{printf "%.2f", $1 / 100}') + + # Check temporal spread for workflows + STATUS=$(echo "$EXISTING_LINE" | json_field "status" "") + if [ "$OBS_TYPE" = "workflow" ] && [ "$STATUS" != "created" ]; then + FIRST_EPOCH=$(date -j -f "%Y-%m-%dT%H:%M:%SZ" "$FIRST_SEEN" +%s 2>/dev/null \ + || date -d "$FIRST_SEEN" +%s 2>/dev/null \ + || echo "0") + NOW_EPOCH=$(date +%s) + SPREAD=$((NOW_EPOCH - FIRST_EPOCH)) + if [ "$SPREAD" -lt 86400 ] && [ "$CONF_RAW" -ge 70 ]; then + STATUS="observing" + fi + fi + + # Determine status + ARTIFACT_PATH=$(echo "$EXISTING_LINE" | json_field "artifact_path" "") + if [ "$STATUS" != "created" ] && [ "$CONF_RAW" -ge 70 ]; then + if [ "$OBS_TYPE" = "workflow" ]; then + FIRST_EPOCH=$(date -j -f "%Y-%m-%dT%H:%M:%SZ" "$FIRST_SEEN" +%s 2>/dev/null \ + || date -d "$FIRST_SEEN" +%s 2>/dev/null \ + || echo "0") + NOW_EPOCH=$(date +%s) + SPREAD=$((NOW_EPOCH - FIRST_EPOCH)) + if [ "$SPREAD" -ge 86400 ]; then + 
STATUS="ready" + fi + else + STATUS="ready" + fi + fi + + # Build updated entry + UPDATED=$(json_obs_construct_full \ + --arg id "$OBS_ID" \ + --arg type "$OBS_TYPE" \ + --arg pattern "$OBS_PATTERN" \ + --argjson confidence "$CONF" \ + --argjson observations "$NEW_COUNT" \ + --arg first_seen "$FIRST_SEEN" \ + --arg last_seen "$NOW_ISO" \ + --arg status "$STATUS" \ + --argjson evidence "$MERGED_EVIDENCE" \ + --arg details "$OBS_DETAILS" \ + --arg artifact_path "$ARTIFACT_PATH") + + # Replace line in file + TEMP_LOG="$LEARNING_LOG.tmp" + grep -vF "\"id\":\"$OBS_ID\"" "$LEARNING_LOG" > "$TEMP_LOG" 2>/dev/null || true + echo "$UPDATED" >> "$TEMP_LOG" + mv "$TEMP_LOG" "$LEARNING_LOG" + + log "Updated observation $OBS_ID: count=$NEW_COUNT confidence=$CONF status=$STATUS" + else + # New observation + if [ "$OBS_TYPE" = "workflow" ]; then + CONF="0.33" + else + CONF="0.50" + fi + + NEW_ENTRY=$(json_obs_construct \ + --arg id "$OBS_ID" \ + --arg type "$OBS_TYPE" \ + --arg pattern "$OBS_PATTERN" \ + --argjson confidence "$CONF" \ + --argjson observations 1 \ + --arg first_seen "$NOW_ISO" \ + --arg last_seen "$NOW_ISO" \ + --arg status "observing" \ + --argjson evidence "$OBS_EVIDENCE" \ + --arg details "$OBS_DETAILS") + + echo "$NEW_ENTRY" >> "$LEARNING_LOG" + log "New observation $OBS_ID: type=$OBS_TYPE confidence=$CONF" + fi + done +} + +# --- Create Artifacts --- + +create_artifacts() { + ART_COUNT=$(echo "$RESPONSE" | json_array_length "artifacts") + + for i in $(seq 0 $((ART_COUNT - 1))); do + ART=$(echo "$RESPONSE" | json_array_item "artifacts" "$i") + ART_OBS_ID=$(echo "$ART" | json_field "observation_id" "") + ART_TYPE=$(echo "$ART" | json_field "type" "") + ART_NAME=$(echo "$ART" | json_field "name" "") + ART_DESC=$(echo "$ART" | json_field "description" "") + ART_CONTENT=$(echo "$ART" | json_field "content" "") + + # Sanitize ART_NAME — prevent path traversal (model-generated input) + ART_NAME=$(echo "$ART_NAME" | tr -d '/' | sed 's/\.\.//g') + if [ -z "$ART_NAME" 
]; then + log "Skipping artifact with empty/invalid name" + continue + fi + + # Check the observation's status — only create if ready + if [ -f "$LEARNING_LOG" ]; then + OBS_STATUS=$(grep -F "\"id\":\"$ART_OBS_ID\"" "$LEARNING_LOG" 2>/dev/null | json_field "status" "") + if [ "$OBS_STATUS" != "ready" ]; then + log "Skipping artifact for $ART_OBS_ID (status: $OBS_STATUS, need: ready)" + continue + fi + fi + + if [ "$ART_TYPE" = "command" ]; then + ART_DIR="$CWD/.claude/commands/learned" + ART_PATH="$ART_DIR/$ART_NAME.md" + else + ART_DIR="$CWD/.claude/skills/learned-$ART_NAME" + ART_PATH="$ART_DIR/SKILL.md" + fi + + # Never overwrite existing files (user customization preserved) + if [ -f "$ART_PATH" ]; then + log "Artifact already exists at $ART_PATH — skipping" + continue + fi + + mkdir -p "$ART_DIR" + + # Precompute metadata (safe — our own data, not model-generated) + ART_DATE=$(date +%Y-%m-%d) + ART_CONF=$(grep -F "\"id\":\"$ART_OBS_ID\"" "$LEARNING_LOG" 2>/dev/null | json_field "confidence" "0") + ART_OBS_N=$(grep -F "\"id\":\"$ART_OBS_ID\"" "$LEARNING_LOG" 2>/dev/null | json_field "observations" "0") + + # Write artifact with learning marker + # Uses printf %s to safely write model-generated content (no shell expansion) + if [ "$ART_TYPE" = "command" ]; then + printf '%s\n' "---" \ + "description: \"$ART_DESC\"" \ + "# devflow-learning: auto-generated ($ART_DATE, confidence: $ART_CONF, obs: $ART_OBS_N)" \ + "---" \ + "" > "$ART_PATH" + printf '%s\n' "$ART_CONTENT" >> "$ART_PATH" + else + printf '%s\n' "---" \ + "name: learned-$ART_NAME" \ + "description: \"$ART_DESC\"" \ + "# devflow-learning: auto-generated ($ART_DATE, confidence: $ART_CONF, obs: $ART_OBS_N)" \ + "---" \ + "" > "$ART_PATH" + printf '%s\n' "$ART_CONTENT" >> "$ART_PATH" + fi + + # Update observation status to "created" and record artifact path + TEMP_LOG="$LEARNING_LOG.tmp" + while IFS= read -r line; do + if echo "$line" | grep -qF "\"id\":\"$ART_OBS_ID\""; then + echo "$line" | 
json_update_field "status" "created" | json_update_field "artifact_path" "$ART_PATH" + else + echo "$line" + fi + done < "$LEARNING_LOG" > "$TEMP_LOG" + mv "$TEMP_LOG" "$LEARNING_LOG" + + log "Created artifact: $ART_PATH" + done +} + +# --- Main --- + +# Wait for parent session to flush transcript +sleep 3 + +log "Starting learning analysis for session $SESSION_ID" + +# Break stale locks from previous zombie processes +break_stale_lock + +# Acquire lock +if ! acquire_lock; then + log "Lock timeout after 90s — skipping for session $SESSION_ID" + trap - EXIT + exit 0 +fi + +rotate_log +load_config + +# Check daily cap +if ! check_daily_cap; then + exit 0 +fi + +# Extract user messages +USER_MESSAGES="" +if ! extract_user_messages; then + log "No messages to analyze — skipping" + exit 0 +fi + +# Apply temporal decay and cap entries +apply_temporal_decay +cap_entries + +# Build prompt with existing observations for dedup context +build_sonnet_prompt + +# Run Sonnet analysis +if ! run_sonnet_analysis; then + exit 0 +fi + +# Process observations and create artifacts +process_observations +create_artifacts + +increment_daily_counter +log "Learning analysis complete for session $SESSION_ID" + +exit 0 diff --git a/scripts/hooks/background-memory-update b/scripts/hooks/background-memory-update index 2ba5749..d136be1 100755 --- a/scripts/hooks/background-memory-update +++ b/scripts/hooks/background-memory-update @@ -13,6 +13,10 @@ SESSION_ID="$2" MEMORY_FILE="$3" CLAUDE_BIN="$4" +# Source JSON parsing helpers (jq with node fallback) +SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" +source "$SCRIPT_DIR/json-parse" + LOG_FILE="$CWD/.memory/.working-memory-update.log" LOCK_DIR="$CWD/.memory/.working-memory.lock" @@ -93,22 +97,12 @@ extract_last_turn() { last_user=$(grep '"type":"user"' "$transcript" 2>/dev/null \ | tail -3 \ - | jq -r ' - if .message.content then - [.message.content[] | select(.type == "text") | .text] | join("\n") - else "" - end - ' 2>/dev/null \ + | while IFS= 
read -r line; do echo "$line" | json_extract_messages; done \ | tail -1) last_assistant=$(grep '"type":"assistant"' "$transcript" 2>/dev/null \ | tail -3 \ - | jq -r ' - if .message.content then - [.message.content[] | select(.type == "text") | .text] | join("\n") - else "" - end - ' 2>/dev/null \ + | while IFS= read -r line; do echo "$line" | json_extract_messages; done \ | tail -1) # Truncate to ~4000 chars total to keep token cost low diff --git a/scripts/hooks/json-helper.cjs b/scripts/hooks/json-helper.cjs new file mode 100755 index 0000000..6565e83 --- /dev/null +++ b/scripts/hooks/json-helper.cjs @@ -0,0 +1,338 @@ +#!/usr/bin/env node + +// scripts/hooks/json-helper.cjs +// Provides jq-equivalent operations for hooks when jq is not installed. +// SECURITY: This is a local CLI helper invoked only by shell hooks with controlled arguments. +// File path arguments come from hook-owned variables, not from external/untrusted input. +// Usage: node json-helper.cjs [args...] +// +// Operations: +// get-field [default] Read field from stdin JSON +// get-field-file [def] Read field from JSON file +// validate Exit 0 if stdin is valid JSON, 1 otherwise +// compact Compact stdin JSON to single line +// construct [--arg k v] Build JSON object with args +// update-field [--json] Set field on stdin JSON (--json parses value) +// update-fields Apply multiple field updates from stdin JSON +// extract-text-messages Extract text content from Claude message format +// merge-evidence Flatten, dedupe, limit to 10 from stdin JSON +// slurp-sort [limit] Read JSONL, sort by field desc, limit results +// slurp-cap Read JSONL, sort by field desc, output limit lines +// array-length Get length of array at dotted path in stdin JSON +// array-item Get item at index from array at path in stdin JSON +// obs-construct Build observation JSON from key=value pairs +// session-output Build SessionStart output envelope +// prompt-output Build UserPromptSubmit output envelope +// backup-construct 
Build pre-compact backup JSON from --arg pairs +// learning-created Extract created artifacts from learning log +// learning-new Find new artifacts since epoch + +'use strict'; + +const fs = require('fs'); +const path = require('path'); + +const op = process.argv[2]; +const args = process.argv.slice(3); + +/** + * Resolve and validate a file path argument. Returns the resolved absolute path. + * Rejects paths containing '..' traversal sequences for defense-in-depth. + */ +function safePath(filePath) { + const resolved = path.resolve(filePath); + if (resolved.includes('..')) { + throw new Error(`Refused path with traversal: ${filePath}`); + } + return resolved; +} + +function readStdin() { + try { + return fs.readFileSync('/dev/stdin', 'utf8').trim(); + } catch { + return ''; + } +} + +function getNestedField(obj, field) { + const parts = field.split('.'); + let current = obj; + for (const part of parts) { + if (current == null || typeof current !== 'object') return undefined; + current = current[part]; + } + return current; +} + +function parseJsonl(file) { + const lines = fs.readFileSync(safePath(file), 'utf8').trim().split('\n').filter(Boolean); + return lines.map(l => { + try { return JSON.parse(l); } catch { return null; } + }).filter(Boolean); +} + +function parseArgs(argList) { + const result = {}; + const jsonArgs = {}; + for (let i = 0; i < argList.length; i++) { + if (argList[i] === '--arg' && i + 2 < argList.length) { + result[argList[i + 1]] = argList[i + 2]; + i += 2; + } else if (argList[i] === '--argjson' && i + 2 < argList.length) { + try { + jsonArgs[argList[i + 1]] = JSON.parse(argList[i + 2]); + } catch { + jsonArgs[argList[i + 1]] = argList[i + 2]; + } + i += 2; + } + } + return { ...result, ...jsonArgs }; +} + +try { + switch (op) { + case 'get-field': { + const input = JSON.parse(readStdin()); + const field = args[0]; + const def = args[1] || ''; + const val = getNestedField(input, field); + console.log(val != null ? 
String(val) : def); + break; + } + + case 'get-field-file': { + const file = safePath(args[0]); + const field = args[1]; + const def = args[2] || ''; + const content = fs.readFileSync(file, 'utf8').trim(); + const input = JSON.parse(content); + const val = getNestedField(input, field); + console.log(val != null ? String(val) : def); + break; + } + + case 'validate': { + try { + const text = readStdin(); + if (!text) process.exit(1); + JSON.parse(text); + process.exit(0); + } catch { + process.exit(1); + } + break; + } + + case 'compact': { + const input = JSON.parse(readStdin()); + console.log(JSON.stringify(input)); + break; + } + + case 'construct': { + // Build JSON from --arg/--argjson pairs + const template = parseArgs(args); + console.log(JSON.stringify(template)); + break; + } + + case 'update-field': { + const input = JSON.parse(readStdin()); + const field = args[0]; + const value = args[1]; + const isJson = args[2] === '--json'; + input[field] = isJson ? JSON.parse(value) : value; + console.log(JSON.stringify(input)); + break; + } + + case 'update-fields': { + // Read stdin JSON, apply field updates from args: field1=val1 field2=val2 + const input = JSON.parse(readStdin()); + for (const arg of args) { + const eqIdx = arg.indexOf('='); + if (eqIdx > 0) { + const key = arg.slice(0, eqIdx); + const val = arg.slice(eqIdx + 1); + // Try to parse as JSON, fall back to string + try { input[key] = JSON.parse(val); } catch { input[key] = val; } + } + } + console.log(JSON.stringify(input)); + break; + } + + case 'extract-text-messages': { + const input = JSON.parse(readStdin()); + const content = input?.message?.content; + if (!Array.isArray(content)) { + console.log(''); + break; + } + const texts = content + .filter(c => c.type === 'text') + .map(c => c.text); + console.log(texts.join('\n')); + break; + } + + case 'merge-evidence': { + const input = JSON.parse(readStdin()); + // input is [[old_evidence], [new_evidence]] — flatten, dedupe, limit + const flat = 
input.flat();
+      const unique = [...new Set(flat)];
+      console.log(JSON.stringify(unique.slice(0, 10)));
+      break;
+    }
+
+    case 'slurp-sort': {
+      const file = args[0];
+      const field = args[1];
+      const limit = parseInt(args[2], 10) || 30;
+      const parsed = parseJsonl(file);
+      parsed.sort((a, b) => (b[field] || 0) - (a[field] || 0));
+      console.log(JSON.stringify(parsed.slice(0, limit)));
+      break;
+    }
+
+    case 'slurp-cap': {
+      // Read JSONL, sort by field desc, output top N as JSONL (one per line)
+      const file = args[0];
+      const field = args[1];
+      const limit = parseInt(args[2], 10) || 100;
+      const parsed = parseJsonl(file);
+      parsed.sort((a, b) => (b[field] || 0) - (a[field] || 0));
+      for (const item of parsed.slice(0, limit)) {
+        console.log(JSON.stringify(item));
+      }
+      break;
+    }
+
+    case 'array-length': {
+      const input = JSON.parse(readStdin());
+      const fieldPath = args[0];  // named to avoid shadowing the `path` module
+      const arr = getNestedField(input, fieldPath);
+      console.log(Array.isArray(arr) ? arr.length : 0);
+      break;
+    }
+
+    case 'array-item': {
+      const input = JSON.parse(readStdin());
+      const fieldPath = args[0];
+      const index = parseInt(args[1], 10);
+      const arr = getNestedField(input, fieldPath);
+      if (Array.isArray(arr) && index >= 0 && index < arr.length) {
+        console.log(JSON.stringify(arr[index]));
+      } else {
+        console.log('null');
+      }
+      break;
+    }
+
+    case 'obs-construct': {
+      // Build an observation JSON from --arg/--argjson pairs
+      const data = parseArgs(args);
+      console.log(JSON.stringify(data));
+      break;
+    }
+
+    case 'session-output': {
+      const ctx = args[0];
+      console.log(JSON.stringify({
+        hookSpecificOutput: {
+          hookEventName: 'SessionStart',
+          additionalContext: ctx,
+        },
+      }));
+      break;
+    }
+
+    case 'prompt-output': {
+      const ctx = args[0];
+      console.log(JSON.stringify({
+        hookSpecificOutput: {
+          hookEventName: 'UserPromptSubmit',
+          additionalContext: ctx,
+        },
+      }));
+      break;
+    }
+
+    case 'backup-construct': {
+      const data = parseArgs(args);
+      console.log(JSON.stringify({
+        timestamp: data.ts || '',
+        trigger:
'pre-compact',
+        memory_snapshot: data.memory || '',
+        git: {
+          branch: data.branch || '',
+          status: data.status || '',
+          log: data.log || '',
+          diff_stat: data.diff || '',
+        },
+      }, null, 2));
+      break;
+    }
+
+    case 'learning-created': {
+      // Extract created artifacts from learning log JSONL
+      const file = args[0];
+      const parsed = parseJsonl(file);
+
+      const created = parsed.filter(o => o.status === 'created' && o.artifact_path);
+
+      const commands = created
+        .filter(o => o.type === 'workflow')
+        .slice(0, 5)
+        .map(o => {
+          const name = o.artifact_path.split('/').pop().replace(/\.md$/, '');
+          const conf = (Math.floor(o.confidence * 10) / 10).toString();
+          return { name, conf };
+        });
+
+      const skills = created
+        .filter(o => o.type === 'procedural')
+        .slice(0, 5)
+        .map(o => {
+          const match = o.artifact_path.match(/learned-([^/]+)/);
+          const name = match ? match[1] : '';
+          const conf = (Math.floor(o.confidence * 10) / 10).toString();
+          return { name, conf };
+        });
+
+      console.log(JSON.stringify({ commands, skills }));
+      break;
+    }
+
+    case 'learning-new': {
+      // Find new artifacts since epoch
+      const file = args[0];
+      // since_epoch argument unused in current implementation — always show created
+      const parsed = parseJsonl(file);
+
+      // Require artifact_path too — entries without one would throw on split()
+      const created = parsed.filter(o => o.status === 'created' && o.last_seen && o.artifact_path);
+      const messages = created.map(o => {
+        if (o.type === 'workflow') {
+          const name = o.artifact_path.split('/').pop().replace(/\.md$/, '');
+          return `NEW: /learned/${name} command created from repeated workflow`;
+        } else {
+          const match = o.artifact_path.match(/learned-([^/]+)/);
+          const name = match ? match[1] : '';
+          return `NEW: ${name} skill created from procedural knowledge`;
+        }
+      });
+
+      console.log(messages.join('\n'));
+      break;
+    }
+
+    default:
+      process.stderr.write(`json-helper: unknown operation "${op}"\n`);
+      process.exit(1);
+  }
+} catch (err) {
+  process.stderr.write(`json-helper error: ${err && err.message ?
err.message : String(err)}\n`); + process.exit(1); +} diff --git a/scripts/hooks/json-parse b/scripts/hooks/json-parse new file mode 100755 index 0000000..0d86ad6 --- /dev/null +++ b/scripts/hooks/json-parse @@ -0,0 +1,295 @@ +#!/bin/bash + +# JSON parsing helper — tries jq, falls back to node json-helper.js +# Usage: source "$SCRIPT_DIR/json-parse" +# +# After sourcing, check $_JSON_AVAILABLE before using json_* functions. +# All functions read from stdin unless noted otherwise. + +_SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +_JSON_HELPER="$_SCRIPT_DIR/json-helper.cjs" + +_HAS_JQ=false +_HAS_NODE=false +command -v jq &>/dev/null && _HAS_JQ=true +command -v node &>/dev/null && _HAS_NODE=true + +if [ "$_HAS_JQ" = "false" ] && [ "$_HAS_NODE" = "false" ]; then + _JSON_AVAILABLE=false + return 0 2>/dev/null || exit 0 +fi +_JSON_AVAILABLE=true + +# --- Core field extraction --- + +# Extract a field from stdin JSON. Usage: echo '{"k":"v"}' | json_field "k" "default" +json_field() { + local field="$1" default="${2:-}" + if [ "$_HAS_JQ" = "true" ]; then + jq -r ".$field // \"$default\"" 2>/dev/null + else + node "$_JSON_HELPER" get-field "$field" "$default" + fi +} + +# Extract a field from a JSON file. Usage: json_field_file "/path/to/file.json" "field" "default" +json_field_file() { + local file="$1" field="$2" default="${3:-}" + if [ "$_HAS_JQ" = "true" ]; then + jq -r ".$field // \"$default\"" "$file" 2>/dev/null + else + node "$_JSON_HELPER" get-field-file "$file" "$field" "$default" + fi +} + +# --- Validation --- + +# Validate stdin as JSON (exit code 0/1). Usage: echo '{}' | json_valid +json_valid() { + if [ "$_HAS_JQ" = "true" ]; then + jq -e . >/dev/null 2>&1 + else + node "$_JSON_HELPER" validate + fi +} + +# --- Compact --- + +# Compact stdin JSON to single line. Usage: echo '{ "k": "v" }' | json_compact +json_compact() { + if [ "$_HAS_JQ" = "true" ]; then + jq -c '.' 
2>/dev/null
+  else
+    node "$_JSON_HELPER" compact
+  fi
+}
+
+# --- Construction ---
+
+# Build a JSON object from --arg/--argjson pairs.
+# Usage: json_construct --arg key1 val1 --arg key2 val2
+json_construct() {
+  if [ "$_HAS_JQ" = "true" ]; then
+    # Build jq expression from args (consume flag/name/value triples)
+    local jq_args=()
+    local fields=""
+    while [ $# -gt 0 ]; do
+      case "$1" in
+        --arg|--argjson)
+          if [ $# -lt 3 ]; then break; fi
+          jq_args+=("$1" "$2" "$3")
+          if [ -n "$fields" ]; then fields="$fields, "; fi
+          fields="${fields}$2: \$$2"
+          shift 3
+          ;;
+        *)
+          shift
+          ;;
+      esac
+    done
+    jq -n -c "${jq_args[@]}" "{$fields}" 2>/dev/null
+  else
+    node "$_JSON_HELPER" construct "$@"
+  fi
+}
+
+# --- Field updates ---
+
+# Update a field on stdin JSON. Usage: echo '{"k":"v"}' | json_update_field "k" "new_v"
+json_update_field() {
+  local field="$1" value="$2"
+  if [ "$_HAS_JQ" = "true" ]; then
+    jq -c --arg v "$value" ".$field = \$v" 2>/dev/null
+  else
+    node "$_JSON_HELPER" update-field "$field" "$value"
+  fi
+}
+
+# Update a field with a JSON value. Usage: echo '{}' | json_update_field_json "k" '42'
+json_update_field_json() {
+  local field="$1" value="$2"
+  if [ "$_HAS_JQ" = "true" ]; then
+    jq -c --argjson v "$value" ".$field = \$v" 2>/dev/null
+  else
+    node "$_JSON_HELPER" update-field "$field" "$value" --json
+  fi
+}
+
+# --- Slurp operations ---
+
+# Slurp JSONL file, sort by field desc, output as JSON array.
+# Usage: json_slurp_sort "file.jsonl" "confidence" 30
+json_slurp_sort() {
+  local file="$1" field="$2" limit="${3:-30}"
+  if [ "$_HAS_JQ" = "true" ]; then
+    jq -s "sort_by(.$field) | reverse | .[0:$limit]" "$file" 2>/dev/null
+  else
+    node "$_JSON_HELPER" slurp-sort "$file" "$field" "$limit"
+  fi
+}
+
+# Slurp JSONL file, sort by field desc, output as JSONL (one line per entry).
+# Usage: json_slurp_cap "file.jsonl" "confidence" 100
+json_slurp_cap() {
+  local file="$1" field="$2" limit="${3:-100}"
+  if [ "$_HAS_JQ" = "true" ]; then
+    jq -c '.' "$file" | jq -c -s "sort_by(.$field) | reverse | .[0:$limit][]" 2>/dev/null
+  else
+    node "$_JSON_HELPER" slurp-cap "$file" "$field" "$limit"
+  fi
+}
+
+# --- Array operations ---
+
+# Get length of array at path. Usage: echo '{"obs":[1,2]}' | json_array_length "observations"
+json_array_length() {
+  local path="$1"
+  if [ "$_HAS_JQ" = "true" ]; then
+    jq ".$path | length" 2>/dev/null
+  else
+    node "$_JSON_HELPER" array-length "$path"
+  fi
+}
+
+# Get item at index from array. Usage: echo '{"obs":[{},{}]}' | json_array_item "observations" 0
+json_array_item() {
+  local path="$1" index="$2"
+  if [ "$_HAS_JQ" = "true" ]; then
+    jq -c ".${path}[$index]" 2>/dev/null
+  else
+    node "$_JSON_HELPER" array-item "$path" "$index"
+  fi
+}
+
+# --- Transcript extraction ---
+
+# Extract text messages from Claude message JSON. Usage: echo '{"message":...}' | json_extract_messages
+json_extract_messages() {
+  if [ "$_HAS_JQ" = "true" ]; then
+    jq -r 'if .message.content then
+      [.message.content[] | select(.type == "text") | .text] | join("\n")
+    else "" end' 2>/dev/null
+  else
+    node "$_JSON_HELPER" extract-text-messages
+  fi
+}
+
+# --- Evidence merging ---
+
+# Merge two evidence arrays.
Usage: echo '[[old], [new]]' | json_merge_evidence +json_merge_evidence() { + if [ "$_HAS_JQ" = "true" ]; then + jq -c 'flatten | unique | .[0:10]' 2>/dev/null + else + node "$_JSON_HELPER" merge-evidence + fi +} + +# --- Hook output envelopes --- + +# Build SessionStart output. Usage: json_session_output "$CONTEXT" +json_session_output() { + local ctx="$1" + if [ "$_HAS_JQ" = "true" ]; then + jq -n --arg ctx "$ctx" '{ + "hookSpecificOutput": { + "hookEventName": "SessionStart", + "additionalContext": $ctx + } + }' + else + node "$_JSON_HELPER" session-output "$ctx" + fi +} + +# Build UserPromptSubmit output. Usage: json_prompt_output "$CONTEXT" +json_prompt_output() { + local ctx="$1" + if [ "$_HAS_JQ" = "true" ]; then + jq -n --arg ctx "$ctx" '{ + "hookSpecificOutput": { + "hookEventName": "UserPromptSubmit", + "additionalContext": $ctx + } + }' + else + node "$_JSON_HELPER" prompt-output "$ctx" + fi +} + +# Build pre-compact backup JSON. Usage: json_backup_construct --arg ts X --arg branch Y ... +json_backup_construct() { + if [ "$_HAS_JQ" = "true" ]; then + jq -n "$@" '{timestamp: $ts, trigger: "pre-compact", memory_snapshot: $memory, git: {branch: $branch, status: $status, log: $log, diff_stat: $diff}}' + else + node "$_JSON_HELPER" backup-construct "$@" + fi +} + +# --- Learning-specific --- + +# Extract created artifacts from learning log. 
Usage: json_learning_created "learning-log.jsonl"
+json_learning_created() {
+  local file="$1"
+  if [ "$_HAS_JQ" = "true" ]; then
+    jq -s '
+      [.[] | select(.status == "created" and .artifact_path != null and .artifact_path != "")]
+      | {
+          commands: [.[] | select(.type == "workflow") | {
+            name: (.artifact_path | split("/") | last | rtrimstr(".md")),
+            conf: (.confidence * 10 | floor / 10 | tostring)
+          }] | .[0:5],
+          skills: [.[] | select(.type == "procedural") | {
+            name: (.artifact_path | capture("learned-(?<n>[^/]+)") | .n // ""),
+            conf: (.confidence * 10 | floor / 10 | tostring)
+          }] | .[0:5]
+        }
+    ' "$file" 2>/dev/null
+  else
+    node "$_JSON_HELPER" learning-created "$file"
+  fi
+}
+
+# Find new artifacts since epoch. Usage: json_learning_new "learning-log.jsonl" "$LAST_NOTIFIED"
+json_learning_new() {
+  local file="$1" since="$2"
+  if [ "$_HAS_JQ" = "true" ]; then
+    jq -rs --arg since "$since" '
+      [.[] | select(.status == "created" and .last_seen != null)]
+      | map(
+          if .type == "workflow" then
+            "NEW: /learned/\(.artifact_path | split("/") | last | rtrimstr(".md")) command created from repeated workflow"
+          else
+            "NEW: \(.artifact_path | capture("learned-(?<n>[^/]+)") | .n // "") skill created from procedural knowledge"
+          end
+        ) | join("\n")
+    ' "$file" 2>/dev/null
+  else
+    node "$_JSON_HELPER" learning-new "$file" "$since"
+  fi
+}
+
+# Build observation JSON. Usage: json_obs_construct --arg id X --arg type Y ...
+json_obs_construct() {
+  if [ "$_HAS_JQ" = "true" ]; then
+    jq -n -c "$@" '{id: $id, type: $type, pattern: $pattern, confidence: $confidence, observations: $observations, first_seen: $first_seen, last_seen: $last_seen, status: $status, evidence: $evidence, details: $details}'
+  else
+    node "$_JSON_HELPER" obs-construct "$@"
+  fi
+}
+
+# Build observation JSON with artifact_path. Usage: json_obs_construct_full --arg id X ...
+json_obs_construct_full() { + if [ "$_HAS_JQ" = "true" ]; then + jq -n -c "$@" '{id: $id, type: $type, pattern: $pattern, confidence: $confidence, observations: $observations, first_seen: $first_seen, last_seen: $last_seen, status: $status, evidence: $evidence, details: $details, artifact_path: $artifact_path}' + else + node "$_JSON_HELPER" obs-construct "$@" + fi +} diff --git a/scripts/hooks/pre-compact-memory b/scripts/hooks/pre-compact-memory index b4b3db3..08ef19e 100644 --- a/scripts/hooks/pre-compact-memory +++ b/scripts/hooks/pre-compact-memory @@ -8,12 +8,13 @@ set -e -# jq is required to parse hook input JSON — silently no-op if missing -if ! command -v jq &>/dev/null; then exit 0; fi +# JSON parsing (jq with node fallback) — silently no-op if neither available +source "$(cd "$(dirname "$0")" && pwd)/json-parse" +if [ "$_JSON_AVAILABLE" = "false" ]; then exit 0; fi INPUT=$(cat) -CWD=$(echo "$INPUT" | jq -r '.cwd // ""' 2>/dev/null) +CWD=$(echo "$INPUT" | json_field "cwd" "") if [ -z "$CWD" ]; then exit 0 fi @@ -44,24 +45,14 @@ if [ -f "$CWD/.memory/WORKING-MEMORY.md" ]; then fi # Write backup JSON -jq -n \ +json_backup_construct \ --arg ts "$TIMESTAMP" \ --arg branch "$GIT_BRANCH" \ --arg status "$GIT_STATUS" \ --arg log "$GIT_LOG" \ --arg diff "$GIT_DIFF_STAT" \ --arg memory "$MEMORY_SNAPSHOT" \ - '{ - timestamp: $ts, - trigger: "pre-compact", - memory_snapshot: $memory, - git: { - branch: $branch, - status: $status, - log: $log, - diff_stat: $diff - } - }' > "$BACKUP_FILE" + > "$BACKUP_FILE" # Bootstrap minimal WORKING-MEMORY.md if none exists yet # This ensures SessionStart has context to inject after compaction diff --git a/scripts/hooks/session-start-memory b/scripts/hooks/session-start-memory index 62d294a..ead0a74 100644 --- a/scripts/hooks/session-start-memory +++ b/scripts/hooks/session-start-memory @@ -8,12 +8,14 @@ set -e -# jq is required to parse hook input JSON — silently no-op if missing -if ! 
command -v jq &>/dev/null; then exit 0; fi +# JSON parsing (jq with node fallback) — silently no-op if neither available +SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" +source "$SCRIPT_DIR/json-parse" +if [ "$_JSON_AVAILABLE" = "false" ]; then exit 0; fi INPUT=$(cat) -CWD=$(echo "$INPUT" | jq -r '.cwd // ""' 2>/dev/null) +CWD=$(echo "$INPUT" | json_field "cwd" "") if [ -z "$CWD" ]; then exit 0 fi @@ -40,9 +42,9 @@ if [ -f "$MEMORY_FILE" ]; then BACKUP_FILE="$CWD/.memory/backup.json" COMPACT_NOTE="" if [ -f "$BACKUP_FILE" ]; then - BACKUP_MEMORY=$(jq -r '.memory_snapshot // ""' "$BACKUP_FILE" 2>/dev/null) + BACKUP_MEMORY=$(json_field_file "$BACKUP_FILE" "memory_snapshot" "") if [ -n "$BACKUP_MEMORY" ]; then - BACKUP_TS=$(jq -r '.timestamp // ""' "$BACKUP_FILE" 2>/dev/null) + BACKUP_TS=$(json_field_file "$BACKUP_FILE" "timestamp" "") BACKUP_EPOCH=0 if [ -n "$BACKUP_TS" ]; then BACKUP_EPOCH=$(date -j -f "%Y-%m-%dT%H:%M:%SZ" "$BACKUP_TS" +%s 2>/dev/null \ @@ -128,6 +130,77 @@ ${KNOWLEDGE_SECTION}" fi fi +# --- Section 1.75: Learned Behaviors --- + +LEARNING_LOG="$CWD/.memory/learning-log.jsonl" +NOTIFIED_MARKER="$CWD/.memory/.learning-notified-at" + +if [ -f "$LEARNING_LOG" ]; then + # Single-pass extraction of created artifacts (commands + skills) + LEARNED_JSON=$(json_learning_created "$LEARNING_LOG") + + LEARNED_COMMANDS="" + LEARNED_SKILLS="" + if [ -n "$LEARNED_JSON" ]; then + if [ "$_HAS_JQ" = "true" ]; then + LEARNED_COMMANDS=$(echo "$LEARNED_JSON" | jq -r '.commands | map("/learned/\(.name) (\(.conf))") | join(", ")' 2>/dev/null) + LEARNED_SKILLS=$(echo "$LEARNED_JSON" | jq -r '.skills | map("\(.name) (\(.conf))") | join(", ")' 2>/dev/null) + else + # Node fallback: parse the JSON and format + LEARNED_COMMANDS=$(echo "$LEARNED_JSON" | node -e " + const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8')); + console.log(d.commands.map(c=>\"/learned/\"+c.name+\" (\"+c.conf+\")\").join(', ')); + " 2>/dev/null) + LEARNED_SKILLS=$(echo "$LEARNED_JSON" | node 
-e " + const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8')); + console.log(d.skills.map(s=>s.name+\" (\"+s.conf+\")\").join(', ')); + " 2>/dev/null) + fi + fi + + if [ -n "$LEARNED_COMMANDS" ] || [ -n "$LEARNED_SKILLS" ]; then + LEARNED_SECTION="--- LEARNED BEHAVIORS ---" + if [ -n "$LEARNED_COMMANDS" ]; then + LEARNED_SECTION="$LEARNED_SECTION +Commands: $LEARNED_COMMANDS" + fi + if [ -n "$LEARNED_SKILLS" ]; then + LEARNED_SECTION="$LEARNED_SECTION +Skills: $LEARNED_SKILLS" + fi + LEARNED_SECTION="$LEARNED_SECTION +Edit or delete: .claude/commands/learned/ and .claude/skills/" + + # Check for new artifacts since last notification (single jq -s pass) + LAST_NOTIFIED=0 + if [ -f "$NOTIFIED_MARKER" ]; then + LAST_NOTIFIED=$(cat "$NOTIFIED_MARKER" 2>/dev/null || echo "0") + fi + + NEW_ARTIFACTS=$(json_learning_new "$LEARNING_LOG" "$LAST_NOTIFIED") + + if [ -n "$NEW_ARTIFACTS" ]; then + LEARNED_SECTION="$LEARNED_SECTION + +--- NEW LEARNED BEHAVIORS --- +${NEW_ARTIFACTS} +TIP: Type the command name to use it. Rename to shorter path for quicker access. +Run \`devflow learn --list\` to see all." + + # Only update notified marker when new artifacts exist + date +%s > "$NOTIFIED_MARKER" + fi + + if [ -n "$CONTEXT" ]; then + CONTEXT="${CONTEXT} + +${LEARNED_SECTION}" + else + CONTEXT="$LEARNED_SECTION" + fi + fi +fi + # --- Section 2: Ambient Skill Injection --- # Inject ambient-router SKILL.md directly into context so Claude doesn't need a Read call. 
@@ -157,9 +230,4 @@ if [ -z "$CONTEXT" ]; then fi # Output as additionalContext JSON envelope (Claude sees it as system context, not user-visible) -jq -n --arg ctx "$CONTEXT" '{ - "hookSpecificOutput": { - "hookEventName": "SessionStart", - "additionalContext": $ctx - } -}' +json_session_output "$CONTEXT" diff --git a/scripts/hooks/stop-update-learning b/scripts/hooks/stop-update-learning new file mode 100755 index 0000000..16f038c --- /dev/null +++ b/scripts/hooks/stop-update-learning @@ -0,0 +1,97 @@ +#!/bin/bash + +# Self-Learning: Stop Hook +# Spawns a background process to analyze session patterns asynchronously. +# The session ends immediately — no visible effect in the TUI. +# On failure: does nothing (missing patterns are better than fake data). + +set -e + +# Break feedback loop: background learner's headless session triggers stop hook on exit. +# DEVFLOW_BG_LEARNER is set by background-learning before invoking claude. +if [ "${DEVFLOW_BG_LEARNER:-}" = "1" ]; then exit 0; fi +# Also guard against memory updater triggering learning +if [ "${DEVFLOW_BG_UPDATER:-}" = "1" ]; then exit 0; fi + +# Resolve script directory once (used for json-parse, ensure-memory-gitignore, and learner) +SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" + +# JSON parsing (jq with node fallback) — silently no-op if neither available +source "$SCRIPT_DIR/json-parse" +if [ "$_JSON_AVAILABLE" = "false" ]; then exit 0; fi + +INPUT=$(cat) + +# Resolve project directory — bail if missing +CWD=$(echo "$INPUT" | json_field "cwd" "") +if [ -z "$CWD" ]; then + exit 0 +fi + +# Auto-create .memory/ and ensure .gitignore entries (idempotent after first run) +source "$SCRIPT_DIR/ensure-memory-gitignore" "$CWD" || exit 0 + +# Logging +LOG_FILE="$CWD/.memory/.learning-update.log" +log() { echo "[$(date -u '+%Y-%m-%dT%H:%M:%SZ')] [stop-hook] $1" >> "$LOG_FILE"; } + +# Throttle: skip if triggered within configured throttle window +# Load throttle config +THROTTLE_MINUTES=5 
+GLOBAL_CONFIG="$HOME/.devflow/learning.json"
+PROJECT_CONFIG="$CWD/.memory/learning.json"
+if [ -f "$PROJECT_CONFIG" ]; then
+  THROTTLE_MINUTES=$(json_field_file "$PROJECT_CONFIG" "throttle_minutes" "5")
+elif [ -f "$GLOBAL_CONFIG" ]; then
+  THROTTLE_MINUTES=$(json_field_file "$GLOBAL_CONFIG" "throttle_minutes" "5")
+fi
+THROTTLE_SECONDS=$((THROTTLE_MINUTES * 60))
+
+TRIGGER_MARKER="$CWD/.memory/.learning-last-trigger"
+if [ -f "$TRIGGER_MARKER" ]; then
+  if stat --version >/dev/null 2>&1; then
+    MARKER_MTIME=$(stat -c %Y "$TRIGGER_MARKER")
+  else
+    MARKER_MTIME=$(stat -f %m "$TRIGGER_MARKER")
+  fi
+  NOW=$(date +%s)
+  AGE=$(( NOW - MARKER_MTIME ))
+  if [ "$AGE" -lt "$THROTTLE_SECONDS" ]; then
+    log "Skipped: triggered ${AGE}s ago (throttle: ${THROTTLE_SECONDS}s)"
+    exit 0
+  fi
+fi
+
+# Resolve claude binary — if not found, skip (graceful degradation)
+CLAUDE_BIN=$(command -v claude 2>/dev/null || true)
+if [ -z "$CLAUDE_BIN" ]; then
+  log "Skipped: claude binary not found"
+  exit 0
+fi
+
+# Extract session ID from hook input
+SESSION_ID=$(echo "$INPUT" | json_field "session_id" "")
+if [ -z "$SESSION_ID" ]; then
+  log "Skipped: no session_id in hook input"
+  exit 0
+fi
+
+# Resolve the background learning script (same directory as this hook)
+LEARNER="$SCRIPT_DIR/background-learning"
+if [ !
-x "$LEARNER" ]; then + log "Skipped: learner not found/not executable at $LEARNER" + exit 0 +fi + +# Touch marker BEFORE spawning learner — prevents race with concurrent hooks +touch "$TRIGGER_MARKER" + +# Spawn background learner — detached, no effect on session exit +nohup "$LEARNER" "$CWD" "$SESSION_ID" "$CLAUDE_BIN" \ + /dev/null 2>&1 & +disown + +log "Spawned background learner: session=$SESSION_ID cwd=$CWD claude=$CLAUDE_BIN learner=$LEARNER" + +# Allow stop immediately (no JSON output = proceed) +exit 0 diff --git a/scripts/hooks/stop-update-memory b/scripts/hooks/stop-update-memory index bdd8c1c..fa29ee8 100755 --- a/scripts/hooks/stop-update-memory +++ b/scripts/hooks/stop-update-memory @@ -11,20 +11,23 @@ set -e # DEVFLOW_BG_UPDATER is set by background-memory-update before invoking claude. if [ "${DEVFLOW_BG_UPDATER:-}" = "1" ]; then exit 0; fi -# jq is required to parse hook input JSON — silently no-op if missing -if ! command -v jq &>/dev/null; then exit 0; fi +# Resolve script directory once (used for json-parse, ensure-memory-gitignore, and updater) +SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" + +# JSON parsing (jq with node fallback) — silently no-op if neither available +source "$SCRIPT_DIR/json-parse" +if [ "$_JSON_AVAILABLE" = "false" ]; then exit 0; fi INPUT=$(cat) # Resolve project directory — bail if missing -CWD=$(echo "$INPUT" | jq -r '.cwd // ""' 2>/dev/null) +CWD=$(echo "$INPUT" | json_field "cwd" "") if [ -z "$CWD" ]; then exit 0 fi # Auto-create .memory/ and ensure .gitignore entries (idempotent after first run) -SCRIPT_DIR_EARLY="$(cd "$(dirname "$0")" && pwd)" -source "$SCRIPT_DIR_EARLY/ensure-memory-gitignore" "$CWD" || exit 0 +source "$SCRIPT_DIR/ensure-memory-gitignore" "$CWD" || exit 0 # Logging (shared log file with background updater; [stop-hook] prefix distinguishes) MEMORY_FILE="$CWD/.memory/WORKING-MEMORY.md" @@ -57,14 +60,13 @@ if [ -z "$CLAUDE_BIN" ]; then fi # Extract session ID from hook input -SESSION_ID=$(echo "$INPUT" 
| jq -r '.session_id // ""' 2>/dev/null) +SESSION_ID=$(echo "$INPUT" | json_field "session_id" "") if [ -z "$SESSION_ID" ]; then log "Skipped: no session_id in hook input" exit 0 fi # Resolve the background updater script (same directory as this hook) -SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" UPDATER="$SCRIPT_DIR/background-memory-update" if [ ! -x "$UPDATER" ]; then log "Skipped: updater not found/not executable at $UPDATER" diff --git a/src/cli/cli.ts b/src/cli/cli.ts index b864b5c..90dd005 100644 --- a/src/cli/cli.ts +++ b/src/cli/cli.ts @@ -11,6 +11,7 @@ import { ambientCommand } from './commands/ambient.js'; import { memoryCommand } from './commands/memory.js'; import { skillsCommand } from './commands/skills.js'; import { hudCommand } from './commands/hud.js'; +import { learnCommand } from './commands/learn.js'; const __filename = fileURLToPath(import.meta.url); const __dirname = dirname(__filename); @@ -37,6 +38,7 @@ program.addCommand(ambientCommand); program.addCommand(memoryCommand); program.addCommand(skillsCommand); program.addCommand(hudCommand); +program.addCommand(learnCommand); // Handle no command program.action(() => { diff --git a/src/cli/commands/ambient.ts b/src/cli/commands/ambient.ts index 4df1f31..374cd6c 100644 --- a/src/cli/commands/ambient.ts +++ b/src/cli/commands/ambient.ts @@ -3,25 +3,8 @@ import { promises as fs } from 'fs'; import * as path from 'path'; import * as p from '@clack/prompts'; import color from 'picocolors'; -import { getClaudeDirectory } from '../utils/paths.js'; - -/** - * The hook entry structure used by Claude Code settings.json. 
- */
-interface HookEntry {
-  type: string;
-  command: string;
-  timeout?: number;
-}
-
-interface HookMatcher {
-  hooks: HookEntry[];
-}
-
-interface Settings {
-  hooks?: Record<string, HookMatcher[]>;
-  [key: string]: unknown;
-}
+import { getClaudeDirectory, getDevFlowDirectory } from '../utils/paths.js';
+import type { HookMatcher, Settings } from '../utils/hooks.js';
 
 const AMBIENT_HOOK_MARKER = 'ambient-prompt';
 
@@ -73,10 +56,15 @@ export function removeAmbientHook(settingsJson: string): string {
     return settingsJson;
   }
 
+  const before = settings.hooks.UserPromptSubmit.length;
   settings.hooks.UserPromptSubmit = settings.hooks.UserPromptSubmit.filter(
     (matcher) => !matcher.hooks.some((h) => h.command.includes(AMBIENT_HOOK_MARKER)),
   );
 
+  if (settings.hooks.UserPromptSubmit.length === before) {
+    return settingsJson;
+  }
+
   if (settings.hooks.UserPromptSubmit.length === 0) {
     delete settings.hooks.UserPromptSubmit;
   }
@@ -103,12 +91,18 @@ export function hasAmbientHook(settingsJson: string): boolean {
   );
 }
 
+interface AmbientOptions {
+  enable?: boolean;
+  disable?: boolean;
+  status?: boolean;
+}
+
 export const ambientCommand = new Command('ambient')
   .description('Enable or disable ambient mode (always-on quality enforcement)')
   .option('--enable', 'Register UserPromptSubmit hook for ambient mode')
   .option('--disable', 'Remove ambient mode hook')
   .option('--status', 'Check if ambient mode is enabled')
-  .action(async (options) => {
+  .action(async (options: AmbientOptions) => {
     const hasFlag = options.enable || options.disable || options.status;
     if (!hasFlag) {
       p.intro(color.bgMagenta(color.white(' Ambient Mode ')));
@@ -146,18 +140,17 @@ export const ambientCommand = new Command('ambient')
     // Resolve devflow scripts directory from settings.json hooks or default
     let devflowDir: string;
     try {
-      const settings = JSON.parse(settingsContent);
+      const settings: Settings = JSON.parse(settingsContent);
       // Try to extract devflowDir from existing hooks (e.g., Stop hook path)
       const stopHook =
settings.hooks?.Stop?.[0]?.hooks?.[0]?.command; if (stopHook) { - // e.g., "run-hook stop-update-memory" or "/path/to/.devflow/scripts/hooks/run-hook stop-update-memory" const hookBinary = stopHook.split(' ')[0]; devflowDir = path.resolve(hookBinary, '..', '..', '..'); } else { - devflowDir = path.join(process.env.HOME || '~', '.devflow'); + devflowDir = getDevFlowDirectory(); } } catch { - devflowDir = path.join(process.env.HOME || '~', '.devflow'); + devflowDir = getDevFlowDirectory(); } if (options.enable) { diff --git a/src/cli/commands/init.ts b/src/cli/commands/init.ts index 7a8161f..950fcb2 100644 --- a/src/cli/commands/init.ts +++ b/src/cli/commands/init.ts @@ -3,6 +3,7 @@ import { promises as fs } from 'fs'; import * as path from 'path'; import { fileURLToPath } from 'url'; import { dirname } from 'path'; +import { execSync } from 'child_process'; import * as p from '@clack/prompts'; import color from 'picocolors'; import { getInstallationPaths } from '../utils/paths.js'; @@ -25,6 +26,7 @@ import { detectPlatform, detectShell, getProfilePath, getSafeDeleteInfo, hasSafe import { generateSafeDeleteBlock, installToProfile, removeFromProfile, getInstalledVersion, SAFE_DELETE_BLOCK_VERSION } from '../utils/safe-delete-install.js'; import { addAmbientHook } from './ambient.js'; import { addMemoryHooks, removeMemoryHooks } from './memory.js'; +import { addLearningHook, removeLearningHook } from './learn.js'; import { addHudStatusLine, removeHudStatusLine } from './hud.js'; import { loadConfig as loadHudConfig, saveConfig as saveHudConfig } from '../hud/config.js'; import { readManifest, writeManifest, resolvePluginList, detectUpgrade } from '../utils/manifest.js'; @@ -33,6 +35,7 @@ import { readManifest, writeManifest, resolvePluginList, detectUpgrade } from '. 
export { substituteSettingsTemplate, computeGitignoreAppend, applyTeamsConfig, stripTeamsConfig, mergeDenyList, discoverProjectGitRoots } from '../utils/post-install.js'; export { addAmbientHook, removeAmbientHook, hasAmbientHook } from './ambient.js'; export { addMemoryHooks, removeMemoryHooks, hasMemoryHooks } from './memory.js'; +export { addLearningHook, removeLearningHook, hasLearningHook } from './learn.js'; export { addHudStatusLine, removeHudStatusLine, hasHudStatusLine } from './hud.js'; const __filename = fileURLToPath(import.meta.url); @@ -66,6 +69,7 @@ interface InitOptions { teams?: boolean; ambient?: boolean; memory?: boolean; + learn?: boolean; hud?: boolean; hudOnly?: boolean; } @@ -81,6 +85,8 @@ export const initCommand = new Command('init') .option('--no-ambient', 'Disable ambient mode') .option('--memory', 'Enable working memory (session context preservation)') .option('--no-memory', 'Disable working memory hooks') + .option('--learn', 'Enable self-learning (workflow detection)') + .option('--no-learn', 'Disable self-learning') .option('--hud', 'Enable HUD (git info, context usage, session stats)') .option('--no-hud', 'Disable HUD status line') .option('--hud-only', 'Install only the HUD (no plugins, hooks, or extras)') @@ -190,7 +196,7 @@ export const initCommand = new Command('init') version, plugins: [], scope, - features: { teams: false, ambient: false, memory: false, hud: true }, + features: { teams: false, ambient: false, memory: false, hud: true, learn: false }, installedAt: now, updatedAt: now, }); @@ -340,6 +346,30 @@ export const initCommand = new Command('init') memoryEnabled = memoryChoice; } + // Self-learning selection (defaults ON — foundational feature) + let learnEnabled: boolean; + if (options.learn !== undefined) { + learnEnabled = options.learn; + } else if (!process.stdin.isTTY) { + learnEnabled = true; + } else { + p.note( + 'Detects repeated workflows and creates slash commands\n' + + 'automatically. 
Runs a background agent on session stop\n' + + 'that consumes additional tokens.', + 'Self-Learning', + ); + const learnChoice = await p.confirm({ + message: 'Enable self-learning? (Recommended)', + initialValue: true, + }); + if (p.isCancel(learnChoice)) { + p.cancel('Installation cancelled.'); + process.exit(0); + } + learnEnabled = learnChoice; + } + // HUD selection (yes/no) let hudEnabled: boolean; if (options.hud !== undefined) { @@ -688,6 +718,10 @@ export const initCommand = new Command('init') const cleaned = removeMemoryHooks(content); content = memoryEnabled ? addMemoryHooks(cleaned, devflowDir) : cleaned; + // Learning hook — remove-then-add for upgrade safety + const cleanedForLearn = removeLearningHook(content); + content = learnEnabled ? addLearningHook(cleanedForLearn, devflowDir) : cleanedForLearn; + // HUD statusLine content = hudEnabled ? addHudStatusLine(content, devflowDir) @@ -756,6 +790,14 @@ export const initCommand = new Command('init') s.stop('Installation complete'); + // Check for jq (hooks degrade gracefully without it, but features are reduced) + try { + execSync('command -v jq', { stdio: 'ignore' }); + } catch { + p.log.warn('jq not found — some hook features will have reduced functionality'); + p.log.info(`Install: ${color.cyan('brew install jq')}`); + } + // === Summary === if (usedNativeCli) { @@ -822,7 +864,7 @@ export const initCommand = new Command('init') version, plugins: resolvePluginList(installedPluginNames, existingManifest, !!options.plugin), scope, - features: { teams: teamsEnabled, ambient: ambientEnabled, memory: memoryEnabled, hud: hudEnabled }, + features: { teams: teamsEnabled, ambient: ambientEnabled, memory: memoryEnabled, learn: learnEnabled, hud: hudEnabled }, installedAt: existingManifest?.installedAt ?? 
now, updatedAt: now, }; diff --git a/src/cli/commands/learn.ts b/src/cli/commands/learn.ts new file mode 100644 index 0000000..15a5eca --- /dev/null +++ b/src/cli/commands/learn.ts @@ -0,0 +1,480 @@ +import { Command } from 'commander'; +import { promises as fs } from 'fs'; +import * as path from 'path'; +import * as p from '@clack/prompts'; +import color from 'picocolors'; +import { getClaudeDirectory, getDevFlowDirectory } from '../utils/paths.js'; +import type { HookMatcher, Settings } from '../utils/hooks.js'; + +/** + * Learning observation stored in learning-log.jsonl (one JSON object per line). + */ +export interface LearningObservation { + id: string; + type: 'workflow' | 'procedural'; + pattern: string; + confidence: number; + observations: number; + first_seen: string; + last_seen: string; + status: 'observing' | 'ready' | 'created'; + evidence: string[]; + details: string; + artifact_path?: string; +} + +/** + * Merged learning configuration from global and project-level config files. + */ +export interface LearningConfig { + max_daily_runs: number; + throttle_minutes: number; + model: string; +} + +/** + * Type guard for validating raw JSON as a LearningObservation. + */ +export function isLearningObservation(obj: unknown): obj is LearningObservation { + if (typeof obj !== 'object' || obj === null) return false; + const o = obj as Record<string, unknown>; + return typeof o.id === 'string' + && (o.type === 'workflow' || o.type === 'procedural') + && typeof o.pattern === 'string' + && typeof o.confidence === 'number' + && typeof o.observations === 'number' + && typeof o.first_seen === 'string' + && typeof o.last_seen === 'string' + && (o.status === 'observing' || o.status === 'ready' || o.status === 'created') + && Array.isArray(o.evidence) + && typeof o.details === 'string'; +} + +const LEARNING_HOOK_MARKER = 'stop-update-learning'; + +/** + * Add the learning Stop hook to settings JSON. + * Idempotent — returns unchanged JSON if hook already exists.
+ */ +export function addLearningHook(settingsJson: string, devflowDir: string): string { + const settings: Settings = JSON.parse(settingsJson); + + if (hasLearningHook(settingsJson)) { + return settingsJson; + } + + if (!settings.hooks) { + settings.hooks = {}; + } + + const hookCommand = path.join(devflowDir, 'scripts', 'hooks', 'run-hook') + ' stop-update-learning'; + + const newEntry: HookMatcher = { + hooks: [ + { + type: 'command', + command: hookCommand, + timeout: 10, + }, + ], + }; + + if (!settings.hooks.Stop) { + settings.hooks.Stop = []; + } + + settings.hooks.Stop.push(newEntry); + + return JSON.stringify(settings, null, 2) + '\n'; +} + +/** + * Remove the learning Stop hook from settings JSON. + * Idempotent — returns unchanged JSON if hook not present. + * Preserves other Stop hooks. Cleans empty arrays/objects. + */ +export function removeLearningHook(settingsJson: string): string { + const settings: Settings = JSON.parse(settingsJson); + + if (!settings.hooks?.Stop) { + return settingsJson; + } + + const before = settings.hooks.Stop.length; + settings.hooks.Stop = settings.hooks.Stop.filter( + (matcher) => !matcher.hooks.some((h) => h.command.includes(LEARNING_HOOK_MARKER)), + ); + + if (settings.hooks.Stop.length === before) { + return settingsJson; + } + + if (settings.hooks.Stop.length === 0) { + delete settings.hooks.Stop; + } + + if (settings.hooks && Object.keys(settings.hooks).length === 0) { + delete settings.hooks; + } + + return JSON.stringify(settings, null, 2) + '\n'; +} + +/** + * Check if the learning hook is registered in settings JSON. + */ +export function hasLearningHook(settingsJson: string): boolean { + const settings: Settings = JSON.parse(settingsJson); + + if (!settings.hooks?.Stop) { + return false; + } + + return settings.hooks.Stop.some((matcher) => + matcher.hooks.some((h) => h.command.includes(LEARNING_HOOK_MARKER)), + ); +} + +/** + * Parse a JSONL learning log into typed observations. 
+ * Skips empty and malformed lines. + */ +export function parseLearningLog(logContent: string): LearningObservation[] { + if (!logContent.trim()) { + return []; + } + + const observations: LearningObservation[] = []; + + for (const line of logContent.split('\n')) { + const trimmed = line.trim(); + if (!trimmed) continue; + + try { + const parsed: unknown = JSON.parse(trimmed); + if (isLearningObservation(parsed)) { + observations.push(parsed); + } + } catch { + // Skip malformed lines + } + } + + return observations; +} + +/** + * Format a human-readable status summary for learning state. + */ +export function formatLearningStatus(observations: LearningObservation[], hookEnabled: boolean): string { + const lines: string[] = []; + + lines.push(`Self-learning: ${hookEnabled ? 'enabled' : 'disabled'}`); + + if (observations.length === 0) { + lines.push('Observations: none'); + return lines.join('\n'); + } + + const workflows = observations.filter((o) => o.type === 'workflow'); + const procedurals = observations.filter((o) => o.type === 'procedural'); + const created = observations.filter((o) => o.status === 'created'); + const ready = observations.filter((o) => o.status === 'ready'); + const observing = observations.filter((o) => o.status === 'observing'); + + lines.push(`Observations: ${observations.length} total`); + lines.push(` Workflows: ${workflows.length}, Procedural: ${procedurals.length}`); + lines.push(` Status: ${observing.length} observing, ${ready.length} ready, ${created.length} promoted`); + + return lines.join('\n'); +} + +/** + * Apply a single JSON config layer onto a LearningConfig, returning a new object. + * Skips fields with wrong types; swallows parse errors. 
+ */ +// SYNC: Config loading duplicated in scripts/hooks/background-learning load_config() +export function applyConfigLayer(config: LearningConfig, json: string): LearningConfig { + try { + const raw = JSON.parse(json) as Record<string, unknown>; + return { + max_daily_runs: typeof raw.max_daily_runs === 'number' ? raw.max_daily_runs : config.max_daily_runs, + throttle_minutes: typeof raw.throttle_minutes === 'number' ? raw.throttle_minutes : config.throttle_minutes, + model: typeof raw.model === 'string' ? raw.model : config.model, + }; + } catch { + return { ...config }; + } +} + +/** + * Load and merge learning configuration from global and project config JSON strings. + * Project config overrides global config; both override defaults. + */ +export function loadLearningConfig(globalJson: string | null, projectJson: string | null): LearningConfig { + let config: LearningConfig = { + max_daily_runs: 10, + throttle_minutes: 5, + model: 'sonnet', + }; + + if (globalJson) config = applyConfigLayer(config, globalJson); + if (projectJson) config = applyConfigLayer(config, projectJson); + + return config; +} + +interface LearnOptions { + enable?: boolean; + disable?: boolean; + status?: boolean; + list?: boolean; + configure?: boolean; + clear?: boolean; +} + +export const learnCommand = new Command('learn') + .description('Enable or disable self-learning (workflow detection + auto-commands)') + .option('--enable', 'Register Stop hook for self-learning') + .option('--disable', 'Remove self-learning hook') + .option('--status', 'Show learning status and observation counts') + .option('--list', 'Show all observations sorted by confidence') + .option('--configure', 'Interactive configuration wizard') + .option('--clear', 'Reset learning log (removes all observations)') + .action(async (options: LearnOptions) => { + const hasFlag = options.enable || options.disable || options.status || options.list || options.configure || options.clear; + if (!hasFlag) { +
p.intro(color.bgYellow(color.black(' Self-Learning '))); + p.note( + `${color.cyan('devflow learn --enable')} Register learning hook\n` + + `${color.cyan('devflow learn --disable')} Remove learning hook\n` + + `${color.cyan('devflow learn --status')} Show learning status\n` + + `${color.cyan('devflow learn --list')} Show all observations\n` + + `${color.cyan('devflow learn --configure')} Configuration wizard\n` + + `${color.cyan('devflow learn --clear')} Reset learning log`, + 'Usage', + ); + p.outro(color.dim('Detects repeated workflows and creates slash commands automatically')); + return; + } + + const claudeDir = getClaudeDirectory(); + const settingsPath = path.join(claudeDir, 'settings.json'); + + let settingsContent: string; + try { + settingsContent = await fs.readFile(settingsPath, 'utf-8'); + } catch { + if (options.status) { + p.log.info('Self-learning: disabled (no settings.json found)'); + return; + } + settingsContent = '{}'; + } + + // --- --status --- + if (options.status) { + const hookEnabled = hasLearningHook(settingsContent); + const cwd = process.cwd(); + const logPath = path.join(cwd, '.memory', 'learning-log.jsonl'); + + let observations: LearningObservation[] = []; + try { + const logContent = await fs.readFile(logPath, 'utf-8'); + observations = parseLearningLog(logContent); + } catch { + // No log file yet + } + + const status = formatLearningStatus(observations, hookEnabled); + p.log.info(status); + return; + } + + // --- --list --- + if (options.list) { + const cwd = process.cwd(); + const logPath = path.join(cwd, '.memory', 'learning-log.jsonl'); + + let observations: LearningObservation[] = []; + try { + const logContent = await fs.readFile(logPath, 'utf-8'); + observations = parseLearningLog(logContent); + } catch { + p.log.info('No observations yet. 
Learning log not found.'); + return; + } + + if (observations.length === 0) { + p.log.info('No observations recorded yet.'); + return; + } + + // Sort by confidence descending + observations.sort((a, b) => b.confidence - a.confidence); + + p.intro(color.bgYellow(color.black(' Learning Observations '))); + for (const obs of observations) { + const typeIcon = obs.type === 'workflow' ? 'W' : 'P'; + const statusIcon = obs.status === 'created' ? color.green('created') + : obs.status === 'ready' ? color.yellow('ready') + : color.dim('observing'); + const conf = (obs.confidence * 100).toFixed(0); + p.log.info( + `[${typeIcon}] ${color.cyan(obs.pattern)} (${conf}% | ${obs.observations}x | ${statusIcon})`, + ); + } + p.outro(color.dim(`${observations.length} observation(s) total`)); + return; + } + + // --- --configure --- + if (options.configure) { + p.intro(color.bgYellow(color.black(' Learning Configuration '))); + + const maxRuns = await p.text({ + message: 'Maximum background runs per day', + placeholder: '10', + defaultValue: '10', + validate: (v) => { + const n = Number(v); + if (isNaN(n) || n < 1 || n > 50) return 'Enter a number between 1 and 50'; + return undefined; + }, + }); + if (p.isCancel(maxRuns)) { + p.cancel('Configuration cancelled.'); + return; + } + + const throttle = await p.text({ + message: 'Throttle interval (minutes between runs)', + placeholder: '5', + defaultValue: '5', + validate: (v) => { + const n = Number(v); + if (isNaN(n) || n < 1 || n > 60) return 'Enter a number between 1 and 60'; + return undefined; + }, + }); + if (p.isCancel(throttle)) { + p.cancel('Configuration cancelled.'); + return; + } + + const model = await p.select({ + message: 'Model for pattern detection', + options: [ + { value: 'sonnet', label: 'Sonnet', hint: 'Recommended — good balance of quality and speed' }, + { value: 'haiku', label: 'Haiku', hint: 'Fastest, lowest cost' }, + { value: 'opus', label: 'Opus', hint: 'Highest quality, highest cost' }, + ], + }); + if 
(p.isCancel(model)) { + p.cancel('Configuration cancelled.'); + return; + } + + const scope = await p.select({ + message: 'Configuration scope', + options: [ + { value: 'project', label: 'Project', hint: 'This project only (.memory/learning.json)' }, + { value: 'global', label: 'Global', hint: 'All projects (~/.devflow/learning.json)' }, + ], + }); + if (p.isCancel(scope)) { + p.cancel('Configuration cancelled.'); + return; + } + + const config: LearningConfig = { + max_daily_runs: Number(maxRuns), + throttle_minutes: Number(throttle), + model: String(model), + }; + + const configJson = JSON.stringify(config, null, 2) + '\n'; + + if (scope === 'global') { + const globalDir = path.join(process.env.HOME || '~', '.devflow'); + await fs.mkdir(globalDir, { recursive: true }); + await fs.writeFile(path.join(globalDir, 'learning.json'), configJson, 'utf-8'); + p.log.success(`Global config written to ${color.dim(path.join(globalDir, 'learning.json'))}`); + } else { + const cwd = process.cwd(); + const memoryDir = path.join(cwd, '.memory'); + await fs.mkdir(memoryDir, { recursive: true }); + await fs.writeFile(path.join(memoryDir, 'learning.json'), configJson, 'utf-8'); + p.log.success(`Project config written to ${color.dim(path.join(memoryDir, 'learning.json'))}`); + } + + p.outro(color.green('Configuration saved.')); + return; + } + + // --- --clear --- + if (options.clear) { + const cwd = process.cwd(); + const logPath = path.join(cwd, '.memory', 'learning-log.jsonl'); + + try { + await fs.access(logPath); + } catch { + p.log.info('No learning log to clear.'); + return; + } + + if (process.stdin.isTTY) { + const confirm = await p.confirm({ + message: 'Clear all learning observations? 
This cannot be undone.', + initialValue: false, + }); + if (p.isCancel(confirm) || !confirm) { + p.log.info('Clear cancelled.'); + return; + } + } + + await fs.writeFile(logPath, '', 'utf-8'); + p.log.success('Learning log cleared.'); + return; + } + + // --- --enable / --disable --- + // Resolve devflow scripts directory from settings.json hooks or default + let devflowDir: string; + try { + const settings: Settings = JSON.parse(settingsContent); + // Try to extract devflowDir from existing hooks (e.g., Stop hook path) + const stopHook = settings.hooks?.Stop?.[0]?.hooks?.[0]?.command; + if (stopHook) { + const hookBinary = stopHook.split(' ')[0]; + devflowDir = path.resolve(hookBinary, '..', '..', '..'); + } else { + devflowDir = getDevFlowDirectory(); + } + } catch { + devflowDir = getDevFlowDirectory(); + } + + if (options.enable) { + const updated = addLearningHook(settingsContent, devflowDir); + if (updated === settingsContent) { + p.log.info('Self-learning already enabled'); + return; + } + await fs.writeFile(settingsPath, updated, 'utf-8'); + p.log.success('Self-learning enabled — Stop hook registered'); + p.log.info(color.dim('Repeated workflows will be detected and turned into slash commands')); + } + + if (options.disable) { + const updated = removeLearningHook(settingsContent); + if (updated === settingsContent) { + p.log.info('Self-learning already disabled'); + return; + } + await fs.writeFile(settingsPath, updated, 'utf-8'); + p.log.success('Self-learning disabled — hook removed'); + } + }); diff --git a/src/cli/commands/memory.ts b/src/cli/commands/memory.ts index d6e2e41..aa3c906 100644 --- a/src/cli/commands/memory.ts +++ b/src/cli/commands/memory.ts @@ -5,24 +5,7 @@ import * as p from '@clack/prompts'; import color from 'picocolors'; import { getClaudeDirectory, getDevFlowDirectory } from '../utils/paths.js'; import { createMemoryDir, migrateMemoryFiles } from '../utils/post-install.js'; - -/** - * The hook entry structure used by Claude Code 
settings.json. - */ -interface HookEntry { - type: string; - command: string; - timeout?: number; -} - -interface HookMatcher { - hooks: HookEntry[]; -} - -interface Settings { - hooks?: Record<string, HookMatcher[]>; - [key: string]: unknown; -} +import type { HookMatcher, Settings } from '../utils/hooks.js'; /** * Map of hook event type → filename marker for the 3 memory hooks. @@ -157,12 +140,18 @@ export function countMemoryHooks(settingsJson: string): number { return count; } +interface MemoryOptions { + enable?: boolean; + disable?: boolean; + status?: boolean; +} + export const memoryCommand = new Command('memory') .description('Enable or disable working memory (session context preservation)') .option('--enable', 'Add Stop/SessionStart/PreCompact hooks') .option('--disable', 'Remove memory hooks') .option('--status', 'Show current state') - .action(async (options) => { + .action(async (options: MemoryOptions) => { const hasFlag = options.enable || options.disable || options.status; if (!hasFlag) { p.intro(color.bgCyan(color.white(' Working Memory '))); diff --git a/src/cli/commands/uninstall.ts b/src/cli/commands/uninstall.ts index d467dd1..dc31f8d 100644 --- a/src/cli/commands/uninstall.ts +++ b/src/cli/commands/uninstall.ts @@ -11,6 +11,7 @@ import { isClaudeCliAvailable } from '../utils/cli.js'; import { DEVFLOW_PLUGINS, getAllSkillNames, LEGACY_SKILL_NAMES, type PluginDefinition } from '../plugins.js'; import { removeAmbientHook } from './ambient.js'; import { removeMemoryHooks } from './memory.js'; +import { removeLearningHook } from './learn.js'; import { removeHudStatusLine } from './hud.js'; import { listShadowed } from './skills.js'; import { detectShell, getProfilePath } from '../utils/safe-delete.js'; @@ -396,6 +397,7 @@ export const uninstallCommand = new Command('uninstall') // Remove all DevFlow hooks in one pass (idempotent) let settingsContent = removeAmbientHook(originalContent); settingsContent = removeMemoryHooks(settingsContent); + settingsContent =
removeLearningHook(settingsContent); settingsContent = removeHudStatusLine(settingsContent); if (settingsContent !== originalContent) { diff --git a/src/cli/utils/hooks.ts b/src/cli/utils/hooks.ts new file mode 100644 index 0000000..ed1c328 --- /dev/null +++ b/src/cli/utils/hooks.ts @@ -0,0 +1,22 @@ +/** + * Shared hook types for Claude Code settings.json. + * Used by learn.ts, ambient.ts, and memory.ts. + * + * NOTE: hud.ts uses a structurally different Settings type (statusLine, not hooks) + * and is intentionally excluded from this shared module. + */ + +export interface HookEntry { + type: string; + command: string; + timeout?: number; +} + +export interface HookMatcher { + hooks: HookEntry[]; +} + +export interface Settings { + hooks?: Record<string, HookMatcher[]>; + [key: string]: unknown; +} diff --git a/src/cli/utils/manifest.ts b/src/cli/utils/manifest.ts index 71b3068..444e6b3 100644 --- a/src/cli/utils/manifest.ts +++ b/src/cli/utils/manifest.ts @@ -12,7 +12,8 @@ export interface ManifestData { teams: boolean; ambient: boolean; memory: boolean; - hud?: boolean; + learn: boolean; + hud: boolean; }; installedAt: string; updatedAt: string; @@ -25,22 +26,37 @@ export async function readManifest(devflowDir: string): Promise<ManifestData | null> { + const features = data.features as Record<string, unknown> | undefined; if ( !data.version || !Array.isArray(data.plugins) || !data.scope || - typeof data.features !== 'object' || - data.features === null || - typeof data.features.teams !== 'boolean' || - typeof data.features.ambient !== 'boolean' || - typeof data.features.memory !== 'boolean' || + typeof features !== 'object' || + features === null || + typeof features.teams !== 'boolean' || + typeof features.ambient !== 'boolean' || + typeof features.memory !== 'boolean' || typeof data.installedAt !== 'string' || typeof data.updatedAt !== 'string' ) { return null; } - return data; + // Normalize optional fields with defaults (backwards-compatible with old manifests) + return { + version: data.version as string, + plugins:
data.plugins as string[], + scope: data.scope as 'user' | 'local', + features: { + teams: features.teams as boolean, + ambient: features.ambient as boolean, + memory: features.memory as boolean, + hud: typeof features.hud === 'boolean' ? features.hud : false, + learn: typeof features.learn === 'boolean' ? features.learn : false, + }, + installedAt: data.installedAt as string, + updatedAt: data.updatedAt as string, + }; } catch { return null; } diff --git a/tests/learn.test.ts b/tests/learn.test.ts new file mode 100644 index 0000000..ccf769e --- /dev/null +++ b/tests/learn.test.ts @@ -0,0 +1,386 @@ +import { describe, it, expect } from 'vitest'; +import { + addLearningHook, + removeLearningHook, + hasLearningHook, + parseLearningLog, + formatLearningStatus, + loadLearningConfig, + isLearningObservation, + applyConfigLayer, + type LearningObservation, +} from '../src/cli/commands/learn.js'; + +describe('addLearningHook', () => { + it('adds hook to empty settings', () => { + const result = addLearningHook('{}', '/home/user/.devflow'); + const settings = JSON.parse(result); + + expect(settings.hooks.Stop).toHaveLength(1); + expect(settings.hooks.Stop[0].hooks[0].command).toContain('stop-update-learning'); + expect(settings.hooks.Stop[0].hooks[0].timeout).toBe(10); + }); + + it('adds alongside existing hooks (Stop hooks from memory)', () => { + const input = JSON.stringify({ + hooks: { + Stop: [{ hooks: [{ type: 'command', command: 'stop-update-memory' }] }], + }, + }); + const result = addLearningHook(input, '/home/user/.devflow'); + const settings = JSON.parse(result); + + expect(settings.hooks.Stop).toHaveLength(2); + expect(settings.hooks.Stop[0].hooks[0].command).toBe('stop-update-memory'); + expect(settings.hooks.Stop[1].hooks[0].command).toContain('stop-update-learning'); + }); + + it('is idempotent — does not add duplicate', () => { + const first = addLearningHook('{}', '/home/user/.devflow'); + const second = addLearningHook(first, '/home/user/.devflow'); + + 
expect(second).toBe(first); + }); + + it('uses correct path via run-hook wrapper', () => { + const result = addLearningHook('{}', '/custom/path/.devflow'); + const settings = JSON.parse(result); + const command = settings.hooks.Stop[0].hooks[0].command; + + expect(command).toContain('/custom/path/.devflow/scripts/hooks/run-hook'); + expect(command).toContain('stop-update-learning'); + }); + + it('preserves other settings', () => { + const input = JSON.stringify({ + statusLine: { type: 'command', command: 'statusline.sh' }, + env: { SOME_VAR: '1' }, + }); + const result = addLearningHook(input, '/home/user/.devflow'); + const settings = JSON.parse(result); + + expect(settings.statusLine.command).toBe('statusline.sh'); + expect(settings.env.SOME_VAR).toBe('1'); + expect(settings.hooks.Stop).toHaveLength(1); + }); + + it('adds alongside existing Stop hooks', () => { + const input = JSON.stringify({ + hooks: { + Stop: [{ hooks: [{ type: 'command', command: 'other-stop.sh' }] }], + UserPromptSubmit: [{ hooks: [{ type: 'command', command: 'ambient-prompt' }] }], + }, + }); + const result = addLearningHook(input, '/home/user/.devflow'); + const settings = JSON.parse(result); + + expect(settings.hooks.Stop).toHaveLength(2); + expect(settings.hooks.UserPromptSubmit).toHaveLength(1); + }); +}); + +describe('removeLearningHook', () => { + it('removes learning hook', () => { + const withHook = addLearningHook('{}', '/home/user/.devflow'); + const result = removeLearningHook(withHook); + const settings = JSON.parse(result); + + expect(settings.hooks).toBeUndefined(); + }); + + it('preserves memory Stop hooks', () => { + const input = JSON.stringify({ + hooks: { + Stop: [ + { hooks: [{ type: 'command', command: 'stop-update-memory' }] }, + { hooks: [{ type: 'command', command: '/path/to/stop-update-learning' }] }, + ], + }, + }); + const result = removeLearningHook(input); + const settings = JSON.parse(result); + + expect(settings.hooks.Stop).toHaveLength(1); + 
expect(settings.hooks.Stop[0].hooks[0].command).toBe('stop-update-memory'); + }); + + it('cleans empty hooks object when last hook removed', () => { + const input = JSON.stringify({ + hooks: { + Stop: [ + { hooks: [{ type: 'command', command: '/path/to/stop-update-learning' }] }, + ], + }, + }); + const result = removeLearningHook(input); + const settings = JSON.parse(result); + + expect(settings.hooks).toBeUndefined(); + }); + + it('preserves other hook event types', () => { + const input = JSON.stringify({ + hooks: { + UserPromptSubmit: [{ hooks: [{ type: 'command', command: 'ambient-prompt' }] }], + Stop: [ + { hooks: [{ type: 'command', command: '/path/to/stop-update-learning' }] }, + ], + }, + }); + const result = removeLearningHook(input); + const settings = JSON.parse(result); + + expect(settings.hooks.UserPromptSubmit).toHaveLength(1); + expect(settings.hooks.Stop).toBeUndefined(); + }); + + it('is idempotent', () => { + const input = JSON.stringify({ + hooks: { + Stop: [{ hooks: [{ type: 'command', command: 'stop-update-memory' }] }], + }, + }); + const result = removeLearningHook(input); + + expect(result).toBe(input); + }); +}); + +describe('hasLearningHook', () => { + it('returns true when present', () => { + const withHook = addLearningHook('{}', '/home/user/.devflow'); + expect(hasLearningHook(withHook)).toBe(true); + }); + + it('returns false when absent', () => { + expect(hasLearningHook('{}')).toBe(false); + }); + + it('returns false for non-learning Stop hooks', () => { + const input = JSON.stringify({ + hooks: { + Stop: [ + { hooks: [{ type: 'command', command: 'stop-update-memory' }] }, + ], + }, + }); + expect(hasLearningHook(input)).toBe(false); + }); + + it('returns true among other Stop hooks', () => { + const input = JSON.stringify({ + hooks: { + Stop: [ + { hooks: [{ type: 'command', command: 'stop-update-memory' }] }, + { hooks: [{ type: 'command', command: '/path/to/stop-update-learning' }] }, + ], + }, + }); + 
expect(hasLearningHook(input)).toBe(true); + }); +}); + +describe('parseLearningLog', () => { + it('parses valid JSONL', () => { + const log = [ + '{"id":"obs_abc123","type":"workflow","pattern":"PR merge flow","confidence":0.66,"observations":2,"first_seen":"2026-03-20T00:00:00Z","last_seen":"2026-03-22T00:00:00Z","status":"observing","evidence":["merge PR"],"details":"steps"}', + '{"id":"obs_def456","type":"procedural","pattern":"Debug hooks","confidence":0.50,"observations":1,"first_seen":"2026-03-22T00:00:00Z","last_seen":"2026-03-22T00:00:00Z","status":"observing","evidence":["check hooks"],"details":"steps"}', + ].join('\n'); + + const result = parseLearningLog(log); + expect(result).toHaveLength(2); + expect(result[0].id).toBe('obs_abc123'); + expect(result[1].type).toBe('procedural'); + }); + + it('skips malformed lines', () => { + const log = [ + '{"id":"obs_abc123","type":"workflow","pattern":"test","confidence":0.5,"observations":1,"first_seen":"t","last_seen":"t","status":"observing","evidence":[],"details":"d"}', + 'not json at all', + '', + '{"id":"obs_def456","type":"procedural","pattern":"test2","confidence":0.5,"observations":1,"first_seen":"t","last_seen":"t","status":"observing","evidence":[],"details":"d"}', + ].join('\n'); + + const result = parseLearningLog(log); + expect(result).toHaveLength(2); + }); + + it('handles empty input', () => { + expect(parseLearningLog('')).toEqual([]); + expect(parseLearningLog(' ')).toEqual([]); + }); + + it('parses single entry', () => { + const log = '{"id":"obs_abc123","type":"workflow","pattern":"deploy flow","confidence":0.33,"observations":1,"first_seen":"2026-03-22T00:00:00Z","last_seen":"2026-03-22T00:00:00Z","status":"observing","evidence":["deploy"],"details":"steps"}'; + const result = parseLearningLog(log); + expect(result).toHaveLength(1); + expect(result[0].pattern).toBe('deploy flow'); + }); +}); + +describe('formatLearningStatus', () => { + it('shows enabled state', () => { + const result = 
formatLearningStatus([], true); + expect(result).toContain('enabled'); + }); + + it('shows disabled state', () => { + const result = formatLearningStatus([], false); + expect(result).toContain('disabled'); + }); + + it('shows observation counts', () => { + const observations: LearningObservation[] = [ + { id: 'obs_1', type: 'workflow', pattern: 'p1', confidence: 0.33, observations: 1, first_seen: 't', last_seen: 't', status: 'observing', evidence: [], details: 'd' }, + { id: 'obs_2', type: 'procedural', pattern: 'p2', confidence: 0.50, observations: 1, first_seen: 't', last_seen: 't', status: 'observing', evidence: [], details: 'd' }, + { id: 'obs_3', type: 'workflow', pattern: 'p3', confidence: 0.95, observations: 3, first_seen: 't', last_seen: 't', status: 'ready', evidence: [], details: 'd' }, + ]; + const result = formatLearningStatus(observations, true); + expect(result).toContain('3 total'); + expect(result).toContain('Workflows: 2'); + expect(result).toContain('Procedural: 1'); + }); + + it('shows promoted artifacts count', () => { + const observations: LearningObservation[] = [ + { id: 'obs_1', type: 'workflow', pattern: 'p1', confidence: 0.95, observations: 3, first_seen: 't', last_seen: 't', status: 'created', evidence: [], details: 'd', artifact_path: '/path' }, + { id: 'obs_2', type: 'procedural', pattern: 'p2', confidence: 0.50, observations: 1, first_seen: 't', last_seen: 't', status: 'observing', evidence: [], details: 'd' }, + ]; + const result = formatLearningStatus(observations, true); + expect(result).toContain('1 promoted'); + expect(result).toContain('1 observing'); + }); + + it('handles empty observations', () => { + const result = formatLearningStatus([], true); + expect(result).toContain('none'); + }); +}); + +describe('loadLearningConfig', () => { + it('returns defaults when no config files', () => { + const config = loadLearningConfig(null, null); + expect(config.max_daily_runs).toBe(10); + expect(config.throttle_minutes).toBe(5); + 
expect(config.model).toBe('sonnet'); + }); + + it('loads global config', () => { + const globalJson = JSON.stringify({ max_daily_runs: 20, model: 'haiku' }); + const config = loadLearningConfig(globalJson, null); + expect(config.max_daily_runs).toBe(20); + expect(config.throttle_minutes).toBe(5); // default preserved + expect(config.model).toBe('haiku'); + }); + + it('project config overrides global', () => { + const globalJson = JSON.stringify({ max_daily_runs: 20, model: 'haiku' }); + const projectJson = JSON.stringify({ max_daily_runs: 5 }); + const config = loadLearningConfig(globalJson, projectJson); + expect(config.max_daily_runs).toBe(5); // project wins + expect(config.model).toBe('haiku'); // global preserved when project doesn't set + }); + + it('handles partial override (only some fields)', () => { + const projectJson = JSON.stringify({ throttle_minutes: 15 }); + const config = loadLearningConfig(null, projectJson); + expect(config.max_daily_runs).toBe(10); // default + expect(config.throttle_minutes).toBe(15); // overridden + expect(config.model).toBe('sonnet'); // default + }); +}); + +describe('isLearningObservation', () => { + const validObs = { + id: 'obs_abc123', + type: 'workflow', + pattern: 'test pattern', + confidence: 0.5, + observations: 1, + first_seen: '2026-03-22T00:00:00Z', + last_seen: '2026-03-22T00:00:00Z', + status: 'observing', + evidence: ['some evidence'], + details: 'details', + }; + + it('accepts valid observation', () => { + expect(isLearningObservation(validObs)).toBe(true); + }); + + it('rejects null', () => { + expect(isLearningObservation(null)).toBe(false); + }); + + it('rejects non-object', () => { + expect(isLearningObservation('string')).toBe(false); + expect(isLearningObservation(42)).toBe(false); + }); + + it('rejects missing id', () => { + const { id, ...rest } = validObs; + expect(isLearningObservation(rest)).toBe(false); + }); + + it('rejects invalid type', () => { + expect(isLearningObservation({ ...validObs, type: 
'unknown' })).toBe(false); + }); + + it('rejects confidence as string', () => { + expect(isLearningObservation({ ...validObs, confidence: '0.5' })).toBe(false); + }); + + it('rejects invalid status', () => { + expect(isLearningObservation({ ...validObs, status: 'done' })).toBe(false); + }); + + it('rejects evidence as non-array', () => { + expect(isLearningObservation({ ...validObs, evidence: 'not array' })).toBe(false); + }); + + it('rejects missing details', () => { + const { details, ...rest } = validObs; + expect(isLearningObservation(rest)).toBe(false); + }); +}); + +describe('parseLearningLog — type guard filtering', () => { + it('rejects objects with missing required fields', () => { + const log = '{"id":"obs_1","type":"workflow","pattern":"p"}\n'; + const result = parseLearningLog(log); + expect(result).toHaveLength(0); + }); + + it('rejects objects with wrong field types', () => { + const log = JSON.stringify({ + id: 'obs_1', type: 'workflow', pattern: 'p', + confidence: 'high', observations: 1, first_seen: 't', + last_seen: 't', status: 'observing', evidence: [], details: 'd', + }) + '\n'; + const result = parseLearningLog(log); + expect(result).toHaveLength(0); + }); +}); + +describe('applyConfigLayer — immutability', () => { + it('returns new object without mutating input', () => { + const original = { max_daily_runs: 10, throttle_minutes: 5, model: 'sonnet' }; + const result = applyConfigLayer(original, JSON.stringify({ max_daily_runs: 20 })); + expect(result.max_daily_runs).toBe(20); + expect(original.max_daily_runs).toBe(10); // not mutated + }); + + it('returns copy on invalid JSON', () => { + const original = { max_daily_runs: 10, throttle_minutes: 5, model: 'sonnet' }; + const result = applyConfigLayer(original, 'not json'); + expect(result).toEqual(original); + expect(result).not.toBe(original); // different reference + }); + + it('ignores wrong-typed fields', () => { + const original = { max_daily_runs: 10, throttle_minutes: 5, model: 'sonnet' 
}; + const result = applyConfigLayer(original, JSON.stringify({ max_daily_runs: 'lots', model: 42 })); + expect(result.max_daily_runs).toBe(10); + expect(result.model).toBe('sonnet'); + }); +}); diff --git a/tests/manifest.test.ts b/tests/manifest.test.ts index ce41039..9f95550 100644 --- a/tests/manifest.test.ts +++ b/tests/manifest.test.ts @@ -77,7 +77,7 @@ describe('readManifest', () => { version: '1.4.0', plugins: ['devflow-core-skills', 'devflow-implement'], scope: 'user', - features: { teams: false, ambient: true, memory: true }, + features: { teams: false, ambient: true, memory: true, learn: false, hud: false }, installedAt: '2026-03-01T00:00:00.000Z', updatedAt: '2026-03-13T00:00:00.000Z', }; @@ -85,6 +85,22 @@ describe('readManifest', () => { const result = await readManifest(tmpDir); expect(result).toEqual(data); }); + + it('normalizes old manifest without hud/learn to defaults', async () => { + const oldData = { + version: '1.4.0', + plugins: ['devflow-core-skills'], + scope: 'user', + features: { teams: false, ambient: true, memory: true }, + installedAt: '2026-03-01T00:00:00.000Z', + updatedAt: '2026-03-13T00:00:00.000Z', + }; + await fs.writeFile(path.join(tmpDir, 'manifest.json'), JSON.stringify(oldData), 'utf-8'); + const result = await readManifest(tmpDir); + expect(result).not.toBeNull(); + expect(result!.features.hud).toBe(false); + expect(result!.features.learn).toBe(false); + }); }); describe('writeManifest', () => { @@ -103,7 +119,7 @@ describe('writeManifest', () => { version: '1.4.0', plugins: ['devflow-core-skills'], scope: 'user', - features: { teams: false, ambient: true, memory: true }, + features: { teams: false, ambient: true, memory: true, learn: false, hud: false }, installedAt: '2026-03-13T00:00:00.000Z', updatedAt: '2026-03-13T00:00:00.000Z', }; @@ -117,7 +133,7 @@ describe('writeManifest', () => { version: '1.0.0', plugins: ['devflow-core-skills'], scope: 'user', - features: { teams: false, ambient: false, memory: false }, + 
features: { teams: false, ambient: false, memory: false, learn: false, hud: false }, installedAt: '2026-01-01T00:00:00.000Z', updatedAt: '2026-01-01T00:00:00.000Z', }; @@ -136,7 +152,7 @@ describe('writeManifest', () => { version: '1.4.0', plugins: [], scope: 'local', - features: { teams: false, ambient: false, memory: false }, + features: { teams: false, ambient: false, memory: false, learn: false, hud: false }, installedAt: '2026-03-13T00:00:00.000Z', updatedAt: '2026-03-13T00:00:00.000Z', }; @@ -280,7 +296,7 @@ describe('resolvePluginList', () => { version: '1.0.0', plugins: ['devflow-core-skills', 'devflow-implement'], scope: 'user', - features: { teams: false, ambient: true, memory: true }, + features: { teams: false, ambient: true, memory: true, learn: false, hud: false }, installedAt: '2026-01-01T00:00:00.000Z', updatedAt: '2026-01-01T00:00:00.000Z', }; diff --git a/tests/shell-hooks.test.ts b/tests/shell-hooks.test.ts new file mode 100644 index 0000000..3bbf42b --- /dev/null +++ b/tests/shell-hooks.test.ts @@ -0,0 +1,318 @@ +import { describe, it, expect } from 'vitest'; +import { execSync } from 'child_process'; +import * as path from 'path'; +import * as fs from 'fs'; +import * as os from 'os'; + +const HOOKS_DIR = path.resolve(__dirname, '..', 'scripts', 'hooks'); + +const JSON_HELPER = path.join(HOOKS_DIR, 'json-helper.cjs'); + +const HOOK_SCRIPTS = [ + 'background-learning', + 'stop-update-learning', + 'background-memory-update', + 'stop-update-memory', + 'session-start-memory', + 'pre-compact-memory', + 'ambient-prompt', + 'json-parse', +]; + +describe('shell hook syntax checks', () => { + for (const script of HOOK_SCRIPTS) { + it(`${script} passes bash -n`, () => { + const scriptPath = path.join(HOOKS_DIR, script); + if (!fs.existsSync(scriptPath)) { + return; // skip missing optional scripts + } + // bash -n performs syntax check without executing + expect(() => { + execSync(`bash -n "${scriptPath}"`, { stdio: 'pipe' }); + }).not.toThrow(); + }); + 
} +}); + +describe('background-learning pure functions', () => { + it('decay_factor returns correct values for all periods', () => { + const expected: Record<string, string> = { + '0': '100', '1': '90', '2': '81', + '3': '73', '4': '66', '5': '59', + '6': '53', '10': '53', '99': '53', + }; + + for (const [input, output] of Object.entries(expected)) { + const result = execSync( + `bash -c ' + decay_factor() { + case $1 in + 0) echo "100";; 1) echo "90";; 2) echo "81";; + 3) echo "73";; 4) echo "66";; 5) echo "59";; + *) echo "53";; + esac + } + decay_factor ${input} + '`, + { stdio: 'pipe' }, + ).toString().trim(); + expect(result).toBe(output); + } + }); + + it('check_daily_cap respects counter file', () => { + const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'devflow-test-')); + const counterFile = path.join(tmpDir, '.learning-runs-today'); + // Get today's date from bash to avoid timezone mismatches + const today = execSync('date +%Y-%m-%d', { stdio: 'pipe' }).toString().trim(); + + const makeScript = (cf: string) => `bash -c ' + check_daily_cap() { + local COUNTER_FILE="${cf}" + local MAX_DAILY_RUNS=10 + local TODAY=$(date +%Y-%m-%d) + if [ -f "$COUNTER_FILE" ]; then + local COUNTER_DATE=$(cut -f1 "$COUNTER_FILE") + local COUNTER_COUNT=$(cut -f2 "$COUNTER_FILE") + if [ "$COUNTER_DATE" = "$TODAY" ] && [ "$COUNTER_COUNT" -ge "$MAX_DAILY_RUNS" ]; then + return 1 + fi + fi + return 0 + } + check_daily_cap && echo "ok" || echo "capped" + '`; + + try { + // Under cap — should succeed + fs.writeFileSync(counterFile, `${today}\t5\n`); + const underCap = execSync(makeScript(counterFile), { stdio: 'pipe' }).toString().trim(); + expect(underCap).toBe('ok'); + + // At cap — should return capped + fs.writeFileSync(counterFile, `${today}\t10\n`); + const atCap = execSync(makeScript(counterFile), { stdio: 'pipe' }).toString().trim(); + expect(atCap).toBe('capped'); + + // Old date — should not be capped + fs.writeFileSync(counterFile, `2020-01-01\t99\n`); + const oldDate = 
execSync(makeScript(counterFile), { stdio: 'pipe' }).toString().trim(); + expect(oldDate).toBe('ok'); + } finally { + fs.rmSync(tmpDir, { recursive: true, force: true }); + } + }); + + it('increment_daily_counter creates and increments counter', () => { + const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'devflow-test-')); + const counterFile = path.join(tmpDir, '.learning-runs-today'); + // Get today's date from bash to avoid timezone mismatches + const today = execSync('date +%Y-%m-%d', { stdio: 'pipe' }).toString().trim(); + + const makeScript = (cf: string) => `bash -c ' + COUNTER_FILE="${cf}" + increment_daily_counter() { + local TODAY=$(date +%Y-%m-%d) + local COUNT=1 + if [ -f "$COUNTER_FILE" ] && [ "$(cut -f1 "$COUNTER_FILE")" = "$TODAY" ]; then + COUNT=$(( $(cut -f2 "$COUNTER_FILE") + 1 )) + fi + printf "%s\\t%d\\n" "$TODAY" "$COUNT" > "$COUNTER_FILE" + } + increment_daily_counter + '`; + + try { + // First call — creates file with count 1 + execSync(makeScript(counterFile), { stdio: 'pipe' }); + const content1 = fs.readFileSync(counterFile, 'utf-8').trim(); + expect(content1).toBe(`${today}\t1`); + + // Second call — increments to 2 + execSync(makeScript(counterFile), { stdio: 'pipe' }); + const content2 = fs.readFileSync(counterFile, 'utf-8').trim(); + expect(content2).toBe(`${today}\t2`); + } finally { + fs.rmSync(tmpDir, { recursive: true, force: true }); + } + }); +}); + +describe('json-helper.cjs operations', () => { + it('get-field extracts a field with default', () => { + const result = execSync( + `echo '{"cwd":"/tmp","session_id":"abc123"}' | node "${JSON_HELPER}" get-field cwd ""`, + { stdio: 'pipe' }, + ).toString().trim(); + expect(result).toBe('/tmp'); + }); + + it('get-field returns default for missing field', () => { + const result = execSync( + `echo '{"cwd":"/tmp"}' | node "${JSON_HELPER}" get-field missing "fallback"`, + { stdio: 'pipe' }, + ).toString().trim(); + expect(result).toBe('fallback'); + }); + + it('validate exits 0 for 
valid JSON', () => { + expect(() => { + execSync(`echo '{"valid":true}' | node "${JSON_HELPER}" validate`, { stdio: 'pipe' }); + }).not.toThrow(); + }); + + it('validate exits 1 for invalid JSON', () => { + expect(() => { + execSync(`echo 'not json' | node "${JSON_HELPER}" validate`, { stdio: 'pipe' }); + }).toThrow(); + }); + + it('compact outputs single-line JSON', () => { + const result = execSync( + `echo '{ "key": "value", "num": 42 }' | node "${JSON_HELPER}" compact`, + { stdio: 'pipe' }, + ).toString().trim(); + expect(result).toBe('{"key":"value","num":42}'); + }); + + it('extract-text-messages extracts text from Claude message format', () => { + const input = JSON.stringify({ + message: { + content: [ + { type: 'text', text: 'Hello world' }, + { type: 'tool_result', text: 'ignored' }, + { type: 'text', text: 'Second message' }, + ], + }, + }); + const result = execSync( + `echo '${input.replace(/'/g, "'\\''")}' | node "${JSON_HELPER}" extract-text-messages`, + { stdio: 'pipe' }, + ).toString().trim(); + expect(result).toBe('Hello world\nSecond message'); + }); + + it('merge-evidence flattens, dedupes, and limits', () => { + const input = JSON.stringify([['a', 'b', 'c'], ['b', 'c', 'd']]); + const result = execSync( + `echo '${input}' | node "${JSON_HELPER}" merge-evidence`, + { stdio: 'pipe' }, + ).toString().trim(); + const parsed = JSON.parse(result); + expect(parsed).toEqual(['a', 'b', 'c', 'd']); + }); + + it('slurp-sort reads JSONL, sorts, and limits', () => { + const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'devflow-test-')); + const file = path.join(tmpDir, 'test.jsonl'); + + try { + fs.writeFileSync(file, [ + JSON.stringify({ id: 'a', confidence: 0.3 }), + JSON.stringify({ id: 'b', confidence: 0.9 }), + JSON.stringify({ id: 'c', confidence: 0.5 }), + ].join('\n')); + + const result = execSync( + `node "${JSON_HELPER}" slurp-sort "${file}" confidence 2`, + { stdio: 'pipe' }, + ).toString().trim(); + const parsed = JSON.parse(result); + 
expect(parsed).toHaveLength(2); + expect(parsed[0].id).toBe('b'); + expect(parsed[1].id).toBe('c'); + } finally { + fs.rmSync(tmpDir, { recursive: true, force: true }); + } + }); + + it('session-output builds correct envelope', () => { + const result = execSync( + `node "${JSON_HELPER}" session-output "test context"`, + { stdio: 'pipe' }, + ).toString().trim(); + const parsed = JSON.parse(result); + expect(parsed.hookSpecificOutput.hookEventName).toBe('SessionStart'); + expect(parsed.hookSpecificOutput.additionalContext).toBe('test context'); + }); + + it('prompt-output builds correct envelope', () => { + const result = execSync( + `node "${JSON_HELPER}" prompt-output "test preamble"`, + { stdio: 'pipe' }, + ).toString().trim(); + const parsed = JSON.parse(result); + expect(parsed.hookSpecificOutput.hookEventName).toBe('UserPromptSubmit'); + expect(parsed.hookSpecificOutput.additionalContext).toBe('test preamble'); + }); + + it('update-field updates a string field', () => { + const result = execSync( + `echo '{"status":"observing","id":"obs_1"}' | node "${JSON_HELPER}" update-field status created`, + { stdio: 'pipe' }, + ).toString().trim(); + const parsed = JSON.parse(result); + expect(parsed.status).toBe('created'); + expect(parsed.id).toBe('obs_1'); + }); + + it('array-length returns count', () => { + const result = execSync( + `echo '{"observations":[{},{},{}]}' | node "${JSON_HELPER}" array-length observations`, + { stdio: 'pipe' }, + ).toString().trim(); + expect(result).toBe('3'); + }); + + it('array-item returns item at index', () => { + const result = execSync( + `echo '{"items":[{"id":"a"},{"id":"b"}]}' | node "${JSON_HELPER}" array-item items 1`, + { stdio: 'pipe' }, + ).toString().trim(); + const parsed = JSON.parse(result); + expect(parsed.id).toBe('b'); + }); + + it('learning-created extracts artifacts from JSONL', () => { + const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'devflow-test-')); + const file = path.join(tmpDir, 'learning.jsonl'); + + 
try { + fs.writeFileSync(file, [ + JSON.stringify({ id: 'obs_1', type: 'workflow', status: 'created', artifact_path: '/path/learned/deploy-flow.md', confidence: 0.95 }), + JSON.stringify({ id: 'obs_2', type: 'procedural', status: 'created', artifact_path: '/path/learned-debug-hooks/SKILL.md', confidence: 0.8 }), + JSON.stringify({ id: 'obs_3', type: 'workflow', status: 'observing', confidence: 0.3 }), + ].join('\n')); + + const result = execSync( + `node "${JSON_HELPER}" learning-created "${file}"`, + { stdio: 'pipe' }, + ).toString().trim(); + const parsed = JSON.parse(result); + expect(parsed.commands).toHaveLength(1); + expect(parsed.commands[0].name).toBe('deploy-flow'); + expect(parsed.skills).toHaveLength(1); + expect(parsed.skills[0].name).toBe('debug-hooks'); + } finally { + fs.rmSync(tmpDir, { recursive: true, force: true }); + } + }); +}); + +describe('json-parse wrapper', () => { + it('can be sourced and provides function definitions', () => { + const result = execSync( + `bash -c 'source "${path.join(HOOKS_DIR, 'json-parse')}" && echo "$_JSON_AVAILABLE"'`, + { stdio: 'pipe' }, + ).toString().trim(); + expect(result).toBe('true'); + }); + + it('json_field works via wrapper', () => { + const result = execSync( + `bash -c 'source "${path.join(HOOKS_DIR, 'json-parse')}" && echo "{\\"key\\":\\"val\\"}" | json_field key ""'`, + { stdio: 'pipe' }, + ).toString().trim(); + expect(result).toBe('val'); + }); +});