Commit da51eba

Author: jgstern (committed)
Merge pull request 'Release v2.2.0' (#2198) from dev into main
Reviewed-on: https://codeberg.org/iterabloom/hypergumbo/pulls/2198
2 parents 84c298b + 97497b4 commit da51eba

896 files changed

Lines changed: 52993 additions & 2728 deletions


.agent/cooldown_prompt.md
Lines changed: 0 additions & 1 deletion

@@ -29,4 +29,3 @@ Spend at most 2-3 minutes:
 - What worked well in the last cycle? What wasted time?
 - Write any governance improvement ideas to `~/hypergumbo_lab_notebook/` tagged `[GOVERNANCE-IDEA]`
 
-Continue working — do not stop.

.agent/hooks/_shared/stop_logic.sh
Lines changed: 164 additions & 10 deletions
@@ -39,11 +39,51 @@ if [[ -x "$REPO_ROOT/scripts/tracker" ]] && [[ -d "$REPO_ROOT/.agent/tracker" ]]
 fi
 TOTAL_TODOS=$((TOTAL_HARD + TOTAL_SOFT))
 
-# --- Circuit breaker (hash-based no-progress detection) ---
+# --- Circuit breaker (file-change-based no-progress detection) ---
+# Hashes file modification times in sentinel directories to detect whether
+# real work product changed between stop events. This measures whether the
+# agent *did* make progress (files changed) rather than whether it *said* it
+# made progress (tracker updated). Dotfiles and invisible directories are
+# excluded because tracker bookkeeping and git metadata shouldn't count as
+# progress — source code and docs changes should.
 CIRCUIT_BREAKER_TRIPPED=false
 if [[ "$TOTAL_TODOS" -gt 0 ]]; then
-  CURRENT_HASH=$("$REPO_ROOT/scripts/tracker" hash-todos 2>/dev/null) || \
-    { echo "WARNING: hash-todos failed, using fallback hash" >&2; CURRENT_HASH="fallback-$$"; }
+  # Read sentinel dirs from tracker config, fall back to sensible defaults
+  SENTINEL_DIRS=()
+  if command -v python3 &>/dev/null; then
+    while IFS= read -r dir; do
+      [[ -n "$dir" ]] && SENTINEL_DIRS+=("$dir")
+    done < <(python3 -c "
+import yaml, os, sys
+try:
+    with open('$REPO_ROOT/.agent/tracker/config.yaml') as f:
+        cfg = yaml.safe_load(f)
+    dirs = cfg.get('stop_hook', {}).get('progress_sentinel_dirs', [])
+    for d in dirs:
+        d = os.path.expanduser(d)
+        if not os.path.isabs(d):
+            d = os.path.join('$REPO_ROOT', d)
+        print(d)
+except Exception:
+    pass
+" 2>/dev/null)
+  fi
+  # Fall back to defaults if config didn't provide any
+  if [[ ${#SENTINEL_DIRS[@]} -eq 0 ]]; then
+    SENTINEL_DIRS=("$REPO_ROOT/packages" "$REPO_ROOT/docs" "$REPO_ROOT/scripts")
+  fi
+
+  # Build file-change hash from sentinel directories
+  FIND_ARGS=()
+  for d in "${SENTINEL_DIRS[@]}"; do
+    [[ -d "$d" ]] && FIND_ARGS+=("$d")
+  done
+  if [[ ${#FIND_ARGS[@]} -gt 0 ]]; then
+    CURRENT_HASH=$(find "${FIND_ARGS[@]}" -not -path '*/.*' -not -path '*/guidance_log/*' -type f -printf '%p %T@\n' 2>/dev/null | sort | sha256sum | cut -d' ' -f1)
+  else
+    CURRENT_HASH="no-sentinel-dirs-$$"
+  fi
+
   if [[ -z "${STOP_HOOK_DRY_RUN:-}" ]]; then
     echo "$CURRENT_HASH" >> "$HASH_FILE"
   fi
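The progress probe in this hunk can be exercised in isolation. A minimal sketch of the same recipe (path + mtime, sorted, hashed); `WORK_DIR` and the file names here are hypothetical stand-ins, and GNU find/coreutils are assumed:

```shell
# Sketch of the circuit breaker's progress probe: hash file paths + mtimes
# under a sentinel dir; identical hashes across stop events mean no progress.
# Assumes GNU find (-printf) and sha256sum. WORK_DIR is a hypothetical stand-in.
WORK_DIR=$(mktemp -d)

snapshot() {
  # Same shape as the hook: skip dotfiles, list path + mtime, sort, hash.
  find "$WORK_DIR" -not -path '*/.*' -type f -printf '%p %T@\n' 2>/dev/null \
    | sort | sha256sum | cut -d' ' -f1
}

H1=$(snapshot)
H2=$(snapshot)                      # nothing changed -> same hash
echo "work product" > "$WORK_DIR/notes.md"
H3=$(snapshot)                      # new file -> hash changes
rm -rf "$WORK_DIR"
```

Because the hash covers the file *list* as well as mtimes, a new file changes it even on filesystems with coarse timestamp resolution.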
@@ -107,12 +147,28 @@ except Exception:
     pass
 " 2>/dev/null || true)
 
+    # Determine session type (broad vs deep) from directory name
+    _SESSION_DIR=$(dirname "$LATEST_STATE")
+    _SESSION_NAME=$(basename "$_SESSION_DIR")
+    _SESSION_TYPE="broad"
+    if [[ "$_SESSION_NAME" == deep-* ]]; then
+      _SESSION_TYPE="deep"
+    fi
+
     if [[ "$BAKEOFF_SUMMARY" == CONVERGED* ]]; then
       BAKEOFF_CONVERGENCE_LINE="$BAKEOFF_SUMMARY"
-      BAKEOFF_SUFFIX=$'\n\n---\nBakeoff convergence: '"$BAKEOFF_SUMMARY"$'\nLatest bakeoff session is CONVERGED — no critical/high issues. Running another bakeoff on the same cohort would be redundant. Consider: selecting a new cohort, mining existing artifacts, or moving to other work items.'
+      if [[ "$_SESSION_TYPE" == "broad" ]]; then
+        BAKEOFF_SUFFIX=$'\n\n---\nBakeoff convergence: '"$BAKEOFF_SUMMARY"$'\nLatest BROAD bakeoff session is CONVERGED — no critical/high issues.\nNext steps:\n - Select a new cohort: ./scripts/bakeoff cohort --count 5\n - Mine existing artifacts: ./scripts/bakeoff issues --format json\n - Run LLM assessment: ./scripts/bakeoff-reflect\n - Or move to other work items (tracker ready)'
+      else
+        BAKEOFF_SUFFIX=$'\n\n---\nBakeoff convergence: '"$BAKEOFF_SUMMARY"$'\nLatest DEEP bakeoff session is CONVERGED — all repos GOOD.\nNext steps:\n - Select a new cohort: ./scripts/bakeoff-features cohort --count 4\n - Compare sessions: ./scripts/bakeoff-features compare <A> <B>\n - Run LLM assessment: ./scripts/bakeoff-features-reflect\n - Or move to other work items (tracker ready)'
+      fi
     elif [[ "$BAKEOFF_SUMMARY" == NEEDS_WORK* ]]; then
       BAKEOFF_CONVERGENCE_LINE="$BAKEOFF_SUMMARY"
-      BAKEOFF_SUFFIX=$'\n\n---\nBakeoff convergence: '"$BAKEOFF_SUMMARY"$'\nLatest bakeoff session has outstanding issues. Consider investigating these before starting new work.'
+      if [[ "$_SESSION_TYPE" == "broad" ]]; then
+        BAKEOFF_SUFFIX=$'\n\n---\nBakeoff convergence: '"$BAKEOFF_SUMMARY"$'\nLatest BROAD bakeoff session has outstanding issues.\nInvestigate:\n - View issues: ./scripts/bakeoff issues --format json\n - Diagnose latest: ./scripts/bakeoff diagnose\n - Check status: ./scripts/bakeoff status\n - Re-run after fixes: ./scripts/bakeoff cycle'
+      else
+        BAKEOFF_SUFFIX=$'\n\n---\nBakeoff convergence: '"$BAKEOFF_SUMMARY"$'\nLatest DEEP bakeoff session has outstanding issues.\nInvestigate:\n - Check status: ./scripts/bakeoff-features status\n - Diagnose repos: ./scripts/bakeoff-features diagnose\n - Re-run after fixes: ./scripts/bakeoff-features run\n - View questions: ./scripts/bakeoff-features questions'
+      fi
     fi
   fi
 fi
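The broad/deep branching above keys off nothing but the session directory's name. That classification can be sketched as a standalone helper (paths here are hypothetical):

```shell
# Sketch: classify a bakeoff session as "deep" or "broad" from the directory
# holding its state file, mirroring the hook's deep-* prefix convention.
# Example paths are hypothetical.
session_type() {
  name=$(basename "$(dirname "$1")")
  case "$name" in
    deep-*) echo "deep" ;;
    *)      echo "broad" ;;
  esac
}
```

Anything without the `deep-` prefix falls through to `broad`, which matches the hook's default.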
@@ -137,7 +193,7 @@ if [[ "$TOTAL_TODOS" -gt 0 ]]; then
 
 # Update last_stop_check.json with guidance_file pointer + bakeoff convergence
 if [[ -n "$GUIDANCE_FILE" && -z "${STOP_HOOK_DRY_RUN:-}" ]]; then
-  STATE_FILE_FOR_GF="$REPO_ROOT/.agent/last_stop_check.json"
+  STATE_FILE_FOR_GF="$HOME/hypergumbo_lab_notebook/last_stop_check.json"
   if command -v jq &>/dev/null && [[ -f "$STATE_FILE_FOR_GF" ]]; then
     TMP=$(mktemp)
     if jq --arg gf "$GUIDANCE_FILE" \
@@ -155,10 +211,14 @@ fi
 # (Bakeoff convergence computed above, before guidance file write)
 
 # --- Cooldown & reflection: compute elapsed time, write guidance files ---
-STATE_FILE="$REPO_ROOT/.agent/last_stop_check.json"
-# Backward compat: fall back to old filename if new one doesn't exist
-if [[ ! -f "$STATE_FILE" && -f "$REPO_ROOT/.agent/stop_hook_state.json" ]]; then
-  STATE_FILE="$REPO_ROOT/.agent/stop_hook_state.json"
+STATE_FILE="$HOME/hypergumbo_lab_notebook/last_stop_check.json"
+# Backward compat: fall back to old locations if new one doesn't exist
+if [[ ! -f "$STATE_FILE" ]]; then
+  if [[ -f "$REPO_ROOT/.agent/last_stop_check.json" ]]; then
+    STATE_FILE="$REPO_ROOT/.agent/last_stop_check.json"
+  elif [[ -f "$REPO_ROOT/.agent/stop_hook_state.json" ]]; then
+    STATE_FILE="$REPO_ROOT/.agent/stop_hook_state.json"
+  fi
 fi
 
 ELAPSED_MIN=9999 # Default: stale (will trigger Path 3)
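The backward-compat chain above is a first-match lookup over a preference-ordered list of locations. The generic pattern, with hypothetical file names:

```shell
# Sketch: return the first existing file from a preference-ordered list,
# as the hook does for last_stop_check.json. File names are hypothetical.
first_existing() {
  for f in "$@"; do
    if [ -f "$f" ]; then
      printf '%s\n' "$f"
      return 0
    fi
  done
  return 1
}

D=$(mktemp -d)
touch "$D/stop_hook_state.json"     # only the oldest location exists
STATE_FILE=$(first_existing "$D/last_stop_check.json" "$D/stop_hook_state.json")
```

Once the new location is written, it wins on the next lookup without any migration step.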
@@ -212,3 +272,97 @@ if [[ "$ELAPSED_MIN" -ge 30 ]]; then
   fi
   } > "$GUIDANCE_FILE_REFLECTION"
 fi
+
+# --- Stale-PR audit: surface open PRs older than 6 hours ---
+# Queries Forgejo for open PRs. Any PR created more than 6 hours ago is
+# flagged in the active guidance file. This catches PRs orphaned by context
+# compaction, remote timeouts, or failed CI that the agent forgot about.
+# Non-fatal: if the API call fails, we silently skip the audit.
+STALE_PR_SECTION=""
+if [[ -z "${STOP_HOOK_DRY_RUN:-}" ]]; then
+  _stale_pr_audit() {
+    # Source the Forgejo API library
+    local api_lib="$REPO_ROOT/scripts/lib/forgejo-api.sh"
+    [[ -f "$api_lib" ]] || return 0
+    # shellcheck disable=SC1090
+    source "$api_lib"
+    load_env 2>/dev/null || return 0
+    detect_api_base 2>/dev/null || return 0
+
+    # Fetch open PRs (silently fail if no connectivity)
+    if ! api_get "$API_BASE/pulls?state=open&sort=recentupdate&limit=50" 2>/dev/null; then
+      return 0
+    fi
+
+    local now_epoch
+    now_epoch=$(date +%s)
+    local threshold=$((6 * 3600)) # 6 hours in seconds
+
+    # Parse PRs: filter to those older than 6 hours
+    local stale_prs
+    stale_prs=$(python3 -c "
+import json, sys, datetime
+now = $now_epoch
+threshold = $threshold
+prs = json.loads(sys.stdin.read())
+if not isinstance(prs, list):
+    sys.exit(0)
+for pr in prs:
+    created = pr.get('created_at', '')
+    if not created:
+        continue
+    # Parse ISO 8601 timestamp
+    try:
+        dt = datetime.datetime.fromisoformat(created.replace('Z', '+00:00'))
+        age_s = now - int(dt.timestamp())
+    except (ValueError, TypeError):
+        continue
+    if age_s > threshold:
+        num = pr.get('number', '?')
+        title = pr.get('title', '?')[:60]
+        age_h = age_s // 3600
+        branch = pr.get('head', {}).get('ref', '?')
+        mergeable = pr.get('mergeable', None)
+        ci_note = ''
+        if mergeable is False:
+            ci_note = ' [NOT MERGEABLE]'
+        elif mergeable is True:
+            ci_note = ' [mergeable]'
+        print(f'- PR #{num} ({age_h}h old){ci_note}: {title}')
+        print(f'  Branch: {branch}')
+" <<< "$API_RESPONSE" 2>/dev/null) || return 0
+
+    if [[ -n "$stale_prs" ]]; then
+      STALE_PR_SECTION=$(printf '\n\n## STALE PULL REQUESTS\nThe following open PRs are older than 6 hours. Consider: merge (if CI green),\nrebase + re-push (if out of date), fix (if CI failed), or close (if superseded).\n\n%s\n' "$stale_prs")
+    fi
+  }
+  _stale_pr_audit
+fi
+
+# Append stale-PR section to the active guidance file (whichever was generated)
+if [[ -n "$STALE_PR_SECTION" ]]; then
+  for _gf in "$GUIDANCE_FILE" "$GUIDANCE_FILE_COOLDOWN" "$GUIDANCE_FILE_REFLECTION"; do
+    if [[ -n "$_gf" && -f "$_gf" ]]; then
+      printf '%s' "$STALE_PR_SECTION" >> "$_gf"
+    fi
+  done
+fi
+
+# --- Guidance file organization: move older files to subfolder ---
+# Keep the 10 most recent guidance files in the main directory for quick
+# access. Move everything else to older_guidance/ for archival. NEVER
+# deletes guidance files — move-only policy.
+if [[ -d "$GUIDANCE_LOG_DIR" && -z "${STOP_HOOK_DRY_RUN:-}" ]]; then
+  OLDER_DIR="$GUIDANCE_LOG_DIR/older_guidance"
+  # Count guidance files (stop_guidance_*.md pattern)
+  GUIDANCE_COUNT=$(find "$GUIDANCE_LOG_DIR" -maxdepth 1 -name 'stop_guidance_*.md' -type f 2>/dev/null | wc -l)
+  if [[ "$GUIDANCE_COUNT" -gt 10 ]]; then
+    mkdir -p "$OLDER_DIR"
+    # Move all but the 10 most recent (by modification time)
+    find "$GUIDANCE_LOG_DIR" -maxdepth 1 -name 'stop_guidance_*.md' -type f -printf '%T@ %p\n' 2>/dev/null \
+      | sort -rn | tail -n +"11" | cut -d' ' -f2- \
+      | while IFS= read -r f; do
+          mv "$f" "$OLDER_DIR/" 2>/dev/null || true
+        done
+  fi
+fi
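The core of the stale-PR audit is the age test: a PR is stale when its `created_at` timestamp is more than 6 hours in the past. A shell-only sketch of that test (the hook does the equivalent arithmetic in embedded Python so it can also read title, branch, and mergeability from the PR JSON); GNU date's `-d` flag is assumed:

```shell
# Sketch of the stale-PR age check: stale means the ISO 8601 timestamp is
# more than 6 hours old. Assumes GNU date (-d for parsing arbitrary dates).
is_stale() {
  created_epoch=$(date -d "$1" +%s 2>/dev/null) || return 2
  now_epoch=$(date +%s)
  [ $((now_epoch - created_epoch)) -gt $((6 * 3600)) ]
}
```

Returning 2 on a parse failure keeps "couldn't tell" distinguishable from "not stale", in the spirit of the hook's silently-skip policy.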

.agent/hooks/claude-code/stop.sh
Lines changed: 45 additions & 1 deletion
@@ -16,12 +16,56 @@ REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"
 
 # Check autonomous mode (TRUE, BROAD, or DEEP all enable autonomous behavior)
 # OFF and FALSE both mean disabled (see scripts/loop-toggle)
-MODE=$(cat "$REPO_ROOT/AUTONOMOUS_MODE.txt" 2>/dev/null | tr -d '[:space:]' | tr '[:lower:]' '[:upper:]')
+# Format: "MODE" or "MODE pid=12345" (parallel session support)
+_RAW_MODE=$(head -1 "$REPO_ROOT/AUTONOMOUS_MODE.txt" 2>/dev/null || true)
+MODE=$(echo "$_RAW_MODE" | sed 's/ *pid=[0-9]*//' | tr -d '[:space:]' | tr '[:lower:]' '[:upper:]')
 if [[ -z "$MODE" || "$MODE" == "OFF" || "$MODE" == "FALSE" ]]; then
   echo '{"decision": "approve", "reason": "Autonomous mode disabled"}'
   exit 0
 fi
 
+# --- PID-based parallel session detection ---
+# If a PID is stored, only the agent whose ancestor matches that PID is
+# treated as autonomous. Other sessions (interactive) get approved immediately.
+# If no PID is stored, the first agent to hit this hook claims ownership.
+_STORED_PID=""
+if [[ "$_RAW_MODE" =~ pid=([0-9]+) ]]; then
+  _STORED_PID="${BASH_REMATCH[1]}"
+fi
+
+_is_pid_ancestor() {
+  # Walk /proc ancestor chain from current process to check if target PID
+  # is an ancestor. Returns 0 if found, 1 if not.
+  local target=$1
+  local pid=$$
+  while [[ $pid -gt 1 ]]; do
+    [[ "$pid" == "$target" ]] && return 0
+    pid=$(awk '/^PPid:/ {print $2}' "/proc/$pid/status" 2>/dev/null) || return 1
+  done
+  return 1
+}
+
+if [[ -n "$_STORED_PID" ]]; then
+  if _is_pid_ancestor "$_STORED_PID"; then
+    : # This is the autonomous agent — proceed to blocking logic
+  elif [[ -d "/proc/$_STORED_PID" ]]; then
+    # Stored PID is alive but not our ancestor — we're a different session
+    echo '{"decision": "approve", "reason": "Interactive session (PID does not match autonomous agent)"}'
+    exit 0
+  else
+    # Stored PID is dead — don't auto-reclaim. Approve this session.
+    # The autonomous agent must be restarted via loop-toggle, which
+    # will set a fresh PID. Auto-reclaim caused interactive sessions
+    # to inherit autonomous blocking after the agent crashed.
+    echo '{"decision": "approve", "reason": "Autonomous agent PID is dead; use loop-toggle to restart"}'
+    exit 0
+  fi
+else
+  # No PID stored: claim ownership using $PPID (the agent process)
+  # Rewrite the mode file with our PID appended
+  echo "$MODE pid=$PPID" > "$REPO_ROOT/AUTONOMOUS_MODE.txt"
+fi
+
 # Check if loop sentinel exists
 if [[ ! -f "$REPO_ROOT/.agent/LOOP" ]]; then
   echo '{"decision": "approve", "reason": "Loop sentinel removed"}'
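The ancestry walk that this hook (and the cursor variant) relies on can be reproduced standalone. A Linux-only sketch; it reads the `PPid:` field from `/proc/<pid>/status` exactly as the hook does:

```shell
# Sketch of the hooks' ancestry test: walk the PPid chain in /proc from the
# current process, looking for the target PID. Linux-only (/proc layout).
is_pid_ancestor() {
  target=$1
  pid=$$
  while [ "$pid" -gt 1 ]; do
    [ "$pid" = "$target" ] && return 0
    # Step to the parent; bail out if the process vanished mid-walk
    pid=$(awk '/^PPid:/ {print $2}' "/proc/$pid/status" 2>/dev/null) || return 1
    [ -n "$pid" ] || return 1
  done
  return 1
}
```

Note the walk starts at the current process itself, so a process trivially "matches" its own PID; PID 1 is never compared because the loop stops before it.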

.agent/hooks/cursor/stop.sh
Lines changed: 41 additions & 1 deletion
@@ -23,12 +23,52 @@ INPUT=$(cat)
 # Check autonomous mode - if disabled, allow stop (empty output = no followup)
 # TRUE, BROAD, and DEEP all enable autonomous behavior
 # OFF and FALSE both mean disabled (see scripts/loop-toggle)
-MODE=$(cat "$REPO_ROOT/AUTONOMOUS_MODE.txt" 2>/dev/null | tr -d '[:space:]' | tr '[:lower:]' '[:upper:]')
+# Format: "MODE" or "MODE pid=12345" (parallel session support)
+_RAW_MODE=$(head -1 "$REPO_ROOT/AUTONOMOUS_MODE.txt" 2>/dev/null || true)
+MODE=$(echo "$_RAW_MODE" | sed 's/ *pid=[0-9]*//' | tr -d '[:space:]' | tr '[:lower:]' '[:upper:]')
 if [[ -z "$MODE" || "$MODE" == "OFF" || "$MODE" == "FALSE" ]]; then
   echo '{}'
   exit 0
 fi
 
+# --- PID-based parallel session detection ---
+# If a PID is stored, only the agent whose ancestor matches that PID is
+# treated as autonomous. Other sessions (interactive) get approved immediately.
+# If no PID is stored, the first agent to hit this hook claims ownership.
+_STORED_PID=""
+if [[ "$_RAW_MODE" =~ pid=([0-9]+) ]]; then
+  _STORED_PID="${BASH_REMATCH[1]}"
+fi
+
+_is_pid_ancestor() {
+  local target=$1
+  local pid=$$
+  while [[ $pid -gt 1 ]]; do
+    [[ "$pid" == "$target" ]] && return 0
+    pid=$(awk '/^PPid:/ {print $2}' "/proc/$pid/status" 2>/dev/null) || return 1
+  done
+  return 1
+}
+
+if [[ -n "$_STORED_PID" ]]; then
+  if _is_pid_ancestor "$_STORED_PID"; then
+    : # This is the autonomous agent — proceed to blocking logic
+  elif [[ -d "/proc/$_STORED_PID" ]]; then
+    # Stored PID is alive but not our ancestor — we're a different session
+    echo '{}'
+    exit 0
+  else
+    # Stored PID is dead — don't auto-reclaim. Approve this session.
+    # The autonomous agent must be restarted via loop-toggle, which
+    # will set a fresh PID. Auto-reclaim caused interactive sessions
+    # to inherit autonomous blocking after the agent crashed.
+    echo '{}'
+    exit 0
+  fi
+else
+  echo "$MODE pid=$PPID" > "$REPO_ROOT/AUTONOMOUS_MODE.txt"
+fi
+
 # Check if loop sentinel exists - if removed, allow stop
 if [[ ! -f "$REPO_ROOT/.agent/LOOP" ]]; then
   echo '{}'
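Both hooks parse the same one-line `AUTONOMOUS_MODE.txt` format. The mode/PID split can be sketched as two small helpers (the `parse_*` names are illustrative, not from the repo):

```shell
# Sketch of the AUTONOMOUS_MODE.txt parsing shared by both hooks: the first
# line is "MODE" or "MODE pid=12345"; the mode is normalized to uppercase.
# Helper names here are illustrative.
parse_mode() {
  printf '%s\n' "$1" | sed 's/ *pid=[0-9]*//' | tr -d '[:space:]' | tr '[:lower:]' '[:upper:]'
}
parse_pid() {
  case "$1" in
    *pid=*) printf '%s\n' "${1##*pid=}" ;;
  esac
}
```

Stripping the `pid=` suffix before uppercasing keeps existing single-word mode files ("TRUE", "OFF") parsing exactly as before.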

.agent/last_stop_check.json
Lines changed: 3 additions & 5 deletions
@@ -1,11 +1,9 @@
 {
-  "last_completed_utc": "2026-02-27T07:35:00Z",
+  "last_completed_utc": "2026-03-05T18:00:00Z",
   "branch": "dev",
-  "last_pr": 1552,
+  "last_pr": 1762,
   "last_pr_state": "merged",
   "pending_hard_todos": 0,
   "pending_soft_todos": 0,
-  "unfixed_invariants": 0,
-  "notes": "PR #1552 merged: Guice @Provides + @ImplementedBy DI detection. PR #1551 merged: AngularJS $http + jQuery $.ajax HTTP client detection. PR #1550 open (CI pipeline revamp, governance, needs human review). Bakeoff cohort 9 re-ran same repos (guacamole-client, guacamole-server-main, tmux) with callback detection + $http detection. Results: guacamole-server-main avg_slice 21.7->50.3, orphan_rate 16.7%->15.0%. tmux edges 7327->7526, orphans 14.2%->13.3%. guacamole-client unchanged (LOW_AVG_SLICE_NODES 15.6). Next: investigate guacamole-client slice depth issue, or explore new bakeoff repos.",
-  "bakeoff_convergence": "NEEDS_WORK cohort=9 iter=1 good=0 warn=3 fail=0\n  guacamole-client: LOW_AVG_SLICE_NODES: 15.6 < 20; guacamole-server-main: HIGH_ORPHAN_RATE: 15.0% = 15%; tmux: HIGH_AVG_SLICE_NODES: 596.1 > 500, HIGH_SLICE_COVERAGE_PCT: 22.5% > 10%"
+  "notes": "CI failover system implemented and tested end-to-end. PR #1762 merged on Codeberg (ci-failover scripts, --gov flag, forgejo-api.sh apply_failover_overrides). Failover engaged targeting self-hosted Forgejo at 10.85.0.10:3000. Local PRs #1-6 merged (mirror handling, credential embedding, safe mirror restore, analyzer error message improvement). Local CI fully operational with runner labels local-ci, codeberg-small-lazy, self-hosted. Dev is 10+ commits ahead of origin/dev. Disengage when Codeberg recovers."
 }
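Several pieces of this release rewrite that state file while carrying `guidance_file` over from the previous run. A minimal sketch of that preserve-on-rewrite step, using a temp file as a stand-in for the real notebook path:

```shell
# Sketch: rebuild last_stop_check.json but preserve the guidance_file pointer
# from the previous run. STATE is a temp stand-in for the notebook path;
# the example values (1762 -> 1763, "g.md") are illustrative.
STATE=$(mktemp)
printf '%s\n' '{"last_pr": 1762, "guidance_file": "g.md"}' > "$STATE"

python3 - "$STATE" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    existing = json.load(f)

state = {"last_pr": 1763, "branch": "dev"}
# Preserve guidance_file pointer from the previous stop hook run
if "guidance_file" in existing:
    state["guidance_file"] = existing["guidance_file"]

with open(path, "w") as f:
    f.write(json.dumps(state, indent=2) + "\n")
EOF
```

Reading the old file before writing the new object is what lets a full rewrite still keep the one key that must survive across cycles.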

.agent/stop_reflect.md
Lines changed: 2 additions & 2 deletions
@@ -109,7 +109,7 @@ pending_soft_todos = tracker_count('--soft')
 
 # Preserve guidance_file from previous stop hook run if present
 existing_state = {}
-state_path = pathlib.Path('.agent/last_stop_check.json')
+state_path = pathlib.Path.home() / 'hypergumbo_lab_notebook' / 'last_stop_check.json'
 if state_path.exists():
     try:
         existing_state = json.loads(state_path.read_text())
@@ -127,7 +127,7 @@ state = {
 }
 if 'guidance_file' in existing_state:
     state['guidance_file'] = existing_state['guidance_file']
-pathlib.Path('.agent/last_stop_check.json').write_text(json.dumps(state, indent=2) + '\n')
+state_path.write_text(json.dumps(state, indent=2) + '\n')
 "
 ```
 **Important:** Before running, update `last_pr` and `notes` in the script with actual values. The `notes` field is critical — it gets injected into the cooldown prompt so the next cycle knows what to implement. Write specific, actionable implementation tasks, not vague observations.
