Implementation Plan: YAML-Driven Pipeline Refactoring
Table of Contents
- Agent Perspectives Summary
- Consensus Status
- Goal
- Codebase Analysis
- Implementation Steps
- Success Criteria
- Risks and Mitigations
- Selection History
- Refine History
Agent Perspectives Summary
| Agent | Core Position | Key Insight |
|---|---|---|
| Bold | External YAML files in `pipelines/` with separate Python parser module | Clear separation enables easy workflow customization without code changes |
| Paranoia | Embedded YAML heredoc with minimal Python bridge emitting shell commands | Matches existing _planner_load_backend_config pattern; fewer new files |
| Critique | Hybrid approach; variadic args for prompt; keep Stage 5 (Consensus) as special post-pipeline step | Both proposals miss that consensus is not a regular agent stage |
| Proposal Reducer | Pure shell solution with associative arrays, no YAML at all | Only 2 workflows exist; YAML infrastructure is overkill for 2 configurations |
| Code Reducer | Paranoia-based with inline Python heredoc achieves ~0 LOC delta | Biggest win is consolidating repetitive 4-agent execution pattern (~70 LOC savings) |
Consensus Status
RESOLVED via user selection. The user selected combination 1B + 2A + 3A:
- 1B: External YAML files for pipeline descriptors
- 2A: Inline Python heredoc for parsing (no separate `.py` module)
- 3A: Variadic arguments for multiple input files
This combination provides the flexibility of external YAML configuration (Bold's key value proposition) while maintaining minimal infrastructure by using inline Python parsing (Paranoia/Reducers' pattern) and idiomatic shell variadic arguments (consensus position).
Goal
Refactor `src/cli/planner/pipeline.sh`'s `_planner_run_pipeline` function (~312 LOC, lines 320-631) to:
- Replace hardcoded 4-agent stage logic (~110 lines, lines 460-569) with a generic executor driven by external YAML pipeline descriptors
- Support both Ultra-Planner (4 agents) and Mega-Planner (5 agents) workflows from the same codebase
- Enable `_planner_render_prompt` to accept multiple context files for agents that consume multiple prior outputs
- Preserve existing behavior: backend config loading, issue publishing, consensus synthesis
Out of scope:
- Arbitrary DAG pipeline topology
- Non-shell pipeline orchestration
- Dynamic pipeline modification at runtime
- Separate Python module file for parsing
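To make the multiple-input idea concrete, here is a minimal standalone sketch (bash 4+; the names and paths are hypothetical, not the real pipeline's) of resolving an agent's declared input names to the output files of earlier agents:

```shell
#!/usr/bin/env bash
# Map agent names to the files their runs produced (illustrative paths).
declare -A agent_outputs=(
  [understander]="/tmp/run-understander.txt"
  [bold]="/tmp/run-bold.txt"
)

# usage: resolve_inputs "name1,name2" -> prints one resolved path per line
resolve_inputs() {
  local name
  local -a names files=()
  IFS=',' read -ra names <<< "$1"
  for name in "${names[@]}"; do
    if [ -n "${agent_outputs[$name]:-}" ]; then
      files+=("${agent_outputs[$name]}")
    fi
  done
  printf '%s\n' "${files[@]}"
}

resolve_inputs "understander,bold"
```

Unknown input names are silently skipped here; the real executor may want to warn instead.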
Codebase Analysis
File changes:
| File | Level | Purpose |
|---|---|---|
| `src/cli/planner/pipeline.sh` | major | Refactor `_planner_run_pipeline`, extend `_planner_render_prompt`, add generic executor with inline Python parser |
| `src/cli/planner/pipelines/ultra.yaml` | major | Ultra-planner pipeline descriptor (new file) |
| `src/cli/planner/pipelines/mega.yaml` | major | Mega-planner pipeline descriptor (new file) |
| `tests/cli/test-planner-render-prompt-multi.sh` | medium | New test for multi-input prompt rendering |
| `tests/cli/test-lol-plan-pipeline-stubbed.sh` | medium | Update to verify dynamic stage execution |
Implementation Steps
Step 1: Create pipeline descriptor directory and Ultra-Planner YAML
- File: `src/cli/planner/pipelines/ultra.yaml`
- Changes: New file defining the 4-agent Ultra-Planner workflow
Code Draft
# Ultra-Planner pipeline (4-agent debate)
name: ultra
description: Single-proposer debate with understander, bold-proposer, critique, reducer
stages:
  - name: understander
    label: "Stage 1/5: Running understander"
    agents:
      - name: understander
        agent_md: .claude-plugin/agents/understander.md
        backend_key: understander
        default_backend: claude:sonnet
        tools: Read,Grep,Glob
        permission_mode: ""
        plan_guideline: false
        inputs: []
  - name: bold-proposer
    label: "Stage 2/5: Running bold-proposer"
    agents:
      - name: bold
        agent_md: .claude-plugin/agents/bold-proposer.md
        backend_key: bold
        default_backend: claude:opus
        tools: Read,Grep,Glob,WebSearch,WebFetch
        permission_mode: plan
        plan_guideline: true
        inputs:
          - understander
  - name: critique-reducer
    label: "Stage 3-4/5: Running critique and reducer"
    agents:
      - name: critique
        agent_md: .claude-plugin/agents/proposal-critique.md
        backend_key: critique
        default_backend: claude:opus
        tools: Read,Grep,Glob,Bash
        permission_mode: ""
        plan_guideline: true
        inputs:
          - bold
      - name: reducer
        agent_md: .claude-plugin/agents/proposal-reducer.md
        backend_key: reducer
        default_backend: claude:opus
        tools: Read,Grep,Glob
        permission_mode: ""
        plan_guideline: true
        inputs:
          - bold

Step 2: Create Mega-Planner pipeline descriptor
- File: `src/cli/planner/pipelines/mega.yaml`
- Changes: New file defining the 5-agent Mega-Planner workflow
Code Draft
# Mega-Planner pipeline (5-agent dual-proposer debate)
name: mega
description: Dual-proposer debate with understander, bold, paranoia, critique, reducer, code-reducer
stages:
  - name: understander
    label: "Stage 1/6: Running understander"
    agents:
      - name: understander
        agent_md: .claude-plugin/agents/understander.md
        backend_key: understander
        default_backend: claude:sonnet
        tools: Read,Grep,Glob
        permission_mode: ""
        plan_guideline: false
        inputs: []
  - name: proposers
    label: "Stage 2-3/6: Running bold and paranoia proposers"
    agents:
      - name: bold
        agent_md: .claude-plugin/agents/bold-proposer.md
        backend_key: bold
        default_backend: claude:opus
        tools: Read,Grep,Glob,WebSearch,WebFetch
        permission_mode: plan
        plan_guideline: true
        inputs:
          - understander
      - name: paranoia
        agent_md: .claude-plugin/agents/mega-paranoia-proposer.md
        backend_key: paranoia
        default_backend: claude:opus
        tools: Read,Grep,Glob,WebSearch,WebFetch
        permission_mode: plan
        plan_guideline: true
        inputs:
          - understander
  - name: reviewers
    label: "Stage 4-6/6: Running critique, reducer, and code-reducer"
    agents:
      - name: critique
        agent_md: .claude-plugin/agents/proposal-critique.md
        backend_key: critique
        default_backend: claude:opus
        tools: Read,Grep,Glob,Bash
        permission_mode: ""
        plan_guideline: true
        inputs:
          - bold
          - paranoia
      - name: reducer
        agent_md: .claude-plugin/agents/proposal-reducer.md
        backend_key: reducer
        default_backend: claude:opus
        tools: Read,Grep,Glob
        permission_mode: ""
        plan_guideline: true
        inputs:
          - bold
          - paranoia
      - name: code-reducer
        agent_md: .claude-plugin/agents/mega-code-reducer.md
        backend_key: code-reducer
        default_backend: claude:opus
        tools: Read,Grep,Glob
        permission_mode: ""
        plan_guideline: true
        inputs:
          - bold
          - paranoia

Step 3: Add test for multi-input prompt rendering
- File: `tests/cli/test-planner-render-prompt-multi.sh`
- Changes: New test verifying variadic context file handling
Code Draft
#!/usr/bin/env bash
# Test: _planner_render_prompt with multiple context files
source "$(dirname "$0")/../common.sh"
PLANNER_CLI="$PROJECT_ROOT/src/cli/planner.sh"
test_info "_planner_render_prompt handles multiple context files"
export AGENTIZE_HOME="$PROJECT_ROOT"
source "$PLANNER_CLI"
TMP_DIR=$(make_temp_dir "test-render-prompt-multi-$$")
trap 'cleanup_dir "$TMP_DIR"' EXIT
# Create mock context files
echo "# First Context" > "$TMP_DIR/context1.txt"
echo "Content from first stage" >> "$TMP_DIR/context1.txt"
echo "# Second Context" > "$TMP_DIR/context2.txt"
echo "Content from second stage" >> "$TMP_DIR/context2.txt"
echo "# Third Context" > "$TMP_DIR/context3.txt"
echo "Content from third stage" >> "$TMP_DIR/context3.txt"
# Test with multiple context files
OUTPUT_FILE="$TMP_DIR/rendered-prompt.md"
FEATURE_DESC="Test feature for multi-input"
_planner_render_prompt "$OUTPUT_FILE" \
".claude-plugin/agents/proposal-critique.md" \
"true" \
"$FEATURE_DESC" \
"$TMP_DIR/context1.txt" \
"$TMP_DIR/context2.txt" \
"$TMP_DIR/context3.txt"
# Verify all context files were included
grep -q "# Previous Stage Output" "$OUTPUT_FILE" || \
test_fail "Missing 'Previous Stage Output' header"
grep -q "# Additional Context (2)" "$OUTPUT_FILE" || \
test_fail "Missing 'Additional Context (2)' header"
grep -q "# Additional Context (3)" "$OUTPUT_FILE" || \
test_fail "Missing 'Additional Context (3)' header"
grep -q "Content from first stage" "$OUTPUT_FILE" || \
test_fail "Missing first context content"
grep -q "Content from second stage" "$OUTPUT_FILE" || \
test_fail "Missing second context content"
grep -q "Content from third stage" "$OUTPUT_FILE" || \
test_fail "Missing third context content"
grep -q "Test feature for multi-input" "$OUTPUT_FILE" || \
test_fail "Missing feature description"
test_pass "_planner_render_prompt handles multiple context files"

Step 4: Extend _planner_render_prompt to support variadic context files
- File: `src/cli/planner/pipeline.sh`
- Changes: Modify signature to accept variadic context files using the `shift 4` and `"$@"` pattern
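As a standalone illustration of this signature (the function name is hypothetical; the real `_planner_render_prompt` also injects the agent prompt and plan guideline):

```shell
#!/usr/bin/env bash
# Sketch of the "shift 4 + $@" variadic pattern: four fixed positional
# parameters, then zero or more context files appended to the output.
render_sketch() {
  local output_file="$1" agent_md="$2" include_plan_guideline="$3" feature_desc="$4"
  shift 4                               # everything left is context files
  printf '%s\n' "$feature_desc" > "$output_file"
  local f
  for f in "$@"; do
    [ -f "$f" ] && cat "$f" >> "$output_file"
  done
  return 0
}

tmp=$(mktemp -d)
echo "ctx one" > "$tmp/a.txt"
echo "ctx two" > "$tmp/b.txt"
render_sketch "$tmp/out.md" "agent.md" "true" "My feature" "$tmp/a.txt" "$tmp/b.txt"
```

Callers that previously passed a single optional context file keep working unchanged, since `"$@"` is simply empty or length-one in that case.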
Code Draft
--- a/src/cli/planner/pipeline.sh
+++ b/src/cli/planner/pipeline.sh
@@ -244,9 +244,11 @@ _planner_acw_run() {
# ── Prompt rendering ──
-# Render a prompt by concatenating agent base prompt, optional plan-guideline, and context
-# Usage: _planner_render_prompt <output-file> <agent-md-path> <include-plan-guideline> <feature-desc> [context-file]
+# Render a prompt by concatenating agent base prompt, optional plan-guideline, feature desc, and context files
+# Usage: _planner_render_prompt <output-file> <agent-md-path> <include-plan-guideline> <feature-desc> [context-file...]
_planner_render_prompt() {
local output_file="$1"
local agent_md="$2"
local include_plan_guideline="$3"
local feature_desc="$4"
- local context_file="${5:-}"
+ shift 4
+ local -a context_files=("$@")
local repo_root="${AGENTIZE_HOME:-$(git rev-parse --show-toplevel 2>/dev/null)}"
if [ -z "$repo_root" ] || [ ! -d "$repo_root" ]; then
@@ -287,13 +289,21 @@ _planner_render_prompt() {
echo "" >> "$output_file"
echo "$feature_desc" >> "$output_file"
- # Append context from previous stage if provided
- if [ -n "$context_file" ] && [ -f "$context_file" ]; then
- echo "" >> "$output_file"
- echo "---" >> "$output_file"
- echo "" >> "$output_file"
- echo "# Previous Stage Output" >> "$output_file"
- echo "" >> "$output_file"
- cat "$context_file" >> "$output_file"
- fi
+ # Append context from previous stages (variadic)
+ local context_idx=0
+ for context_file in "${context_files[@]}"; do
+ if [ -n "$context_file" ] && [ -f "$context_file" ]; then
+ echo "" >> "$output_file"
+ echo "---" >> "$output_file"
+ echo "" >> "$output_file"
+ if [ $context_idx -eq 0 ]; then
+ echo "# Previous Stage Output" >> "$output_file"
+ else
+ echo "# Additional Context ($((context_idx + 1)))" >> "$output_file"
+ fi
+ echo "" >> "$output_file"
+ cat "$context_file" >> "$output_file"
+ context_idx=$((context_idx + 1))
+ fi
+ done
return 0
}

Step 5: Add generic agent execution function
- File: `src/cli/planner/pipeline.sh`
- Changes: Add `_planner_exec_agent` function that encapsulates the render-acw-check pattern
Code Draft
--- a/src/cli/planner/pipeline.sh
+++ b/src/cli/planner/pipeline.sh
@@ -315,6 +315,52 @@ _planner_stage() {
echo "$@" >&2
}
+# Execute a single agent stage
+# Usage: _planner_exec_agent <name> <agent-md> <backend> <tools> <permission-mode> <plan-guideline> <input-path> <output-path> <feature-desc> [context-file...]
+_planner_exec_agent() {
+ local name="$1"
+ local agent_md="$2"
+ local backend="$3"
+ local tools="$4"
+ local permission_mode="$5"
+ local plan_guideline="$6"
+ local input_path="$7"
+ local output_path="$8"
+ local feature_desc="$9"
+ shift 9
+ local -a context_files=("$@")
+
+ # Render prompt with multiple context files
+ if ! _planner_render_prompt "$input_path" "$agent_md" "$plan_guideline" "$feature_desc" "${context_files[@]}"; then
+ echo "Error: ${name} prompt rendering failed" >&2
+ return 2
+ fi
+
+ # Execute agent via acw
+ _planner_acw_run "$backend" "$input_path" "$output_path" "$tools" "$permission_mode"
+ local exit_code=$?
+
+ if [ $exit_code -ne 0 ] || [ ! -s "$output_path" ]; then
+ echo "Error: ${name} stage failed (exit code: $exit_code)" >&2
+ return 2
+ fi
+
+ return 0
+}

Step 6: Add inline Python pipeline parser function
- File: `src/cli/planner/pipeline.sh`
- Changes: Add `_planner_load_pipeline` function using heredoc Python (matching the existing `_planner_load_backend_config` pattern)
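Before the diff, a small sketch of the line protocol the parser is expected to emit and how the shell side can consume it (the sample lines below are hand-written, not produced by the real parser):

```shell
#!/usr/bin/env bash
# One STAGE: header, one AGENT: line per agent, then STAGE_END.
commands='STAGE:Stage 1/5: Running understander:1
AGENT:understander|agents/understander.md|claude:sonnet|Read,Grep,Glob||false|
STAGE_END'

while IFS= read -r line; do
  case "$line" in
    STAGE:*)
      info="${line#STAGE:}"
      label="${info%:*}"   # strip trailing :<count>; label itself may contain colons
      count="${info##*:}"
      ;;
    AGENT:*)
      IFS='|' read -r name agent_md backend tools permission guideline inputs \
        <<< "${line#AGENT:}"
      ;;
    STAGE_END)
      echo "parsed stage '$label' ($count agent); agent $name -> $backend"
      ;;
  esac
done <<< "$commands"
```

Because `|` is the field separator, agent names and paths must not contain pipes; the comma-separated `inputs` field is empty here, matching a stage with no upstream context.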
Code Draft
--- a/src/cli/planner/pipeline.sh
+++ b/src/cli/planner/pipeline.sh
@@ -315,6 +315,80 @@ _planner_stage() {
echo "$@" >&2
}
+# Load and parse pipeline descriptor from YAML file
+# Usage: _planner_load_pipeline <yaml-path> <backend-overrides>
+# Outputs: Line-separated stage commands in format:
+# STAGE:<label>:<parallel-agent-count>
+# AGENT:<name>|<agent_md>|<backend>|<tools>|<permission>|<plan_guideline>|<inputs-comma-sep>
+# STAGE_END
+_planner_load_pipeline() {
+ local yaml_path="$1"
+ local backend_overrides="$2"
+ local repo_root="${AGENTIZE_HOME:-$(git rev-parse --show-toplevel 2>/dev/null)}"
+
+ PIPELINE_YAML_PATH="$yaml_path" \
+ PIPELINE_BACKENDS="$backend_overrides" \
+ PIPELINE_GLOBAL_BACKEND="${3:-}" \
+ python3 - <<'PY'
+import os
+import sys
+from pathlib import Path
+
+yaml_path = Path(os.environ.get("PIPELINE_YAML_PATH", ""))
+backend_overrides_str = os.environ.get("PIPELINE_BACKENDS", "")
+global_backend = os.environ.get("PIPELINE_GLOBAL_BACKEND", "")
+
+if not yaml_path.is_file():
+ print(f"Error: Pipeline file not found: {yaml_path}", file=sys.stderr)
+ sys.exit(1)
+
+# Parse backend overrides
+overrides = {}
+for line in backend_overrides_str.strip().split("\n"):
+ if "=" in line:
+ k, v = line.split("=", 1)
+ overrides[k.strip()] = v.strip()
+
+# Parse YAML
+try:
+ import yaml
+ with open(yaml_path) as f:
+ data = yaml.safe_load(f)
+except ImportError:
+ # Fallback to local_config_io
+ repo_root = Path(os.environ.get("AGENTIZE_HOME", Path.cwd()))
+ sys.path.insert(0, str(repo_root / ".claude-plugin"))
+ from lib.local_config_io import parse_yaml_file
+ data = parse_yaml_file(yaml_path)
+
+if not isinstance(data, dict) or "stages" not in data:
+ print("Error: Pipeline must have 'stages' key", file=sys.stderr)
+ sys.exit(1)
+
+for stage in data["stages"]:
+ label = stage.get("label", stage.get("name", "unknown"))
+ agents = stage.get("agents", [])
+ print(f"STAGE:{label}:{len(agents)}")
+
+ for agent in agents:
+ name = agent.get("name", "unknown")
+ agent_md = agent.get("agent_md", "")
+ backend_key = agent.get("backend_key", name)
+ default_backend = agent.get("default_backend", "claude:opus")
+ tools = agent.get("tools", "Read,Grep,Glob")
+ permission = agent.get("permission_mode", "")
+ plan_guideline = "true" if agent.get("plan_guideline", False) else "false"
+ inputs = ",".join(agent.get("inputs", []))
+
+ # Resolve backend: override > global > default
+ backend = overrides.get(backend_key) or global_backend or default_backend
+
+ print(f"AGENT:{name}|{agent_md}|{backend}|{tools}|{permission}|{plan_guideline}|{inputs}")
+
+ print("STAGE_END")
+PY
+}

Step 7: Add generic pipeline executor function
- File: `src/cli/planner/pipeline.sh`
- Changes: Add `_planner_exec_pipeline` that processes parsed stage commands and handles parallel execution
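The parallel branch boils down to a standard bash pattern: launch each agent in the background, collect PIDs, then wait on each and verify its output file. A self-contained sketch (with a trivial stand-in for the agent):

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)

fake_agent() {            # stand-in for _planner_exec_agent
  sleep 0.1
  echo "done: $1" > "$tmp/$1.txt"
}

names=(bold paranoia)
pids=()
for n in "${names[@]}"; do
  fake_agent "$n" &       # run each agent in the background
  pids+=($!)
done

all_ok=true
for i in "${!pids[@]}"; do
  wait "${pids[$i]}" || all_ok=false            # check exit status
  [ -s "$tmp/${names[$i]}.txt" ] || all_ok=false  # check non-empty output
done
echo "all_ok=$all_ok"
```

Waiting on each PID individually (rather than a bare `wait`) is what lets the executor attribute a failure to a specific agent.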
Code Draft
--- a/src/cli/planner/pipeline.sh
+++ b/src/cli/planner/pipeline.sh
@@ -395,6 +395,120 @@ _planner_load_pipeline() {
+}
+
+# Execute pipeline from parsed stage commands
+# Usage: _planner_exec_pipeline <pipeline-yaml> <prefix> <feature-desc> <backend-overrides> <global-backend> <verbose>
+_planner_exec_pipeline() {
+ local pipeline_yaml="$1"
+ local prefix="$2"
+ local feature_desc="$3"
+ local backend_overrides="$4"
+ local global_backend="$5"
+ local verbose="$6"
+
+ local repo_root="${AGENTIZE_HOME:-$(git rev-parse --show-toplevel 2>/dev/null)}"
+
+ # Get parsed commands from Python
+ local commands
+ commands=$(_planner_load_pipeline "$pipeline_yaml" "$backend_overrides" "$global_backend") || {
+ echo "Error: Pipeline parsing failed" >&2
+ return 1
+ }
+
+ local stage_count=0
+ local total_stages
+ total_stages=$(echo "$commands" | grep -c "^STAGE:" || echo 0)
+
+ # Track agent outputs for input resolution
+ declare -A agent_outputs
+
+ local current_label=""
+ local agents_in_stage=0
+ local -a pids=()
+ local -a agent_names=()
+ local -a agent_output_paths=()
+ local t_stage
+
+ while IFS= read -r line; do
+ case "$line" in
+ STAGE:*)
+ # Parse: STAGE:<label>:<agent_count>
+ local stage_info="${line#STAGE:}"
+ current_label="${stage_info%:*}"
+ agents_in_stage="${stage_info##*:}"
+ stage_count=$((stage_count + 1))
+ t_stage=$(_planner_timer_start)
+ pids=()
+ agent_names=()
+ agent_output_paths=()
+ _planner_anim_start "$current_label"
+ ;;
+
+ AGENT:*)
+ # Parse: AGENT:<name>|<agent_md>|<backend>|<tools>|<permission>|<plan_guideline>|<inputs>
+ local agent_line="${line#AGENT:}"
+ IFS='|' read -r name agent_md backend tools permission plan_guideline inputs_str <<< "$agent_line"
+
+ local input_path="${prefix}-${name}-input.md"
+ local output_path="${prefix}-${name}.txt"
+
+ agent_names+=("$name")
+ agent_output_paths+=("$output_path")
+
+ # Resolve input files from previous agent outputs
+ local -a context_files=()
+ if [ -n "$inputs_str" ]; then
+ IFS=',' read -ra input_names <<< "$inputs_str"
+ for input_name in "${input_names[@]}"; do
+ if [ -n "${agent_outputs[$input_name]:-}" ]; then
+ context_files+=("${agent_outputs[$input_name]}")
+ fi
+ done
+ fi
+
+ if [ "$agents_in_stage" -eq 1 ]; then
+ # Sequential execution
+ _planner_exec_agent "$name" "$agent_md" "$backend" "$tools" "$permission" "$plan_guideline" \
+ "$input_path" "$output_path" "$feature_desc" "${context_files[@]}"
+ local exit_code=$?
+ _planner_anim_stop
+ if [ $exit_code -ne 0 ]; then
+ return $exit_code
+ fi
+ _planner_timer_log "$name" "$t_stage"
+ _planner_log "$verbose" " ${name} complete: $output_path"
+ else
+ # Parallel execution
+ _planner_exec_agent "$name" "$agent_md" "$backend" "$tools" "$permission" "$plan_guideline" \
+ "$input_path" "$output_path" "$feature_desc" "${context_files[@]}" &
+ pids+=($!)
+ fi
+ ;;
+
+ STAGE_END)
+ # Wait for parallel agents
+ if [ "$agents_in_stage" -gt 1 ] && [ ${#pids[@]} -gt 0 ]; then
+ local all_success=true
+ for i in "${!pids[@]}"; do
+ wait "${pids[$i]}" || all_success=false
+ local aout="${agent_output_paths[$i]}"
+ if [ ! -s "$aout" ]; then
+ all_success=false
+ fi
+ done
+ _planner_anim_stop
+ if [ "$all_success" != "true" ]; then
+ echo "Error: One or more agents in stage failed" >&2
+ return 2
+ fi
+ _planner_timer_log "${current_label}" "$t_stage"
+ for i in "${!agent_names[@]}"; do
+ _planner_log "$verbose" " ${agent_names[$i]} complete: ${agent_output_paths[$i]}"
+ done
+ fi
+
+ # Record outputs for downstream input resolution
+ for i in "${!agent_names[@]}"; do
+ agent_outputs["${agent_names[$i]}"]="${agent_output_paths[$i]}"
+ done
+ _planner_log "$verbose" ""
+ ;;
+ esac
+ done <<< "$commands"
+
+ return 0
+}

Step 8: Refactor _planner_run_pipeline to use the generic executor
- File: `src/cli/planner/pipeline.sh`
- Changes: Replace hardcoded stage logic with a call to `_planner_exec_pipeline`, keeping the consensus stage as a post-pipeline step
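The backend handling this step introduces can be sketched in isolation: split `key=value` config lines into a global backend plus per-agent overrides, then resolve with the precedence override > global > default (the backend strings below are illustrative, not the project's actual configuration):

```shell
#!/usr/bin/env bash
backend_config='backend=claude:sonnet
critique=claude:opus'

global_backend="" backend_overrides=""
while IFS='=' read -r key value; do
  [ -z "$key" ] && continue
  if [ "$key" = "backend" ]; then
    global_backend="$value"
  else
    backend_overrides="${backend_overrides}${key}=${value}"$'\n'
  fi
done <<< "$backend_config"

# usage: resolve <backend_key> <default_backend>
resolve() {
  local override
  override=$(printf '%s' "$backend_overrides" | awk -F= -v k="$1" '$1 == k { print $2 }')
  echo "${override:-${global_backend:-$2}}"
}

resolve critique claude:haiku   # per-agent override wins
resolve reducer  claude:haiku   # falls back to the global backend
```

In the real plan this resolution happens inside the inline Python parser (Step 6); the shell version here only demonstrates the precedence rule.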
Code Draft
--- a/src/cli/planner/pipeline.sh
+++ b/src/cli/planner/pipeline.sh
@@ -320,6 +320,8 @@ _planner_stage() {
# Run the full multi-agent debate pipeline
# Usage: _planner_run_pipeline "<feature-description>" [issue-mode] [verbose] [refine-issue-number] [pipeline-type]
+# pipeline-type: "ultra" (default) or "mega"
_planner_run_pipeline() {
local feature_desc="$1"
local issue_mode="${2:-true}"
local verbose="${3:-false}"
local refine_issue_number="${4:-}"
+ local pipeline_type="${5:-ultra}"
local repo_root="${AGENTIZE_HOME:-$(git rev-parse --show-toplevel 2>/dev/null)}"
if [ -z "$repo_root" ] || [ ! -d "$repo_root" ]; then
echo "Error: Could not determine repo root. Set AGENTIZE_HOME or run inside a git repo." >&2
return 1
fi
+
+ # Select pipeline YAML
+ local pipeline_yaml="$repo_root/src/cli/planner/pipelines/${pipeline_type}.yaml"
+ if [ ! -f "$pipeline_yaml" ]; then
+ echo "Error: Pipeline descriptor not found: $pipeline_yaml" >&2
+ return 1
+ fi
+
local timestamp
timestamp=$(date +%Y%m%d-%H%M%S)
@@ -385,12 +395,14 @@ _planner_run_pipeline() {
local config_start_dir="${PWD:-$(pwd)}"
- local planner_backend=""
- local planner_understander=""
- local planner_bold=""
- local planner_critique=""
- local planner_reducer=""
+ local global_backend=""
+ local backend_overrides=""
local backend_config
backend_config=$(_planner_load_backend_config "$repo_root" "$config_start_dir") || return 1
if [ -n "$backend_config" ]; then
while IFS='=' read -r key value; do
- case "$key" in
- backend)
- planner_backend="$value"
- ;;
- understander)
- planner_understander="$value"
- ;;
- bold)
- planner_bold="$value"
- ;;
- critique)
- planner_critique="$value"
- ;;
- reducer)
- planner_reducer="$value"
- ;;
- esac
+ if [ "$key" = "backend" ]; then
+ global_backend="$value"
+ else
+ backend_overrides="${backend_overrides}${key}=${value}"$'\n'
+ fi
done <<< "$backend_config"
fi
- if ! _planner_validate_backend "$planner_backend" "planner.backend"; then
+ # Validate global backend if set
+ if ! _planner_validate_backend "$global_backend" "planner.backend"; then
return 1
fi
- if ! _planner_validate_backend "$planner_understander" "planner.understander"; then
- return 1
- fi
- # ... (remove other individual backend validations)
_planner_stage "Starting multi-agent debate pipeline..."
_planner_print_feature "$feature_desc"
_planner_log "$verbose" "Artifacts prefix: ${prefix_name}"
+ _planner_log "$verbose" "Pipeline: ${pipeline_type}"
_planner_log "$verbose" ""
- # ── Stage 1: Understander ──
- # ... (DELETE ~110 lines of hardcoded stage logic, lines 460-569)
+ # Execute pipeline from YAML descriptor
+ if ! _planner_exec_pipeline "$pipeline_yaml" "$prefix" "$feature_desc" "$backend_overrides" "$global_backend" "$verbose"; then
+ return 2
+ fi
# ── Final Stage: External Consensus ──
+ # (Kept as special post-pipeline step - not part of YAML descriptor)
local t_consensus
t_consensus=$(_planner_timer_start)
_planner_anim_start "Final Stage: Running external consensus synthesis"
local consensus_script="${_PLANNER_CONSENSUS_SCRIPT:-$repo_root/.claude-plugin/skills/external-consensus/scripts/external-consensus.sh}"
if [ ! -f "$consensus_script" ]; then
_planner_anim_stop
echo "Error: Consensus script not found: $consensus_script" >&2
return 2
fi
+ # Get output paths for consensus inputs based on pipeline type
+ local bold_output="${prefix}-bold.txt"
local critique_output="${prefix}-critique.txt"
local reducer_output="${prefix}-reducer.txt"
+ local consensus_inputs=("$bold_output" "$critique_output" "$reducer_output")
+
+ if [ "$pipeline_type" = "mega" ]; then
+ local paranoia_output="${prefix}-paranoia.txt"
+ local code_reducer_output="${prefix}-code-reducer.txt"
+ consensus_inputs=("$bold_output" "$paranoia_output" "$critique_output" "$reducer_output" "$code_reducer_output")
+ fi
local consensus_path
- consensus_path=$("$consensus_script" "$bold_output" "$critique_output" "$reducer_output" | tail -n 1)
+ consensus_path=$("$consensus_script" "${consensus_inputs[@]}" | tail -n 1)
local consensus_exit=$?
_planner_anim_stop
+ # ... (rest of consensus handling and issue publishing unchanged)

Step 9: Update existing pipeline test
- File: `tests/cli/test-lol-plan-pipeline-stubbed.sh`
- Changes: Add verification that the generic executor runs and that the pipeline YAML descriptors load and validate
Code Draft
--- a/tests/cli/test-lol-plan-pipeline-stubbed.sh
+++ b/tests/cli/test-lol-plan-pipeline-stubbed.sh
@@ -1,6 +1,6 @@
#!/usr/bin/env bash
# Test: Pipeline flow with stubbed acw and consensus script
-# Tests YAML-based backend overrides plus default (quiet) and --verbose modes via lol plan
+# Tests YAML-driven pipeline execution, backend overrides, and parallel execution
source "$(dirname "$0")/../common.sh"
@@ -137,4 +137,32 @@ echo "$output_verbose" | grep -q "Stage" || {
test_fail "Verbose output should include stage progress"
}
+# ── Test 3: Verify pipeline YAML exists and is valid ──
+test_info "Verifying pipeline YAML descriptors"
+
+ULTRA_PIPELINE="$PROJECT_ROOT/src/cli/planner/pipelines/ultra.yaml"
+MEGA_PIPELINE="$PROJECT_ROOT/src/cli/planner/pipelines/mega.yaml"
+
+if [ ! -f "$ULTRA_PIPELINE" ]; then
+ test_fail "Ultra pipeline descriptor not found: $ULTRA_PIPELINE"
+fi
+
+if [ ! -f "$MEGA_PIPELINE" ]; then
+ test_fail "Mega pipeline descriptor not found: $MEGA_PIPELINE"
+fi
+
+# Verify pipeline descriptors are valid YAML with required structure
+python3 -c "
+import sys
+sys.path.insert(0, '$PROJECT_ROOT/.claude-plugin')
+from lib.local_config_io import parse_yaml_file
+from pathlib import Path
+
+for pipeline in ['$ULTRA_PIPELINE', '$MEGA_PIPELINE']:
+ data = parse_yaml_file(Path(pipeline))
+ assert 'stages' in data, f'Missing stages in {pipeline}'
+ assert len(data['stages']) >= 3, f'Expected at least 3 stages in {pipeline}'
+" || test_fail "Failed to validate pipeline descriptors"
+
test_pass "Pipeline generates all stage artifacts with stubbed acw and consensus"

Success Criteria
- `_planner_render_prompt` accepts 0-N context files via variadic arguments
- `_planner_exec_agent` encapsulates the render-acw-check pattern
- `_planner_load_pipeline` parses external YAML using an inline Python heredoc
- `_planner_exec_pipeline` executes stages with parallel agent support
- Pipeline YAML files exist: `src/cli/planner/pipelines/ultra.yaml` and `src/cli/planner/pipelines/mega.yaml`
- All existing tests pass: `TEST_SHELLS="bash zsh" make test-fast`
- New test `test-planner-render-prompt-multi.sh` verifies multi-input concatenation
- Pipeline still produces correct acw calls (4 for ultra, 5+ for mega)
- Backend override from `.agentize.local.yaml` still applies
- Consensus stage still calls the external script and publishes to the issue
Risks and Mitigations
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| Breaking `_planner_render_prompt` callers | Low | Medium | No external callers found; only used within pipeline.sh |
| Variadic `shift 9` in exec_agent breaks with special chars | Low | Low | File paths don't contain special chars in this project |
| Pipeline YAML parsing fails on edge cases | Medium | Medium | Validate YAML structure in inline Python; fall back to local_config_io |
| File discovery adds complexity | Low | Low | Clear directory convention (`pipelines/`); validated at startup |
| Test flakiness from parallel execution | Medium | Low | Use PLANNER_NO_ANIM=1 and deterministic output checking |
Selection History
| Timestamp | Disagreement | Options Summary | Selected Option | User Comments |
|---|---|---|---|---|
| 2026-02-01 21:41 | 1: Pipeline Descriptor Storage | 1A: Embedded heredoc; 1B: External YAML files; 1C: Hybrid | 1B (Bold) | - |
| 2026-02-01 21:41 | 2: Python Parser Location | 2A: Inline heredoc; 2B: Separate module | 2A (Paranoia) | - |
| 2026-02-01 21:41 | 3: Multiple Input Files Interface | 3A: Variadic; 3B: Comma-separated | 3A (Bold/Critique) | - |
Refine History
| Timestamp | Summary |
|---|---|
| 2026-02-01 21:41 | Initial plan generation |
| 2026-02-01 21:41 | Resolved disagreements with selection 1B+2A+3A |
Option Compatibility Check
Status: VALIDATED
All selected options are architecturally compatible:
- Option 1B (external YAML files) works with Option 2A (inline Python heredoc) - the heredoc parses the external files
- Option 3A (variadic arguments) is independent of pipeline descriptor storage
- The combination provides flexible configuration (external YAML) with minimal infrastructure (no separate Python module)
- Consensus stage remains as special post-pipeline code, preserving unique behavior