# Root Cause Analysis (RCA) for PlanExe Reports

> **Historical note:** This spec was written under the name "flaw tracer". The module
> has been renamed to `rca` (root cause analysis).
> The static DAG registry described here has since been replaced by `extract_dag.py`
> which introspects the Luigi task graph at import time.

## Goal

A CLI tool that takes a PlanExe output directory, a starting file, and a problem description, then recursively traces the problem upstream through the DAG of intermediary files to find where it originated. It produces both JSON and markdown output, and is built on PlanExe's existing LLM infrastructure so it can eventually become a pipeline stage.
| 11 | + |
| 12 | +## Architecture |
| 13 | + |
| 14 | +The tool performs a recursive depth-first search through the pipeline DAG. Starting from a downstream file where a problem is observed, it walks upstream one hop at a time — reading input files, asking an LLM whether the problem or a precursor exists there, and continuing until it reaches a node where the problem exists in the output but not in any inputs. At that origin point, it reads the node's source code to identify the likely cause. |
| 15 | + |
| 16 | +Three LLM prompts drive the analysis: problem identification (once at the start), upstream checking (at each hop), and source code analysis (at each origin). All use Pydantic models for structured output and LLMExecutor for fallback resilience. |

## Components

```
worker_plan/worker_plan_internal/rca/
  __init__.py
  __main__.py — CLI entry point (argparse, LLM setup, orchestration)
  registry.py — Static DAG mapping: stages, output files, dependencies, source code paths
  tracer.py — Recursive tracing algorithm
  prompts.py — Pydantic models and LLM prompt templates
  output.py — JSON + markdown report generation
```

### `registry.py` — DAG Mapping

A static Python data structure mapping the full pipeline topology. Each entry describes one pipeline stage:

```python
from dataclasses import dataclass

@dataclass
class NodeInfo:
    name: str                    # e.g., "potential_levers"
    output_files: list[str]      # e.g., ["002-9-potential_levers_raw.json", "002-10-potential_levers.json"]
    inputs: list[str]            # e.g., ["setup", "identify_purpose", "plan_type", "extract_constraints"]
    source_code_files: list[str] # Relative to worker_plan/, e.g., ["worker_plan_internal/plan/stages/potential_levers.py", "worker_plan_internal/lever/identify_potential_levers.py"]
```

The registry covers all ~48 pipeline stages. Key functions:

- `find_node_by_filename(filename: str) -> NodeInfo | None` — Given an output filename, return the stage that produced it.
- `get_upstream_files(stage_name: str, output_dir: Path) -> list[tuple[str, Path]]` — Return `(stage_name, file_path)` pairs for all upstream stages, resolved against the output directory. Skip files that don't exist on disk. When a stage has multiple output files (e.g., both `_raw.json` and `.json`), prefer the clean/processed file since that's what downstream stages consume. If only the raw file exists, use that.
- `get_source_code_paths(stage_name: str) -> list[Path]` — Return absolute paths to source code files for a stage.

The mapping is derived from the Luigi task classes (`requires()` and `output()` methods) but hard-coded for reliability. When the pipeline changes, this file needs updating.

### `prompts.py` — Pydantic Models and Prompt Templates

Three Pydantic models for structured LLM output:

```python
from typing import Literal

from pydantic import BaseModel, Field

class IdentifiedProblem(BaseModel):
    description: str = Field(description="One-sentence description of the problem")
    evidence: str = Field(description="Direct quote from the file demonstrating the problem")
    severity: Literal["HIGH", "MEDIUM", "LOW"] = Field(
        description="HIGH: fabricated data or missing critical analysis. MEDIUM: weak reasoning or vague claims. LOW: minor gaps."
    )

class ProblemIdentificationResult(BaseModel):
    problems: list[IdentifiedProblem] = Field(description="List of discrete problems found in the file")

class UpstreamCheckResult(BaseModel):
    found: bool = Field(description="True if this file contains the problem or a precursor to it")
    evidence: str | None = Field(default=None, description="Direct quote from the file if found, null otherwise")
    explanation: str = Field(description="How this connects to the downstream problem, or why this file is clean")

class SourceCodeAnalysisResult(BaseModel):
    likely_cause: str = Field(description="What in the prompt or logic likely caused the problem")
    relevant_code_section: str = Field(description="The specific code or prompt text responsible")
    suggestion: str = Field(description="How to fix or prevent this problem")
```

Three prompt-building functions, each returning a `list[ChatMessage]`:

**`build_problem_identification_messages(filename, file_content, user_problem_description)`**

System message:
```
You are analyzing an intermediary file from a project planning pipeline.
The user has identified problems in this output. Identify each discrete problem.
For each problem, provide a short description, a direct quote as evidence, and a severity level.
Only identify real problems — do not flag stylistic preferences or minor formatting issues.
```

User message contains the filename, file content, and the user's problem description.
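
The first builder can be sketched as follows, using plain role/content dicts in place of the real `ChatMessage` type; the exact user-message layout is an assumption:

```python
# System prompt text from the spec above.
PROBLEM_IDENTIFICATION_SYSTEM = (
    "You are analyzing an intermediary file from a project planning pipeline. "
    "The user has identified problems in this output. Identify each discrete problem. "
    "For each problem, provide a short description, a direct quote as evidence, and a "
    "severity level. Only identify real problems - do not flag stylistic preferences "
    "or minor formatting issues."
)

def build_problem_identification_messages(
    filename: str, file_content: str, user_problem_description: str
) -> list[dict]:
    """Assemble the Phase 1 prompt; dicts stand in for ChatMessage objects."""
    user = (
        f"File: {filename}\n\n"
        f"User's problem description:\n{user_problem_description}\n\n"
        f"File content:\n{file_content}"
    )
    return [
        {"role": "system", "content": PROBLEM_IDENTIFICATION_SYSTEM},
        {"role": "user", "content": user},
    ]
```

The other two builders follow the same shape with their respective system prompts.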

**`build_upstream_check_messages(problem_description, evidence_quote, upstream_filename, upstream_file_content)`**

System message:
```
You are tracing a problem through a project planning pipeline to find where it originated.
A downstream file contains a problem. You are examining an upstream file that was an input
to the stage that produced the problematic output. Determine if this upstream file contains
the same problem or a precursor to it.
```

User message contains the problem details and the upstream file content.

**`build_source_code_analysis_messages(problem_description, evidence_quote, source_code_contents)`**

System message:
```
A problem was introduced at this pipeline stage. The problem exists in its output but NOT
in any of its inputs. Examine the source code to identify what in the prompt text,
logic, or processing likely caused this problem. Be specific — point to lines or prompt phrases.
```

User message contains the problem details and the concatenated source code.

### `tracer.py` — Recursive Tracing Algorithm

```python
class RootCauseAnalyzer:
    def __init__(self, output_dir: Path, llm_executor: LLMExecutor, source_code_base: Path, max_depth: int = 15, verbose: bool = False):
        ...

    def trace(self, starting_file: str, problem_description: str) -> RCAResult:
        """Main entry point. Returns the complete trace result."""
        ...
```

The `trace` method implements three phases:

**Phase 1 — Identify problems.**
Read the starting file. Build the problem identification prompt with the file content and the user's description. Call the LLM via `LLMExecutor.run()` using `llm.as_structured_llm(ProblemIdentificationResult)`. The call returns a list of `IdentifiedProblem` objects.

**Phase 2 — Recursive upstream trace.**
For each identified problem, call `_trace_upstream(problem, node_name, current_file, depth)`:

1. Look up the current node's upstream nodes via the registry.
2. For each upstream node, resolve its output files on disk.
3. Read each upstream file. Build the upstream check prompt. Call the LLM.
4. If `found=True`: append to the trace chain and recurse into that node's upstream dependencies.
5. If `found=False`: this branch is clean, stop.
6. If depth reaches `max_depth`: stop and mark the trace as incomplete.

**Deduplication:** Track which `(node_name, problem_description)` pairs have already been analyzed. If two problems converge on the same upstream file, reuse the earlier result.

**Multiple upstream branches:** When a node has multiple upstream inputs and the problem is found in more than one, follow all branches. The trace can fork — the JSON output represents this as a list of trace entries per problem (each entry has a node and file), ordered from downstream to upstream.

**Phase 3 — Source code analysis at origin.**
When a problem is found in a node's output but not in any of its inputs, that node is the origin. Read the source code files for that node (via registry). Build the source code analysis prompt. Call the LLM. Attach the result to the problem's origin data.
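
The recursion in Phases 2 and 3 can be condensed into a sketch, with stand-in callables replacing the registry lookup and the LLM upstream check (both signatures are illustrative), and deduplication simplified to skipping already-visited pairs:

```python
from typing import Callable

def trace_upstream(
    node: str,
    problem: str,
    get_upstream: Callable[[str], list[str]],
    check_upstream: Callable[[str, str], bool],
    visited: set[tuple[str, str]],
    depth: int = 0,
    max_depth: int = 15,
) -> list[str]:
    """Return the chain of problem-bearing upstream nodes, downstream to upstream."""
    if depth >= max_depth:
        return []  # caller marks the trace as incomplete
    key = (node, problem)
    if key in visited:
        return []  # deduplication: converging branches skip repeated analysis
    visited.add(key)
    chain: list[str] = []
    for upstream in get_upstream(node):
        # One LLM upstream check per input file; found=True means recurse.
        if check_upstream(upstream, problem):
            chain.append(upstream)
            chain.extend(
                trace_upstream(upstream, problem, get_upstream, check_upstream,
                               visited, depth + 1, max_depth)
            )
    # If no upstream input contained the problem, `node` is the origin and
    # Phase 3 (source code analysis) runs there.
    return chain
```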

### `output.py` — Report Generation

Two functions:

**`write_json_report(result: RCAResult, output_path: Path)`**

Writes the full trace as JSON:

```json
{
  "input": {
    "starting_file": "030-report.html",
    "problem_description": "...",
    "output_dir": "/path/to/output",
    "timestamp": "2026-04-05T14:30:00Z"
  },
  "problems": [
    {
      "id": "problem_001",
      "description": "Budget of CZK 500,000 is unvalidated",
      "severity": "HIGH",
      "starting_evidence": "quote from starting file...",
      "trace": [
        {
          "node": "executive_summary",
          "file": "025-2-executive_summary.md",
          "evidence": "...",
          "is_origin": false
        },
        {
          "node": "project_plan",
          "file": "005-2-project_plan.md",
          "evidence": "...",
          "is_origin": false
        },
        {
          "node": "make_assumptions",
          "file": "003-5-make_assumptions.md",
          "evidence": "...",
          "is_origin": true
        }
      ],
      "origin": {
        "node": "make_assumptions",
        "file": "003-5-make_assumptions.md",
        "source_code_files": ["stages/make_assumptions.py", "assumption/make_assumptions.py"],
        "likely_cause": "The prompt asks the LLM to...",
        "suggestion": "Add a validation step that..."
      },
      "depth": 3
    }
  ],
  "summary": {
    "total_problems": 3,
    "deepest_origin_node": "make_assumptions",
    "deepest_origin_depth": 3,
    "llm_calls_made": 12
  }
}
```
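
A minimal sketch of the writer, assuming the `RCAResult` has already been flattened to a plain dict matching the schema above; `render_json_report` is a hypothetical helper split out for testability:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def render_json_report(result: dict) -> str:
    """Serialize the trace, stamping the run time if absent."""
    result.setdefault("input", {}).setdefault(
        "timestamp", datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    )
    return json.dumps(result, indent=2)

def write_json_report(result: dict, output_path: Path) -> None:
    """Write the full trace as pretty-printed JSON."""
    output_path.write_text(render_json_report(result), encoding="utf-8")
```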

**`write_markdown_report(result: RCAResult, output_path: Path)`**

Writes a human-readable report:

```markdown
# Root Cause Analysis Report

**Input:** 030-report.html
**Problems found:** 3
**Deepest origin:** make_assumptions (depth 3)

---

## Problem 1 (HIGH): Budget of CZK 500,000 is unvalidated

**Trace:** executive_summary -> project_plan -> **make_assumptions** (origin)

| Node | File | Evidence |
|------|------|----------|
| executive_summary | 025-2-executive_summary.md | "The budget is CZK 500,000..." |
| project_plan | 005-2-project_plan.md | "Estimated budget: CZK 500,000..." |
| **make_assumptions** | 003-5-make_assumptions.md | "Assume total budget..." |

**Root cause:** The prompt asks the LLM to generate budget assumptions
without requiring external data sources...

**Suggestion:** Add a validation step that...
```

Problems are sorted by depth (deepest origin first) so the most upstream root cause appears at the top.

### `__main__.py` — CLI Entry Point

```
python -m worker_plan_internal.rca \
    --dir /path/to/output \
    --file 030-report.html \
    --problem "The budget is CZK 500,000 but this number appears unvalidated..." \
    --output-dir /path/to/output \
    --max-depth 15 \
    --verbose
```

Arguments:
- `--dir` (required): Path to the output directory containing intermediary files.
- `--file` (required): Starting file to analyze, relative to `--dir`.
- `--problem` (required): Text description of the observed problem(s).
- `--output-dir` (optional): Where to write `root_cause_analysis.json` and `root_cause_analysis.md`. Defaults to `--dir`.
- `--max-depth` (optional): Maximum upstream hops per problem. Default 15.
- `--verbose` (optional): Print each LLM call and result to stderr as the trace runs.

Orchestration:
1. Parse arguments.
2. Load model profile via `PlanExeLLMConfig.load()` and create `LLMExecutor` with priority-ordered models from the profile.
3. Create `RootCauseAnalyzer` instance.
4. Call `analyzer.trace(starting_file, problem_description)`.
5. Write JSON and markdown reports via `output.py`.
6. Print summary to stdout.

## LLM Infrastructure Integration

- **LLMExecutor** with `LLMModelFromName.from_names()` for multi-model fallback.
- **Pydantic models** with `llm.as_structured_llm()` for all three prompt types.
- **Model profile** loaded from the `PLANEXE_MODEL_PROFILE` environment variable (defaults to baseline).
- **RetryConfig** with defaults (2 retries, exponential backoff) for transient errors.
- **`max_validation_retries=1`** to allow one structured-output retry with feedback on parse failure.

## Scope Boundaries

**In scope:**
- CLI tool with `--dir`, `--file`, `--problem`, `--output-dir`, `--max-depth`, `--verbose`.
- Static registry of all ~48 pipeline stages with dependencies and source code paths.
- Recursive depth-first upstream tracing with three LLM prompt types.
- JSON + markdown output sorted by trace depth.
- Source code analysis only at origin stages (lazy evaluation).
- Full file contents sent to the LLM (no chunking or summarization).

**Out of scope (future work):**
- Library/module API (CLI first, refactor later).
- Integration as a Luigi pipeline stage.
- Approach B (full reverse-topological sweep).
- Approach C (scout-then-trace optimization).
- Automatic registry generation from Luigi task introspection.
- UI/web integration.