<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://opellen.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://opellen.github.io/" rel="alternate" type="text/html" /><updated>2026-04-15T22:39:19+09:00</updated><id>https://opellen.github.io/feed.xml</id><title type="html">Opellen</title><subtitle>Open-source tools for AI-assisted development.</subtitle><entry><title type="html">Using subagents as parallel perspectives on design decisions</title><link href="https://opellen.github.io/2026/04/11/subagent-parallel-perspectives.html" rel="alternate" type="text/html" title="Using subagents as parallel perspectives on design decisions" /><published>2026-04-11T00:00:00+09:00</published><updated>2026-04-11T00:00:00+09:00</updated><id>https://opellen.github.io/2026/04/11/subagent-parallel-perspectives</id><content type="html" xml:base="https://opellen.github.io/2026/04/11/subagent-parallel-perspectives.html"><![CDATA[<p>Most of us use Claude Code subagents for parallel <em>work</em> — search this,
refactor that, concurrent execution. Lately I’ve been using them for
something different: parallel <em>perspectives</em> on the same problem.</p>
<p><img src="/assets/images/posts/circuit_surfers_illustration.png" alt="Illustration of the parallel subagent workflow" /></p>
<p>If you’ve discussed design decisions with the main agent long enough, you’ve hit this: the agent gets buried in the context, or the conversation grows long and it just starts agreeing with you. Either way, fresh angles stop coming.</p>
<p>Subagents don’t share that context. They come in cold — no history of the conversation, no accumulated framing — and that’s exactly why they work here.</p>
<hr />
<p>The trigger for me was designing the commands for Scaff, a lightweight AI harness I’ve been building as a side project.</p>
<p>I’d frequently hit walls while discussing the design with the main agent. When I asked for its take, it either couldn’t break out of the framing it had built up so far, or it just drifted toward accommodating whatever I wanted.</p>
<p>So I asked the main agent to do this:</p>
<ol>
<li>“Log the current discussion to <code class="language-plaintext highlighter-rouge">docs/discussion/&lt;topic&gt;.md</code>.”</li>
<li>
<p>“Spawn 3 subagents with different perspectives — LLM, software architect, end-user. Write each take to <code class="language-plaintext highlighter-rouge">...-round-1-&lt;perspective&gt;.md</code>.”</p>
<p>This produces:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docs/discussion/&lt;topic&gt;-round-1-llm.md
docs/discussion/&lt;topic&gt;-round-1-architect.md
docs/discussion/&lt;topic&gt;-round-1-enduser.md
</code></pre></div> </div>
</li>
<li>Resume the discussion with the main agent using the round-1 files as fresh inputs.</li>
<li>If it converges, ask the main agent to update the original topic doc.</li>
<li>If open issues remain, repeat the same cycle as <code class="language-plaintext highlighter-rouge">...-round-2.md</code>, <code class="language-plaintext highlighter-rouge">...-round-3.md</code> — until it actually closes.</li>
</ol>
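<p>For concreteness, the round and perspective naming convention from step 2 can be expressed as a small helper. This is a hypothetical sketch of the convention, not part of Scaff itself:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def discussion_paths(topic, round_no, perspectives=("llm", "architect", "enduser")):
    """Build the per-perspective discussion file paths for one round."""
    return [f"docs/discussion/{topic}-round-{round_no}-{p}.md" for p in perspectives]
</code></pre></div></div>

<p>Each new round just bumps <code class="language-plaintext highlighter-rouge">round_no</code>, so earlier rounds stay on disk untouched.</p>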
<p>The key is the three parallel perspectives in step 2.</p>
<ul>
<li>LLM perspective: “Is this easy for the model to parse?”</li>
<li>Architect perspective: “Will this design still hold up in a year?”</li>
<li>End-user perspective: “Can I understand this in five minutes?”</li>
</ul>
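<p>As a sketch of how those three cold starts might be set up — the prompt wording here is illustrative, not Scaff’s actual prompts:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>PERSPECTIVES = {
    "llm": "Is this easy for the model to parse?",
    "architect": "Will this design still hold up in a year?",
    "enduser": "Can I understand this in five minutes?",
}

def subagent_prompt(perspective, discussion_log):
    """Compose a cold-start prompt: the logged discussion is the only
    context the subagent receives, so no accumulated framing leaks in."""
    question = PERSPECTIVES[perspective]
    return (
        f"Review the design discussion below from the {perspective} perspective. "
        f"Your guiding question: {question} "
        f"Give an independent take.\n\n{discussion_log}"
    )
</code></pre></div></div>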
<p>The three subagents come at the problem from genuinely different angles. More detail below.</p>
<p>The side effect is great too.</p>
<p>The <code class="language-plaintext highlighter-rouge">...-round-1.md</code>, <code class="language-plaintext highlighter-rouge">...-round-2.md</code> files persist on disk, so the discussion doesn’t evaporate when the conversation ends — it <em>accumulates as documentation</em>. Three days later when you ask “wait, why did I settle on this?” — the answer is right there.</p>
<p>Leaving a trace of decisions — that alone makes the pattern worth using.</p>
<p>Here’s how it played out on one design question. I can’t include everything, but I’ve excerpted the moments that were most interesting.</p>
<hr />
<p><strong>Topic:</strong> Should we introduce a separate command for loading <code class="language-plaintext highlighter-rouge">OVERVIEW.md</code>?</p>
<p><strong>User:</strong> What if the <code class="language-plaintext highlighter-rouge">/scaff:scout</code> command auto-loaded <code class="language-plaintext highlighter-rouge">OVERVIEW.md</code> — a project-level overview doc — on every session start? What do you think?</p>
<p><strong>Main agent:</strong> Good direction. Having the overview available from the start would give better architectural grounding for every task.</p>
<p><strong>User:</strong> But scout is often the starting point for a work session, and OVERVIEW.md can get long. End users might feel Scaff is heavier than its philosophy suggests. What about a dedicated <code class="language-plaintext highlighter-rouge">/scaff:overview load</code> so it’s only pulled in when needed?</p>
<p><strong>Main agent:</strong> Good point. A <code class="language-plaintext highlighter-rouge">/scaff:overview load</code> subcommand that loads on demand would help save tokens.</p>
<p><strong>User:</strong> But for most other commands that work fine without it, that’s extra token cost.</p>
<p><strong>Main agent:</strong> If we just give <code class="language-plaintext highlighter-rouge">/scaff:scout</code> an affordance — let it know <code class="language-plaintext highlighter-rouge">docs/OVERVIEW.md</code> exists and let the agent decide whether to read it — that would solve the unnecessary token cost.</p>
<p><strong>User:</strong> But in my experience, the LLM will be biased toward reading <code class="language-plaintext highlighter-rouge">OVERVIEW.md</code> anyway.</p>
<p><strong>User:</strong> Let’s hear from the subagents. Log the current discussion to <code class="language-plaintext highlighter-rouge">docs/discussion/&lt;topic&gt;.md</code>. Then spawn LLM / software architect / end-user subagents and write each take to <code class="language-plaintext highlighter-rouge">...-round-1-&lt;perspective&gt;.md</code>.</p>
<p>The LLM perspective came back first:</p>
<blockquote>
<p><em>The bias is real, and the user has correctly diagnosed it.</em> LLMs trained on helpfulness are systematically biased toward “more context = better answer.” The model’s internal cost function treats a false negative (missed read) as far more painful than a false positive (unnecessary read). Skipping feels like negligence, reading feels like diligence.</p>
</blockquote>
<p>(…it walked through concrete examples and estimated that a soft “read it when relevant” rule would fire on 6–8 out of 10 typical invocations.)</p>
<blockquote>
<p>Gating must key off <em>literal tokens in the user’s input</em>, not the model’s interpretation of task semantics. Interpretation always drifts toward “read more.”</p>
</blockquote>
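<p>That recommendation translates into something like the following gate — the token list is hypothetical, chosen here only to show the shape of the idea, not what Scaff actually ships:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ARCHITECTURE_TOKENS = ("architecture", "overview", "big picture", "module boundary")

def should_suggest_overview(user_input):
    """Key the gate off literal tokens in the user's input, never off the
    model's interpretation of task semantics (which drifts toward reading)."""
    text = user_input.lower()
    return any(token in text for token in ARCHITECTURE_TOKENS)
</code></pre></div></div>

<p>A string match is dumb on purpose: it can’t rationalize its way into “this task sort of touches architecture, better read the doc.”</p>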
<p>The architect didn’t touch execution at all. It went after the document structure: <code class="language-plaintext highlighter-rouge">CONTEXT.md</code> is “what’s being worked on right now,” <code class="language-plaintext highlighter-rouge">OVERVIEW.md</code> is “big picture that rarely changes” — but read literally, that’s exactly what you’d want at session start.</p>
<p>(…it re-diagnosed this as a definitional collision, not a loading problem.)</p>
<blockquote>
<p>If OVERVIEW.md cannot answer “who reads me, when, and why,” the fix is to delete the role, not to invent a loader for it.</p>
</blockquote>
<p>Same topic, completely different layers. The LLM pointed at execution bias. The architect pointed at “should this document even exist?”</p>
<p>The end-user perspective came in from yet another angle — user behavior:</p>
<blockquote>
<p>A soft rule like “read OVERVIEW.md when the task touches architecture” sounds disciplined on paper, but in practice the LLM’s threshold for “touches architecture” is fuzzy and self-serving — when in doubt, it reads.</p>
</blockquote>
<p>(…after honestly checking how often users actually ask “what’s the big picture?” mid-workflow — rarely — it concluded that frequency doesn’t justify automation.)</p>
<blockquote>
<p>Reject the soft rule entirely. That phrasing is exactly the fuzzy trigger the user is warning about, and it will collapse into always-load within a week of real use.</p>
</blockquote>
<hr />
<p>After the three perspectives came back, I resumed with the main agent. The architect’s “role collision” diagnosis turned out to be a naming collision — <code class="language-plaintext highlighter-rouge">CONTEXT.md</code>’s first heading was <code class="language-plaintext highlighter-rouge"># Project Overview</code>, which made the two docs look like they overlapped. Renaming it to <code class="language-plaintext highlighter-rouge"># Working Context</code> fixed it. Subagents can misdiagnose too, but even that surfaced the real issue.</p>
<p>The soft rule was scrapped. All three perspectives rejected it, and I agreed. <code class="language-plaintext highlighter-rouge">OVERVIEW.md</code> loading switched to reactive triggers — it only suggests loading when specific events fire, not on every session. The user decides.</p>
<p>Then a new problem: where does this reactive-trigger principle live? The main agent suggested <code class="language-plaintext highlighter-rouge">scaff-subagent</code>, but that skill is specifically for subagent delegation. Deciding when to read <code class="language-plaintext highlighter-rouge">OVERVIEW.md</code> is main-agent workflow, not subagent work. So we created the <code class="language-plaintext highlighter-rouge">scaff-flow</code> skill.</p>
<p>Once <code class="language-plaintext highlighter-rouge">scaff-flow</code> existed, we noticed that the document sync guidelines scattered across individual command files — when to suggest <code class="language-plaintext highlighter-rouge">/scaff:design sync</code>, when to suggest <code class="language-plaintext highlighter-rouge">/scaff:context sync</code> — had the same character. Those went into <code class="language-plaintext highlighter-rouge">scaff-flow</code> too. In the end, <code class="language-plaintext highlighter-rouge">scaff-flow</code> became a collection of principles for the main agent to autonomously drive a Scaff project — a skill that should work better as AI improves.</p>
<p>The original discussion was “when should OVERVIEW.md be read?” It ended with a new skill that collects main-agent workflow principles — and should age well as models get better at autonomous decisions.</p>
<p>Scaff is <a href="https://github.com/opellen/scaff">on GitHub</a> if you want to look around.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Most of us use Claude Code subagents for parallel work — search this, refactor that, concurrent execution. Lately I’ve been using them for something different: parallel perspectives on the same problem.]]></summary></entry></feed>