The blind spot
The kata self-improvement loop captures observations about CODE being written, not about the METHODOLOGY being used. After 9 keikos, zero observations exist about:
- Whether a waza was appropriate for the work
- Whether a flavor was missing for a work pattern
- Whether the kataka persona was well-matched
- Whether the stage sequence (research → plan → build → review) was optimal for a given bet type
- What new waza/flavors should be created
The bunkai store has 7 learnings — all budget-management or cycle-management. Zero waza-effectiveness, flavor-gap, or kataka-fitness patterns.
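One way to make this gap mechanically visible is to tally the store by category. The sketch below assumes a simple `Learning` shape with a `category` field — the real bunkai store API may differ:

```typescript
// Hypothetical sketch: tally bunkai learnings by category.
// The "Learning" shape is an assumption, not the real bunkai store schema.
interface Learning {
  category: string;
  summary: string;
}

function tallyByCategory(learnings: Learning[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const l of learnings) {
    counts.set(l.category, (counts.get(l.category) ?? 0) + 1);
  }
  return counts;
}

// Illustrative data mirroring the current store: only budget- and
// cycle-management learnings, nothing about methodology.
const store: Learning[] = [
  { category: "budget-management", summary: "illustrative entry" },
  { category: "cycle-management", summary: "illustrative entry" },
];

const counts = tallyByCategory(store);
// counts.get("methodology-effectiveness") is undefined — the blind spot.
```

Running this kind of audit after each keiko would turn the absence of methodology learnings from an anecdote into a checkable signal.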
Root causes
1. Agent context doesn't ask for methodology introspection. The kata kiai context output teaches agents to record observations about their WORK but not about the PROCESS they used. No agent has ever been prompted: "was the waza you ran appropriate? did a flavor exist for this type of work?"
2. The proposal prompt asks for bet proposals, not methodology proposals. buildProposalPrompt in next-keiko-proposal-generator.ts asks: "propose 6-8 ranked bets." It never asks: "what waza/flavors/kataka/stages should be added or changed?"
3. No methodology-evaluation observation type. The taxonomy (decision, prediction, assumption, friction, gap, outcome, insight) has no dedicated methodology type or waza-fit type.
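If the taxonomy is modeled as a string union — a sketch under that assumption; the actual type names in the codebase are not confirmed — closing the third root cause is a one-member extension:

```typescript
// The seven existing observation types listed above, plus the proposed
// addition. Only the names from the document are real; the rest is a sketch.
const OBSERVATION_TYPES = [
  "decision",
  "prediction",
  "assumption",
  "friction",
  "gap",
  "outcome",
  "insight",
  "methodology-fit", // proposed: did the process match the work?
] as const;

type ObservationType = (typeof OBSERVATION_TYPES)[number];

// Runtime guard so stored observations can be validated against the taxonomy.
function isObservationType(value: string): value is ObservationType {
  return (OBSERVATION_TYPES as readonly string[]).includes(value);
}
```

Deriving the type from the `as const` array keeps the compile-time union and the runtime validation list from drifting apart.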
What the system needs
- Agent context cooldown addendum: after completing a run, agents should record: "was the flavor I ran appropriate? was the waza sequence right? what methodology gaps did I encounter?"
- Proposal prompt expansion: ask Claude not just for bets but also for: "what new waza/flavors would improve future cycles?" and "was the current kataka configuration right for this work?"
- New bunkai category: methodology-effectiveness — patterns about what waza/flavor/stage sequences work or don't for what types of work
- New observation type (or sub-type): methodology-fit — structured observation about whether the process matched the work
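A possible record shape for the new observation type, covering the cooldown questions above — every field name here is illustrative, not drawn from the actual kata schema:

```typescript
// Hypothetical structured observation about process fit.
// All field names are assumptions for illustration.
interface MethodologyFitObservation {
  type: "methodology-fit";
  keiko: number;             // which keiko the run belonged to
  wazaAppropriate: boolean;  // was the waza the right tool for the work?
  flavorExisted: boolean;    // did a flavor exist for this work pattern?
  katakaMatched: boolean;    // was the kataka persona well-suited?
  gaps: string[];            // free-text methodology gaps encountered
}

// Example of what a cooldown addendum could emit after a run.
const observation: MethodologyFitObservation = {
  type: "methodology-fit",
  keiko: 9,
  wazaAppropriate: true,
  flavorExisted: false,
  katakaMatched: true,
  gaps: ["no flavor exists for methodology-audit work"],
};
```

Keeping the first three questions as booleans makes the records aggregable across keikos, while `gaps` preserves the free-text signal the proposal prompt could later mine.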
Impact
Kata can never self-improve its own methodology without this. It can only surface bugs and missing features — not process inefficiencies.
Dogfooding — Keiko 9 audit