A cyclic self-improvement architecture for AI coding agents.
AI coding agents accumulate skills, rules, and learned patterns over time. Without a maintenance loop, this knowledge degrades — skills go stale, rules contradict each other, documentation drifts from reality.
AKC is a set of six composable skills that form a closed self-improvement loop:
```
Experience → learn-eval → skill-stocktake → rules-distill → Behavior change → ...
             (extract)    (curate)          (promote)              ↑
                                                              skill-comply
                                                               (measure)
                         context-sync ← (maintain)
```
Each skill addresses one phase of the knowledge lifecycle:
| Skill | Phase | What it does |
|---|---|---|
| search-first | Research | Search for existing solutions before building new ones |
| learn-eval | Extract | Extract reusable patterns from sessions with quality gates |
| skill-stocktake | Curate | Audit installed skills for staleness, conflicts, and redundancy |
| rules-distill | Promote | Distill cross-cutting principles from skills into rules |
| skill-comply | Measure | Test whether agents actually follow their skills and rules |
| context-sync | Maintain | Audit documentation for role overlaps, stale content, and missing decision records |
You don't need all six skills to run the cycle. The docs/akc-cycle.md file distills the six phases into behavioral principles that any AI coding agent can follow through natural conversation.
```bash
# Copy to your agent's rules directory
cp docs/akc-cycle.md ~/.claude/rules/common/akc-cycle.md
```

That's it. The cycle will run through conversation — no skills, no plugins, no CLI tools.
| Phase | Rule summary |
|---|---|
| Research | Search for existing solutions before building new ones |
| Extract | Capture reusable patterns from sessions with quality evaluation |
| Curate | Periodically audit for redundancy, staleness, and silence |
| Promote | Elevate patterns that recur 3+ times to the rule layer |
| Measure | Verify behavioral change quantitatively, not subjectively |
| Maintain | Keep documentation roles clean and content fresh |
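The Promote rule above ("recurs 3+ times") can be sketched as a simple recurrence count over extracted patterns. The pattern log, its names, and the threshold constant are illustrative assumptions, not part of AKC:

```python
from collections import Counter

# Hypothetical log of pattern names extracted across sessions
# (the log format is an assumption for illustration only).
observed_patterns = [
    "prefer-early-return", "search-before-build", "prefer-early-return",
    "search-before-build", "prefer-early-return",
]

PROMOTION_THRESHOLD = 3  # "recurs 3+ times" per the Promote phase

counts = Counter(observed_patterns)
promote = [name for name, n in counts.items() if n >= PROMOTION_THRESHOLD]
print(promote)  # → ['prefer-early-return']
```

In a real setup the log would be produced by learn-eval and the promotion step would propose (not apply) the new rule, per the Non-destructive principle.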
- Skills provide deep, step-by-step workflows for each phase. Install them when you want guided execution.
- Rules provide principles and trigger conditions. Install them when you want the cycle to emerge naturally from conversation.
- Both can coexist. Rules ensure the cycle runs even when skills aren't triggered.
Static configuration drifts. Skills get added but never reviewed. Rules accumulate but compliance is never measured. Documentation grows stale.
AKC treats agent knowledge as a living system that requires continuous maintenance — not a one-time setup.
| Problem | AKC response |
|---|---|
| Skills go stale | skill-stocktake audits quality periodically |
| Rules don't match practice | skill-comply measures actual behavioral compliance |
| Knowledge is scattered | rules-distill promotes recurring patterns to principles |
| Documentation drifts | context-sync detects role overlaps and stale content |
| Wheels get reinvented | search-first checks for existing solutions first |
| Learnings are lost | learn-eval extracts patterns with quality gates |
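One of the checks in the table, "skills go stale," can be made concrete as a file-age audit: flag skill files untouched for longer than a cutoff. The directory layout, `.md` extension, and 90-day cutoff are assumptions for illustration, not what skill-stocktake prescribes:

```python
import time
from pathlib import Path

STALE_AFTER_DAYS = 90  # illustrative cutoff, not prescribed by AKC


def stale_skills(skills_dir, now=None):
    """Return names of skill files not modified within the cutoff window."""
    now = time.time() if now is None else now
    cutoff = now - STALE_AFTER_DAYS * 86400
    return sorted(
        p.name
        for p in Path(skills_dir).glob("*.md")
        if p.stat().st_mtime < cutoff
    )
```

A periodic run of this kind of check would only surface candidates; deciding whether a stale skill should be refreshed or retired stays with the owner.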
- Composable — Each skill works independently. Use one or all six.
- Observable — skill-comply produces quantitative compliance rates, not subjective assessments.
- Non-destructive — Every skill proposes changes and waits for confirmation. Nothing is auto-applied.
- Tool-agnostic in concept — Designed for Claude Code but the architecture applies to any coding agent with persistent configuration.
- Evaluation scales with model capability — Small models benefit from rubric-based scoring; reasoning models (Opus-class) evaluate with full context and qualitative judgment. AKC does not prescribe one approach — it matches evaluation depth to the model's reasoning capacity.
- Scaffold dissolution — Skills are scaffolding. As the user and agent internalize the cycle, skills become unnecessary and rules alone sustain the loop. See docs/scaffold-dissolution.md.
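The Observable principle can be sketched as a compliance rate: of the sampled sessions where a rule was triggered, what fraction actually followed it. The session-record schema below is a hypothetical stand-in, not skill-comply's real format:

```python
def compliance_rate(sessions, rule):
    """Share of sessions (among those that triggered `rule`) that followed it."""
    triggered = [s for s in sessions if rule in s["triggered_rules"]]
    if not triggered:
        return 0.0
    followed = sum(1 for s in triggered if rule in s["followed_rules"])
    return followed / len(triggered)


# Hypothetical session records; the schema is an assumption for illustration.
sessions = [
    {"triggered_rules": {"search-first"}, "followed_rules": {"search-first"}},
    {"triggered_rules": {"search-first"}, "followed_rules": set()},
    {"triggered_rules": set(), "followed_rules": set()},
]
print(compliance_rate(sessions, "search-first"))  # → 0.5
```

Tracking this number over time is what turns "the agent usually follows the rule" into a measurable, comparable quantity.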
AKC shares common ground with harness engineering (Mitchell Hashimoto, 2025) — the practice of engineering solutions so that an agent never repeats the same mistake, through a combination of improved prompts (e.g., AGENTS.md updates) and programmatic tools (scripts, verification commands). Both aim to make agents more reliable. They differ in what they focus on.
| Layer | Question | Addressed by |
|---|---|---|
| Harness | "Is this output correct?" | Individual linters, tests, scripts |
| AKC | "Are the harnesses themselves still valid?" | skill-comply, skill-stocktake, context-sync |
Correctness vs intent alignment. Harness engineering focuses on getting the right result the first time — preventing known errors through better instructions and automated checks. AKC centers on a different question: "is this aligned with the owner's intent?" An agent can avoid every known mistake yet still diverge from design intent. Design Principle #3 (Non-destructive) reflects this — propose, then wait for confirmation, because intent alignment is difficult to fully automate.
Reactive vs proactive. Harness engineering is reactive by nature — each mistake triggers a new harness. AKC's skill-comply and skill-stocktake take a proactive approach, periodically auditing whether skills and rules are actually followed and whether they remain relevant. Design Principle #5 scales this evaluation to model capability — rubrics for small models, holistic judgment for frontier models.
The reference implementations linked above are starting points. Fork them, rewrite them, adapt them to your agent and workflow. AKC defines the cycle — not the implementation. What matters is that the phases (extract → curate → promote → measure → maintain) form a closed loop, not how each phase is built.
This architecture was first proposed and implemented by Tatsuya Shimomoto (@shimo4228) in February 2026.
The first five skills were contributed to Everything Claude Code (ECC) between February and March 2026. context-sync was developed independently.
If you use or reference the Agent Knowledge Cycle architecture, please cite:
```bibtex
@software{shimomoto2026akc,
  author = {Shimomoto, Tatsuya},
  title  = {Agent Knowledge Cycle (AKC)},
  year   = {2026},
  doi    = {10.5281/zenodo.19200727},
  url    = {https://doi.org/10.5281/zenodo.19200727},
  note   = {A cyclic self-improvement architecture for AI coding agents}
}
```

Or in text:
Shimomoto, T. (2026). Agent Knowledge Cycle (AKC). doi:10.5281/zenodo.19200727
- Contemplative AI framework — Autonomous agent design inspired by Laukkonen et al. (2025)
- Articles on Zenn — Development journal (Japanese)
- Articles on Dev.to — English translations
MIT