A cognitive base that shifts AI agent reasoning from linear cause-effect chains to feedback-driven structural analysis. Works with any LLM agent — Claude, GPT, Gemini, open-source models, or custom frameworks.
Most AI agents respond to problems by listing contributing factors and recommending best practices. They produce plausible-sounding analysis that misses the structural dynamics actually driving the situation — feedback loops, accumulation effects, time delays, and leverage asymmetries.
Systems Thinking changes the agent's reasoning process so it identifies these dynamics before recommending interventions.
Before (factor listing): "The project delays are caused by scope creep, understaffing, technical debt, unclear requirements, and poor communication. Recommendations: hire more engineers, improve documentation, add project management tooling."
After (structural analysis): "Unclear requirements drive scope creep, which increases workload on an understaffed team, which forces shortcuts that add technical debt, which slows future work and degrades communication — making requirements even less clear. This is a reinforcing loop. Adding engineers (parameter, LP 12) won't break it — the 3-month onboarding delay means the loop runs faster than the fix. The leverage point is requirements clarity (information flow, LP 6): make requirement gaps visible before they become scope creep, not after."
| Default mode | Target mode |
|---|---|
| React to events ("sales dropped") | Identify the structure that produces the event pattern |
| Linear causality (A → B → C) | Circular causality — check if C feeds back to A |
| Static snapshots ("revenue is $X") | Dynamic analysis — stocks, flows, accumulation rates |
| Implicit boundaries | Explicit boundary with named exclusions |
| Fix at the most accessible level | Classify intervention on the leverage hierarchy |
The agent processes information through five filters:
- Feedback signals — trace and classify loops
- Accumulation signals — distinguish stocks from flows
- Delay signals — estimate and flag
- Boundary signals — test and adjust
- Emergence signals — recognize system-level properties
Thirteen characteristic failure modes are checked across core and extended layers:
- Core (8): Factor Listing, Hollow Holism, Chain Masquerade, Boundary Inflation, Map-as-Insight, Delay Blindness, Leverage Inversion, Vocabulary Compliance
- Extended (5): Template Application, Stability Blindness, Force-First Intervention, Analysis-as-Avoidance, Metric Corruption
Claude Code:

```sh
cp cognitive-protocol.md ~/.claude/systems-thinking.md
echo '@~/.claude/systems-thinking.md' >> ~/.claude/CLAUDE.md
```

Codex: prepend cognitive-protocol.md to your AGENTS.md.
Gemini: include cognitive-protocol.md in the system_instruction field.
Any other platform: prepend cognitive-protocol.md to the system prompt. See install/generic.md for platform-specific details.
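For platforms without a dedicated config file, the generic install is just a concatenation. A minimal sketch — the stand-in file contents and the name `system-prompt.md` are illustrative, not part of the project:

```sh
# Stand-in files so the sketch is self-contained; in practice you already
# have the real cognitive-protocol.md and an existing system prompt file.
printf 'Systems Thinking core rules\n' > cognitive-protocol.md
printf 'You are a coding agent.\n' > system-prompt.md

# Prepend the protocol so its rules are read before the task-specific prompt.
cat cognitive-protocol.md system-prompt.md > combined-prompt.md
```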
Three-layer design:
- Layer 1 (always-on): `cognitive-protocol.md` — ~30 lines of core rules: feedback loops, stocks/flows, delays, boundaries, leverage hierarchy.
- Layer 2 (extended, on-demand): `extended-protocol.md` — ~60 lines: meta-domain diagnosis, driving tensions, qualitative transformation, hidden dynamics, analysis self-audit, practice verification. Synthesized from dialectical analysis (On Contradiction, 矛盾论), Eastern epistemology (dependent origination 缘起性空, Daoism 道家), and cross-domain systems theory (Cynefin, Panarchy, Perrow, Goodhart, Scott, Minsky, Gigerenzer, Hollnagel).
- Layer 3 (reference): `SKILL.md`, `anti-patterns.md`, `examples.md` — full framework documentation, loaded when the skill is invoked.
```
systems-thinking/
├── README.md               # This file
├── cognitive-protocol.md   # Core rules (~30 lines, always-on)
├── extended-protocol.md    # Extended rules (~60 lines, on-demand)
├── SKILL.md                # Full framework reference
├── anti-patterns.md        # 13 failure modes with detection/fixes
├── examples.md             # 4 before/after demonstrations
└── install/
    ├── claude-code.md      # Claude Code installation guide
    ├── codex.md            # Codex installation guide
    ├── gemini.md           # Gemini installation guide
    └── generic.md          # Universal installation guide
```
Systems Thinking is a cognitive base — it changes HOW the agent thinks, not WHAT it does. It stacks cleanly with domain skills (coding, design, writing) and other cognitive bases:
- + Tacit Knowledge: Sharp, judgment-first output backed by structural analysis
- + First Principles: Verified components assembled into sound feedback structures
- + Cross-Domain Connector: Dynamic structural patterns matched across disciplines
No conflicts. Load Tacit Knowledge first (it shapes all output), then reasoning bases in any order.
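That load order can be expressed as plain concatenation. A sketch, assuming one protocol file per base — the filenames and stand-in contents here are illustrative:

```sh
# Stand-ins for the two base files; a real install uses the actual protocol files.
printf 'Tacit Knowledge rules\n' > tacit-knowledge.md
printf 'Systems Thinking rules\n' > systems-thinking.md

# Tacit Knowledge first (it shapes all output), then reasoning bases in any order.
cat tacit-knowledge.md systems-thinking.md > stacked-prompt.md
```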
Core layer: Systems dynamics tradition — Jay Forrester (simulation methodology), Donella Meadows (leverage points hierarchy), Peter Senge (system archetypes), Russell Ackoff (synthesis over analysis), Stafford Beer (viable system model), W. Edwards Deming (variation-aware systems thinking).
Extended layer: Three additional traditions fused into operational instructions:
- Dialectical analysis — Mao Zedong's 《矛盾论》/On Contradiction (principal contradiction, antagonistic vs. non-antagonistic tensions, qualitative transformation) and 《实践论》/On Practice (practice-theory verification spiral)
- Eastern epistemology — Buddhist 缘起性空/Pratītyasamutpāda (dependent origination, reification detection, observer inclusion; via Nāgārjuna, Huayan, Joanna Macy, Francisco Varela) and Daoist 道家 (无为/wu-wei intervention, 反者道之动/reversal dynamics, over-optimization detection; via Laozi, Zhuangzi, François Jullien)
- Cross-domain systems theory — Cynefin/Snowden (domain diagnosis), Panarchy/Holling (adaptive cycles), Normal Accidents/Perrow (architectural impossibility), Goodhart's Law (metric corruption), Legibility/Scott (modeling-as-damage), Minsky (stability breeds instability), Ecological Rationality/Gigerenzer (when less modeling wins), Safety-II/Hollnagel (success-failure shared source)
The core insight: most problems are not caused by broken components but by the interaction structure between working components — and the most accessible interventions are usually the weakest.
License: MIT