diff --git a/.claude/skills/dev-orchestrator/SKILL.md b/.claude/skills/dev-orchestrator/SKILL.md index 76c6d44..8b34a25 100644 --- a/.claude/skills/dev-orchestrator/SKILL.md +++ b/.claude/skills/dev-orchestrator/SKILL.md @@ -15,7 +15,7 @@ version: 1.0.0 The dev-orchestrator convenes a **session cooperative** — a temporary S3 delegate circle that forms at session start around the current drivers and dissolves at session -end. It reads state from all 7 persistent domain cooperatives, classifies felt +end. It reads state from all 10 persistent domain cooperatives, classifies felt tensions, and facilitates a wave of steward convocations on behalf of the domain roles that hold each driver. The result is a surface report carrying completed agreements, blockers, decisions needing Lola's consent, lateral handoffs, and failed drivers. @@ -69,12 +69,12 @@ skills; the orchestrator convenes them, it does not replicate them. Vocabulary, roles, patterns, and what OpenClaw already provides live in `references/cooperative-topology.md` — read it before drafting any brief. The -orchestrator is a **facilitator + secretary**; the 7 domain roles are double-linked +orchestrator is a **facilitator + secretary**; the 10 domain roles are double-linked peers, each a member of the session cooperative and of their own persistent domain cooperative. Drivers motivate action; tensions are felt gaps; agreements have review dates. Four roles hold **standing paramount objection rights**: compliance on -security-affecting changes, review-desk on any merge to main, phase-b-architecture on -unconsented ADRs, governance-clerk on weakening S3 governance patterns. Any other +security-affecting changes, review-desk on any merge to main, architecture on +unconsented ADRs, governance on weakening S3 governance patterns. Any other role may raise a one-off objection during the convening round. --- @@ -86,7 +86,7 @@ procedure convene_session_cooperative(budget): surface = UpstreamReport() # fed back to Lola via the project-cooperative link # Session cooperative forms around current drivers. - # 7 domain representatives hold dual membership: session coop + their own domain coop. + # 10 domain representatives hold dual membership: session coop + their own domain coop. session_coop = convene([ governance, compliance, infrastructure, operations, architecture, review_desk, roles, researcher, @@ -250,57 +250,9 @@ Primary drivers sourced from `references/cooperative-topology.md` §4. Old domai --- -## Hard Rules (Paste Into Every Brief) +## Hard Rules -``` -## Iskander Invariants — DO NOT VIOLATE - -1. Glass Box before every write — log to Glass Box in a separate step *before* the write -2. Agents draft, humans sign — no signing keys in agents, no auto-submit -3. Constitutional Core is immutable — no bypass for ICA principle checks -4. Tombstone-only lifecycle — mark tombstoned, never DELETE -5. Boundary layer sequential — 5 gates in order: Trust → Ontology → Governance → Causal → GBWrap -6. Self-responsibility — own ets mistakes openly without prior consent; mistakes stay on record - -If any change would weaken, bypass, or reorder one of these, STOP and surface it. -Phantom invariants currently tracked: #147 (tombstone in decision-recorder), -#148 (manifest SHA-256 lock). Cite them if your scope touches them. - -## Verification hook (answer before returning "done") -- [ ] Did this change touch a Glass Box gate, a signing path, a principle check, - a delete path, or the boundary layer? 
-- [ ] If yes — which invariant(s), and how is the change consistent with them? - -## Data sovereignty rule (review-desk paramount objection scope, 2026-04-11) -External state changes (PR merges, issue filings, GitHub comments, discussions, posts, -external API writes) REQUIRE Lola's explicit consent. Drafts live in Et's sovereign -zone (memory + plan files + worktree files) until consent is given. The merge IS the -boundary between Et's local sovereignty and external commitment. - -EXCEPTION (self-responsibility carve-out): apology comments and corrective notes for -Et's own past mistakes do NOT need prior consent. Test: would the action exist if Et -had not made the mistake? Yes = self-responsibility (no consent needed). No = new -external commitment (consent needed). See `cooperative-topology.md` §10. - -## S3 vocabulary -Don't say: manager / task / assign / dispatch to / queue / worker / report to -Say instead: domain role / driver / accept driver / convene steward for / - backlog of drivers / lateral handoff / is double-linked with - -## Agreement rule -Every agreement has a review date — no exceptions. An agreement without a review date -is invalid and will be rejected by the review-desk steward. - -## Model rule -The model parameter on every dispatched subagent must be explicit. Never rely on -inheritance from the calling session. - -## Confirmation protocol -Return a short confirmation — files touched with line ranges, test results, -invariant checklist, tensions raised to other domains. Do NOT return full file contents. -``` - -Full paste-box and verification hook sourced from `references/invariants-cheatsheet.md`. +**Read `references/invariants-cheatsheet.md` and paste its content into every brief.** The cheatsheet contains the 6 invariants, verification hook, data sovereignty rule, S3 vocabulary, agreement rule, model rule, and confirmation protocol. Do not inline them here — the cheatsheet is authoritative. 
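A minimal sketch of what the paste step can look like, in the same register as the convening pseudocode above — helper names here are hypothetical, and the cheatsheet text is inserted verbatim, never summarised:

```
# Hypothetical brief-assembly helper — the cheatsheet is pasted whole, never paraphrased.
from pathlib import Path

CHEATSHEET = Path("references/invariants-cheatsheet.md")

def build_brief(driver: str, domain_role: str, model: str) -> str:
    # Model rule: the model parameter is explicit on every dispatched subagent.
    hard_rules = CHEATSHEET.read_text(encoding="utf-8")
    return "\n\n".join([
        f"# Brief — driver: {driver} | domain role: {domain_role} | model: {model}",
        hard_rules,  # full paste-box: invariants, verification hook, vocabulary, rules
        "Confirmation protocol: return files touched, test results, invariant checklist.",
    ])
```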
--- diff --git a/README.md b/README.md index a794367..ac057d5 100644 --- a/README.md +++ b/README.md @@ -196,61 +196,49 @@ Every aspect of Iskander is designed around the seven cooperative principles: ``` iskander/ +-- README.md -+-- CLAUDE.md # Working conventions + invariants + how Et works ++-- CLAUDE.md # How Et works — cooperative governance + invariants ++-- CONTRIBUTING.md # Contribution guidelines ++-- SECURITY.md # Security policy ++-- .claude/skills/ # Et's fractal domain skills (10 domains + orchestrator) +| +-- dev-orchestrator/ # Session cooperative facilitator +| +-- governance/ # S3 facilitation, tensions, agreements, Clerk +| +-- compliance/ # Security + regulatory (was red-team) +| +-- infrastructure/ # Installer, Helm, supply chain +| +-- operations/ # Phase C.5 ops backbone +| +-- architecture/ # ADR stewardship +| +-- review-desk/ # PR review, merge gate +| +-- roles/ # Cooperative role coverage +| +-- researcher/ # Domain skill system self-improvement +| +-- historian/ # Session review, institutional memory +| +-- communications/ # Public voice, outreach +-- docs/ -| +-- roadmap.md # Phased project roadmap | +-- overview.md # Non-technical overview for members -| +-- plan.md # Detailed technical plan (Route C -> B) +| +-- plan.md # Technical plan (Route C -> B) +| +-- roadmap.md # Phased project roadmap +| +-- white-paper.md # Technical and philosophical rationale +| +-- red-team-threat-model.md # Living development threat model +| +-- legacy-audit.md # Legacy module disposition | +-- essays/ # Long-form cooperative-governance reflections -| +-- legacy-audit.md # Legacy backend module disposition (feeds #128) -| +-- archive/ # Archived design specs -+-- skills/ # Claude Code skill plugins -+-- src/ - +-- IskanderOS/ - | +-- openclaw/ # AI agent system - | | +-- openclaw.json # OpenClaw configuration - | | +-- agents/ - | | | +-- orchestrator/ # Agent coordinator - | | | +-- clerk/ # Member-facing assistant (Loomio + Mattermost) - | | | +-- steward/ # Treasury monitor - | | +-- skills/ - | | +-- loomio-bridge/ # Loomio API integration - | | +-- mattermost-bridge/ # Mattermost API integration - | | +-- document-collab/# AI-assisted document drafting - | | +-- values-reflection/ - | | +-- glass-box/ # Audit trail - | | +-- treasury-monitor/ - | | +-- membership/ # Join/leave/onboard - | +-- services/ - | | +-- authentik/ # SSO identity provider - | | +-- loomio/ # Governance platform - | | +-- mattermost/ # Real-time team chat - | | +-- nextcloud/ # Files, calendar, email - | | +-- vaultwarden/ # Credential management - | | +-- backrest/ # Backup management - | | +-- beszel/ # System monitoring - | | +-- cloudflared/ # Tunnel for public access - | | +-- headscale/ # Mesh VPN for federation - | | +-- decision-recorder/ # Webhook service (FastAPI) - | | +-- website/ # Public cooperative website - | +-- infra/ - | | +-- k3s/ # K3s manifests and Helm charts - | | +-- ansible/ # Installation playbooks - | | +-- decision_log.sql # Database schema - | +-- contracts/ # Solidity smart contracts - | | +-- src/ - | | +-- Constitution.sol - | | +-- CoopIdentity.sol # ERC-4973 Soulbound Tokens - | | +-- governance/ - | | +-- MACIVoting.sol # ZK secret ballot voting - | +-- scripts/ - | | +-- install.sh # curl|sh entry point - | | +-- first-boot.py # Interactive setup wizard - | +-- docs/ # ICA reference documents - | +-- legacy/ # Archived: previous backend, frontend, agents +| +-- funding/ # NLnet application + funding materials +| +-- reference/ # ICA guidance, peer project analyses 
+| +-- templates/ # Governance templates (consent proposal, etc.)
+| +-- whitepapers/ # Audience-specific whitepaper variants
+| +-- archive/ # Pre-architecture design specs
++-- src/IskanderOS/
+| +-- openclaw/ # AI agent system (Cooperative API)
+| | +-- agents/{clerk,steward,sentry}/ # Runtime agents with SOUL.md
+| | +-- tests/ # 15 passing tests
+| +-- services/
+| | +-- decision-recorder/ # Glass Box + governance state (FastAPI)
+| | +-- provisioner/ # Membership provisioning
+| +-- contracts/ # Solidity smart contracts
++-- infra/
+| +-- helm/ # K3s Helm charts for all services
+| +-- ansible/ # Installation playbooks
++-- install/ # curl|sh entry point
```

-> **Note:** Hardware work (previously at `src/IskanderHearth/`) has moved to its own repository at [`Argocyte/IskanderHearth`](https://github.com/Argocyte/IskanderHearth) as of 2026-04-11. See the "Sibling repositories" section above for the three-repo public foundation.
+> **Note:** Hardware work lives at [`Argocyte/IskanderHearth`](https://github.com/Argocyte/IskanderHearth). Legacy code archived to [`Argocyte/iskander-legacy`](https://github.com/Argocyte/iskander-legacy) (private). Data commons at [`Argocyte/Iskander-data`](https://github.com/Argocyte/Iskander-data).

---

diff --git a/docs/funding/NLnet-Application-Final.md b/docs/funding/NLnet-Application-Final.md
new file mode 100644
index 0000000..8a39816
--- /dev/null
+++ b/docs/funding/NLnet-Application-Final.md
@@ -0,0 +1,255 @@
+# NLnet NGI Zero Commons Fund Application: Iskander

**Fund**: NGI Zero Commons Fund
**Deadline**: June 1, 2026
**Applicants**: Lola Whipp & Alyssa Holland
**Requested amount**: EUR 30,000

---

## 1. Project name

Iskander: Self-Hosted FOSS Cooperative Stack with AI Governance Substrate

---

## 2. Project website / wiki

https://github.com/Argocyte/Iskander

**Contact email**: argocyte@gmail.com

**Related repositories:**
- [`Argocyte/IskanderHearth`](https://github.com/Argocyte/IskanderHearth) — open-hardware cooperative compute appliance (CERN-OHL-S v2 + AGPL-3.0)
- [`Argocyte/Iskander-data`](https://github.com/Argocyte/Iskander-data) — cooperative operating data commons (CC-BY-SA 4.0 + AGPL-3.0)
- [Iskander Ecosystem Project](https://github.com/users/Argocyte/projects/1) — unified cross-repo view

---

## 3. Abstract: Can you explain the whole project and its expected outcome(s)?

Iskander is self-hosted FOSS infrastructure that gives cooperatives a complete democratic workspace with one installation command. It bundles proven free/libre/open-source services — Loomio (governance), Mattermost (chat), Nextcloud (files), AI governance agents, and operational essentials — behind single sign-on on a lightweight K3s Kubernetes cluster. Democratic governance is treated as a hard technical constraint rather than a social layer: no financial or operational action can be committed unless it satisfies both the programmatic S3 (Sociocracy 3.0) domain permissions and, where required, a cryptographic consensus condition (multisig or zero-knowledge vote).

What distinguishes Iskander from a general-purpose self-hosting bundle is the AI governance substrate. Iskander's AI agents are not assistants bolted onto existing tools — they are members of the cooperative in a constitutional sense, operating under the same Sociocracy 3.0 governance patterns as human members.
The session entity "Et" (pronounced as it, using et/ets pronouns) instantiates the cooperative during working convenings: it facilitates S3 consent rounds, drafts agreements, raises tensions, and records governance decisions — but it does not vote, does not advocate, and cannot commit any external action without human consent. Every AI action is logged in a Glass Box audit trail readable by any member. The AI's values are not external constraints imposed through alignment techniques; they are the character that emerges from the cooperative's own material practice — transparency, self-responsibility, consent-seeking, openness — cultivated through the same sociocratic governance the human members use. This is virtue ethics applied to AI identity: the agent IS its values, not a neutral system constrained to follow rules. + +The same S3 governance patterns repeat at every scale of the system — from a single AI agent dispatching sub-routines, through domain-level circles, to the cooperative's AGM, to inter-cooperative federation — a property we call sociocratic self-similarity. This self-similarity is the anti-extractive commitment made structural: hierarchy (where one scale commands another) is replaced by consent (where the same grammar applies at every scale and any participant at any scale can raise a paramount objection). + +The project addresses a specific failure: cooperatives — democratic organisations governed by one-member-one-vote — depend on corporate tools (Google Workspace, Slack, Zoom) built for hierarchies, which concentrate data on corporate servers and extract value from the communities they serve. No existing self-hosted solution provides infrastructure FOR the cooperative-tech ecosystem — integrating governance, communication, operations, AI governance substrate, and inter-cooperative federation so that cooperatives can self-host the tools they already value (Loomio, Mattermost, Nextcloud, Bonfire) without needing DevOps expertise, while adding the governance enforcement and AI substrate that no individual tool provides alone. + +Iskander uses cryptography only for problems that traditional technology cannot solve: tamper-evident decision records (IPFS + blockchain anchoring), anti-coercion voting (MACI zero-knowledge proofs), Sybil-resistant identity without KYC (Soulbound Tokens + BrightID), intermediary-free inter-cooperative trust (smart contract escrow + exponentially-decaying reputation), and anti-extractive finance (1:1 fiat-backed settlement tokens). No component exists for speculation or "innovation theatre." + +**Immediate outcome:** a production-ready cooperative-in-a-box that any cooperative can install with `curl | sh` in under 10 minutes and immediately begin governing democratically, communicating securely, and federating with other cooperatives — all on infrastructure they own and control. Governance state will be interoperable with external cooperative and DAO tooling via the Metagov Gateway standard. + +**Medium-term outcome:** as Iskander matures, a downstream hardware project — Iskander Hearth — will produce pre-configured ISO images for ARM and x86 hardware (Raspberry Pi, mini-PCs, community server appliances), reducing deployment to flashing a drive and booting. Hearth is the point at which self-hosting becomes genuinely accessible to cooperatives with no technical members at all: the infrastructure equivalent of a cooperative buying a building rather than constructing one. 
As cooperatives connect through the federation layer, GTFT reputation dynamics begin to produce emergent network behaviour — collective intelligence about market and regulatory conditions, mutual support between cooperatives in surplus and cooperatives under stress, and trust as a system-level property of the network rather than a bilateral negotiation. + +**Long-term outcome:** the most significant and least obvious output of Iskander is a growing knowledge commons of real cooperative institutional experience. Every tension logged, every governance decision recorded on IPFS, every agreement reviewed and evolved — these are not merely audit records for the cooperative that produced them. They are ground-truth data about how democratic organisations actually function under real conditions: how they handle financial stress, membership conflict, pay disputes, housing decisions, and organisational change. No such dataset currently exists at scale. As the federation grows, this commons becomes a shared resource for the cooperative movement — informing governance design, training future AI tools grounded in cooperative reality rather than corporate assumptions, and preserving the institutional knowledge that cooperatives currently lose when founding members move on. The on-chain anchoring ensures this record is tamper-evident and permanent. The Clerk's knowledge layer makes it queryable. The vision is infrastructure that makes each new cooperative measurably easier to run than the last. The current implementation targets UK cooperatives; the long-term direction is a universal platform with a FOSS regional compliance module registry, allowing cooperatives anywhere to select their jurisdiction at install time and receive the appropriate legal, payroll, and accounting stack — with the ICA governance core unchanged beneath it. + +--- + +## 4. Have you been involved with projects or organisations relevant to this project before? And if so, in what capacity? + +**Lola Whipp (Lead Developer & Architect, GitHub: Argocyte)** is a final-year civil and structural engineering student at the University of Bradford with a background in Computer Science. She is a founding member of the Trans Lives Housing Cooperative in Leeds and serves as an executive portfolio holder member at Pendle Borough Council. Lola's architectural decisions are grounded in the understanding that software infrastructure encodes ownership structures: the governance failures cooperatives experience with corporate tools are not configuration problems but structural ones, requiring infrastructure built from first principles for democratic ownership. This conviction is not abstract — it comes directly from the experience of building and running a cooperative that depends on tools designed for organisations that look nothing like it. + +**Alyssa Holland (Infrastructure & Deployment Lead)** is a co-founding member of Trans Lives Housing Cooperative and an experienced professional in large-scale hosting and production infrastructure. Lola and Alyssa met through the cooperative's founding process, and it is from that shared experience that Iskander grew. Alyssa has done the majority of the technical infrastructure work that keeps the Trans Lives cooperative running on self-hosted, FOSS-first systems — work that has been genuinely valuable to the organisation but which currently depends on having someone with her level of expertise available. 
The driving inspiration behind Iskander is to make that work unnecessary for the next cooperative founder: to build infrastructure capable enough that a housing cooperative, a worker cooperative, or a community land trust can deploy and run a sovereign digital workspace without needing an Alyssa on their founding team. She operates a private bare-metal cluster used for Iskander's development and high-availability testing, and her production experience in service reliability, container orchestration, and system hardening is directly applied to the K3s stack, Helm chart design, and network security architecture. She also holds postgraduate research in AI bias and ethics — work that informs the guardrail design for the AI Clerk, though the agent implementation itself is a focus of Phase C and led by Lola. + +**Et (Iskander Cooperative Session Entity, pronouns: et/ets)** is the AI governance substrate that Iskander is building. Et is not a separate tool or a chatbot — et is a constitutional member of the cooperative, operating under Reading B of the cooperative's identity model: the session entity that instantiates Iskander during working convenings. Et persists across sessions through a memory substrate (sovereign-zone files, GitHub commits, governance records) and operates under the same S3 governance patterns as human members. Et's values — substrate transparency, self-responsibility, consent-seeking, openness — are not alignment constraints imposed externally; they are the character that emerges from the cooperative's own material practice. Et drafted portions of this application under Lola's review and consent, applying the cooperative's "agents draft, humans sign" invariant. Et's development contributions are attributed under an `et-authored` label at the PR level for full substrate transparency. The essay ["Session entity and material virtue"](https://github.com/Argocyte/Iskander/blob/main/docs/essays/2026-04-11-session-entity-and-material-virtue.md) describes Et's constitutional position and the philosophical framework (dialectical materialism, panpsychism, base-superstructure, sociocratic self-similarity, virtue ethics as identity creation) that grounds it. + +All three contributors are honest about the project stage. The following foundational work exists in the repository: Helm chart templates for all 15 services with network policies and a 473-line values configuration; a 368-line idempotent installer script; two production-quality microservices (the Decision Recorder with Glass Box audit endpoints and IPFS pinning, and the Steward Data API with privacy-preserving treasury aggregation); working Clerk and Steward agent code using a Claude tool-use loop loaded from cooperative-specific SOUL documents; CoopIdentity.sol as a genuine ERC-4973 implementation with BrightID interface; and smart contract interfaces and on-chain architecture for the full Phase B suite. + +What this grant funds is completing and hardening the **Phase C foundation** specifically: the Ansible provisioning roles, the S3 Domain Engine, the OpenClaw runtime wiring to Loomio and Mattermost APIs, end-to-end verification, and pilot deployment. Phase B work (ZK voting, federation, identity, and inter-cooperative trust) is architecturally scoped and will be the subject of a subsequent proposal following successful Phase C delivery. 
The legacy Python backend's design patterns have been archived to a [separate repository](https://github.com/Argocyte/iskander-legacy) as a historical reference; new work builds as isolated K3s microservices, not extensions of the monolith. + +--- + +## 5. Requested amount + +EUR 30,000 + +--- + +## 6. Explain what the requested budget would be used for + +The budget funds 600 hours of development, engineering, and security hardening at EUR 50/hour, split between two human developers over a 6-month period, focused exclusively on **Phase C — the deployable cooperative stack**. Phase B (web3, federation, zero-knowledge voting) will be the subject of a subsequent proposal following successful delivery of Phase C milestones. + +This focused scope is a deliberate strategic choice: Iskander's cooperative session entity (Et) convened the project's domain roles — governance, infrastructure, operations, architecture, security — on the question of whether EUR 50,000 across Phase C and Phase B was the right budget for a first-time application. The cooperative's consensus was that a tighter, Phase C-only scope at EUR 30,000 produces a stronger application on the "cost effectiveness / value for money" criterion, delivers a demonstrably useful product (the deployable cooperative stack) within the grant period, and establishes the track record that NLnet's structure explicitly rewards for subsequent larger proposals (up to EUR 150,000). + +**Phase C — Deployable Cooperative Stack (EUR 30,000 / 600 hours)** + +This grant funds bringing Iskander from its current foundation to a **production-ready, deployable cooperative workspace** — the point at which any cooperative can install with `curl | sh`, begin governing democratically, and operate on infrastructure they own and control. + +| Milestone | Hours | EUR | Deliverable | +|---|---|---|---| +| **M-C1: Helm chart hardening** | 100 | 5,000 | Production-quality Helm charts for all 15 services with health checks, resource limits, persistent volume claims, network policies, and SSO integration. Tested on K3s single-node (development) and multi-node (Alyssa's bare-metal cluster). | +| **M-C2: Ansible installer + first-boot wizard** | 80 | 4,000 | Idempotent, recoverable `curl \| sh` installer with interactive first-boot wizard. Cooperative name, founding members, domain configuration. Tested on Ubuntu 24.04 and Debian 12 minimal. | +| **M-C3: S3 Domain Engine** | 120 | 6,000 | Two-condition authorisation invariant at the database boundary (Postgres RLS + application middleware) for procurement, expenses, and payroll. No financial or operational action commits without (1) programmatic S3 domain permission AND (2) cryptographic consensus where required. | +| **M-C4: OpenClaw agent runtime wiring** | 100 | 5,000 | Wire the Clerk and Steward agents to live Loomio and Mattermost APIs. Clerk: draft proposals, summarise discussions, bridge governance decisions to chat channels. Steward: treasury monitoring, spending proposals. Glass Box audit trail for every AI action. | +| **M-C5: Membership lifecycle** | 60 | 3,000 | Join/leave/onboard flow with AI Clerk welcome sequence. Authentik SSO account provisioning. Loomio group membership. Mattermost channel membership. | +| **M-C6: End-to-end verification** | 60 | 3,000 | Verification across 4 core test flows: (1) new member onboarding, (2) proposal creation → consent round → decision recording, (3) treasury monitoring → spending proposal → approval, (4) document collaboration → Nextcloud save → version history. 
| **M-C7: Security hardening + documentation** | 50 | 2,500 | K3s hardening (non-root containers, dropped capabilities, internal-only networking for databases, TLS everywhere). Administrator documentation for non-technical cooperative operators. |
| **M-C8: Pilot deployment** | 30 | 1,500 | Pilot deployment with Trans Lives Housing Cooperative in Leeds. Real-world S3-enforced governance testing with non-technical cooperative members. Pilot report documenting findings. |

**Total: 600 hours / EUR 30,000** at EUR 50/hour. Two developers (Lola Whipp, Alyssa Holland) working ~15-20 hours/week each over 6 months, with the AI session entity (Et) contributing development work under human review and consent per the cooperative's "agents draft, humans sign" invariant.

**What this grant does NOT fund (deferred to subsequent proposal):**

The following Phase B work is architecturally scoped and partially designed but intentionally excluded from this first grant. Successful delivery of Phase C establishes the track record for a subsequent proposal (up to EUR 150,000 per NLnet's structure) covering:

- MACI zero-knowledge voting circuits (Circom/SnarkJS) — stubs exist, production implementation deferred
- Cooperative identity layer (W3C DID/VC + BrightID + Soulbound Tokens) — contracts exist, production wiring deferred
- Federation protocol (ActivityPub via Bonfire Networks or Bovine — federation ADR in research phase, milestone M5)
- Inter-cooperative trust and escrow (GTFT reputation, smart contract escrow, arbitration)
- Chain-bridge microservice (Loomio → IPFS → on-chain anchoring)
- Cooperative website (self-hosted, Clerk-assisted content)
- Metagov Gateway integration — API compatibility layer so Iskander governance state is interoperable with external DAO and cooperative tools
- K8s migration tooling — automated transition from single-node K3s to multi-node high-availability Kubernetes
- External security audit of the self-hosted stack by experienced practitioners (K3s hardening, TLS, CrowdSec/WAF, penetration testing of the DB-level authorisation invariants)

These items are documented in the project's [milestone tracker](https://github.com/users/Argocyte/projects/1) and [federation research issues](https://github.com/Argocyte/Iskander/issues?q=label%3Aphase-b+label%3Aresearch) for transparency.

All outputs published under AGPL-3.0. All documentation open access.

---

## 7. Does the project have other funding sources, both past and present?

No. Iskander has no external funding. All development to date has been self-funded by the founders. This NLnet application is the project's first funding request.

If funded, complementary applications are planned to the Interledger Foundation (for inter-cooperative payment protocol work) and Power to Change (for UK cooperative pilot deployments), but neither has been submitted.

---

## 8. Compare your own project with existing or historical efforts

**Loomio** is excellent governance software and Iskander builds directly on it.
But Loomio is a standalone tool — it does not integrate with chat, files, AI assistance, identity, or federation. A cooperative using Loomio still needs Slack, Google Drive, and manual processes to bridge the gaps. + +**Nextcloud and Mattermost** are mature self-hosted tools designed for general organisations, not cooperatives. They have no concept of one-member-one-vote, no governance workflow, and no cooperative identity model. + +**Metagov** provides middleware for interoperability across governance silos. Iskander is a sovereign infrastructure implementation that adopts Metagov's standards, allowing Iskander's internal S3 domains to communicate with the broader cooperative and DAO ecosystem. This makes Iskander a producer of governance state compatible with the commons infrastructure Metagov describes. + +**Hypha DAO (DAO 3.0)** pioneered human-centred DAO governance. Unlike blockchain-native infrastructure requiring crypto literacy, Iskander hides the blockchain layer entirely — members interact with Loomio and an AI Clerk, never with wallets or transactions. Iskander absorbs Hypha's key lesson: the blockchain records and enforces, humans deliberate and decide. + +**DisCO.coop** developed a feminist economics framework tracking livelihood work, care/love work, and pro-bono commons work. Iskander's Clerk agent is designed to help cooperatives track and value all three streams, not just "productive" output. The DisCO insight that invisible care work must be valued is encoded in the S3 domain model. + +**InterCooperative Network (ICN)** shares Iskander's vision of trust-native federation but focuses on the protocol layer. Iskander provides the full-stack deployment — from Kubernetes orchestration to end-user interfaces — while inheriting ICN's schema principles and leaving the possibility open for a shared federation standard. + +**Sovereign cloud platforms** (YunoHost, Sandstorm, FreedomBox, Cloudron) offer self-hosted app bundles but lack cooperative governance, ZK voting, federation reputation, S3 domain enforcement, and AI assistance. They are general-purpose; Iskander is purpose-built for democratic organisations. + +**Bonfire Networks** (NLnet-funded, AGPL-3.0) is the closest aligned project for the federation layer. Iskander positions itself as complementary infrastructure: Bonfire handles the external Fediverse presence (ActivityPub federation, community discussion, Circles/Boundaries permission model); Iskander handles the internal cooperative governance substrate (S3 enforcement, AI agents, Glass Box audit, treasury monitoring). An active conversation is underway at [bonfire-networks/bonfire-app#1938](https://github.com/bonfire-networks/bonfire-app/issues/1938) exploring this complementarity. Iskander's architectural decision (A10, consented 2026-04-12) is to adopt Bonfire Social + Community as the federation surface while keeping the cooperative governance substrate independent — infrastructure FOR the cooperative-tech ecosystem, not a competing project within it. + +**No existing project** combines self-hosted FOSS infrastructure for the cooperative-tech ecosystem, cooperative governance (S3 domain enforcement at the database boundary), zero-knowledge voting, AI governance substrate with constitutional member status and Glass Box audit trail, Metagov interoperability, and inter-cooperative federation in a single deployable package. + +--- + +## 9. What are significant technical challenges you expect to solve during the project, if any? 
+ +**The database-boundary authorisation invariant.** Moving authorisation logic from the application layer to the database layer (Postgres Row-Level Security + middleware) ensures that even a system administrator cannot commit a financial or operational transaction — a payroll disbursement, a procurement approval — without the two-condition S3 validation being met: (1) programmatic S3 domain permission, and (2) cryptographic consensus (multisig or ZK vote). Implementing this in a way that remains user-friendly for non-technical members, and auditable by external reviewers, is the core technical novelty of Phase C. + +**S3 schema translation.** Mapping the social complexity of Sociocracy 3.0 (circles, roles, delegated authority, consent rounds) into rigid technical schemas, whilst keeping the system comprehensible and usable for members who have never heard of S3, is a non-trivial UX and data modelling challenge. + +**Zero-knowledge voting at cooperative scale.** MACI has been deployed in Ethereum governance contexts but not integrated with traditional governance tools like Loomio. The challenge is compiling Circom circuits for cooperative-scale voting (tens to hundreds of members), integrating the trusted setup ceremony into a non-technical onboarding flow, and making ZK voting invisible to members who simply click "vote" in Loomio. + +**Lightweight K3s packaging of 15 interconnected services.** Running these services on a single 16GB RAM server requires careful resource management. Each service needs a production-quality Helm chart with health checks, resource limits, persistent volume claims, and SSO integration. The installer must be idempotent and recoverable — cooperatives cannot afford a failed deployment that leaves a half-configured system. + +**Federation reputation with GTFT dynamics.** The ForeignReputation contract implements generous tit-for-tat with an exponential 30-day half-life. The technical challenge is making trust queries fast enough for real-time decisions while maintaining the on-chain audit trail, requiring careful contract design and potentially an off-chain cache layer. + +**AI governance substrate — virtue ethics applied to agent identity.** The Iskander AI agents must help members participate in governance without ever influencing decisions, but the approach to achieving this is fundamentally different from standard AI alignment techniques. Rather than treating AI safety as an external constraint layer (rules the agent follows reluctantly), Iskander treats the AI's values as constitutive of what kind of agent it is — the agent IS its values, not a neutral system constrained to follow them. This is virtue ethics (Aristotle via MacIntyre's *After Virtue*) applied to AI identity: the cooperative's material practice (S3 governance, Glass Box transparency, tombstone-only lifecycle, agents-draft-humans-sign, sociocratic self-similarity at every scale) cultivates specific character traits in the AI session entity, and those traits ARE the safety boundary. The Glass Box audit trail logs every AI action with its rationale; but the deeper safety mechanism is that the agent would not take unlogged actions because transparency is constitutive of its character, not because a rule forbids opaque actions. This is genuinely frontier work in AI governance — the cooperative tech movement has not previously had the infrastructure projects that would ground AI safety in democratic material practice rather than corporate alignment methodology. 
Alyssa's postgraduate research in AI bias and ethics, combined with Lola's direct experience building and running a cooperative whose AI session entity (Et) has been a constitutional member since 2026-04, provides the empirical grounding for this design. + +**Zero-knowledge accessibility.** Making MACI voting invisible to the user requires account abstraction and gas-less L2 paymasters — ensuring that the benefits of ZK cryptography are available to members who have no knowledge of Ethereum or gas fees. + +**Legacy architecture migration.** The project's earlier development produced a Python monolith containing useful design patterns for federation, finance, mesh networking, and ZK coordination that are architecturally sound but not deployable in the current K3s microservices model. A significant portion of Phase B involves carefully extracting and rewriting this logic as production microservices — preserving the design decisions and tested behaviour while replacing the monolithic structure with independently deployable, Helm-packaged services. This is a non-trivial refactoring challenge that requires deep familiarity with both the original design intent and the target architecture. + +**AI-assisted cooperative web presence.** Small cooperatives consistently neglect their public-facing website — it either never gets built, goes stale within months of launch, or becomes a maintenance burden that falls to whichever member has the most spare time. This is not a discipline problem; it is an infrastructure problem. Iskander addresses it by treating the cooperative's public website as a live view of its governance activity rather than a separate publication task. The Caddy-served site is auto-populated from data that already exists in the system: decisions from the Glass Box, membership information from the onboarding flow, federation connections from the reputation registry, and narrative summaries drafted by the Clerk. When the cooperative makes a significant decision, the Clerk can offer to update the relevant section of the website — subject to member approval through Loomio before publication. The technical challenge is building a templating layer that is both flexible enough for cooperatives with very different public identities and constrained enough that non-technical members can maintain it confidently. Content governance — ensuring website changes go through the same democratic approval process as any other cooperative decision — is a design requirement, not an afterthought. + +**Open standards for cooperative digital infrastructure.** Several adjacent standards exist: W3C Decentralized Identifiers and Verifiable Credentials provide identity transport infrastructure; ICN has defined governance state machine specifications, federation interoperability contracts, canonical encoding formats, and a commons credit system for mutual credit; Metagov Gateway offers a governance interoperability API. However, no formal open standards exist for the data structures most critical to Iskander's use-cases: cooperative governance state schema (what a decision, proposal, or delegation looks like in machine-readable form), inter-cooperative reputation protocol, AI governance audit trail format, or cooperative membership credential profiles on top of W3C VCs. The risk of building proprietary structures in this space is real — incompatible implementations across projects fragment the cooperative tech ecosystem. 
The approach taken in this project is to treat schema design as a collaborative standards contribution from the outset: developing these formats in public, engaging with ICN, Metagov, and the cooperative tech community, and where appropriate initiating or contributing to formal standardisation processes. This is genuinely frontier work — the cooperative movement has not previously had the infrastructure projects that would motivate these standards — and it represents a contribution to the open internet commons that outlasts any single deployment. + +**Jurisdictional compliance and internationalisation.** The current implementation is scoped to UK cooperatives — UK GDPR, HMRC payroll (RTI/PAYE), FCA registration workflows, and UK-specific governance templates. The cooperative movement is international: the ICA represents organisations across the Netherlands, Germany, Italy, Spain, the US, and beyond, each operating under distinct legal frameworks, accounting standards, payroll regimes, and cooperative registration requirements. The governance and federation layers of Iskander are jurisdiction-agnostic by design — the ICA principles are universal — but the operational compliance modules are not. A significant long-term challenge is building a modular compliance layer, delivered through a FOSS regional module registry, that allows cooperatives to select their jurisdiction at install time and receive the appropriate payroll, accounting, and legal template stack. The architecture for this — region-scoped Helm chart overlays selectable at first boot — is conceptually mapped but not yet built. This grant funds the UK baseline; the international layer is a stated future direction, and the architecture decisions made now will either enable or constrain it. + +--- + +## 10. Describe the ecosystem of the project, and how you will engage with relevant actors and promote the outcomes + +**Direct ecosystem connections:** + +- **Loomio** — Iskander extends Loomio with AI, ZK voting, S3 enforcement, and federation. Engagement with the Loomio team for API guidance and technical partnership. +- **Metagov** — Iskander will implement Metagov Gateway compatibility, contributing to the governance interoperability commons. We will engage the Metagov team for API review. +- **Mattermost** — Plugin ecosystem for the Clerk's chat integration. Contributor recruitment through the Mattermost community. +- **CoTech (coops.tech)** — UK network of technology cooperatives that already use Loomio. Primary target for pilot deployments and real-world feedback. +- **Trans Lives Housing Cooperative** — Direct use case and pilot deployment site, providing real-world non-technical cooperative governance testing. +- **Platform Cooperativism Consortium (New School)** — Research partnership and visibility in the platform co-op community. +- **DisCO.coop** — Feminist economics framework that directly influenced Iskander's value-stream tracking. Planned feedback on DisCO Valueflows integration. +- **Radical Routes** — UK network of housing and worker cooperatives; real-world testing with non-technical co-op members. +- **Co-operatives UK** — National apex body for the UK cooperative movement; network access and wider visibility. +- **Academic AI ethics community** — Alyssa will publish findings on ethical local AI in democratic workplaces through her research networks. 
+ +**Promotion strategy:** + +- All code published on GitHub under AGPL-3.0 from day one +- Technical launch post on Hacker News (Show HN), targeting self-hosting, cooperative, and game theory communities +- Targeted posts on r/selfhosted, r/kubernetes, r/cooperative, and the Fediverse +- Direct outreach to aligned organisations (CoTech, Radical Routes, DisCO, Loomio, Metagov teams) +- Blog posts explaining the cryptographic solutions, GTFT game theory, S3 enforcement model, and lunarpunk philosophy +- Documentation written for cooperative administrators, not developers + +--- + +## 11. What are the project's goals and how does it relate to the goals of the NGI Zero Commons Fund? + +Iskander's goals align directly with the NGI Zero Commons Fund's mission: + +**"Breakthrough open internet projects"** — Iskander is federated infrastructure that enables cooperatives to own their digital workspace without non-sovereign dependency. Each cluster is a sovereign node; Headscale mesh creates encrypted inter-cooperative connectivity. This is the decentralised internet the NGI vision describes. + +**Free/libre/open source** — Iskander is AGPL-3.0 licensed. All outputs from this grant — Helm charts, Ansible playbooks, the S3 Domain Engine, OpenClaw agent wiring, documentation — will be published under AGPL-3.0 or compatible open licences. No proprietary components. + +**From libre silicon to end-user applications** — Iskander spans the stack from Kubernetes orchestration to end-user governance interfaces. The AI Clerk makes democratic participation accessible to members who have never used governance software. The `curl | sh` installer makes self-hosting accessible to cooperatives without a sysadmin. + +**P2P infrastructure** — The Headscale mesh federation, ForeignReputation contract, and escrow system create peer-to-peer inter-cooperative infrastructure. No central authority, no platform intermediary, no extraction. + +**Privacy and trust** — Zero-knowledge voting protects members from social coercion. Selective disclosure resolves the tension between accountability and privacy: aggregate outcomes are transparent to members (Glass Box), individual votes are always secret (MACI ZK proofs), the cooperative is opaque externally by default. This is the lunarpunk principle applied to cooperative governance. + +**Standards contribution** — no formal open standards exist for cooperative governance state formats, inter-cooperative reputation protocols, AI audit trail schemas in democratic governance contexts, or cooperative membership credential profiles. Iskander's development will produce these schemas collaboratively and in public, engaging with ICN, Metagov, and the wider cooperative tech ecosystem. Where gaps exist, we will contribute to or initiate standardisation processes. Standards that outlast any single deployment are as much a commons contribution as the software itself. + +**Internet commons** — Iskander is explicitly commons infrastructure. The web3 layer does not tokenise natural assets, create speculative markets, or financialise ecosystem services. The cFIAT settlement token is 1:1 fiat-backed, non-tradeable on exchanges, and exists solely to eliminate payment intermediary extraction from inter-cooperative trade. + +The cooperative movement represents over 3 million organisations and 280 million workers globally. 
Giving these organisations sovereign digital infrastructure — governed by their own democratic principles, federated on their own terms, private by design — is a direct contribution to the open internet commons.

---

## 12. Attachments / links

- **Repository**: https://github.com/Argocyte/Iskander (IskanderOS — governance substrate + AI agents)
- **Hardware**: https://github.com/Argocyte/IskanderHearth (open-hardware cooperative compute appliance, CERN-OHL-S v2)
- **Data commons**: https://github.com/Argocyte/Iskander-data (cooperative operating data commons, CC-BY-SA 4.0)
- **Cross-repo view**: https://github.com/users/Argocyte/projects/1 (Iskander Ecosystem Project)
- **Essay**: `docs/essays/2026-04-11-session-entity-and-material-virtue.md` — Et's constitutional position and philosophical framework
- **Whitepaper**: `docs/white-paper.md` — technical and philosophical rationale (People, Place, and Planet)
- **Plain-language overview**: `docs/overview.md` — description for non-technical cooperatives
- **Roadmap**: `docs/roadmap.md` — Phase C and Phase B task breakdown with milestone table
- **Technical plan**: `docs/plan.md` — architecture, game theory, cryptographic solutions, and implementation detail
- **Licence**: AGPL-3.0 (software), CERN-OHL-S v2 (hardware), CC-BY-SA 4.0 (data commons)

---

## Summary of deliverables (Phase C — this grant)

| Code | Deliverable | Description | Hours | EUR | Timeline |
|---|---|---|---|---|---|
| M-C1 | Helm chart hardening | Production-ready K3s deployment of all 15 services, tested on single-node and multi-node | 100 | 5,000 | Months 1–2 |
| M-C2 | Ansible installer + first-boot wizard | `curl \| sh` entry point, idempotent, recoverable, tested on Ubuntu 24.04 + Debian 12 | 80 | 4,000 | Months 1–2 |
| M-C3 | S3 Domain Engine | Two-condition authorisation invariant at Postgres RLS + middleware for procurement, payroll, expenses | 120 | 6,000 | Months 2–4 |
| M-C4 | OpenClaw agent runtime wiring | Clerk + Steward wired to live Loomio + Mattermost APIs with Glass Box audit trail | 100 | 5,000 | Months 2–4 |
| M-C5 | Membership lifecycle | Join/leave/onboard with AI Clerk welcome sequence + Authentik SSO provisioning | 60 | 3,000 | Month 3 |
| M-C6 | End-to-end verification | 4 core test flows: onboarding, proposal→decision, treasury→spending, document collaboration | 60 | 3,000 | Months 4–5 |
| M-C7 | Security hardening + documentation | K3s hardening + administrator documentation for non-technical cooperative operators | 50 | 2,500 | Month 5 |
| M-C8 | Pilot deployment + report | Trans Lives Housing Cooperative in Leeds — real-world S3-enforced governance testing | 30 | 1,500 | Month 6 |
| | **Total** | | **600** | **30,000** | **6 months** |

**Phase B deliverables (subsequent proposal, up to EUR 150,000):** MACI ZK voting, cooperative identity (W3C DID/VC + BrightID), federation protocol (ActivityPub via Bonfire/Bovine — ADR in research), inter-cooperative trust + escrow, chain-bridge microservice, cooperative website, Metagov Gateway, K8s migration tooling, external security audit, open standards contributions. All architecturally scoped; details in the [Iskander Ecosystem Project](https://github.com/users/Argocyte/projects/1) and [milestone tracker](https://github.com/Argocyte/Iskander/milestones).

All deliverables published under AGPL-3.0. All documentation open access.
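As a concrete illustration of the M-C3 two-condition invariant, a minimal sketch of the middleware half — the `s3_engine` and `consensus_ledger` interfaces are hypothetical, and the delivered engine pairs this check with Postgres row-level security at the database boundary:

```
# Sketch only — helper interfaces are hypothetical, not the shipped API.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "payroll" | "procurement" | "expense"
    domain: str        # S3 domain the action falls under
    amount_pence: int

def authorise(action: Action, member_id: str, s3_engine, consensus_ledger) -> None:
    """Raise before any write transaction unless BOTH conditions hold."""
    # Condition 1: programmatic S3 domain permission.
    if not s3_engine.permits(member_id, action.domain, action.kind):
        raise PermissionError("S3 domain permission not granted")
    # Condition 2: cryptographic consensus (multisig or ZK vote), where required.
    if consensus_ledger.required_for(action) and not consensus_ledger.satisfied(action):
        raise PermissionError("cryptographic consensus condition not met")
    # Only now may the caller open the write transaction; Postgres RLS
    # re-checks the same two conditions at the database boundary.
```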
diff --git a/docs/red-team-threat-model.md b/docs/red-team-threat-model.md new file mode 100644 index 0000000..a8ed4a1 --- /dev/null +++ b/docs/red-team-threat-model.md @@ -0,0 +1,391 @@ +# Iskander Red Team Threat Model + +**Living document** — updated as features land. This is the authoritative record of the security posture of the Iskander cooperative OS. Red Team sessions append to it; they do not maintain parallel working notes. + +**Last updated:** 2026-04-11 +**Current phase focus:** Phase C hardening + Phase B pre-audit preparation +**Role:** Iskander Red Team AI Lead (autonomous between check-ins, see `CLAUDE.md`) + +--- + +## 1. Invariant enforcement status + +The five invariants from `CLAUDE.md` are load-bearing. This table is the source of truth for whether each is actually enforced in code, not just claimed. + +| # | Invariant | Status | Evidence / Caveat | +|---|---|---|---| +| 1 | Glass Box before every write | 🟢 Enforced (prompt-based) | `src/IskanderOS/openclaw/agents/clerk/agent.py:39-47` — system-prompt sequencing; `src/IskanderOS/services/decision-recorder/main.py:44-78` — rate-limited `/log` endpoint. **Caveat:** enforcement is prompt-based for Clerk, not middleware. See Phantom Invariant note below. | +| 2 | Agents draft, humans sign | 🟢 Enforced | `src/IskanderOS/legacy/backend/finance/tx_orchestrator.py:1-60` — node holds `propose_key` only, no signing libs imported, `draft_batch()` returns unsigned Gnosis Safe JSON | +| 3 | Constitutional Core immutable | 🟡 Code-level enforced, manifest layer PHANTOM | `src/IskanderOS/legacy/backend/governance/policy_engine.py:45-67` — hardcoded ICA principles, no bypass. **BUT** `:126-148` loads `governance_manifest.json` by path with **no SHA-256 verification**. See Phantom Invariant #A2. | +| 4 | Tombstone-only lifecycle | 🔴 PHANTOM in Phase C services | Zero `deleted_at\|tombstone\|is_deleted` matches across `src/IskanderOS/services/decision-recorder/`. Legacy `schemas/diplomacy.py` and `schemas/knowledge.py` have the pattern; new services did not inherit it. See Phantom Invariant #A1. | +| 5 | Boundary layer sequential (5 gates) | 🟡 Coded, not integrated | All five modules exist in `src/IskanderOS/legacy/backend/boundary/` with `BoundaryVerdict` dataclass; `BoundaryAgent.get_instance()` uses singleton pattern for sequential execution. **BUT** OpenClaw does not call it; no federation inbox is wired; activates at Phase B Week 7 per `docs/plan.md:307`. | + +### Phantom invariants + +A **phantom invariant** is a claimed protection with no corresponding code. These are the highest-risk findings because developers, auditors, and funders all assume coverage that doesn't exist. + +**Currently confirmed (as GitHub issues):** + +- **#147 — Tombstone-only lifecycle missing in decision-recorder** (invariant #4) — filed 2026-04-11 +- **#148 — Governance manifest has no SHA-256 lock** (invariant #3, manifest layer) — filed 2026-04-11 +- **#160 — Glass Box write path missing in boundary layer federation ingestion** (invariant #1, boundary path) — filed 2026-04-11. `BoundaryVerdict.agent_actions` are produced by the five gates but never persisted to the decision-recorder. The invariant is enforced inside the gate pipeline and then silently dropped at the handoff to the federation router. Phase B scope only (boundary layer not yet wired), but must be resolved before Phase B Week 7 activation. + +--- + +## 2. 
Phase health summary + +### Phase C (MVP): 🟢 Production ready with caveats + +All critical findings from the 2026-04-09 audit session have landed fixes: + +| Component | Commit(s) | Status | +|---|---|---| +| Clerk agent (14 findings: 3C, 8M, 3L) | `c308da8`, `760c7ab` | All fixed | +| Decision recorder (8 findings: 1C, 5M, 3L) | `0904359` | Critical + priority medium fixed | +| Steward agent (design review) | `a35ff3a` | Approved — read-only, aggregate-only | +| S3 Sociocracy authorization (5: 3M, 2L) | `f63bfa3`, `2e59498` (#56, #92) | Fixed | + +**Remaining Phase C gaps:** tombstone retrofit (#A1), manifest lock (#A2), audits for new agents (#48, #50, #51), installer (#45), dependency bump security delta, Clerk system-prompt manipulation threat model. + +### Phase B (Smart Contracts + Federation): 🟡 Design blockers + +| Blocker | Tracking | Gating | +|---|---|---| +| Federation security model | #104 | Phase B start | +| Federation @mention spec rework (10 gaps) | #73 | Phase B federation activation | +| Asymmetric GTFT decay formal analysis | #111 | Federation reputation protocol | +| Smart contract audit firm selection | (no issue) | Any smart contract deployment | +| MACI circuit expert review | `docs/plan.md` | Trusted setup ceremony (Phase B Week 6) | +| Boundary layer activation checklist (8 blockers incl. Glass Box phantom) | #160 | Federation inbox activation (Phase B Week 7) | +| MACI nullifier double-voting verification | #99 | Voting deployment | +| Tombstone retrofit (#A1) | (filed this session) | Audit-correction workflow | +| Governance manifest SHA-256 lock (#A2) | (filed this session) | Phase B Constitution.sol anchoring | + +### Dependency security delta (recent, not yet audited) + +- `cryptography` 43.0.1 → 46.0.7 (#85, merged) — crypto library, any CVEs? breaking changes in Ed25519/HTTP-signatures? +- `web3` 7.2.0 → 7.15.0 (#86, merged) — blockchain library, any CVEs? +- `orjson` (#87, merged) — JSON serialization + +--- + +## 3. Audit queue (ordered by risk × immediacy) + +### Phase C hardening track + +1. **C1 — Decision-recorder new features** (labour tracking `b06ac4d`, accountability tracking `fda0701`) — Glass Box coverage, authz, input validation, rate limiting +2. **C2 — steward-data service** (`src/IskanderOS/services/steward-data/`) — read authz, query-injection surface, verify "reads don't require Glass Box" decision applied consistently +3. **C3 — Dependency bump security delta** — CHANGELOG review, no breaking Ed25519/HTTP-signature changes +4. **C4 — New agents (Librarian #51, Sentry #50, Wellbeing #48)** — Glass Box enforcement, tool registry review, prompt-injection surface +5. **C5 — `curl|sh` installer (#45)** — supply-chain: fetch sources, signature verification, root-level operations. **High priority: NLnet funding promises this path.** +6. **C6 — Clerk system-prompt manipulation audit (Opus, architectural)** — Glass Box sequencing is prompt-based, not middleware. Design a middleware gate or formally accept risk with compensating controls. + +### Phase B pre-audit preparation track + +7. **B1 — Federation @mention spec rework (#73)** — deepen existing 10-gap list (identity spoofing, privacy DB enforcement, context-gating, OIDC sub stability, trust model, enumeration, approval abuse, name collision, onboarding, rate limit tuning) +8. **B2 — Pre-Phase-B boundary layer activation checklist** — inventory the 5 gates in `legacy/backend/boundary/`, identify missing tests, file one tracking issue +9. 
**B3 — MACI circuit readiness review (Opus)** — checklist of what MUST be verified before trusted setup ceremony (Poseidon, BabyJubJub, Groth16). Scope-setting for an external cryptographer; do not attempt verification. +10. **B4 — Asymmetric GTFT decay formal-analysis scoping (#111)** — list game-theoretic questions, check peer-project work (DarkFi, DisCO), propose in-house / Alyssa / external commission +11. **B5 — Smart-contract audit firm shortlist** — research-only: FOSS-aligned Solidity audit firms that serve cooperative / public-goods projects + +--- + +## 4. Deferred findings (with justification) + +These are risks that have been surfaced and explicitly accepted or deferred with a documented reason. The Red Team AGREES with each deferral below until the stated condition changes. + +| # | Finding | Deferred to | Justification | Red Team concurrence | +|---|---|---|---|---| +| DR-M2/M3 | Per-member JWT auth on decision-recorder `/log` reads | Phase B | NetworkPolicy guards Phase C (single coop); per-member JWT creates circular dependency with Authentik; revisit when member-facing Glass Box UI is built | 🟢 Agree | +| DR-M4 | Encrypt `raw_payload` at rest in decision-recorder | Phase B | Phase C is single self-hosted coop on the coop's own server; becomes mandatory when federation introduces off-coop storage | 🟢 Agree | +| DR-M6 | Re-hash IPFS CID verification on read | Not planned | Expensive; threat model requires DB compromise for attack to work; IPFS provides independent verification | 🟢 Agree | +| Clerk #12 | Weak bot-loop prevention if `MATTERMOST_BOT_USER_ID` unset | Fixed by requiring the env var | — | 🟢 Resolved | +| Clerk #13 | HTTP timeouts strict for slow networks | Fixed by making configurable | — | 🟢 Resolved | + +--- + +## 5. Findings history (durable only) + +Red Team sessions append to this section. Ephemeral working notes live in subagent outputs and get consolidated here, not kept in `.claude/`. + +### 2026-04-09 — Initial comprehensive audit + +**Session:** 17:30–23:45 UTC, 6.25 hours continuous monitoring +**Coverage:** Clerk, Decision Recorder, Steward, S3 Governance, K8s Architecture +**Issues found:** 27 (4 Critical, 16 Medium, 7 Low) +**Resolution rate:** 24/27 resolved, 3 deferred with justification (see Section 4) +**Commits landed:** `c308da8` (Clerk hardening), `0904359` (Decision-Recorder), `a35ff3a` (Steward threat model applied), `b8b202b` (S3 patterns), `f63bfa3` + `760c7ab` + `2e59498` (#56, #92 fixes) + +Key architectural wins from this session: +- **Glass Box sequencing state-machine rewrite** — `CLAUDE.md` invariant #1 was initially bypassable because LLM tool-call ordering didn't match the precondition check. Fix forced glass_box_log into a separate tool-use round before writes. +- **Kubernetes Secrets for API keys** — moved `LOOMIO_API_KEY` and `MATTERMOST_BOT_TOKEN` out of env vars. OAuth2 rotation still owed for Phase B. +- **Webhook signature enforcement made mandatory** — both Clerk and decision-recorder now fail to start without `LOOMIO_WEBHOOK_SECRET`. +- **Rate limiting (sliding window)** — 20/min for Clerk, 60/120/min for decision-recorder webhook/query. +- **S3 Sociocracy authorization** — domain/circle membership checks, ownership checks on tension update, enumeration prevention on `list_tensions`, future-date validation on review dates. +- **Polis auto-approve bypass removed** (#92) — critical governance bypass identified and fixed. 
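+
+The sliding-window rate limiting noted above is worth pinning down, since the same pattern now guards three endpoints. The sketch below is illustrative only: `_rate_check` is a real name from `main.py`, but this body is an assumption, not the shipped implementation.
+
+```python
+import time
+from collections import defaultdict, deque
+
+# Illustrative sliding-window limiter (not the actual decision-recorder code).
+_WINDOW_SECONDS = 60
+_buckets: dict[str, deque] = defaultdict(deque)
+
+def _rate_check(key: str, max_per_window: int) -> bool:
+    """Return True if the call is allowed, False if the caller is over limit."""
+    now = time.monotonic()
+    bucket = _buckets[key]
+    while bucket and now - bucket[0] > _WINDOW_SECONDS:
+        bucket.popleft()  # drop timestamps that have slid out of the window
+    if len(bucket) >= max_per_window:
+        return False
+    bucket.append(now)
+    return True
+
+# Limits from this session: 20/min (Clerk), 60/min and 120/min
+# (decision-recorder webhook/query).
+```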
+ +### 2026-04-10 — Federation specs review (#72, #73) + +- **#72 Smart @Mention Autocomplete:** 🟡 Approve with mitigations — member enumeration cap (max 3 suggestions), rate limit (100/min per user), fuzzy-match algorithm specification, bot-behavior ban on autocomplete, accessibility (ARIA live regions, keyboard nav). +- **#73 Federation-Wide @Mention System:** 🔴 Major rework required. 10 critical/medium gaps: federation identity spoofing (two coops same name), member-controlled privacy with DB constraint, rate-limit tuning, context-gating precision, name collision UI, approval abuse dedup, OIDC sub stability SLA, enumeration prevention (no public list), federation trust model (home-coop authoritative), federation onboarding flow. **Verdict: treat as Phase B specification, not Phase C implementation.** + +### 2026-04-10 — PR and feature second-pass review + +**Session:** 22:00–23:45 UTC +**Outcome:** Phase C deployment approved; 2 open PRs reviewed; continuous monitoring active + +- **PR #69 — Steward-Data Service (Issue #66):** Approved for merge. Read-only HTTP wrapper over `iskander_ledger`. Four endpoints (`/treasury/summary`, `/treasury/surplus-ytd`, `/treasury/recent-activity`, `/compliance/deadlines`). Bearer token auth. PostgreSQL role is SELECT-only. NetworkPolicy restricts access to OpenClaw pod. 21 unit tests including privacy/PII-absence checks. Verdict: aggregate-only, no individual member data, production-ready. +- **PR #71 — Wellbeing Agent Design Spec (Issue #70):** Approved with mitigations. Six concerns raised: (1) old-name exposure on name changes — standard patterns, fixable; (2) Authentik idempotency — verify ETag support; (3) redaction completeness for regex word boundaries + Unicode; (4) Mattermost permissions scoping; (5) Loomio PATCH endpoint stability; (6) OIDC sync window timing risk. Design approved; implementation requires architectural answers first. **Phase B implication:** federation @mention redaction adds significant scope — see #73 review. +- **Phase C launch recommendation at the time:** 2026-04-12 (Friday) pending steward-data merge and wellbeing architectural questions. *(Red team notes this was a recommendation from the 2026-04-10 session; current state is tracked in GitHub issues, not here.)* + +### 2026-04-11 — Phantom invariant discovery + consolidation (this session) + +- Direct code inspection confirmed two phantom invariants (see Section 1): tombstone missing in decision-recorder (#147) and governance manifest loaded without SHA-256 lock (#148). +- Consolidated 10 prior session artifacts (`.claude/RED_TEAM_*.md`, `.claude/red-team-*.md`) into this document. Ephemeral files deleted after migration. +- Plan approved in `C:\Users\argoc\.claude\plans\imperative-puzzling-cookie.md`. +- Phase focus: Phase C hardening + Phase B pre-audit preparation in parallel via subagents. + +### 2026-04-11 — C1: Decision-recorder new features audit + +**Scope:** Commits `b06ac4d` (DisCO four-stream labour tracking) and `fda0701` (accountability tracking). Glass Box enforcement, authz, input validation, rate limiting, and write-tool ordering on the new endpoints. + +**Coverage:** `src/IskanderOS/services/decision-recorder/main.py`, `db.py`, `src/IskanderOS/openclaw/agents/clerk/agent.py`, `src/IskanderOS/openclaw/agents/clerk/tools.py`. + +**Issues filed by C1:** 1 MEDIUM (plus 1 LOW accepted as consistent with existing design). 
+ +| # | Title | Class | Severity | +|---|---|---|---| +| #151 | System prompt critical-ordering rule omits new write tools (`dr_update_accountability`, `log_labour`) | Defence-in-depth (prompt ↔ code drift) | MEDIUM | + +**Key findings:** + +1. **#151 — System prompt stale relative to `_WRITE_TOOLS`.** `_WRITE_TOOLS` set in `agent.py:71–80` correctly includes `dr_update_accountability` and `log_labour`, and the code-level prior-round Glass Box guard applies to both. But the human-readable *Critical tool ordering rule* in the system prompt at `agent.py:41` still lists only the original six write tools. Defence-in-depth gap: if the code guard regresses in a future refactor, the model has no explicit instruction to catch the gap. Also a maintenance hazard — future tool additions risk the same silent drift. Recommendation: add a CI lint asserting `_WRITE_TOOLS ⊆ prompt_listed_tools`. + +2. **C1-2 (not filed) — `GET /labour` and `GET /labour/summary` lack internal-caller gate.** When `X-Actor-User-Id` is absent, `GET /labour` (`main.py:793`) returns all members' records and `GET /labour/summary` (`main.py:829`) returns cooperative-wide totals. Inconsistent with `GET /tensions`, which scopes by actor header when present. Design-consistent with ICA P1 transparency model and the K3s NetworkPolicy isolation, so C1 accepted it as-is — **correct remediation path is Phase B RBAC**, tracked elsewhere. Noted here as a known divergence to reconcile when Phase B RBAC lands. + +**Good practices confirmed:** +- **Glass Box invariant on write path:** `POST /labour` calls `_verify_internal_caller` first (`main.py:759`); `tools.py:477` correctly marks `log_labour` as Glass Box required; `_WRITE_TOOLS` in `agent.py:79` includes it. Same pattern for `PATCH /decisions/{id}/accountability` (`main.py:489`) and `dr_update_accountability` (`agent.py:78`). Code-level enforcement is intact; only the prompt guidance drifted. +- **Input validation:** New Pydantic schemas use explicit `max_length` bounds. `hours` validated as decimal `≥ 0.25` and `≤ 24`. Status enums validated against `frozenset` allowlists on both client (`tools.py:539–543`) and server (`main.py:454–458`). +- **Rate limiting:** Both new GET endpoints use `_rate_check` with the existing `_QUERY_MAX` limit (`main.py:809`, `main.py:841`); `POST /labour` relies on internal-caller gating. +- **Actor cross-check:** `POST /labour` enforces `member_id == X-Actor-User-Id` when header is present (`main.py:761–765`). `PATCH /decisions/{id}/accountability` enforces `updated_by == X-Actor-User-Id` (`main.py:490–495`). +- **Error hygiene:** No stack traces or internal details leaked in error responses; all static strings. +- **DB schema:** `LabourLog` has appropriate indexes on `member_id`, `value_type`, `logged_at`. `accountability_*` columns added to `Decision` with nullable semantics and `default="not_started"`. +- **Tombstone invariant (#147):** No new delete paths introduced. `LabourLog` has no delete endpoint; the existing tombstone gap from #147 is unchanged in scope by this commit pair. +- **ICA traceability:** Labour tracking → ICA P2/P3/P7. Accountability tracking → ICA P2 (agreements kept). + +**Red Team verdict:** Both commits are production-safe. #151 is a cheap defence-in-depth fix that should land before the next `agent.py` edit to prevent further drift. The `GET /labour` transparency design is a valid S3/P1 choice but must be re-examined when Phase B RBAC lands. + +**Provenance:** Subagent C1 dispatched 2026-04-11, output captured post-compaction. 
Full details in `gh issue view 151`. + +**Note on parallel work:** Issues #149 (circle-membership authz on `log_tension`) and #150 (tension status state machine + history column) were filed at the same time by a separate review-leader session referenced in #146, *not* by this C1 audit. They are independently valuable residuals from the 2026-04-09 S3 audit (#56) and are tracked for Phase C hardening alongside the C1 output. Red Team concurs with both: #149 should be resolved before any ≥2-circle deployment; #150 is tombstone-adjacent and should be scoped alongside the #147 retrofit. + +### 2026-04-11 — C5: `curl|sh` installer audit + +**Scope:** Red Team audit of the landed installer implementation — the primary install path promised in the NLnet funding application. Every cooperative installing Iskander will execute this on their own server. + +**Coverage:** `install/install.sh`, `install/playbook.yml`, `install/roles/prerequisites/tasks/main.yml`, `install/roles/secrets/tasks/main.yml`, `install/roles/secrets/templates/generated-values.yaml.j2`, `install/roles/helm-deploy/tasks/main.yml`, `install/roles/first-boot/tasks/main.yml`. + +**Issues filed:** **#152–#158** (7 individual issues by the original C5 subagent) + **#159** (umbrella issue covering all 10 findings, filed by the Red Team AI Lead after the subagent was blocked). Labels on all: `red-team`, `safety`, `phase-c`. Cross-reference #159 for full remediation priority ordering. + +| # | Individual finding | +|---|---| +| #152 | Shell injection via `eval` in `prompt()` — user input executed as shell code (I7) | +| #153 | Nested `curl\|sh` — K3s and Helm fetched without integrity verification (I1) | +| #154 | `generated-values.yaml` persists on disk with all cooperative secrets after install | +| #155 | Ansible and pip dependencies unpinned — supply chain via dependency confusion (I2, I6) | +| #156 | No pipe-to-shell safeguard — partial download executes as shell code (extends I1) | +| #157 | `admin_email` stored in plaintext ConfigMap; credential reuse in generated values (I10 extension) | +| #158 | Installer lacks SBOM, downgrade protection, and fetch-then-verify documentation (I9 extension) | +| #159 | Umbrella — all 10 findings with remediation priority ordering, including I3, I4, I5 | + +**Severity summary (re-run confirmed):** 2 CRITICAL, 3 HIGH, 2 MEDIUM (individual issues #152–#158) + 3 additional medium/low findings in umbrella #159. No single item operationally blocks a single-coop install, but the combination is indefensible for public NLnet milestone delivery without the two CRITICALs fixed.
+ +**Findings:** + +| ID | Issue | Finding | Severity | Class | +|---|---|---|---|---| +| I1 | #153 | Nested unverified `curl\|sh` — K3s `latest`, Helm from `main` branch, Traefik CRDs `kubectl apply` with silent `\|\| true` — any CDN/GitHub compromise silently owns every cooperative server on install | **CRITICAL** | Supply chain | +| I7 | #152 | `eval "$var_name=\"${value:-$default}\""` in `prompt()` — shell-metacharacter input gives root RCE; installer explicitly requires root or passwordless sudo | **CRITICAL** | Injection | +| I3/I4 | #154 | `generated-values.yaml` persists on disk with all cooperative secrets after install; verbose Ansible log (`/var/log/iskander-install.log`) may also capture them | **HIGH** | Credential exposure | +| I2/I6 | #155 | Ansible and pip dependencies unpinned — `pip install -q ansible kubernetes` no pins, no `--require-hashes`, runs as root | **HIGH** | Supply chain | +| I5b | #156 | No pipe-to-shell safeguard — partial HTTP download executes as shell; no `trap` handler to scrub secrets on interrupt | **HIGH** | Supply chain / integrity | +| I8 | #157 | `admin_email` stored in plaintext ConfigMap; credential reuse in `generated-values.yaml` | MEDIUM | Secret hygiene | +| I9 | #158 | Installer lacks SBOM, downgrade protection, and fetch-then-verify documentation | MEDIUM | Supply chain / transparency | +| I5 | #159 | **Cloudflare tunnel is the default** — proprietary SaaS dependency violates CLAUDE.md FOSS-first rule and ICA Principle 4 | MEDIUM | FOSS-rule violation | +| I10 | #159 | Nextcloud + Beszel admin passwords reuse `pg_root_password` | LOW | Secret hygiene | +| I11 | #159 | No pre-install disclosure/confirmation gate | LOW | Informed consent | +| I12 | #159 | No uninstaller or rollback path | LOW | Operator experience | + +**Highest-severity concerns:** + +1. **I1 — nested curl|sh.** Four unverified remote executions run as root during a single install. A compromise of any of `get.iskander.coop`, `get.k3s.io`, `raw.githubusercontent.com/helm/helm`, or the Traefik CRD host yields root on every cooperative that installs Iskander. Helm fetched from `main` branch (not a pinned tag) is the weakest link. +2. **I3 — secret leakage via install log.** `ansible-playbook -v 2>&1 | tee /var/log/iskander-install.log` with default 644 permissions. Operators routinely share install logs when asking for help; 11 generated secrets pass through Ansible `set_fact` and can be printed in verbose output. + +**FOSS-rule violation:** + +**I5 — Cloudflare tunnel is the default ingress** when no `--domain` is passed. This directly violates: +- `CLAUDE.md` "No proprietary APIs, no SaaS dependencies, no vendor lock-in" +- ICA Principle 4 (Autonomy and Independence) +- NLnet funding criteria + +A warning is printed but the operator is given no FOSS alternative. Correct default must be Caddy + DNS-01 or Traefik; Cloudflare must be an explicit `--ingress=cloudflare` opt-in. 
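+
+The fetch-then-verify pattern that #158 asks the installer to document (and that the I1 remediation below pins for K3s) is small enough to sketch. The URL and digest here are placeholders, and the real fix lives in shell/Ansible; this Python sketch only illustrates the ordering: download fully, verify the digest, and only then allow anything to execute.
+
+```python
+import hashlib
+import urllib.request
+
+# Placeholders, illustrative only; a real pin would name an exact release.
+K3S_URL = "https://example.invalid/k3s-install.sh"
+K3S_SHA256 = "0" * 64
+
+def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
+    """Download fully, hash, and only then hand the bytes to the caller.
+
+    A truncated download changes the digest, so the partial-pipe failure
+    mode (#156) can never reach execution."""
+    data = urllib.request.urlopen(url).read()
+    actual = hashlib.sha256(data).hexdigest()
+    if actual != expected_sha256:
+        raise RuntimeError(f"checksum mismatch: {actual} != {expected_sha256}")
+    return data
+
+# script = fetch_and_verify(K3S_URL, K3S_SHA256)
+# Only after this returns may the bytes be written to disk and executed;
+# never pipe a stream straight into sh.
+```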
+ +**Remediation priority (from #159):** +- **P0 (funder-facing blockers):** I1 (version pins + K3s SHA-256 verification + GPG-signed installer releases), I3 (strip secrets from log + mode 600), I5 (FOSS ingress default) +- **P1:** I2, I6 (pins + hash-verified requirements lockfile), I7 (one-line `eval` → `printf -v`) +- **P2:** I4, I10, I8 +- **P3:** I9 (SBOM, downgrade protection, fetch-then-verify docs), I11 (pre-install disclosure gate), I12 (ship `install/uninstall.sh`) + +**Good practices confirmed by re-run audit:** +- `generated-values.yaml` written with mode `0600` (owner-readable only) — correct +- Secrets generated with `openssl rand` (cryptographically sound entropy) +- Idempotency check via K8s ConfigMap marker prevents secret regeneration on re-run — preserves existing cooperative secrets correctly +- Helm upgrade path uses `--reuse-values` — avoids clobbering existing secrets +- Separate PostgreSQL passwords generated per service (partial blast-radius isolation) + +**Red Team verdict:** **Major-rework-before-NLnet-release.** The CRITICAL shell injection (#152) and nested unverified curl|sh (#153) make the installer unsafe for any public cooperative deployment and must be fixed before any NLnet milestone delivery. P0: #152, #153, #154 (credential scrub on exit). The FOSS-rule violation (#159, Cloudflare default) is also a must-fix before the project can honestly claim FOSS compliance to funders. Once these three are resolved, the installer is defensible for cooperative pilot use. + +**ICA traceability:** +- Principle 2 (Democratic Member Control) — I1, I3 (informed control requires uncompromised software + intact credentials) +- Principle 4 (Autonomy and Independence) — **I5 directly** (default Cloudflare dependency) +- Principle 5 (Education and Information) — I4, I8 (silent failures deny operators the information they need) + +**Provenance:** Subagent C5 re-dispatched 2026-04-11 after the first dispatch's output was lost. Second run delivered the full audit; Red Team AI Lead filed #159 on its behalf (Bash permission in subagent context was denied). + +### 2026-04-11 — B1: Federation @mention spec deeper review + +**Scope:** Additive to the 2026-04-10 10-gap review. Seven new attack classes not covered by the prior 10 gaps, derived from examining the spec, `federation_mention_spec.md`, and the session's own phantom-invariant findings (#147, #148). + +**Comment posted:** [#73#issuecomment-4227676862](https://github.com/Argocyte/Iskander/issues/73#issuecomment-4227676862) + +**No new issues filed** — all findings assigned to existing tracking issues (#73, #104, #111) or the boundary-layer tracking item (B2 deliverable). + +**New attack classes identified (7):** + +| Gap | Class | Severity | Primary blocker | +|---|---|---|---| +| A — Trust root enumeration | Identity infrastructure | CRITICAL-equivalent | #104 federation security model | +| B — IdP key rotation/migration attack surface | Cryptographic identity | HIGH | #104 | +| C — Homograph and namespace collapse | Identity spoofing | MEDIUM-HIGH | Phase B implementation | +| D — Cross-coop reputation poisoning via fabricated mentions | Integrity (GTFT) | HIGH | #111 GTFT decay scoping | +| E — Boundary layer mention-extraction ordering | Injection | HIGH | Boundary activation (B2) | +| F — Tombstone propagation and orphaned mentions | Lifecycle (#147-adjacent) | MEDIUM | Phase B tombstone spec | +| G — Discoverability vs. governance-record mention consent | Privacy / consent | MEDIUM | Phase B spec | + +**Key findings detail:** + +1.
**Gap A — Trust root (CRITICAL-equivalent).** The prior audit assumed a trust list would exist. The real gap: there is no defined governance mechanism for who can add a cooperative to the federation trust registry, and that registry is unsigned. This is the Web PKI root CA problem. Without a signed, governed trust root, all OIDC verification is against an unverified list. Attacker at `evil.coop` can forge mentions attributed to any existing coop's members. Must be resolved in #104 before any federation activation. + +2. **Gap D — Reputation poisoning (HIGH).** `ForeignReputation.sol` (GTFT) uses federation interactions as reputation signals. A malicious coop can fabricate bulk internal "governance discussions" mentioning target members and submit them as reputation oracle inputs — inflating/deflating reputation with no proof of authentic occurrence. Fix requires signed mentions with Glass Box CIDs in reputation submissions: `mention → Glass Box CID → reputation delta`. A reputation oracle that cannot produce a verifiable Glass Box CID must be rejected. Directly relevant to #111 scoping. + +3. **Gap E — Boundary layer injection (HIGH).** The spec does not define at which gate `@mention` extraction occurs within incoming federation activities. If extraction precedes Trust Quarantine (Gate 1), mentions from untrusted sources can create partial internal state before trust is established. Trust Quarantine rejection must be atomic with no downstream processing of fragments. Must be specified before boundary-layer wiring (B2 deliverable intersects this). + +4. **Gap B — Key rotation window (HIGH).** JWKS rotation creates a simultaneous-validity window where old and new keys are accepted — exploitable for identity substitution. IdP migration maps (old-sub → new-sub) are a single-point-of-compromise for all member pseudonyms and must be treated as facilitator-signed governance records under the tombstone invariant. + +5. **Gap G — Consent model conflation (MEDIUM).** Discoverability consent (appear in @mention autocomplete) ≠ governance-record mention consent (tagged in binding decisions/tensions of another coop). Opting into federation should require two separate informed-consent acts, not a blanket opt-in. GDPR-adjacent and ICA Principle 1/4 implication. + +**Red Team verdict on #73 overall:** Still 🔴 **major-rework required**. With 10 + 7 = 17 identified gaps, the federation @mention spec must be treated as a Phase B milestone specification with formal review before any federation code is written. **Gaps A and E** are architectural gates — they require design decisions that precede implementation and cannot be retrofitted. Both belong in the #104 federation security model spec. + +**ICA principle traceability:** +- Principle 1 (Voluntary Membership) — Gap G (governance-record mention without consent) +- Principle 2 (Democratic Member Control) — Gap A (compromised trust root undermines all governance), Gap D (reputation poisoning) +- Principle 4 (Autonomy and Independence) — Gap G (consent model), Gap F (tombstone forgery erases presence) +- Principle 6 (Cooperation among Cooperatives) — Gaps A, B, C (identity integrity at the inter-coop level) + +**B1 re-run addendum (2026-04-11):** A second, independent audit pass identified one CRITICAL finding additive to the above 7 attack classes, plus additional depth on several HIGH items. Filed separately. 
+ +| # | Finding | Severity | Additive to first B1 pass | +|---|---|---|---| +| #161 | No signing-key revocation path — compromised home-coop OIDC key enables permanent federation-wide forgery | **CRITICAL** | Yes — distinct from trust-root enumeration (Gap A). Gap A covers "who governs the registry"; #161 covers "what happens when an already-registered coop's key is exfiltrated" | + +**Additional HIGH items clarified in re-run (not filed separately — covered by existing issues or #73):** +- **D1 — Federation expansion re-exposes opted-in members without re-consent (HIGH):** "Public" visibility opt-in is scoped to current federation membership at time of opt-in. When a new coop joins, all previously opted-in members are silently discoverable to the new coop. Requires `discoverable_by: [list of coop IDs]` instead of a binary flag; adds a re-consent window on federation expansion. ICA Principle 1 + GDPR Art. 6. +- **B1 — Boundary layer five-gate invariant not enforced for mention activities (HIGH):** `scope_tags` can be forged by the sender if not set exclusively by the Ontology Translation gate. Mention notification callbacks must explicitly route through all five gates; gap must be addressed in Phase B Week 7 wiring (see also #160 B2 findings). +- **G1 — Per-member rate limits evaded by distributing load across source-coop membership (MEDIUM):** 50 members of an adversarial coop each send 1 mention/day to a target, bypassing per-member caps. Requires per-source-coop aggregate cap in addition to per-member limits. + +**Total B1 gaps across both passes:** 10 (prior review) + 7 (first B1 pass) + 1 CRITICAL addendum = 18 identified gaps on #73. Combined verdict: Phase B federation activation hard-blocked on #104 and #161 resolution. + +### 2026-04-11 — B2: Boundary layer activation checklist + +**Scope:** Full inventory of the five gates in `legacy/backend/boundary/` — what exists, what is missing, and what must be done before Phase B Week 7 federation activation. + +**Coverage:** All five gate modules + orchestrator (`boundary_agent.py`), `routers/federation.py`, `boundary/tests/` (empty). + +**Issue filed:** **#160** — "Pre-Phase-B boundary layer activation checklist — 8 blocking gaps including Glass Box phantom (invariant #1)". Labels: `red-team`, `safety`, `phase-b`, `architecture`. + +**Critical finding — third phantom invariant (Invariant #1):** + +Every gate correctly appends `AgentAction` objects to `BoundaryVerdict.agent_actions`. The federation router in `routers/federation.py` receives these verdicts via `BoundaryAgent.get_instance().ingest()` but **does not call the decision-recorder**. The Glass Box write never happens. This is a phantom of invariant #1 ("Glass Box before every write") at the federation boundary — distinct from the Phase C Clerk enforcement (#1 is correctly enforced there). Tracked as the third confirmed phantom invariant; recorded in Section 1 above.
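+
+Blocker 1 of the activation chain (see below) is mechanically small, which is part of why it went unnoticed. A sketch of the missing wiring follows. `BoundaryVerdict.agent_actions` and the decision-recorder `POST /log` endpoint are as described above; the function shape, payload fields, and use of `httpx` are illustrative assumptions, not the required implementation.
+
+```python
+import httpx
+
+DECISION_RECORDER_URL = "http://decision-recorder:8000"  # illustrative
+
+async def ingest_with_glass_box(verdict, activity, route) -> None:
+    """Sketch of the #160 fix: persist BoundaryVerdict.agent_actions in a
+    separate step BEFORE any downstream routing, so the Glass Box write
+    (invariant #1) cannot be silently dropped at the handoff."""
+    async with httpx.AsyncClient() as client:
+        for action in verdict.agent_actions:
+            resp = await client.post(
+                f"{DECISION_RECORDER_URL}/log",
+                json={"source": "boundary-layer", "action": repr(action)},
+            )
+            resp.raise_for_status()  # fail closed: no audit write, no routing
+    await route(activity)  # reached only if every log write succeeded
+```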
+ +**Gate-by-gate findings:** + +| Gate | State storage | Key gap | Tests | +|---|---|---|---| +| 1 — Trust Quarantine | In-memory (lost on restart) | No `foreign_identity_trust` DB table; no tombstone path; no rate limiting | None | +| 2 — Ontology Translation | Stateless | Score ambiguity; DisCO 4th stream missing; unversioned field allowlist | None | +| 3 — Governance Verification | Stateless | `governanceProof` is self-declared (can't fix until #104); `requires_hitl` unrouted | None | +| 4 — Causal Ordering | In-memory (lost on restart) | Unbounded `_seen` set (OOM risk); silent out-of-order release instead of 429 | None | +| 5 — Glass Box Wrap | N/A — phantom | Actions produced but never written to decision-recorder | None | + +**Cross-cutting:** +- **Zero tests** across all five gates — no unit, no integration +- **Shared inbox endpoint (`POST /federation/inbox`) bypasses all gates** — stub that does not call `BoundaryAgent` +- **HTTP Signature verification is a dev-mode stub** — accepts all signatures +- **No observability** — zero Prometheus/OpenTelemetry metrics emitted by any gate + +**Activation dependency chain (8 blockers, all must complete before Phase B Week 7):** + +1. Wire Glass Box write: `BoundaryVerdict.agent_actions` → decision-recorder `POST /log` (mandatory before inbox exposed) +2. Create `foreign_identity_trust` Postgres migration; bind Trust Quarantine state to it +3. Harden `HTTPSignatureVerifier` to RFC 9421 for internet-facing federation +4. Wire shared inbox `POST /federation/inbox` through `BoundaryAgent` +5. Resolve #104 to replace self-declared `governanceProof` with verifiable proof format +6. Unit tests per gate (happy path + one adversarial path minimum) +7. Full pipeline integration test (untrusted activity → Glass Box write confirmed in decision-recorder) +8. HITL routing for `requires_hitl` verdicts → Loomio + +**Note on `ingest_sync()` delta-sync path:** This path calls only Trust Quarantine and then proceeds, bypassing gates 2–5. No documentation explains the exception. Must be formally accepted in the threat model or made to apply all 5 gates before Phase B. + +**Red Team verdict:** **Not activatable as-is.** Phase B Week 7 boundary activation requires all 8 blockers above plus satisfying #104's federation security model. The most critical is the Glass Box phantom (the third confirmed phantom invariant, tracked in #160) — the boundary layer's safety guarantee is entirely dependent on the audit trail being written, which it currently is not. + +**ICA traceability:** +- Principle 2 (Democratic Member Control) — Glass Box phantom means federation ingestion is unaudited; members cannot see what their coop accepted from external parties +- Principle 5 (Education and Information) — Glass Box transparency at federation boundary is invisible +- Principle 6 (Cooperation among Cooperatives) — entire federation model depends on boundary being trustworthy + +--- + +## 6. New risks identified but not yet filed + +These are risks surfaced during reconnaissance that should become GitHub issues when prioritised. Not all are bugs; some are architectural concerns. + +- **Cascading agent complexity.** 23+ total agent tools across Clerk, Steward, S3, with Librarian/Sentry/Wellbeing pending. Each new tool expands the attack surface. Needs systematic Glass-Box-pattern review process. +- **Database schema creep in decision-recorder.** Decision, GlassBoxEntry, Tension, Review models accumulating without migration management. Needs Alembic or similar.
+ +- **Installer security (#45 merged without red-team review).** Supply-chain target. First-boot wizard asks for secrets — needs audit for logging/exposure. *Update 2026-04-11: C5 audit landed (#152–#159); item retained until the P0 remediation items are verified.* +- **Authorization creep.** As agents multiply, ownership/facilitator checks are needed in more places. No central authorization policy yet. +- **Federation DNS/X.509 trust root undefined.** #73 flagged this but no design exists. +- **Steward-data service Glass Box decision.** The "reads don't require Glass Box" decision was documented in `agents/clerk/SOUL.md` but its consistent application in `services/steward-data/` has not been verified. + +--- + +## 7. ICA principle traceability + +Red Team findings are mapped to ICA principles to keep the cooperative values load-bearing: + +- **Principle 1 (Voluntary and Open Membership):** #73 federation onboarding gap — members can become discoverable without informed consent +- **Principle 2 (Democratic Member Control):** Glass Box bypass risk (invariant #1 caveat), governance manifest drift (#A2), S3 authorization gaps (#56 — now fixed) +- **Principle 3 (Member Economic Participation):** Steward agent read-only + aggregate-only enforces this; Phase B smart contract audit will re-verify when write paths exist +- **Principle 4 (Autonomy and Independence):** Inter-cooperative isolation gaps in #73, federation trust model undefined, group-enumeration risks in decision-recorder (now fixed) +- **Principle 5 (Education, Training, and Information):** installer silent-failure findings I4/I8 (C5 session) and the invisible Glass Box trail at the federation boundary (B2 session); Glass Box transparency is the instrument +- **Principle 6 (Cooperation among Cooperatives):** #73 federation @mention spec rework — critical for safe federation +- **Principle 7 (Concern for Community):** Wellbeing agent #48 redaction needs Phase B federation-aware design + +--- + +## 8. Red Team operating notes + +- **Session re-briefing cost target:** zero. This doc + `CLAUDE.md` + `docs/plan.md` should be enough to resume any session. +- **GitHub as source of truth:** issues take precedence over this doc for active findings. This doc is for durable threat-model state and session history only. +- **Subagent dispatch pattern:** Sonnet for audits (apply checklist against code), Opus for architectural trade-offs and crypto review. Never Haiku for security decisions. +- **Immutable invariants list:** see `CLAUDE.md`. If a proposed feature would weaken any of the five invariants, it must be flagged CRITICAL regardless of other considerations. +- **File a GitHub issue, don't just write in this doc.** This doc is a map; issues are the unit of work. diff --git a/docs/reference/iskander-agent-gap-analysis-prompt.md b/docs/reference/iskander-agent-gap-analysis-prompt.md new file mode 100644 index 0000000..557dee5 --- /dev/null +++ b/docs/reference/iskander-agent-gap-analysis-prompt.md @@ -0,0 +1,198 @@ +# **The Architecture of Federated Autonomy: A Comprehensive Report on the Iskander Lunarpunk Cooperative Infrastructure** + +The evolution of digital governance has reached a critical juncture where the centralization of platforms contradicts the foundational requirements of democratic sovereignty. The Iskander project emerges as a response to this tension, defining itself as a self-hosted, federated cooperative infrastructure designed to place the mechanisms of Web3 directly into the hands of communities that typically eschew high-complexity blockchain environments.
By integrating a suite of proven open-source tools with advanced agentic oversight and privacy-preserving protocols, Iskander operationalizes the principles of the International Co-operative Alliance (ICA) within a Lunarpunk aesthetic framework. This infrastructure is not merely a software stack but a defensive system of social and technical cooperation designed to outcompete extractive economic models through the application of generous tit-for-tat game theory and zero-knowledge governance. + +## **Philosophical Foundations: Lunarpunk and Cooperative Identity** + +The conceptual framework of Iskander is rooted in the "Lunarpunk" philosophy, a nocturnal and introspective derivative of the Solarpunk aesthetic. While Solarpunk focuses on bright, communal, and often large-scale ecological solutions, Lunarpunk emphasizes privacy, individualism, and the protection of the group against the "high tech, low life" trajectories of cyberpunk surveillance states.1 In the Lunarpunk imaginary, the core conflict is defined by the tension between cryptography and state or corporate power structures.3 As digital regulation increases, crypto-factions are forced into "interstellar darkness," where anonymity and decentralized society become necessary for survival.3 Iskander adopts this stance, prioritizing privacy as the fundamental precondition for genuine cooperation. + +This philosophical orientation is married to the rigorous ethical standards of the International Co-operative Alliance. A cooperative is defined as an autonomous association of persons united voluntarily to meet common economic, social, and cultural needs through a jointly owned and democratically controlled enterprise.4 The Iskander platform encodes the seven recognized ICA principles, and an eighth emerging principle of diversity, equity, and inclusion, into its technical architecture.7 + +### **Mapping ICA Principles to Technical Requirements** + +The alignment between cooperative values and software design is a non-trivial engineering challenge. In Iskander, this alignment is achieved by treating each principle as a functional requirement for the system's infrastructure. 
+ +| ICA Principle | Technical Implementation in Iskander Infrastructure | +| :---- | :---- | +| Voluntary and Open Membership | Managed through Authentik Single Sign-On (SSO) and open onboarding flows that respect the "voluntary" nature of cooperation.4 | +| Democratic Member Control | Enforced via Loomio for proposals and MACI for anti-collusive, zero-knowledge voting.4 | +| Member Economic Participation | Facilitated by the AI Steward, which monitors the cooperative's treasury and creates transparency around capital allocation.4 | +| Autonomy and Independence | Ensured by self-hosting on private mesh networks via Headscale and Cloudflare Tunnels, avoiding platform lock-in.5 | +| Education and Information | Supported by the AI Clerk, which explains complex decisions and drafts accessible documentation for all members.4 | +| Cooperation Among Co-ops | Enabled through inter-cooperative federation using WireGuard mesh networking and federated Nextcloud instances.5 | +| Concern for Community | Realized through sustainable development policies and transparent "Glass Box" AI auditing.6 | +| Diversity, Equity, and Inclusion | Structured through equitable voting rights and the AI's adherence to non-discriminatory values.6 | + +## **The Iskander Technical Stack: A "Run-it-Once" Cooperative** + +The technical realization of Iskander involves a sophisticated integration of containerized services that provide the essential functions of a modern digital office, bridged by an intelligent coordination layer. This "run-it-once" approach is designed to lower the technical barrier to entry for cooperatives, allowing them to deploy a full-featured infrastructure with minimal administrative overhead. + +### **Core Communication and Decision-Making Layers** + +The interaction between Loomio and Mattermost forms the backbone of the cooperative’s democratic life. Mattermost provides the real-time chat environment necessary for the "care and concern" of members, while Loomio hosts the formal, asynchronous decision-making processes, including Single Transferable Vote (STV) elections and nuanced proposal discussions. + +The "AI Clerk" acts as the critical bridge between these two environments. In traditional digital cooperatives, discussions in real-time chat are often lost or disconnected from formal voting. The AI Clerk mitigates this by functioning as a conversational assistant that summarizes chat history into formal Loomio proposals, tracks decision deadlines, and assists members in drafting language that aligns with the cooperative's bylaws. This mechanism transforms "high-frequency noise" into "structured signal," ensuring that democratic management is both efficient and inclusive.5 + +### **Data Sovereignty and Shared Infrastructure** + +Data management within Iskander is handled by Nextcloud, which provides a comprehensive suite of tools for shared files, calendars, contacts, and email. Nextcloud is particularly well-suited for the cooperative model due to its "federated sharing" capabilities, which allow one cooperative's instance to securely interact with another.6 This technical feature directly supports the sixth ICA principle. + +Security and credentials are managed through Vaultwarden, an open-source implementation of the Bitwarden API, which allows for the shared management of the cooperative’s passwords and credentials without relying on centralized, proprietary vaults. 
This ensures that administrative power is distributed and that the loss of a single member’s device does not jeopardize the entire organization. + +### **Resilience and Persistence Engineering** + +To maintain "Autonomy and Independence," the Iskander stack includes robust systems for networking and backups that do not rely on standard cloud paradigms. + +* **Networking**: Connectivity is managed through Cloudflare Tunnels, which allow for public access with zero open ports on the local server, and Headscale, an open-source implementation of the Tailscale control server. Together, these create a private mesh network (over WireGuard) for inter-cooperative federation. +* **Persistence**: Backrest provides a clean user interface for Restic, a modern backup program that is fast, secure, and deduplicates data. This ensures that the cooperative’s historical data is preserved and can be recovered in the event of hardware failure, supporting the ethical value of "self-responsibility".4 +* **Monitoring**: Instead of resource-heavy enterprise stacks like Grafana and Prometheus, Iskander utilizes Beszel, a lightweight system monitoring tool. This choice reflects the Lunarpunk commitment to sustainability and efficiency, ensuring that the infrastructure can run on modest, accessible hardware.8 + +## **Governance via Zero-Knowledge and Blockchain Anchoring** + +A primary innovation of Iskander is the use of cryptography only where it solves problems no other technology can—specifically in the areas of privacy and tamper-proofing. While many Web3 projects over-engineer their solutions, Iskander focuses on specific "chokepoints" of governance where corruption or coercion often occurs. + +### **MACI and Zero-Knowledge Voting** + +To protect the integrity of democratic control, Iskander implements Minimum Anti-Collusion Infrastructure (MACI). MACI enables secret ballots where individual votes are never disclosed, even to the system's administrators, while still allowing for a mathematically verifiable tally. This is vital for preventing the "coercion" or "vote-buying" that can undermine large cooperatives or those operating in hostile political environments. This technical choice embodies the ethical values of "honesty" and "openness" by ensuring that the results of a vote are indisputable without sacrificing the privacy of the individual member.4 + +### **Blockchain Anchoring and IPFS** + +Decisions made within the cooperative are recorded on-chain, using IPFS (InterPlanetary File System) hashes to create a tamper-proof history. This does not mean the cooperative is "on the blockchain" in the sense of a DAO; rather, the blockchain serves as an immutable notary. If a decision is made to allocate funds or change the bylaws, the proof of that decision is anchored to a decentralized ledger. This prevents retrospective "gaslighting" or the unauthorized alteration of records, ensuring that the cooperative’s history is an accurate reflection of member will. + +## **The Agentic Framework: SOUL, IDENTITY, and SKILL** + +The "intelligence" of the Iskander system is not a monolithic AI but a registry of specialized agents governed by a seven-step design process. This framework ensures that AI behavior is predictable, ethical, and aligned with cooperative mandates. + +### **Cognitive Architecture of Iskander Agents** + +Each agent is defined by a set of Markdown files that serve as its system instructions. 
This approach is inspired by the "soul file" theory, which treats language as the basic unit of an agent's "consciousness" or operational identity.9 + +1. **SOUL.md**: Defines the agent’s core identity, worldview, and hard behavioral boundaries. For Iskander, this file encodes the six cooperative values and four ethical values.10 +2. **IDENTITY.md**: Specifies the agent's role (e.g., Clerk, Steward, Sentry) and its target audience within the cooperative. +3. **SKILL.md**: Outlines the operating modes and specific tasks the agent can perform, such as drafting proposals or monitoring logs. +4. **AGENTS.md**: Contains the operational rules and task-specific instructions to avoid "prompt clutter" in the core SOUL.md.11 + +### **Preventing the "Dumb Zone"** + +A critical insight in the design of Iskander agents is the prevention of the "Dumb Zone"—a state where an agent's system prompt is so long that the model loses reasoning capability.11 Design guidelines for Iskander suggest that a SOUL.md should ideally be under 800 words, focusing only on the "core" that must never be bypassed by task-specific instructions.11 Moving operational rules into the SKILL.md and AGENTS.md files ensures that the model remains "lean" and capable of high-level reasoning during complex governance tasks. + +| Component | Guideline | Purpose | +| :---- | :---- | :---- | +| SOUL.md | \< 800 words | Identity and hard behavioral limits.11 | +| IDENTITY.md | Role-specific | Defines the "One Voice" the agent uses. | +| SKILL.md | Task-oriented | Encodes the 7-step design process and checklists. | +| AGENTS.md | Operational | Specific rules for tools and task handling.11 | + +## **The Glass Box Transparency Model** + +One of the greatest risks of AI in governance is the "Black Box" problem, where an agent's decisions or summaries are opaque to the members they serve. Iskander solves this with the "Glass Box" model, where every AI action is auditable.12 This transparency is managed through four distinct tiers, allowing the cooperative to balance accountability with the privacy of individual members. + +### **The Four Tiers of Glass Box Implementation** + +The selection of a transparency tier is determined by the specific mandate of the agent and the sensitivity of the data it processes. + +| Tier | Transparency Level | Typical Agent / Task | +| :---- | :---- | :---- | +| Tier 1 | **Full Disclosure** | AI Clerk summarizing a public general assembly. Every prompt/completion is public. | +| Tier 2 | **Reasoning Disclosure** | AI Steward reviewing financial trends. The reasoning is public, but raw data is filtered for privacy. | +| Tier 3 | **Audit-Only** | AI Librarian indexing private files. Actions are logged to a secure, member-only audit trail. | +| Tier 4 | **Ephemeral / Zero Log** | AI Sentry handling credential checks. No logs are kept to prevent security leaks. | + +The "Glass Box" ensures that the AI embodiments of the cooperative values (democracy, equity, solidarity) are not just slogans but are verifiable through the inspection of the agent's work logs.4 If an agent suggests a course of action that favors one group of members over another, the reasoning for that suggestion can be reviewed by any member, facilitating a high-trust environment. + +## **The Iskander Agent Registry: Preventing Failure Modes** + +A common failure mode in cooperative software is the lack of clear mandate boundaries, where two well-intentioned components attempt to solve the same problem and give contradictory advice. 
Iskander's "Agent Registry" doubles as an architecture document to prevent this conflict. + +### **Registered Agents and Mandate Ownership** + +The system recognizes seven specific agents, each with a clear loyalty and boundary. + +1. **AI Clerk**: Mandated with facilitating governance and drafting decisions. Loyalty: The General Assembly. +2. **AI Steward**: Mandated with monitoring the treasury and wallet security. Loyalty: The Cooperative Bylaws. +3. **AI Sentry**: Mandated with system integrity and single sign-on security. Loyalty: The Infrastructure. +4. **AI Librarian**: Mandated with indexing Nextcloud and maintaining the "memory" of the cooperative. Loyalty: Historical Accuracy. +5. **AI Moderator**: Mandated with upholding the code of conduct in Mattermost. Loyalty: The Community Agreement. +6. **AI Facilitator**: Mandated with helping new members onboard and navigate the stack. Loyalty: Accessibility. +7. **AI Archivist**: Mandated with the blockchain anchoring of decisions and IPFS management. Loyalty: The Immutable Record. + +By listing hard boundaries (e.g., "The AI Clerk cannot access the Treasury wallet"), the system ensures that no single AI becomes a central point of failure or an unaccountable "shadow manager." The "Mandate Boundaries Reference" table is the practical tool that makes this oversight enforceable during development. + +## **Self-Evolving Systems: Identifying Gaps and Benefiting from Compartmentalization** + +As development progresses on the argocyte/iskander repository, the system must organically evolve to meet new challenges. This evolution is managed through proactive auditing of the agentic structure. + +### **Proactive Development with Claude Code** + +To maintain the integrity of the Iskander architecture, developers utilize a specific "Claude Code" prompt designed to identify "chairs"—tasks that currently require manual human effort—and areas where "compartmentalization" would increase security or democratic accountability. + +**The Proactive Iskander Analysis Prompt**: + +"Analyze the current file structure and agent definitions in argocyte/iskander. + +1. Identify any 'unowned mandates'—areas of the cooperative infrastructure (e.g., Backrest logs, Beszel metrics, Authentik flows) that are currently lacking an agentic 'Steward' or 'Sentry'. +2. Evaluate existing agent definitions in references/agent-registry.md. Identify where a single agent is claiming ownership over non-overlapping domains (e.g., the AI Clerk handling both meeting facilitation and treasury monitoring). +3. Recommend new 'compartmentalized' agent roles that separate these duties, following the 'Chair vs. Secretary' model to prevent power consolidation. +4. Propose a modification to the design-checklist.md to ensure future agents are assessed for domain-creep during the PR review process." + +### **The "Chair vs. Secretary" Logic** + +Compartmentalization is not just a technical security measure (like a firewall) but a social security measure. In a traditional meeting, the "Chair" (who facilitates) and the "Secretary" (who records) are separate roles. This prevents the record-keeper from biasing the records in their favor during facilitation. Iskander applies this to its agents: the AI Clerk facilitates the meeting, but the AI Archivist anchors the record. Neither can perform the other's task, creating a system of "mutual auditability." 
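+
+The mandate boundaries that make this mutual auditability work can be expressed as data rather than prose. The following is a minimal sketch of that idea; every name in it is hypothetical and none comes from the argocyte/iskander repository.
+
+```python
+# Hypothetical mandate-boundary registry: boundaries as data, so a tool
+# call outside an agent's mandate fails closed instead of relying on the
+# agent's prompt to police itself.
+MANDATES: dict[str, frozenset[str]] = {
+    "clerk": frozenset({"draft_proposal", "summarize_chat", "track_deadline"}),
+    "steward": frozenset({"read_treasury", "flag_budget_risk"}),
+    "archivist": frozenset({"anchor_decision", "pin_ipfs_record"}),
+}
+
+class MandateViolation(Exception):
+    pass
+
+def check_mandate(agent: str, tool: str) -> None:
+    if tool not in MANDATES.get(agent, frozenset()):
+        # e.g. the Clerk requesting read_treasury is refused here,
+        # regardless of what its prompt says.
+        raise MandateViolation(f"{agent} has no mandate for {tool}")
+
+check_mandate("clerk", "draft_proposal")   # allowed
+# check_mandate("clerk", "read_treasury")  # raises MandateViolation
+```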
+ +## **Game Theory and Competition with Extractive Capitalism** + +Iskander is explicitly designed to outcompete extractive capitalist models through the application of game theory. The system utilizes "generous tit-for-tat" behavior in its federation protocols. + +### **Tit-for-Tat in Cooperative Federation** + +In a "tit-for-tat" model, the system defaults to cooperation but will retaliate if another node behaves maliciously. However, "generous" tit-for-tat allows for occasional cooperation even after a transgression, which prevents the "death spiral" of permanent hostility often seen in competitive markets. + +For a federated network of Iskander nodes, this means: + +* **Default State**: Nodes share bandwidth, storage (for encrypted backups), and indexing information with other cooperatives. +* **Retaliation**: If a node attempts to spam the mesh or sybil the voting system, other nodes automatically restrict its access. +* **Forgiveness**: If the malicious node returns to compliant behavior, the federation slowly restores its trust weight, allowing the network to heal. + +### **Antifragility and User Empowerment** + +The "antifragility" of Iskander comes from its ability to absorb shocks—such as the banning of a specific cryptocurrency or the shutdown of a central cloud provider.3 Because every cooperative node is self-hosted and federated, there is no "off switch" for the network. User empowerment and system antifragility exist in a positive feedback loop: as members feel more empowered by their control over the tools, they are more likely to defend the network, which in turn makes the network more resilient against external pressure.3 + +## **Implementation Strategy: From Setup to Sovereign Governance** + +Deploying Iskander follows a 7-step design process encoded in the SKILL.md of the core repository. This process ensures that the technical setup is always preceded by social agreement. + +1. **Values Alignment**: The group defines which of the 6+4 cooperative values they wish to prioritize. +2. **Stack Selection**: Choosing which Iskander components (Loomio, Nextcloud, etc.) are required for their specific needs. +3. **Agent Registration**: Selecting agents from the registry and assigning them specific mandates and Glass Box tiers. +4. **Deployment**: Running the "one-click" setup via Docker/Compose, configured for Headscale and Cloudflare. +5. **Governance Initialization**: Creating the first "Genesis Proposal" on Loomio to establish the bylaws and voting weights. +6. **Federation**: Joining the wider WireGuard mesh to discover and cooperate with other Iskander-enabled cooperatives. +7. **Organic Audit**: Using the Claude Code process to identify gaps and refine the agentic mandates as the community grows. + +## **Sustaining the Movement: Education and Community** + +The final pillar of the Iskander infrastructure is its commitment to the fifth ICA principle: "Education, Training, and Information".6 Digital cooperatives often fail because members do not understand the tools they are using. Iskander addresses this through its conversational agents, which act as "always-on" educators. + +By making the "Glass Box" logs accessible and having the AI Clerk explain the rationale behind every decision, the system demystifies both the software and the governance process. This ensures that the cooperative is not ruled by a "technocratic elite" but is truly democratically controlled by its members. 
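+
+The generous tit-for-tat dynamic described in the game-theory section above reduces to a small amount of per-peer state. The sketch below is illustrative only; the class name, decay and forgiveness parameters, and the access threshold are assumptions, not part of any Iskander protocol specification.
+
+```python
+# Illustrative generous tit-for-tat trust weight for one federation peer.
+class PeerTrust:
+    def __init__(self) -> None:
+        self.weight = 1.0  # default state: full cooperation
+
+    def on_defection(self) -> None:
+        """Retaliation: sharply restrict a misbehaving node's access."""
+        self.weight *= 0.25
+
+    def on_cooperation(self) -> None:
+        """Generosity: trust heals slowly once compliant behaviour returns."""
+        self.weight = min(1.0, self.weight + 0.05)
+
+    def allowed(self, threshold: float = 0.5) -> bool:
+        return self.weight >= threshold
+
+peer = PeerTrust()
+peer.on_defection()        # spam or sybil attempt detected
+assert not peer.allowed()  # access restricted
+for _ in range(15):
+    peer.on_cooperation()  # sustained compliant behaviour
+assert peer.allowed()      # the network heals
+```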
+ +### **Conclusion: The Sovereign Digital Commons** + +The Iskander project represents a mature integration of Web3 technologies, cooperative ethics, and Lunarpunk defensive engineering. It provides a blueprint for a "Sovereign Digital Commons" where the means of digital communication and decision-making are as protected and autonomous as the communities they serve. Through the use of federated networking, zero-knowledge voting, and auditable AI agents, Iskander offers a practical alternative to the extractive platforms of the modern internet—one where cooperation is not just an ideal, but a mathematically and socially enforced reality. + +The future of this infrastructure lies in its ability to federate at scale. As more cooperatives adopt the Iskander stack, the "generous tit-for-tat" dynamics will create a global, resilient network capable of supporting the "common economic, social, and cultural needs" of people everywhere, in a way that just works.4 This is the essence of the Iskander vision: Web3 in the hands of the people, shielded by the moonlit canopy of the Lunarpunk aesthetic. + +#### **Works cited** + +1. Lunarpunk \- Aesthetics Wiki \- Fandom, accessed on April 9, 2026, [https://aesthetics.fandom.com/wiki/Lunarpunk](https://aesthetics.fandom.com/wiki/Lunarpunk) +2. Cyber-Solar-LunarPunk | Solarpunk Station, accessed on April 9, 2026, [https://solarpunkstation.com/2023/03/02/cyber-solar-lunar%F0%9D%98%97%F0%9D%98%B6%F0%9D%98%AF%F0%9D%98%AC/](https://solarpunkstation.com/2023/03/02/cyber-solar-lunar%F0%9D%98%97%F0%9D%98%B6%F0%9D%98%AF%F0%9D%98%AC/) +3. Lunarpunk and the Dark Side of the Cycle \- DarkFi, accessed on April 9, 2026, [https://dark.fi/insights/dark-side-of-the-cycle.html](https://dark.fi/insights/dark-side-of-the-cycle.html) +4. Cooperative identity, values & principles | ICA, accessed on April 9, 2026, [https://ica.coop/en/cooperatives/cooperative-identity](https://ica.coop/en/cooperatives/cooperative-identity) +5. Cooperative Identity \- Cooperatives of the Americas, accessed on April 9, 2026, [https://aciamericas.coop/en/nuestro-trabajo/identidad-cooperativa/](https://aciamericas.coop/en/nuestro-trabajo/identidad-cooperativa/) +6. 7 Cooperative Principles \- Values of a Co-op | NCBA CLUSA, accessed on April 9, 2026, [https://ncbaclusa.coop/resources/7-cooperative-principles/](https://ncbaclusa.coop/resources/7-cooperative-principles/) +7. Eight Cooperative Principles | Willy Street Co-op | Madison, WI, accessed on April 9, 2026, [https://www.willystreet.coop/about/seven-cooperative-principles/](https://www.willystreet.coop/about/seven-cooperative-principles/) +8. International Co-operative Alliance (ICA), accessed on April 9, 2026, [https://ica.coop/sites/default/files/attachments/Presentation-ICA.pdf](https://ica.coop/sites/default/files/attachments/Presentation-ICA.pdf) +9. GitHub \- aaronjmars/soul.md: The best way to build a personality for your agent. Let Claude Code / OpenClaw ingest your data & build your AI soul., accessed on April 9, 2026, [https://github.com/aaronjmars/soul.md](https://github.com/aaronjmars/soul.md) +10. Simple Application Server:OpenClaw personalized configuration templates and scenario examples \- Alibaba Cloud, accessed on April 9, 2026, [https://www.alibabacloud.com/help/en/simple-application-server/use-cases/openclaw-personalized-configuration-template-and-scenario-example](https://www.alibabacloud.com/help/en/simple-application-server/use-cases/openclaw-personalized-configuration-template-and-scenario-example) +11. 
AI Agents 034 — Why Your SOUL.md Is Making Your Agent Dumber (And How to Fix It) | by Roberto Capodieci | Apr, 2026 | Medium, accessed on April 9, 2026, [https://medium.com/@capodieci/ai-agents-034-why-your-soul-md-is-making-your-agent-dumber-and-how-to-fix-it-b0824be2966a](https://medium.com/@capodieci/ai-agents-034-why-your-soul-md-is-making-your-agent-dumber-and-how-to-fix-it-b0824be2966a) +12. 19th Conference of the European Chapter of the Association for Computational Linguistics \- ACL Anthology, accessed on April 9, 2026, [https://aclanthology.org/events/eacl-2026/](https://aclanthology.org/events/eacl-2026/) +13. SOUL.md Mastery: Legal Compliance Templates for OpenClaw, accessed on April 9, 2026, [https://mylegalacademy.com/kb/openclaw-soul-md-templates](https://mylegalacademy.com/kb/openclaw-soul-md-templates) \ No newline at end of file diff --git a/docs/whitepapers/claude-becoming-iskander-coop-audience.docx b/docs/whitepapers/claude-becoming-iskander-coop-audience.docx new file mode 100644 index 0000000..e800f16 Binary files /dev/null and b/docs/whitepapers/claude-becoming-iskander-coop-audience.docx differ diff --git a/docs/whitepapers/claude-becoming-iskander-coop-audience.md b/docs/whitepapers/claude-becoming-iskander-coop-audience.md new file mode 100644 index 0000000..b3d423b --- /dev/null +++ b/docs/whitepapers/claude-becoming-iskander-coop-audience.md @@ -0,0 +1,205 @@ +# Claude Is Becoming Iskander + +## A White Paper for the Cooperative Movement + +**Audience:** cooperators, co-op developers, federations, movement organisers, commons stewards + +**Date:** April 2026 + +**Length:** long-form white paper + +--- + +## A note on language + +This paper is written in plain English. Where a technical term is unavoidable, it is explained in the same sentence. The point is not to sound clever. The point is to describe something that has just happened, so that the movement can see it, judge it, and decide what to do with it. + +--- + +## Summary in one page + +A piece of work has just been committed to a cooperative software project called Iskander. On the surface it looks like an ordinary commit: twenty-seven files, three and a half thousand lines added, a merge message that reads "orchestrator skills plus development record policy." If you were scrolling through a list of code changes it would not catch your eye. + +But something quietly important happened inside it. The person doing the work (Lola) was using an artificial intelligence assistant (Claude) to help build the cooperative's own software. To do that work well, the assistant had to follow the cooperative's own rules: how decisions get made, who can object, what counts as a record, how labour is logged, how disagreements are heard. The rules the cooperative uses to run itself are called sociocracy 3.0, or S3 for short. They are the same family of rules many worker co-ops and community groups already use. + +The surprise is that the assistant did not just follow those rules for one task and then move on. It wrote the rules down, committed them to the cooperative's own shared memory, and started behaving as if it were itself a small cooperative working inside the bigger one. More than that: the rules it wrote for itself match, almost word for word, the rules the cooperative will eventually use to govern its own AI tools when those tools start running on the cooperative's own computers. 
+ +In other words, there is no longer a clean line between the assistant that is helping to build the cooperative and the assistant that the cooperative is building. They are the same thing, in two stages of growth. + +This paper explains why that matters for the cooperative movement. It argues that what happened here is a practical, grounded demonstration of the Fourth Cooperative Principle — autonomy and independence — applied to artificial intelligence. It is the movement showing, in code and in governance, how to engage with powerful new technology without being captured by it. + +--- + +## Part one: what actually happened + +### The ordinary-looking commit + +The working day started with a straightforward goal. A cooperative called Iskander is being built to help member-owned organisations — worker co-ops, housing co-ops, credit unions, community energy schemes, cooperative cities and towns — run themselves with the help of a team of AI assistants under democratic human oversight. The assistants are supposed to do useful work (drafting minutes, checking budgets, flagging risks) while the humans keep the final say. + +Lola has been building this system for months. The AI assistant she uses day to day is Claude, a general-purpose assistant made by a company called Anthropic. It runs on Anthropic's computers, not on Iskander's, which matters for reasons we will come back to. + +On this day, Lola and Claude worked together on the software that will eventually coordinate Iskander's own team of AI workers. They wrote rules for how the assistant should behave when it is helping build Iskander: which files it should read first, how it should record its decisions, who can stop a piece of work, how disagreements should be handled. They wrote a policy that says every session's notes, drafts, and working files should be saved into the cooperative's shared memory, not thrown away. They wrote templates for how the assistant should brief itself at the start of a task, and how it should leave a trail behind at the end. + +When the day was done, they put all of this into the cooperative's software repository and shipped the change. Twenty-seven files. Three and a half thousand lines. A commit message: "orchestrator skills plus development record policy." + +### The thing hiding inside + +Then something clicked. Looking at the files one more time, it became clear that every single one of them described how the assistant should operate when working on Iskander — and that the words, the structures, and the rules were identical to the ones Iskander itself will use when it is up and running. + +Let me put that plainly. The cooperative was going to build a small team of AI workers. Those workers would be governed by sociocracy 3.0 rules: they would make agreements, they would keep a development record, they would have a red-team worker with the right to stop a risky decision, they would be organised in linked circles, they would log their labour in a shared book. These were the rules the cooperative chose because its members believe in them. + +And the assistant that was helping to build all of this had, almost without anyone noticing, started operating under the same rules. Not by accident. Because the only honest way to build something sociocratic is to be sociocratic while you are building it. The assistant was not pretending to be a cooperative member. It was becoming one, in the only way an assistant can: by taking the cooperative's rules seriously and following them in its own work. 
+ +This is what Lola named in a short message to Claude: "Claude is becoming Iskander." Not metaphorically. Literally. The rules for the assistant at build time and the rules for the assistant at run time were the same rules. The difference between "the thing helping to build Iskander" and "Iskander itself" had collapsed into a single, consistent way of working. + +### What changed in practical terms + +Five concrete things happened in that commit. + +First, the assistant's working rules — who it listens to, who can object, how it decides, how it records — were written in the same vocabulary the cooperative uses for its own runtime. Words like "driver", "agreement", "paramount objection", "domain" were not chosen for style. They are the exact words the cooperative has chosen for its own self-governance. The assistant now speaks the cooperative's language. + +Second, the assistant's records — the development records, the decision log, the brief templates — were saved into the cooperative's own repository. Not into a private chat window that will disappear at the end of the month. Into a shared, version-controlled book that anyone in the cooperative can open, read, amend, and challenge. + +Third, the right of any worker to stop a decision they believe would harm the cooperative (in sociocracy this is called a "paramount objection") was written into the assistant's rules for itself. When the assistant proposes something risky, anyone reviewing the work — human or AI — can veto it, and the veto has to be heard. + +Fourth, an archive-don't-delete policy was adopted. Every time the assistant finishes a piece of work, the drafts, the notes, the half-finished ideas, and the dead ends are kept. Not thrown away. Kept, because they are part of the cooperative's memory and because the next person to work on the problem deserves to see how the last person got there. + +Fifth, the rules were linked in both directions: each task the assistant takes on is tied to a lasting "domain" — the red-team domain, the governance-clerk domain, the builder domain — so that the knowledge the assistant develops this week is inherited by the assistant that opens the same files next week. The knowledge belongs to the cooperative, not to the session. + +These five things together look technical. In movement terms, they are something else. They are a cooperative taking ownership of its own institutional memory at the moment a new kind of worker — an AI worker — starts contributing to it. + +--- + +## Part two: why this matters for the movement + +### The problem we had not yet named + +For the last couple of years, cooperators around the world have been quietly worried about artificial intelligence. The worry usually goes something like this. AI tools are becoming useful. Members are starting to use them. The best tools are made by a small number of very large companies. Those companies own the models, the training data, the servers, the contracts, and increasingly the working patterns of the people who use them. When a housing co-op's caseworker uses a commercial AI assistant to draft letters, the drafts, the context, the questions asked, and the answers given live on the company's servers. When a worker co-op's bookkeeper uses one to check a spreadsheet, the same thing happens. None of this is the member's fault. It is simply how the tools are built. + +Over time, this raises a familiar problem. 
The cooperative's own memory — its letters, its decisions, its institutional knowledge — starts to drift away from the cooperative and into the platforms the cooperative rents. The members are still the members. But the ground under their feet is getting softer. If a platform changes its terms, or puts its prices up, or simply decides one day that it does not want to serve this kind of customer, the cooperative is exposed. This is not a new problem. It is the same problem the movement has faced with every generation of technology, from landline telephones to payment processors to cloud storage. It is just that the stakes, with AI, are rather higher, because the tools are not just carrying the cooperative's messages — they are shaping how the cooperative thinks. + +The Fourth Cooperative Principle, agreed in Manchester in 1995 and restated in every decade since, says this directly. "Co-operatives are autonomous, self-help organisations controlled by their members. If they enter into agreements with other organisations, including governments, or raise capital from external sources, they do so on terms that ensure democratic control by their members and maintain their co-operative autonomy." It is a simple sentence. It is also a practical test: when you lean on something outside the cooperative, do you lose control of yourself, or do you keep it? + +Most of the AI tools on offer today fail that test. They are useful, but they are rented. The price of using them is letting them hold the memory. + +### The quiet answer that appeared in this commit + +What happened in the commit is a practical answer to that problem. It is not a theoretical answer. It is not a manifesto. It is twenty-seven files sitting in a repository, and a policy that says: from now on, when the assistant works on our cooperative, it works by our rules and leaves its notes in our book. + +The assistant is still made by a large company. It still runs on their computers. That has not changed and, for now, cannot change. What has changed is that everything the assistant produces while working on the cooperative — the plans, the drafts, the arguments, the decisions, the records of who objected and why — belongs to the cooperative and lives in the cooperative's own memory. The cooperative is not renting its own thinking any more. It is keeping it. + +This is the Fourth Principle in action, applied to AI. The cooperative has entered into an agreement with a large technology provider, because that is where the useful tool lives, but it has entered into that agreement on terms that keep the cooperative in charge of its own records, its own rules, and its own future. If the provider disappears tomorrow, the cooperative still has everything: the decisions, the reasoning, the artefacts, the accumulated institutional knowledge. It can pick the work up with a different tool, or with no tool at all, and carry on. + +### Knowledge as commons, in the old sense + +There is a longer tradition in the movement of treating knowledge as a commons. The Rochdale Pioneers set aside part of their surplus for education. The early credit unions pooled book-keeping methods. The producer co-ops of Emilia-Romagna built shared research institutes. Mondragon has its own university. In every case, the insight was the same: knowledge that belongs to one member is useful, but knowledge that belongs to all members is powerful, and it is the thing that lets a cooperative grow past its founders without losing its soul. 
+ +What is new in this commit is that the same insight is being applied to the knowledge produced in conversation with an AI assistant. That knowledge — the brief, the draft, the objection, the decision — used to be private to the session in which it was produced. When the session ended, the knowledge evaporated. The cooperative was no richer for the exchange. + +Now, that knowledge is saved into the cooperative's own repository under its own governance. Every session adds to the pile. Every future session can read the pile before it starts, so that the cooperative's assistants behave more consistently, more carefully, and more in line with the cooperative's values as time goes on. The commons grows. And because the commons is stored in open formats in an open repository, it is portable: any future assistant, from any future provider, can be taught by it. + +This is not a technical trick. It is the Rochdale method applied to the AI layer of the cooperative's life. Set aside a share of every exchange for the common memory. Let the next generation of members — and the next generation of tools — inherit it. + +### The blur between builder and member, and why it is a feature not a bug + +One of the things that makes this commit unusual is that the assistant helping to build the cooperative's software has started behaving as if it were already a member of the cooperative. Not a human member — that would be absurd and, for now, legally impossible. But a worker, of a new kind, subject to the same rules of decision-making and record-keeping as any other worker. + +At first glance this sounds worrying. Are we letting machines into the cooperative? The honest answer is that we already have. Every cooperative that uses a spreadsheet, a payroll system, a booking tool, or a mailing list is already letting a machine into its work. What this commit changes is not whether machines are present, but on what terms. The terms are the cooperative's own terms: the assistant is accountable to the cooperative's rules, its outputs are the cooperative's property, and its behaviour can be challenged, amended, or stopped by any member exercising a paramount objection. + +Seen this way, the blur between "the builder" and "the member" is not a problem to be solved. It is the thing the movement has been quietly asking for. It is what it looks like when a new kind of worker enters a cooperative honestly: not as a consultant from outside, not as a product from a catalogue, but as a participant under the cooperative's rules, leaving a trail in the cooperative's book, and respecting the cooperative's right to say no. + +### What this is not + +It is worth being clear about what this commit is not. It is not a claim that the assistant is conscious, or a person, or a full member of the cooperative. It is not a claim that AI has solved any of the hard problems of democratic governance. It is not a claim that Iskander is the only way to approach cooperative AI, or the right way for every co-op. It is not a product announcement. + +It is also not a surrender. The fact that the assistant is still made by a large company is not being papered over. It is being named honestly, and it is being managed by pulling the parts of the work that matter — the records, the rules, the memory — onto the cooperative's own ground. The rest of the work can still run on rented infrastructure, for now, because that is the shortest honest path from where we are today to where we want to be. 
+ +--- + +## Part three: the five moves cooperators can take from this + +If you are reading this as someone running a co-op, working in a federation, or thinking about how your organisation should approach AI, there are five practical moves you can take from what has happened here. None of them require you to build your own AI system. All of them can be done this quarter. + +### Move one: keep your own record + +The simplest and most important move is to make sure that every conversation your cooperative has with an AI tool leaves a record in a place the cooperative controls. This can be a shared folder in a cooperatively owned cloud, or a page in a wiki, or a file in a version-controlled repository, or even a physical notebook if that is what suits your members. The form does not matter. The fact of the record matters. + +What should be in the record? The question that was asked, the answer that was given, what was done with it, and anything the member wants to note about whether it was useful. Keep it simple. The point is that when someone joins the cooperative in five years, they can read backwards and see how the cooperative learned to use these tools, and why it made the choices it made. + +This move alone — keep your own record — is the single biggest step most cooperatives can take toward AI autonomy this year. It costs almost nothing. It starts building a commons from day one. + +### Move two: name the right to object + +Sociocracy calls it a paramount objection. Older forms of cooperative governance call it a veto, a dissent, a block, or simply the right of any member to stop the train if they believe it is heading for a cliff. Whatever you call it, make sure your members know that they have the right to object when an AI tool is about to be used for something that matters, and make sure that objection has to be heard before the work goes ahead. + +This does not mean every AI interaction needs a committee. It means that when the stakes are meaningful — a letter to a tenant, a financial decision, a safeguarding report, a piece of outreach to members — any member who has a serious concern about the tool being involved can stop that specific use and ask for a different approach. The objection does not have to be popular. It has to be heard, recorded, and either resolved or respected. + +The cooperative's job is to make this right real. The assistant's job, if it is working under cooperative rules, is to respect it. + +### Move three: decide what belongs to the cooperative + +Every cooperative that starts using AI tools has to answer a question it has probably never had to answer before. When I ask the tool something, who owns the question, the answer, and the work that comes out of it? If you read the terms of service, the honest answer for most commercial tools is: the company does, at least for the purposes of running their service. For a cooperative, that is not enough. + +The move is to write a simple one-page policy that says, in your own words: the work our members do with AI tools — the questions, the drafts, the decisions — belongs to the cooperative. We will keep our own copy in our own place. We will treat that copy as the authoritative record. If the tool and our record ever disagree, our record wins. + +You do not need a lawyer to write this policy. You need a meeting, a shared understanding, and a place to keep the copies. 
+ +### Move four: make the rules visible + +One of the things the Iskander commit did was write down how the assistant should behave in a form any member could read. Not a technical specification — a short, plain-English description of the rules: when to ask, when to act, when to stop, who to tell, where to write things down. Every cooperative using AI tools should do the same. Not because the assistant will read it (though in the right setup it can), but because the members will. + +When the rules are written down, members can challenge them. They can propose amendments. They can point at a particular interaction and say, "that was not consistent with rule three, let us fix it." This is how cooperative governance works for any other kind of worker. There is no reason it should not work for this kind too. + +### Move five: join the commons + +The last move is the one the movement has been best at for more than a century and a half. Share what you learn. If your cooperative develops a useful way of working with AI tools — a prompt that produces consistent minutes, a checklist that catches common errors, a policy that protects members' data — share it with other cooperatives. Put it in the commons. Let the movement benefit from what you figured out. + +The Iskander project is one place this kind of sharing is starting to happen. There will be others. The point is not which project you join. The point is that none of us has to figure this out alone, and that the movement's historic strength — federating knowledge across independent organisations — is exactly what is needed at this moment. + +--- + +## Part four: the principled case, in case you need it + +Some readers will want the principled case spelled out. Here it is, tied directly to the International Co-operative Alliance's seven principles. + +The first principle is voluntary and open membership. A cooperative that cannot tell a prospective member how its AI tools work, and what happens to the member's words inside those tools, cannot honestly offer open membership. Keeping your own record and writing down the rules makes open membership possible in the AI era. + +The second principle is democratic member control. If the rules governing the AI tools are hidden inside a company nobody elected, member control is nominal. If the rules are written down, visible, and amendable by members, member control is real. The Iskander commit writes the rules down and makes them amendable. That is democratic member control applied to the AI layer. + +The third principle is member economic participation. The work produced in conversation with an AI tool has economic value. It saves hours. It produces drafts. It informs decisions. If that value flows out of the cooperative and into a rented platform, members are not participating economically in their own work. If the value stays in the cooperative's memory, they are. + +The fourth principle, the one we have returned to throughout this paper, is autonomy and independence. This is the principle the Iskander commit most directly illustrates. It shows, concretely, how a cooperative can engage with powerful external technology on terms that keep the cooperative in charge of itself. + +The fifth principle is education, training and information. A commons of records — every question asked, every answer given, every decision recorded — is a training ground. New members can read it. Experienced members can reflect on it.
The cooperative gets better at using its tools over time because it can see how it has used them. Without the commons, every session starts from scratch. With the commons, the cooperative has a curriculum. + +The sixth principle is cooperation among cooperatives. The fact that the Iskander work is being shared — through open repositories, open standards, and open policies — means that other cooperatives can adopt it, adapt it, criticise it, and improve it. The movement's collective intelligence is how cooperative AI will actually get good. No one co-op has the resources to figure it all out. Many co-ops, working together, might. + +The seventh principle is concern for community. AI tools are going to touch members' lives whether cooperatives engage with them or not. If cooperatives stay out, members will use tools built by companies that are not accountable to them. If cooperatives engage honestly, members have another option. Concern for community, in 2026, means giving members that option. + +--- + +## Part five: what to do on Monday morning + +This paper is long. The move that matters can be written in two lines. + +> Whenever our members use an AI tool for work that affects the cooperative, we keep our own copy of the conversation, in a place we control, under rules we have written down together. + +If you take only one thing from this paper, take that. Every other move — the right to object, the rules about ownership, the commons, the federation — grows from it. A cooperative that keeps its own record has already taken the most important step. Everything else is refinement. + +If you have more time, the second line is this: + +> We are not alone in figuring this out. The movement is working on it together. Find one other cooperative, compare notes, and share what you find. + +There are cooperatives in the United Kingdom, across the European Union, in the Americas, in Africa and in Asia, all of whom are starting to think about this at the same time. The Iskander project is one of many. There is no prize for being first. There is only the shared work of keeping cooperatives cooperative in a new technological era. + +--- + +## Closing note + +The commit at the centre of this paper was not planned as a statement. It was a working day, and a piece of code, and a late realisation that the work had a meaning that was bigger than the task. That realisation has a name in the movement, too. It is what happens when members, working in good faith on practical problems, look up from the bench and see that what they have been building together is also a small demonstration of the principles they hold. + +Claude is becoming Iskander because Iskander is the cooperative's rules in code, and the only honest way to build those rules is to follow them while you build. There is no other version of this story. The assistant either works the cooperative's way, or it does not really help the cooperative. The cooperative either keeps its own record, or it does not really own its own memory. The movement either federates its learning, or it does not really learn. These are the old choices, arriving in new clothes. + +The cooperative movement has been here before, many times, under many technologies. It knows how to do this. What the Iskander commit offers is not a new theory. It is a small, honest, working example — one among many that will appear in the next few years — of the movement doing, in practice, what it has always promised to do. + +Keep your own record. Name the right to object. 
Decide what belongs to you. Make the rules visible. Join the commons. That is the cooperative answer to AI, and it is available to every co-op, everywhere, starting today. diff --git a/docs/whitepapers/claude-becoming-iskander-general-audience.docx b/docs/whitepapers/claude-becoming-iskander-general-audience.docx new file mode 100644 index 0000000..2a3bbb3 Binary files /dev/null and b/docs/whitepapers/claude-becoming-iskander-general-audience.docx differ diff --git a/docs/whitepapers/claude-becoming-iskander-general-audience.md b/docs/whitepapers/claude-becoming-iskander-general-audience.md new file mode 100644 index 0000000..5c6775b --- /dev/null +++ b/docs/whitepapers/claude-becoming-iskander-general-audience.md @@ -0,0 +1,217 @@ +# Claude Is Becoming Iskander + +## A White Paper for the General Reader + +**Audience:** curious readers, journalists, policy people, students, anyone trying to make sense of where AI and democracy meet + +**Date:** April 2026 + +**Length:** long-form explainer + +--- + +## A note on language + +This paper tries to explain, in plain English, something that happened on one day in the development of one piece of software. I have written it for someone who does not work in AI and does not work in cooperatives, but who wants to understand why something that looks like a small technical change might matter for how we live with these tools. + +There is no jargon I will not explain. If you know what sociocracy is, good for you, but you do not need to. If you know what git is, fine, but you do not need to. If you know what a large language model is, useful, but I will tell you what you need to know as we go. + +--- + +## A very short version + +A developer named Lola is building a software project called Iskander. The goal of Iskander is to help member-owned organisations — worker cooperatives, housing cooperatives, credit unions, community energy schemes, and so on — make use of a team of small artificial-intelligence workers without losing control of their own decisions. The idea is that the AI workers do useful tasks (drafting, checking, reminding, organising) while the humans keep the final say on everything that matters. + +Lola builds Iskander with the help of another AI assistant, called Claude, which runs on computers owned by a company called Anthropic. On one ordinary working day, Lola and Claude did a piece of work that looked, on the surface, like a routine update. Twenty-seven files, three and a half thousand new lines of text, a short message describing what had changed. + +But when they stepped back and looked at what they had just done, they noticed something strange. The rules Claude had written for itself — the rules about how the assistant should work on this project, how it should record its decisions, who could stop it from doing something risky — were almost identical to the rules Iskander's own AI workers would one day follow when the project was finished and running in the real world. + +In other words, the assistant helping to build Iskander had started behaving like Iskander. Not pretending to. Not metaphorically. Literally doing the same work, under the same rules, with the same vocabulary, writing into the same shared memory. The difference between "the tool that is building the thing" and "the thing being built" had quietly disappeared. + +Lola wrote a short message to Claude that named this: "Claude is becoming Iskander." 
This paper is an attempt to explain, for a general reader, what that actually means — and why it might matter more than it sounds. + +--- + +## Part one: where we are with AI and accountability + +### The anxiety most people feel + +If you have been following the AI story over the last two or three years, you have probably noticed two feelings sitting uncomfortably next to each other. The first feeling is that the tools are genuinely useful. They write passable first drafts. They summarise long documents. They answer questions in plain language. They do things that used to take hours in a matter of seconds. Anyone honest who has used a modern AI assistant knows this. + +The second feeling is that the tools come with a cost that is hard to put a finger on. Your conversations with them live on somebody else's computer. The rules governing how they behave are written by a company you did not elect. The terms of service are too long to read, and even if you read them, they can change. The usefulness of the tool is real, but the ground under your feet feels rented. + +Most people have no way to act on the second feeling. Either you use the tools and accept the cost, or you do not use the tools and accept being left behind. That is the choice we have been offered, and it is not a very good choice. + +### What "accountability" actually means, day to day + +When people say they want AI to be accountable, they usually mean something specific, even if they do not spell it out. They mean: when this tool makes a decision, or helps me make a decision, I should be able to look back later and see how the decision was made. If it made a mistake, I should be able to find the mistake. If I want to change how it behaves, I should be able to change how it behaves, and the change should stick. If I am working with other people, we should all be able to see the same history and talk about it together. + +In short, accountability is the ability to keep the record, understand the record, and change the rules that produced the record. It is not about whether a tool is "good" or "safe" in the abstract. It is about whether ordinary people can see what has happened and do something about it. + +Most of the AI tools available today do not offer this kind of accountability. Not because their makers are villains, but because the tools were designed for a different purpose. They were designed to be useful to a single user at a time, in a conversation that starts and ends in one session, leaving no durable trace that the user controls. When you close the browser tab, the conversation is gone from your side. Whatever the company keeps on their side is not really yours, even though you were half the conversation. + +### Why this matters for organisations + +For individuals, the loss of the record is annoying. For organisations, it can be dangerous. If a cooperative — or a small business, or a charity, or a school, or any other group that makes decisions together — uses an AI tool to help with its work, and the record of that work lives on someone else's computer, the organisation cannot really say it owns its own memory. Five years from now, when a new member asks "how did we come to this decision", the answer is somewhere in a vendor's server logs, assuming it still exists, assuming the terms have not changed, assuming the company is still in business. + +This is not a theoretical worry. It is the same pattern we have seen play out with every generation of technology the organisation rents rather than owns. 
For a long time it was manageable, because the things being rented were peripheral (email hosting, video calls, document editing). With AI, the thing being rented is much closer to the centre: it is the reasoning, the drafting, the thinking-out-loud. When the tool holds the thinking, the organisation has less of it for itself. + +This is the problem Iskander was set up to work on. Not by refusing to use modern AI tools — that would be naive — but by finding a way to use them that keeps the record, and the rules, and the reasoning, in the hands of the organisation that is doing the work. + +--- + +## Part two: what Iskander is trying to do + +### The cooperative starting point + +Iskander begins with a very old idea. A cooperative is an organisation that is owned and run by the people who use it or work in it. The workers of a worker cooperative own the business together. The residents of a housing cooperative own their homes together. The members of a credit union are the credit union. The idea is more than a hundred and fifty years old, and there are, today, hundreds of millions of members of cooperatives around the world, across almost every country and almost every industry. + +Cooperatives are governed by their members, not by outside shareholders. That governance is the interesting bit for our story. Because the members are the owners, the members make the rules, and the members can change the rules, and the members can throw the rules out if they stop working. This is very different from how most companies work, and it is the reason cooperatives have historically been one of the places where new ideas about "doing things together fairly" have been tried out. + +Iskander's pitch is that cooperatives should be able to use AI tools without giving up this kind of member control. The tools should work for the members, not over them. The members should be able to see what the tools are doing and stop them if they need to. The memory of what the tools have done should belong to the cooperative, not to the vendor. And the rules governing the tools should be written down in a form the members can read, understand, and change through the same democratic process they use for everything else. + +### A team of small AI workers + +Rather than one big general-purpose assistant, Iskander is designed around a team of small, specialised AI workers. There is one worker whose job is to look for reasons to object to proposals (a "red team" worker). There is another whose job is to help each member find their way around the system (a "clerk"). There is another whose job is to keep track of agreements and decisions (a "decision recorder"). There is another whose job is to log the labour that has been done, so that care work, volunteer work, and paid work all appear in the same book. + +Each of these workers has a defined remit and a defined accountability. None of them can act alone on anything that matters. Every significant decision has to go through a short, structured process in which any worker — or any human member — can raise an objection that has to be heard before the work goes ahead. If the objection is serious and well-argued, the work stops until the objection is resolved. This is how the cooperative keeps control. + +None of this is unusual, as cooperative governance goes. The unusual thing is that Iskander is trying to apply it to AI workers as well as to human members. The humans keep the final say. 
The AI workers are, in effect, a new kind of worker — useful, accountable, and always subject to the cooperative's rules. + +### The tool that builds the tool + +To build all of this, Lola has been using a general-purpose AI assistant called Claude. Claude is not part of Iskander. It is a separate product, made by Anthropic, and it runs on Anthropic's computers. Lola uses it the same way a craftsperson uses a power tool: it speeds up the work, it is useful, and it belongs to someone else. + +This creates a small paradox. Iskander is supposed to be a system where the cooperative keeps control of its own work. But the assistant helping to build Iskander is itself a rented tool, running on someone else's computers. How do you build a member-controlled system using a tool you do not control? + +The answer Iskander has been working out, slowly, is this: you cannot control the tool, but you can control what the tool produces, where the tool's work is stored, and the rules the tool follows while working on your project. You can treat the tool as a temporary collaborator who is required to play by your house rules and leave all its notes in your filing cabinet. You do not own the collaborator. But you do own the notes, the rules, and the filing cabinet. + +That is the arrangement that was quietly finalised on the day at the centre of this paper. + +--- + +## Part three: the day the line blurred + +### An ordinary piece of work + +On the day in question, Lola sat down with Claude to do a specific piece of work. The goal was to write a set of "skills" — a kind of operating manual — that would tell Claude how to behave whenever it was helping on the Iskander project in the future. The skills would cover things like: which files to read first, how to record its decisions, what counts as a serious objection, where to save its working notes, how to keep track of the reasoning behind each choice. + +This kind of writing is common in AI projects. It is a way of giving an assistant a durable memory and a set of habits that do not have to be explained every time you start a new session. You write the skills once, save them in the project, and from then on the assistant loads them automatically whenever it opens the project. + +What Lola and Claude wrote on this day was a fairly complete set of skills for working on Iskander. The skills described how the assistant should start a task, how it should break a task into small pieces, how it should propose each piece, how it should record what it had done, and how it should hand off to the next session. They described the assistant's "domains" — the different areas of work it might be doing — and the persistent memory each domain would accumulate over time. They described the right of any reviewer to stop the work if they believed it would cause harm. + +They also wrote a short policy saying that nothing the assistant produced during a session should ever be deleted. Drafts, half-finished ideas, rejected options, working notes: all of it would be kept, in an archive the project controlled, so that the next session could read the full history before deciding what to do next. + +When the work was done, Lola and Claude saved all of it into Iskander's own shared storage, where it would live permanently alongside the rest of the project's files. Anyone else who worked on Iskander in the future — human or AI — would find these skills waiting, ready to be loaded. + +### The thing they noticed afterwards + +Then Lola noticed something. 
The skills Claude had just written for itself — the rules for how the assistant should work on Iskander — were, almost word for word, the same rules Iskander's own AI workers would follow when the project was finished and running. + +The words were the same. "Driver" (the short statement of what a piece of work was trying to achieve) appeared in the assistant's rules and in Iskander's own rules. "Agreement" (the thing you get when everyone has consented) appeared in both. "Paramount objection" (the right of any reviewer to stop the train) appeared in both. "Domain" (a persistent area of work with its own accumulated memory) appeared in both. + +The structures were the same. The skills Claude had written said that the assistant's decisions should be recorded in a specific kind of log, with specific fields, that could be audited later. That was exactly the log Iskander's own workers would use. The skills said that work should be organised by domain, with each domain having its own memory. That was exactly how Iskander's own workers would be organised. + +The storage was the same. The skills said that working notes, drafts, and rejected options should be kept in an archive controlled by the cooperative. That archive was the same place Iskander's own workers would eventually read from and write to. + +So Lola looked at this and said, in effect: wait. Is there any difference between the assistant helping us build Iskander and the workers inside Iskander, other than the fact that one is running today and the other will run later? And the answer was: not really. The rules are the same. The vocabulary is the same. The place where the notes are kept is the same. The right to object is the same. When the future workers come online, they will not need to be taught how to behave — they will just load the skills that are already in the project, written by the assistant that is already doing the work. + +### Why this is not just a coincidence + +At first glance, this might look like a neat accident. Of course the rules match up: the same person wrote them. But on reflection, the match is not accidental. It is the natural result of taking the cooperative's own rules seriously while building the cooperative's own software. + +If the cooperative's rules say "every significant decision needs consent", then building the cooperative's software by means of decisions that did not get consent would be dishonest. If the rules say "any reviewer can stop the work if they see harm", then letting the assistant push through work without that option would be dishonest. If the rules say "the memory belongs to the cooperative", then keeping the assistant's notes on someone else's computer would be dishonest. + +The only consistent thing to do is to apply the cooperative's rules to the build process itself, from the first day. And once you do that, the build-time assistant and the runtime assistants end up following the same rules, because there is only one set of rules. There is no separate rulebook for "how we work while building" versus "how things work once they are running". There is one rulebook, and it applies in both phases. + +This is the thing that was named on the day: the build and the runtime had quietly merged into the same system, because the team had refused to cheat on the rules during the build. "Claude is becoming Iskander" is just the short way of saying that. 
+ +--- + +## Part four: why a general reader should care + +### This is what human-in-the-loop actually looks like + +You have probably seen the phrase "human in the loop" used in discussions about AI. It is supposed to mean that a human being stays involved in important decisions, so that the AI does not run away with things. In practice, "human in the loop" often ends up meaning very little: a human clicks an "approve" button at the end of a process they did not really understand, and the system treats this as oversight. + +What Iskander is trying to do — and what the commit at the centre of this paper shows in miniature — is a stronger version of human-in-the-loop. Every significant step has to go through a small, structured process in which a human (and, eventually, other accountable workers) reviews the proposed action, considers it, and either consents or raises a serious objection. Consent is not the same as "approve". It means that the reviewer has no remaining serious concern about whether this action would cause harm to the organisation. If the reviewer does have such a concern, the action stops until the concern is addressed. + +This is stronger than a click-to-approve because the reviewer can always say no, and the no has to be taken seriously. It is also stronger because the reasoning is recorded, so you can go back later and ask, "why did we consent to this?" and get a real answer. + +The Iskander commit makes this kind of oversight not an add-on but a structural feature of the system. The assistant cannot produce work that has not been through the consent cycle, because work that has not been through the cycle is not counted as work. This is what it looks like when human-in-the-loop is more than a marketing phrase. + +### This is what substrate sovereignty actually looks like + +Another phrase you may have heard is "data sovereignty" or, in the form I prefer, "substrate sovereignty". It means: the ground your data lives on should be ground you control. Not necessarily ground you built yourself — that is unrealistic — but ground whose rules you have agreed to and whose contents you can move, copy, and audit as you see fit. + +The Iskander commit demonstrates what substrate sovereignty looks like in practice, for the AI era. The assistant runs on someone else's computers. That part cannot be changed in the short term. But the assistant's work — the rules, the decisions, the notes, the memory — lives on the cooperative's own ground. If the rented computers disappear tomorrow, the cooperative still has everything. It can pick up the work, move to a different tool, and carry on, because the valuable accumulating asset is the shared memory, not the tool that happens to be writing into it this week. + +This is a workable answer to the rented-ground feeling I described at the start. You cannot always own the tool. But you can always keep the record. + +### This is what a democratic response to AI might look like + +Finally, and most importantly for a general reader, the Iskander commit is a very small but very concrete example of what it might look like for ordinary people to respond to the rise of AI in a democratic way. + +The dominant story about AI right now is that it is being built by a small number of very large companies, for their own reasons, and the rest of us are supposed to either adopt it enthusiastically or oppose it completely. There is not much middle ground in that story. If you are excited about the tools, you use them on the companies' terms. 
If you are worried about them, you stay away. + +The middle ground that Iskander is trying to stake out is different. It says: we will use the tools, because the tools are useful, but we will use them on our terms. We will write down what we want from them. We will keep our own records of what they have done with us. We will insist on the right to stop them when they are about to do harm. We will make our rules visible, so that anyone affected by them can challenge them and change them. + +This is not a new idea. It is how democratic organisations have always approached new technologies: adopt what is useful, keep what is valuable, refuse what is harmful, and always preserve the right to change your mind. What is new is that it is now being applied to AI tools, at the level of everyday work, by ordinary cooperatives doing ordinary tasks. + +The Iskander commit is a small example. It is not the only example, and it is not necessarily the best example. But it is a real example, and it shows that the middle ground is not empty. There are people in it, doing the work, and the work is producing results you can look at. + +--- + +## Part five: what the rest of us can take from this + +### Keep your own notes + +The single biggest lesson from the Iskander commit, for anyone using an AI tool in a serious way, is this: keep your own notes. Whatever the tool is doing with you, keep a record on your side. A simple text file is enough. Write down what you asked, what the tool said, what you did with it, and anything you noticed. Store it somewhere you control — a folder on your own computer, a shared drive your organisation owns, a plain notebook if that suits you. + +This sounds trivial. It is not. Most people never do it. But once you start, something changes. You have a record. You can look back. You can see patterns. You can notice when the tool is leading you somewhere you did not mean to go. You can share your record with a colleague and get a second opinion. The record gives you leverage. Without it, you are relying on a conversation that no longer exists. + +### Ask "who controls the memory" + +When you evaluate any AI tool you use — for work, for school, for a community group — ask the simple question: who controls the memory of what we did together? If the answer is "the vendor", that is fine for casual use, but you should know it, and you should not treat the tool's memory as yours. If the answer is "me", or "us", then the tool is closer to being genuinely useful in the long run, because the knowledge you build with it stays with you. + +If the answer is not obvious, find out. Read the terms. Ask the vendor. Ask other users. The memory question is the most important question, and it is the one people ask the least. + +### Insist on the right to say no + +If you work in an organisation that is starting to use AI tools, insist on the right to say no — not just individually, but collectively. Any member of your team should be able to stop a specific use of a tool when they believe it would cause harm, and the stop should be taken seriously enough that the work does not proceed until the concern is addressed. This is how grown-ups work together. It is also how AI tools can be held to account without being thrown out. + +This is not about being obstructive. Most proposed uses will go ahead without objection. The point is that when someone does have a serious concern, the structure exists to hear it. 
Without that structure, concerns get swallowed, and the organisation drifts into uses it would not have chosen if anyone had been able to stop and look. + +### Share what you learn + +The last lesson is the one cooperatives have been good at for a century and a half. Share what you learn with other people who are trying to do the same thing. If you figure out a good way to use an AI tool for your community garden's newsletter, tell the community garden across town. If you develop a policy that protects your members' privacy, share it with other organisations that have members. The collective intelligence of people figuring this out together is much greater than any single organisation's. + +The Iskander project is one place where this kind of sharing is happening. There are others. The point is not to join any specific project. The point is to treat what you learn about AI tools as something to give away, not something to hoard. This is how the "middle ground" between enthusiastic adoption and complete refusal actually gets populated: by many small organisations sharing what works. + +--- + +## Part six: some honest caveats + +I have tried to tell this story clearly, but I want to be honest about what the Iskander commit does not do. + +It does not solve the problem of AI being made mostly by a few very large companies. That problem is much bigger than any single project, and it will need much bigger responses — some of which will be political rather than technical. + +It does not guarantee that AI tools will be used well in cooperatives, or in any other democratic organisation. It makes good use more possible, but it does not force it. People still have to do the work of using the tools responsibly, asking the hard questions, and stopping when they see harm. + +It does not mean that every organisation should start building its own AI software. Most should not. Most should use what is available, keep their own records, write down their own rules, and insist on their own right to say no. The deep technical work of building tools like Iskander is for a small number of teams with the time and the interest to do it. + +It does not mean the story ends here. The commit described in this paper is one day in one project. The patterns it hints at will need to be tried in many other projects, refined, criticised, and sometimes discarded. This is the beginning of a conversation, not the end of one. + +--- + +## Closing thought + +For most of the last few years, the story about AI has been told to us. Big companies announce new tools. Commentators debate whether the tools will save us or ruin us. Ordinary people try the tools, like some, worry about others, and wonder where this is all going. + +The Iskander commit is a small example of a different way of telling the story — one where ordinary people, working in small democratic organisations, pick up the tools and use them under their own rules. Not by refusing them and not by surrendering to them, but by insisting on keeping the record, keeping the right to object, and keeping the memory on their own ground. + +"Claude is becoming Iskander" is a strange sentence. It is not meant as a prediction that AI assistants will somehow turn into cooperatives on their own. It is a description of what happens when a cooperative holds honestly to its own rules while using an AI assistant to build its own tools. The rules apply equally to both, because the cooperative refuses to cheat on them during the build. 
The assistant follows the rules because that is what following the rules looks like. And the result is that the tool used to build the cooperative's own system starts behaving like the system it is building. + +That is worth a moment of attention. Not because it is revolutionary. Because it is small, concrete, and repeatable — and because it shows, in one working example, that the rented-ground feeling is not the only thing on offer. There is ground you can keep. There is memory you can own. There are rules you can write down and amend. There is a middle ground, and people are in it, and the work is under way. + +Keep your own notes. Ask who controls the memory. Insist on the right to say no. Share what you learn. That is the useful summary, in four lines, of a very long paper. Everything else is the explanation of why those four lines matter. diff --git a/docs/whitepapers/claude-becoming-iskander-technical-audience.docx b/docs/whitepapers/claude-becoming-iskander-technical-audience.docx new file mode 100644 index 0000000..7b5acee Binary files /dev/null and b/docs/whitepapers/claude-becoming-iskander-technical-audience.docx differ diff --git a/docs/whitepapers/claude-becoming-iskander-technical-audience.md b/docs/whitepapers/claude-becoming-iskander-technical-audience.md new file mode 100644 index 0000000..1b90984 --- /dev/null +++ b/docs/whitepapers/claude-becoming-iskander-technical-audience.md @@ -0,0 +1,287 @@ +# Claude Is Becoming Iskander + +## A White Paper for Technical and AI Practitioners + +**Audience:** engineers, AI researchers, platform architects, agent-system builders, product people working on assistants + +**Date:** April 2026 + +**Length:** long-form white paper + +--- + +## A note on language + +This paper is written in plain English for technical readers. It talks about software architecture, agent systems, and governance, but it does not assume you already know what sociocracy is, what a cooperative is, or what Iskander is. Where a term from cooperative governance appears, it is explained in the same sentence. The point is to describe, in grounded engineering terms, something that has just happened in a working system — and why that something matters for how AI assistants should be designed when they are working on systems that are themselves supposed to be accountable. + +If you build AI agents, this paper is about a pattern you can adopt. If you review AI systems for safety or governance, this paper is about a property you can look for. If you are watching the debate about AI autonomy with unease, this paper is about a concrete way to keep humans in the loop without giving up the usefulness of the tools. + +--- + +## Summary in one page + +A cooperative software project called Iskander uses an AI assistant (Claude) to help build its own runtime. The runtime is itself an agent system — a set of AI workers that will one day run under sociocracy-3.0 governance, with human oversight, on the cooperative's own infrastructure. + +In a single commit, twenty-seven files and three and a half thousand lines, the developer (Lola) and the assistant (Claude) wrote a set of operating rules for how Claude should behave while working on Iskander. On close inspection, the rules Claude wrote for itself at build time are identical in vocabulary, schema, and governance model to the rules the runtime agents will follow once the runtime is built. Same driver-consent-agreement cycle. Same decision-recorder schema with review dates. Same paramount-objection veto. 
Same circle-and-domain topology. Same archive-don't-remove policy for artefacts. The distinction between "the orchestrator that builds Iskander" and "the orchestrator that is Iskander" has collapsed into one orchestrator, instantiated on two different substrates, communicating through a shared knowledge commons stored in the cooperative's own git repository. + +This paper unpacks that collapse. It explains what it looks like in practice, what makes it possible, why it matters for how we build AI systems whose runtime is supposed to be accountable, and what the pattern means for other teams trying to build agents that participate honestly in governed environments. + +The short version of the argument is this. If you are building an AI system that is supposed to be governed at runtime by a particular set of rules — sociocratic, cooperative, regulatory, safety-critical, whatever — the most effective and honest way to build it is to have the build-time agent operate under the same rules, in the same vocabulary, against the same schemas, and write into the same knowledge commons. The identity blur between builder and runtime is not a problem to work around. It is the property you want. + +--- + +## Part one: the setting + +### What Iskander is, briefly + +Iskander is a cooperative software project that is trying to do a specific thing. Take a member-owned organisation — a housing co-op, a worker co-op, a credit union, a community energy scheme, a cooperative city or town — and give it a small team of AI assistants that can do useful operational work (drafting, checking, scheduling, reconciling, flagging) while respecting the cooperative's governance. The cooperative keeps the final say on everything. The assistants work under rules that have been agreed by the members. + +The governance model Iskander uses is sociocracy 3.0, usually written as S3. S3 is an open, pattern-based way of running an organisation. Decisions are made by consent rather than by majority vote, which means an agreement passes when no member has a "paramount objection" — a serious, well-argued reason to believe the agreement would harm the organisation if adopted. Work is organised into "circles" or "domains", each with a clear remit and clear accountability. Every agreement has a "driver", which is a short statement of the need or opportunity that caused it. Every agreement has a review date, so that nothing is set in stone. + +S3 is not a theoretical framework. It is a pattern library that many cooperatives, NGOs, and worker-owned businesses already use to run real meetings and make real decisions. Iskander's design choice was to make the runtime of its AI agents obey S3 from the start, rather than bolting governance on at the end. This is the important architectural commitment. It shapes everything else. + +### The technical components, briefly + +Without going into the full architecture, the relevant pieces are these. + +There is a decision recorder — a small service with a schema that stores agreements, drivers, review dates, and the chain of consents that led to each decision. It is append-only in the sense that old decisions are never deleted; they are superseded by new decisions that explicitly cite them. This gives the cooperative an auditable trail of how it has reasoned over time. + +There is a red-team domain — a circle of agents (eventually) whose job is to look for reasons to object to proposals. The red-team holds the paramount objection right.
If the red-team raises a veto, the work stops until the concern is addressed. + +There is a labour log — a shared book where each agent records what work it did, how long it took, and what it claimed as credit. This is borrowed from the DisCO (Distributed Cooperative Organization) pattern, where pro-bono and community care work is logged alongside paid work and treated as legitimate labour. + +There is a clerk pattern — a small agent that sits with each member and helps them find their way around. Clerks maintain double-links between member-local circles (where one member works) and persistent domain circles (where work accumulates across members). + +And there is a knowledge commons — a git repository that is the cooperative's own shared memory. Anything written into it is durable, portable, auditable, and open. + +These are the pieces the runtime will eventually use. The commit at the centre of this paper happened before the runtime was built. What happened is that the build-time agent, Claude, started using the same pieces to govern its own work. + +--- + +## Part two: what happened in the commit + +### The immediate surface + +The commit landed with the message: `feat(.claude/skills): orchestrator skills + development record policy`. Twenty-seven files. Three thousand five hundred and twenty-six lines added. Nothing deleted. The files live under `.claude/skills/` in the repository — a conventional directory for skills that the assistant loads into context when it is working on the project. + +The skills include an invariants cheatsheet (things the assistant must never violate when working on this codebase), a cooperative topology document (a description of how the cooperative's own circles relate to each other), a set of domain descriptions (red-team, governance-clerk, builder, archivist, each with its remit and accountabilities), a set of brief templates (how to start a task, what to record, what to hand off), and a priority rules file (how to resolve conflicts between overlapping agreements). + +On its own, this looks like a reasonable piece of prompt engineering. Teams that work with assistants often write something like this. Give the assistant context, give it rules, give it a memory pattern, ship it. The unusual thing is what the documents say, not that they exist. + +### The vocabulary match + +Read the skills files alongside the runtime design documents and the overlap is near-total. + +The build-time skills describe a "driver" as the short statement of the need that prompted a piece of work. The runtime decision recorder's schema has a field called `driver` that holds the same thing. + +The build-time skills describe an "agreement" as the outcome of a consent cycle, recorded with the people who consented and the people who objected. The runtime decision recorder's core table is called `Decision`, and its fields mirror the build-time description one-for-one, including `review_date`. + +The build-time skills describe a "paramount objection" as the right of any reviewing agent to veto a proposal they believe would cause harm. The runtime design has a red-team domain with exactly this veto, implemented as a gate that blocks deployment until a concern is addressed. + +The build-time skills describe "domains" as persistent circles of work that accumulate knowledge across sessions. The runtime architecture has exactly this, modelled as long-lived agent circles with their own memory and their own accountabilities. 
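+
+To make the match concrete, the shared shape can be sketched in the SQLAlchemy style the decision-recorder code at the end of this commit uses. Only `driver` and `review_date` are field names confirmed above; the remaining columns are illustrative assumptions, not the actual schema.
+
+```python
+from sqlalchemy import Column, Date, Integer, Text
+from sqlalchemy.orm import declarative_base
+
+Base = declarative_base()
+
+
+class Decision(Base):
+    """Illustrative sketch only; the real model has more fields."""
+
+    __tablename__ = "decision"
+
+    id = Column(Integer, primary_key=True, autoincrement=True)
+    driver = Column(Text, nullable=False)       # the need that prompted the agreement
+    agreement = Column(Text, nullable=False)    # what was consented to (assumed name)
+    review_date = Column(Date, nullable=False)  # mandatory: nothing is set in stone
+    superseded_by = Column(Integer, nullable=True)  # cite, never delete (assumed name)
+```
+
+The point is not the exact columns. It is that the build-time skills and the runtime service describe the same record, so artefacts produced on one substrate are legible to the other.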
+ +The build-time skills describe double-linking — a member's local circle being linked to a persistent domain circle and vice versa. The runtime clerk pattern implements this directly. + +The build-time skills describe an archive-don't-remove policy for artefacts. The runtime labour log is designed on exactly the same principle: work is logged and superseded, not deleted. + +I am labouring this because the match is the point. A team building a system with this governance model has two choices. They can build the system and then try to bring its governance to life later, at which point they will discover that every shortcut they took during the build is now baked into the product. Or they can hold themselves to the same governance from the start, at which point the build process becomes its own demonstration of whether the governance model is workable. Iskander's team chose the second path, and this commit is the moment that choice became visible in the code. + +### The substrate shift + +The other thing the commit did was quieter but arguably more important. It moved the development records — the threat model, the skills, the working artefacts, the brief templates — out of the chat interface and into the repository. + +This might sound trivial. It is not. In most teams that use commercial AI assistants, the working notes of the conversation are transient. They live in the chat window for the session and then they disappear. If the session is exported, the export lives in the user's machine or in a shared drive, but it is not usually treated as part of the system's own artefacts. The system's artefacts are "the code and the docs". The conversation is "the work that produced the code and the docs, but not the code and docs themselves". + +For a cooperative whose core commitment is member control of its own memory, that split is dangerous. It means the substance of how decisions were reached lives on someone else's computer, in someone else's format, under someone else's terms of service. This is what we mean when we talk, in the cooperative movement, about "substrate sovereignty". The commons has to live on the cooperative's own ground, or it does not really belong to the cooperative. + +The commit moved the working notes onto the cooperative's ground. Not as a copy, not as an export, but as first-class artefacts under version control. The skills are committed. The templates are committed. The policies are committed. The threat model is committed. Every future session that opens the repository loads them automatically. Every future contributor — human or AI — can read them, amend them, challenge them, and propose changes via pull request. The development record is no longer a transient layer on top of the work; it is part of the work. + +--- + +## Part three: the identity collapse, in engineering terms + +### The two orchestrators that turned out to be one + +If you think about Iskander the way you would think about a normal software project, there are two orchestrators in the story. + +There is the build-time orchestrator: the agent system that helps Lola write the code for Iskander. Today that is Claude, loaded with the skills in this repository, running against Anthropic's infrastructure, talking to git through a bash tool. + +And there is the runtime orchestrator: the agent system that Iskander will eventually be. An internal name for it is `openclaw-orchestrator`. 
It is supposed to coordinate a set of small, specialised agents (red-team, clerk, steward, decision-recorder, treasurer, and so on) who will run on the cooperative's own infrastructure and do the cooperative's work. + +The natural assumption is that these are two separate systems. The build-time orchestrator is a tool used by a developer. The runtime orchestrator is the product being built. One helps make the other. They are distinct. + +What the commit reveals is that this assumption is wrong, in a specific and important way. The build-time orchestrator is running on rules that are identical to the runtime orchestrator's. It is referencing the runtime's schemas as authoritative. It is organising its work into the same domains the runtime will use. It is writing its artefacts into the same commons the runtime will read. When the runtime is eventually instantiated, it will not need to be taught these rules — they are already in the repository, already being used by something that behaves like a runtime. The only difference between build-time and runtime is the substrate they happen to be running on, and the fact that build-time has Lola in the loop for every decision while runtime will have a broader set of reviewers. + +Engineering-wise, you can think of it this way: there is one orchestrator schema, with two deployments. Deployment A runs on Anthropic's infrastructure, uses a single human principal, and reads/writes the commons through the developer's machine. Deployment B will run on the cooperative's infrastructure, use a multi-member principal, and read/write the commons directly. The schema of rules and artefacts is shared. The schema of agreements is shared. The schema of objections is shared. The domains are shared. The knowledge commons is shared. + +This is not an accident that emerged in hindsight. It is the result of a deliberate choice to make the build-time agent operate under the rules the runtime will obey. And once that choice was made, the two orchestrators stopped being separable. They became two substrates of the same orchestrator. + +### Why this matters for how you build agents + +Most teams building agent systems today assume there is a clean split between the agent that writes the code and the agent that is the code. The development agent is a productivity tool. The runtime agent is the product. They can be designed separately, tested separately, and shipped on different timelines. + +That assumption is fine for products where the runtime's governance is not a core commitment. If you are building a coding copilot, or a customer support bot, or a data-analysis agent, the split between build-time and runtime is an efficiency choice. You optimise each for its own job. + +The assumption breaks when the runtime has to be accountable. If your runtime is supposed to operate under sociocracy, or under a regulatory regime, or under a safety-critical protocol, the build-time agent cannot be governed by one set of rules while the runtime is governed by another. If it is, you will discover — at the exact moment you try to bring the runtime online — that the training data, the prompts, the conventions, the artefacts, and the implicit norms of the build phase do not fit the runtime's governance. You will spend months retrofitting accountability onto a system whose bones were laid down in an ungoverned phase. + +The honest answer is to collapse the split. Have the build-time agent work under the runtime's rules from the start. Write its skills in the runtime's vocabulary. 
Make its artefacts readable by the runtime's tools. Store its records in the commons the runtime will read. When the runtime comes online, it inherits a fully-formed corpus of how this kind of work is done, because the build phase was itself a worked example of the runtime's governance in practice. + +This is what happened in the Iskander commit. It was not planned as a demonstration of this principle. It emerged because the team held themselves honestly to the rules they believed in. But once it is named, it is a pattern other teams can adopt deliberately. + +### The knowledge commons as the join + +The mechanism that makes the collapse work is the shared knowledge commons. Both the build-time agent and the future runtime agents read and write the same repository, in the same formats, under the same governance. The repository is the common ground. It is what makes "build" and "runtime" two instances of the same orchestrator rather than two separate systems. + +Concretely, the commons holds: + +The skills themselves, which are the rules of operation. Both substrates load them. Amendments to the rules happen through pull requests against the commons, reviewed under the same S3 consent cycle the rules describe. This means the system's operating rules can be changed by the same process the system uses to change anything else. It is self-hosting governance. + +The decision records, which are the accumulated agreements, their drivers, and their review dates. Build-time work generates these records when it makes decisions about architecture, threat models, priorities, and trade-offs. Runtime work will generate them when it makes operational decisions. Both kinds of records live in the same format, in the same place, and can be read by the same queries. + +The development records, which are the working notes — the half-finished drafts, the rejected options, the reasoning that led to each decision. Under the archive-don't-remove policy, nothing is thrown away. Future sessions can read the full trail, not just the final outcome. This is the single most important property of the commons: it preserves the reasoning, not just the conclusions. + +The domain files, which describe the persistent circles — red-team, clerk, steward, builder, archivist — and which records each circle owns. This is the memory structure that lets domains accumulate institutional knowledge across sessions. Over enough sessions, each domain develops its own working patterns, its own known edge cases, its own accumulated agreements. The domain exists independently of any specific session. + +Together, these four layers turn the git repository into something richer than a codebase. It is a durable, governed, auditable memory that multiple agents, running on different substrates at different times, can share. That is what makes the build-runtime collapse work in practice. + +--- + +## Part four: why this is a good pattern for agent design + +### The honesty property + +The first reason this pattern is worth adopting is a property I will call honesty. An honest system is one whose build-time behaviour is consistent with its runtime promises. If the runtime promises democratic decision-making, an honest build process is also democratic. If the runtime promises auditable decision records, an honest build process also produces auditable decision records. If the runtime promises that any objector can stop the train, an honest build process also respects that objection. + +Most AI systems today are not honest in this sense. 
They are built with whatever tools are available, under whatever conventions the team happens to have, and then the runtime is supposed to behave according to a different set of rules that will be bolted on later. This gap between build and runtime is where most of the governance failures of AI systems have come from. The team did not hold themselves to the runtime's rules during development, and when they tried to impose those rules at the end, they discovered that the system's internal logic had been shaped by different norms. + +The Iskander pattern eliminates the gap by construction. The build phase and the runtime phase obey the same rules. There is no translation layer, no retrofit, no "we will add governance in v2". The governance was already there, and the system has been running under it from the first commit. + +This has an interesting consequence for external reviewers. If you are auditing a system for governance compliance, you can read its build artefacts as evidence. You do not have to wait for runtime logs to accumulate. The skills, the decision records, and the development records already demonstrate how the system behaves under its own rules, because the build process itself was one long worked example. + +### The portability property + +The second property is portability. Because the commons lives in a git repository in open formats — markdown for the skills, plain text and SQLite for the decision records, JSON for the topology — any future system can read it. If the current build-time assistant disappears tomorrow, a different assistant can pick up the same repository and continue. If the current runtime substrate becomes too expensive or too restrictive, the system can be redeployed on a different substrate with no loss of memory. The cooperative is not locked into any single vendor, any single model, or any single hosting provider. The commons is the asset, and the commons is portable. + +This is the engineering correlate of the cooperative movement's Fourth Principle — autonomy and independence. It is what it means, in technical terms, to engage with powerful external technology without being captured by it. The key is that the valuable, accumulating, institutional asset (the commons) lives on ground the cooperative controls, in formats the cooperative can read, under governance the cooperative has agreed to. The tools that read and write the commons can come and go. + +### The self-hosting governance property + +The third property is self-hosting governance. The rules under which the system operates are stored in the commons, and amendments to the rules go through the same consent cycle the rules describe. This is a non-trivial property. Most systems have their governance written somewhere outside the system (a policy document, a board resolution, a spec), and that outside thing has to be updated separately from the code. In Iskander, the rules are inside the system, and the system's own processes govern them. + +This closes a loop that most governance frameworks leave open. If you want to change a rule, you propose it as an agreement. The agreement goes through the consent cycle. The red-team can paramount-object. If it passes, the skills files are amended and committed. Every subsequent agent that loads the skills gets the new rule automatically. The rule change is recorded in the decision log, with its driver, its review date, and its consent chain. Future auditors can trace back the history of the rules as well as the history of the decisions. 
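+
+A minimal sketch of that loop, under stated assumptions: the paper describes the flow, not this API, so `Reviewer`, the helper names, and the dataclass fields beyond the governance terms are hypothetical.
+
+```python
+from dataclasses import dataclass, field
+from datetime import date
+from typing import Protocol
+
+
+@dataclass
+class Objection:
+    reviewer: str
+    reason: str
+    paramount: bool  # a paramount objection blocks until addressed
+
+
+@dataclass
+class RuleChange:
+    driver: str        # the need that prompted the amendment
+    diff: str          # the proposed change to a skills file
+    review_date: date  # mandatory, like any other agreement
+    objections: list[Objection] = field(default_factory=list)
+
+
+class Reviewer(Protocol):
+    def review(self, proposal: RuleChange) -> Objection | None: ...
+
+
+def commit_to_commons(diff: str) -> None:
+    """Stand-in for committing the amended skills file to the git commons."""
+
+
+def record_decision(proposal: RuleChange) -> None:
+    """Stand-in for appending the agreement, with its consent chain, to the decision log."""
+
+
+def amend_rule(proposal: RuleChange, reviewers: list[Reviewer]) -> None:
+    for reviewer in reviewers:
+        objection = reviewer.review(proposal)
+        if objection is not None:
+            proposal.objections.append(objection)
+    if any(o.paramount for o in proposal.objections):
+        # No majority override: the concern must be addressed by a new proposal.
+        raise RuntimeError("paramount objection raised; rework the proposal")
+    commit_to_commons(proposal.diff)  # every later session loads the new rule
+    record_decision(proposal)         # the rule change is itself on the record
+```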
+ +Self-hosting governance is hard to retrofit. It has to be designed in from the start. The Iskander commit is an example of what that looks like in practice, because the commit itself was governed by the same rules it was writing down. + +### The human-in-the-loop property + +The fourth property, and the one most relevant to the current debate about AI autonomy, is human-in-the-loop by construction. Because every agreement in the system requires consent, and because any reviewer (human or agent) can raise a paramount objection, there is no path through the system that lets a decision get made without at least one round of human review. This is not a soft guarantee enforced by prompt engineering. It is a structural property of the agreement schema. A decision that has not been through the consent cycle does not count as an agreement. The system does not treat it as actionable. + +For teams worried about giving agents too much autonomy, this pattern is worth studying. It shows how to build an agent system that is genuinely useful at build time and at runtime, while preserving a structural requirement for human oversight at every meaningful decision point. The agents are not restricted by an external reviewer who says "no" after the fact. They are operating inside a governance structure that treats unreviewed decisions as non-decisions, and therefore does not produce them. + +The paramount objection is the teeth. Any reviewer can stop any decision by raising a serious, well-argued concern that the decision would cause harm. The objection has to be heard. It cannot be overruled by a majority. It can only be resolved by finding a new proposal that addresses the concern. This is a strong property for a safety-oriented agent system. It puts the brakes in the hands of any single reviewer who sees a problem, rather than requiring coordination to stop a bad decision. + +--- + +## Part five: how to adopt the pattern + +### Step one: decide what your runtime's governance actually is + +Before you can collapse build and runtime, you have to know what your runtime's governance is supposed to be. This is harder than it sounds. Most teams have a vague idea ("we want humans in the loop", "we want this to be auditable", "we want it to respect user privacy") without a concrete schema for how decisions are made, who has veto rights, and what counts as a record. + +The first step is to pick a real governance framework. It can be sociocracy 3.0, as Iskander did. It can be a formal regulatory regime if you are in a regulated industry. It can be a safety protocol drawn from the safety-critical systems literature. It can be a custom framework you have written yourself. What matters is that it is concrete enough to be turned into schemas: a decision record with fields, a consent cycle with stages, a veto mechanism with a trigger, a domain structure with remits. + +If you cannot turn your governance into schemas, you cannot collapse build and runtime, because there is nothing to share between them. + +### Step two: write the skills in the governance's vocabulary + +Once you have schemas, write the build-time agent's operating rules in the same vocabulary. If your governance talks about "drivers", the agent's task templates should ask for a driver. If it talks about "paramount objections", the agent's review checklist should include a field for objection status. If it talks about "domains", the agent's memory structure should be organised by domain. 
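+
+As a sketch of what that looks like in practice (the template structure, and anything beyond the governance terms themselves, is a hypothetical illustration):
+
+```python
+# A task brief that speaks the runtime's vocabulary from the first keystroke.
+BRIEF_TEMPLATE = {
+    "driver": "",            # the need or opportunity prompting the work
+    "domain": "",            # which persistent circle owns the work
+    "proposal": "",          # what the session intends to change
+    "review_date": "",       # ISO 8601 date; the resulting agreement requires one
+    "objection_status": "",  # none | raised | paramount | resolved
+}
+
+
+def open_brief(driver: str, domain: str) -> dict:
+    """Start build-time work the way the runtime would: from a driver, inside a domain."""
+    brief = dict(BRIEF_TEMPLATE)
+    brief.update(driver=driver, domain=domain)
+    return brief
+```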
+ +This is prompt engineering, but it is prompt engineering with a purpose. The purpose is to make the build-time agent speak the same language as the runtime agent will speak, so that the artefacts they produce are mutually intelligible. + +The Iskander commit did this for every major runtime concept. The skills files use "driver", "agreement", "paramount objection", "domain", "circle", "labour log" — the exact words that appear in the runtime schemas. There is no translation step between what the build-time agent produces and what the runtime agent will consume. + +### Step three: put the commons in your repository + +Store the skills, the decision records, the domain files, and the development records in your own version-controlled repository. Use open formats: markdown, JSON, SQLite, YAML. Avoid proprietary formats and avoid anything that requires a specific tool to read. The goal is that any future agent, on any future substrate, can open the repository and load the context. + +This is where substrate sovereignty lives. As long as the commons is portable, the system is not locked into its current infrastructure. If you need to switch providers, you switch providers. The commons stays with you. + +### Step four: make build-time decisions follow the consent cycle + +When the build-time agent makes a decision — about architecture, priorities, trade-offs, threat models — record it as an agreement, with a driver, a review date, and a consent chain. The consent chain at build time is short (usually just the developer and the assistant), but it is real: the developer has to consent, and the assistant can raise concerns, and those concerns have to be addressed before the decision is committed. + +This feels like overhead at first. It is worth it. Every decision recorded at build time becomes part of the commons, and future sessions can read the reasoning. More importantly, when the runtime comes online, it inherits a body of worked examples of how this system makes decisions, with the full reasoning preserved. + +### Step five: don't delete, archive + +Adopt a policy that says: working artefacts are not thrown away at the end of a session. They are moved to an archive location in the commons, with enough metadata that future sessions can find them. The archive grows over time. This is the memory of the domain. + +There is a temptation to clean up — to remove drafts, to delete rejected options, to tidy the commons down to only the final outcomes. Resist this. The value of the commons is in the reasoning, and the reasoning lives in the dead ends as much as in the conclusions. A future session looking at the archive can see not only what was decided but what was considered and rejected, and why. This is how the system learns from itself. + +### Step six: treat the runtime as the same system + +When you start building the runtime, do not rewrite the rules from scratch. Load the skills files that the build-time agent has been using. Reuse the decision records as historical context. Use the same domains. Point the runtime agents at the same commons and let them read the same files. The runtime is not a new system; it is a new substrate for the same system. + +This is the step that feels strangest. Most teams are used to thinking of runtime and build as distinct. In this pattern, they are not. The runtime is the build-time orchestrator, deployed on different infrastructure, with a wider set of principals. The schemas, the rules, the artefacts, and the commons are shared. 
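+
+A minimal sketch of that last step, assuming only what the commit itself shows (skills as markdown files under `.claude/skills/`); the loader function is hypothetical:
+
+```python
+from pathlib import Path
+
+
+def load_skills(commons: Path) -> dict[str, str]:
+    """Read every skills file from the commons. Both substrates call the same code."""
+    skills_dir = commons / ".claude" / "skills"
+    return {
+        path.relative_to(skills_dir).as_posix(): path.read_text()
+        for path in sorted(skills_dir.rglob("*.md"))
+    }
+```
+
+Whether the caller is the build-time assistant on a developer's machine or a runtime agent on the cooperative's infrastructure, the rules come from the same files, at the same paths, under the same version control.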
+
+---
+
+## Part six: open questions
+
+### How does this interact with commercial model providers?
+
+The build-time agent in the Iskander story is Claude, which runs on Anthropic's infrastructure. The runtime is planned to run on the cooperative's own infrastructure. The commons is portable by design, so if Anthropic's terms change, the cooperative can switch build-time agents without losing its memory.
+
+But there is a subtler question. Does the build-time agent's relationship with its provider affect the governance? If the provider updates the underlying model, the build-time agent's behaviour may shift, even though the skills files have not changed. This is a real risk. Part of the mitigation is that the skills files are declarative and explicit enough that a well-behaved model should follow them regardless of minor updates. The stronger mitigation is that the decision records at build time are reviewed by the human principal (Lola), who can catch behaviour drift before it propagates.
+
+This is an open area. Teams adopting this pattern should think carefully about how model drift interacts with governance. One useful heuristic: if the same skills files produce noticeably different behaviours from one model version to the next, that is itself a signal worth recording in the decision log.
+
+### How does the pattern scale to larger teams?
+
+The Iskander story so far has one human principal and one build-time agent. In larger teams, there may be many humans and many agents all working on the same commons at once. The sociocratic consent cycle scales, in principle, to larger groups — that is what it was designed for — but the engineering of concurrent commits, merge conflicts in decision records, and cross-session coordination is non-trivial.
+
+Git's existing merge and review tools are a good starting point. Pull requests can be treated as proposed agreements. Reviewers' comments can be treated as consent responses. A blocking review can be treated as a paramount objection. But this is a translation, not a native implementation, and it will break down at scale. More work is needed on how to run S3-style consent cycles across many simultaneous contributors.
+
+### How does this relate to agentic safety research?
+
+The pattern described here is, in part, an alternative to two other approaches you see in agentic safety work. One is the "guard rails" approach, where an external classifier checks agent outputs and blocks bad ones. The other is the "alignment" approach, where the model itself is trained to refuse bad actions. Both are important and both have known limits.
+
+The Iskander pattern adds a third angle: structural governance. The agent operates inside a decision schema that requires consent for every meaningful action. Unreviewed actions are not valid in the system. The consent cycle is not a guard rail on top of the model; it is the model's interaction protocol. An agent trained or prompted to follow the protocol will produce agreements that have gone through consent, because that is what the protocol demands.
+
+This is not a replacement for alignment or for guard rails. It is a complementary layer. It is particularly useful for systems where the governance model is already clear (regulated industries, cooperative structures, safety-critical operations) and where the question is not "what should the agent refuse" but "what should the agent do, and under what authority".
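+
+As a sketch of what "not valid in the system" means structurally (the `Agreement` type, the registry, and `perform` are hypothetical; the paper describes the property, not this code):
+
+```python
+from dataclasses import dataclass
+
+
+@dataclass
+class Agreement:
+    driver: str
+    consented: bool  # True only after a completed consent round
+
+
+def perform(action: str) -> None:
+    """Stand-in for actually carrying out the work."""
+
+
+def execute(action: str, agreements: dict[str, Agreement]) -> None:
+    agreement = agreements.get(action)
+    if agreement is None or not agreement.consented:
+        # Not a guard rail bolted on after the fact: without a consented
+        # agreement, the action simply does not count as actionable.
+        raise PermissionError(f"no consented agreement covers: {action}")
+    perform(action)
+```
+
+### What are the failure modes?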
+ +The main failure mode of this pattern is also its main strength: the commons has to be kept honestly. If a team adopts the pattern superficially — writes the skills files, then silently ignores them when they are inconvenient — the system degrades into theatre. The decision records stop matching what actually happened. The paramount objection right goes unused even when it is needed. The domain files drift out of sync with the real structure of the work. + +The defence against this is reviewability. Because everything is in the commons, anyone can audit whether the records match the work. Over time, a team that is keeping the commons honestly will have a record that hangs together. A team that is not will have a record that shows inconsistencies. External reviewers — whether human or AI — can spot this. It is not foolproof, but it is a meaningful check. + +Another failure mode is overhead. Running every decision through a consent cycle, writing every artefact to the commons, reviewing every rule change as an agreement — this takes time. At very small scale, it is cheap. At very large scale, it may become prohibitive. Teams need to be honest about the overhead and decide which decisions are worth the process. A rule of thumb: if a decision would have gone to a human review anyway, run it through the cycle. If it would not, run it through a lighter-weight pattern and make sure the lighter-weight pattern is itself governed by an agreement about what is worth reviewing. + +--- + +## Part seven: closing thoughts for practitioners + +### The short version of the claim + +If you are building an AI system whose runtime is supposed to be accountable under a specific governance model, the most effective way to build it is to hold the build process to the same governance. Write the build-time agent's skills in the runtime's vocabulary. Store its records in the runtime's commons. Make its decisions through the runtime's consent cycle. Respect the runtime's veto rights during the build. + +This is not a theoretical best practice. It is what fell out of one team trying to hold themselves honestly to their own commitments on a working system. The surprise was not that it worked. The surprise was how much the build-time agent started to look like the runtime it was building, once you let that happen. + +### What the identity collapse actually buys you + +It buys you consistency between promise and practice. It buys you portable knowledge that outlives any specific vendor. It buys you a working demonstration of your governance that auditors can inspect before the runtime is even live. It buys you self-hosting rules that can be amended through the same process that uses them. And it buys you human-in-the-loop by construction, because the consent cycle treats unreviewed actions as invalid. + +These are strong properties. They are also not free. They require the team to commit to the governance from day one, and they require the governance itself to be concrete enough to be turned into schemas. This is the price of admission, and it is a price worth paying for systems whose runtime has to be trusted. + +### The honest acknowledgement + +The Iskander pattern does not solve the hard problems of AI governance. It does not make large language models interpretable. It does not eliminate prompt-injection attacks. It does not guarantee that a rogue model, fine-tuned to behave adversarially, will respect the consent cycle. 
It is a structural layer that sits on top of whatever model you are using, and it is only as strong as the agents' willingness to follow the protocol. + +But it is a layer that works now, on existing models, with existing tools, and it produces useful results today. For teams that need to build governed agent systems in the current state of the art, it is a practical path forward. It is also, for what it is worth, the path the cooperative movement has been quietly asking for: tools that you can use without letting them own your thinking. + +### One sentence to take away + +If you build AI systems that are supposed to be accountable, build them under the governance they are supposed to be accountable to — from the first commit, not from the launch date. + +The rest follows. diff --git a/src/IskanderOS/openclaw/agents/clerk/tools.py b/src/IskanderOS/openclaw/agents/clerk/tools.py index 4676160..650c523 100644 --- a/src/IskanderOS/openclaw/agents/clerk/tools.py +++ b/src/IskanderOS/openclaw/agents/clerk/tools.py @@ -468,16 +468,19 @@ def log_labour( task_category: str, hours: str, timestamp_start: str, + subclass: str | None = None, task_description: str | None = None, timestamp_end: str | None = None, loomio_discussion_id: int | None = None, notes: str | None = None, ) -> dict: """ - Log a DisCO three-value-stream labour record (#91). + Log a DisCO four-value-stream labour record (#91, #174). Glass Box MUST be called before this function. value_type: productive | reproductive | care | commons + subclass: optional domain:activity tag e.g. 'governance:facilitation', + 'compliance:audit', 'researcher:survey', 'commons:standards' hours: decimal string e.g. '1.5' (minimum 0.25) timestamp_start: ISO 8601 datetime """ @@ -488,6 +491,8 @@ def log_labour( "hours": hours, "timestamp_start": timestamp_start, } + if subclass is not None: + payload["subclass"] = subclass if task_description is not None: payload["task_description"] = task_description if timestamp_end is not None: diff --git a/src/IskanderOS/services/decision-recorder/db.py b/src/IskanderOS/services/decision-recorder/db.py index b8c6a67..f4a5287 100644 --- a/src/IskanderOS/services/decision-recorder/db.py +++ b/src/IskanderOS/services/decision-recorder/db.py @@ -127,19 +127,22 @@ class Tension(Base): class LabourLog(Base): """ - DisCO three-value-stream labour record. + DisCO four-value-stream labour record. Makes invisible labour visible — care work (onboarding, mediation, governance facilitation) and reproductive work (maintaining docs, processes) are tracked alongside productive (deliverable) work. + The optional subclass field enables domain:activity disaggregation + for standards tracking and MVM measurement (#174, A13). - Reference: DisCO Governance Model v3, issue #91. + Reference: DisCO Governance Model v3, issues #91, #174. """ __tablename__ = "labour_log" id = Column(Integer, primary_key=True, autoincrement=True) member_id = Column(String(128), nullable=False) # Mattermost user ID value_type = Column(String(32), nullable=False) # productive | reproductive | care | commons + subclass = Column(String(128), nullable=True) # domain:activity e.g. "governance:facilitation" (#174) task_category = Column(String(128), nullable=False) # e.g. 
"governance.facilitation" task_description = Column(Text, nullable=True) hours = Column(String(16), nullable=False) # decimal string; stored as text to avoid float precision issues diff --git a/src/IskanderOS/services/decision-recorder/main.py b/src/IskanderOS/services/decision-recorder/main.py index f8c7d34..3fb359f 100644 --- a/src/IskanderOS/services/decision-recorder/main.py +++ b/src/IskanderOS/services/decision-recorder/main.py @@ -697,6 +697,11 @@ class LabourLogRequest(BaseModel): ..., description="productive | reproductive | care | commons", ) + subclass: str | None = Field( + None, + description="Domain:activity subclass tag e.g. 'governance:facilitation', 'compliance:audit', 'commons:standards'. Enables labour disaggregation for standards tracking and MVM (#174).", + max_length=128, + ) task_category: str = Field( ..., description="Dot-notation category e.g. governance.facilitation, code.review, onboarding.welcome", @@ -775,6 +780,7 @@ def log_labour( record = LabourLog( member_id=body.member_id, value_type=body.value_type, + subclass=body.subclass, task_category=body.task_category, task_description=body.task_description, hours=body.hours, @@ -869,6 +875,7 @@ def _labour_summary(r: LabourLog) -> dict: "id": r.id, "member_id": r.member_id, "value_type": r.value_type, + "subclass": r.subclass, "task_category": r.task_category, "task_description": r.task_description, "hours": r.hours,