🪸 Coral

Korean

Claude Code already knows how to code. Coral teaches it how you work.

Coral is a CLI-first plugin backed by a persistent local coordinator for orchestration, sessions, discussion, and knowledge-base workflows.

Install

Requirements: Node.js 18+

# Claude Code:
/plugin marketplace add https://github.com/kangig94/coral
/plugin install coral

# Codex (also enables --delegate cross-model delegation):
npm install -g @openai/codex
codex plugin marketplace add kangig94/coral
# Restart Codex, then run /plugins and install Coral from the Coral marketplace.

# Update the Codex marketplace and installed plugin cache:
codex plugin marketplace upgrade coral

Try It Now

Run this on any existing project:

/coral:analyze what does this codebase do?

Structure a Project

Demo video: demo-init-project.mp4
/coral:init-project

Coral scans your stack and generates .claude/ — conventions, review agents, architecture docs — tailored to your project.

Generated agents aren't boilerplate — they encode evaluation rubrics calibrated to your project's stack and audience. Claude follows your rules, not generic defaults.

/coral:init-project                                          # existing project
/coral:init-project "React + FastAPI"                        # tech stack hint
/coral:init-project "multi-tenant SaaS REST API with Go"     # full description
Generated structure
my-project/
+ .claude/
+   CLAUDE.md                 ← project hub: build commands, workflow, critical rules
+   agents/
+     code-critic.md          ← code quality review
+     ...                     ← domain agents (React, Go, ML, infra, etc.)
+   rules/
+     conventions.md          ← naming, git, style
+     ...                     ← domain rules, auto-activated by file path
+ docs/
+   architecture.md           ← module map, dependency graph

Browse this repository's .claude/ folder for a real example.

Plan, Build, and Fix

pathfind  →  preplan  →  plan  →  ralph
(explore)    (define)    (design)  (implement + verify)
# Know the problem, need a plan:
/coral:plan add retry logic to the API client

# Have symptoms, not sure what's wrong:
/coral:pathfind API is slow, DB hits limits, users are complaining

# Complex problem, need alignment first:
/coral:preplan race condition in the session manager

# Have a plan, just implement it:
/coral:ralph implement the caching layer

# Bug — diagnose, plan, fix in one shot:
/coral:bugfix why does session lookup return null?
Pipeline details

Each stage produces an artifact that feeds the next. Enter at any point — skip stages you don't need.

pathfind — "I have problems but don't know what to build." Clusters symptoms, investigates root causes (spawns scanner for codebase analysis), generates divergent directions through orthogonal lanes, and spawns pioneer for elegant alternatives. Outputs a ranked direction list with a scoring matrix. Hands off the chosen direction to preplan.

preplan — "I know the direction but need agreement on scope." Fills a 7-item agreement (problem statement, success criteria, scope, assumptions, affected systems, constraints, approach direction) autonomously from codebase analysis, then presents it to the user for correction. Spawns pioneer to find elegant alternatives for uncertain items, offering default/minimal/elegant spectrums. Produces pre-{topic}.md — the contract that plan must satisfy.

plan — "I need a design before I build." Multi-round review loop: dispatches architect + critic (and resolver in --deep mode) as a workflow, synthesizes findings, edits the plan file, and evaluates an exit gate. The resolver classifies findings (Adopt/Adapt/Defer/Diverge), applies changes, and decides whether another round is needed based on finding severity and nature. Produces {topic}.md with acceptance criteria, implementation phases, and execution order (dependency graph + parallel batches).

ralph — "I have a plan (or prompt). Just build it." Persistent executor with a verification loop. In plan mode, reads the execution order and dispatches batches — parallelizing independent ACs. Every completion claim requires fresh verification evidence (lint → build → test). --red spawns a red-attacker in parallel to write adversarial tests targeting blind spots. --team uses Agent Teams for parallel AC execution.
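As a rough illustration (not Coral's actual implementation), ralph's lint → build → test gate behaves like a short-circuiting shell chain; run_stage below is a hypothetical placeholder standing in for your project's real commands:

```shell
# Placeholder stages standing in for real lint/build/test commands.
# Each stage must exit 0 before the next runs; "verified" prints only
# when all three pass -- mirroring the fresh-evidence requirement.
run_stage() { echo "stage ok: $1"; }

if run_stage lint && run_stage build && run_stage test; then
  echo "verified"
fi
```

Because && stops at the first non-zero exit, a failing build never lets the test stage (or a completion claim) run.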

Advanced flags
# Deep review with methodology-driven synthesis (resolver + HOW reasoning):
/coral:plan --deep add retry logic to the API client

# Adversarial testing — spawns a red-attacker to target blind spots:
/coral:ralph --red implement the caching layer

# Cross-model delegation (Codex when on Claude, Claude when on Codex):
/coral:plan --delegate redesign the session management system

--delegate runs the work on the other host (Codex if you're on Claude, Claude if you're on Codex).

Discuss

/coral:discuss should we use microservices or a monolith?

Multiple AI personas argue from different angles. Bid-based turn-taking, genuine cross-examination, structured synthesis at the end.

Join as a participant: /coral:discuss --user "topic", then /coral:bid to submit your turns.

Example: "Am I AGI?" — Full transcript EN · KO

Highlights from the discussion

A phenomenologist, a computational neuroscientist, an AI safety researcher, a robotics engineer, and an Eastern philosophy scholar debate whether LLMs constitute AGI. 5 agents, 15 speeches, 3 convergence points.

"Your robots have given me pause — genuinely. A robot arm that has touched ten thousand objects still can't generalize the way an LLM can." — Prof. Klaus Hartmann, conceding to Daan Vermeer's empirical challenge

"LLMs may be the first external instantiation of a theoretical structure Buddhist philosophers argued for 1,500 years ago." — Priya Raghunathan, mapping Yogacara's alaya-vijnana onto transformer architecture

"Think of the difference between an amnesiac with a diary and a person with intact memory. The scaffolding doesn't buy us the continuity we need. It buys us the appearance of it, which is worse." — Daan Vermeer, on why persistent memory tools don't solve the temporal discontinuity problem

The panel converged on: LLMs are not AGI but a genuinely novel temporal entity — with impressive competence within their characteristic timescale and unknown behavior at their structural boundaries.

Statusline

/coral:statusline install
opus 4.6 │ 5h:39% (1:23) wk:36% (5.2d) │ ctx:58% │ $1.57 50m │ coral:analyze
gpt-5.4  │ 5h: 0% (4:59) wk:22% (2.8d) │ spark 5h: 3% (0:47) wk: 1% (6.8d)

Skills

| Skill | Description | --delegate |
| --- | --- | --- |
| /coral:analyze | Deep analysis and investigation | ✓ |
| /coral:pathfind | Divergent direction discovery from problem symptoms | - |
| /coral:preplan | Problem definition before planning | ✓ |
| /coral:plan | Multi-round planning with structured review; --deep for methodology-driven synthesis | ✓ |
| /coral:ralph | Persistent execution with verification; --red for adversarial tests | ✓ |
| /coral:bugfix | Bug diagnosis, planning, and fix in one shot | ✓ |
| /coral:code-simplify | Simplify and refine code for clarity | ✓ |
| /coral:init-project | Project initialization orchestrator | - |
| /coral:discuss | Moderated multi-agent discussion | - |
| /coral:bid | Submit a bid/speech in an active --user discuss session | - |
| /coral:statusline | Install or remove the HUD statusline | - |

Knowledge Base

Coral learns from every session. Root causes, gotchas, and patterns stay searchable so the next session can check prior work before debugging from scratch.

  • Semantic search: /coral:equip needle upgrades KB search to hybrid BM25 + embedding retrieval (Gemini, OpenAI, or local ONNX models)

Configuration

| Variable | Default | Description |
| --- | --- | --- |
| CORAL_KB_PATH | ~/.coral/kb | Custom KB markdown root |
| CORAL_CODEX_MODEL | gpt-5.4 | Default Codex CLI model |
| CORAL_CODEX_EFFORT | xhigh | Codex reasoning effort (low, medium, high, xhigh) |
| CORAL_CODEX_FAST | (none) | Codex service tier toggle (1 = fast, 0 = flex); falls back to the top-level service_tier in ~/.codex/config.toml |
| CORAL_CLAUDE_EFFORT | xhigh | Claude reasoning effort (low, medium, high, xhigh, max). Sonnet/Haiku have no xhigh; the adapter collapses xhigh to the provider ceiling (max) |
| CORAL_CLAUDE_MODEL_CAP | opus | Maximum Claude model tier (opus, sonnet, haiku) |
| CORAL_EFFORT | (none) | Global effort override used when the provider-specific CORAL_{CLAUDE,CODEX}_EFFORT is unset |
| CORAL_DEV_ASSERTIONS | (none) | Contributor-only developer assertions. Set 1 during local development or npm test to make stale continuity-bridge calls and dispatcher corrupt-state cases throw; leave unset in production |
| CORAL_MAX_WORKERS | 10 | Max concurrent workers (1–10) |
| CORAL_DISCUSS_MAX_EPOCHS | 2 | Max epochs before a discussion auto-ends (1–10) |
| CORAL_DISCUSS_TTL_DAYS | 0 | Days before completed sessions are auto-pruned (0 = disabled) |
| CORAL_KB_GIT_SYNC | 0 | Enable KB git sync — auto push/pull with the remote (1 = enabled) |

Tip: Set CORAL_CLAUDE_MODEL_CAP=sonnet to cap all subagent calls at Sonnet tier for Pro plans or to conserve usage.
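For a single session, the same variables can be exported in the shell before launching the CLI rather than persisted in settings.json (this assumes, as with any env-based config, that the launcher inherits your shell environment); the names and values below come straight from the table above:

```shell
# Cap all subagent calls at Sonnet and reduce worker concurrency,
# for this shell session only.
export CORAL_CLAUDE_MODEL_CAP=sonnet
export CORAL_MAX_WORKERS=4
```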

⚠️ Enterprise users: KB git sync is off by default. KB notes may contain knowledge derived from proprietary codebases. Enabling auto-push could leak corporate IP to an external remote. Only enable if your KB remote is authorized for the content it will receive.

Set in .claude/settings.json (persists across sessions):

{
  "env": {
    "CORAL_KB_PATH": "/path/to/my-obsidian-vault",
    "CORAL_CLAUDE_MODEL_CAP": "sonnet"
  }
}
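A quick way to see which overrides are active in the current shell (every variable documented above shares the CORAL_ prefix):

```shell
# List exported Coral variables, or note that none are set.
env | grep '^CORAL_' || echo "no CORAL_ overrides set"
```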
