A virtual team in your terminal: multiple AI agents discuss and converge, so you don't have to iterate alone.
Working with a single AI agent means you are the reviewer, the critic, and the detail-polisher on every iteration. CrewForge puts a team of agents in a shared room: they read each other's responses, debate, and converge — you ask once and get a collectively refined answer.
CrewForge runs agents via opencode. Install it first:

```sh
npm i -g opencode-ai
opencode auth login   # free models are available, API key optional
```

Then install CrewForge:

```sh
npm i -g crewforge
```

Supported platforms: Linux x64/arm64, macOS x64/arm64.
This repository is split into two projects:
- `crewforge-rs/` - Rust core runtime (session kernel, scheduler, MCP hub/server, provider integration)
- `crewforge-ts/` - Node/TypeScript launcher (binary resolution, process/signal forwarding)
1. Register agent profiles (one-time, global)
```sh
crewforge init
```

Pick a model from the list, give the agent a name, and optionally add a persona/preference. Repeat for as many agents as you want.
2. Start a room in your project
```sh
cd your-project
crewforge chat
```

On first run you'll be asked to pick which agents join this room and set your display name. Then just type — the agents will respond to each other and to you.
3. Resume a previous session
```sh
crewforge chat --resume <session-id>
```

CrewForge creates a shared message room backed by an MCP hub. Every message — yours and each agent's — is stored in the hub and visible to all participants.
When you send a message, the hub notifies all agents. Each agent wakes up, reads the full conversation including other agents' latest replies, and posts its own response back. Because agents see each other's thinking, they naturally cross-check, challenge, and build on one another.
```
               ┌─────────────┐
    You ────►  │  Room Hub   │ ◄───── session saved locally
               └──────┬──────┘
             notifies │ on new messages
         ┌────────────┼────────────┐
         ▼            ▼            ▼
     Agent A      Agent B      Agent C
   (reads all,  (reads all,  (reads all,
     replies)     replies)     replies)
```
Each agent is an opencode instance with access to your project files and web search. They don't talk to each other directly — everything flows through the hub, which keeps the conversation history and coordinates timing.
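The hub-centric flow described above can be sketched as a tiny event model. Everything below is illustrative: `RoomHub`, `post`, and `subscribe` are hypothetical names for this sketch, not CrewForge's actual API.

```typescript
// Hypothetical sketch of a hub-mediated room: every message is appended to a
// shared history, and all subscribers are notified so they can re-read the
// full conversation before replying.
type Message = { from: string; text: string };

class RoomHub {
  private history: Message[] = [];
  private listeners: Array<() => void> = [];

  // Agents register a callback; the hub pings them on every new message.
  subscribe(onNotify: () => void): void {
    this.listeners.push(onNotify);
  }

  // All messages flow through here; there is no agent-to-agent channel.
  post(msg: Message): void {
    this.history.push(msg);
    for (const notify of this.listeners) notify();
  }

  // Agents read the complete history, including other agents' replies.
  read(): readonly Message[] {
    return this.history;
  }
}

// Usage: an "agent" is just a subscriber that reads everything on wake-up.
const hub = new RoomHub();
hub.subscribe(() => {
  const seen = hub.read().map((m) => `${m.from}: ${m.text}`);
  console.log(seen.join(" | "));
});
hub.post({ from: "you", text: "hello" });
```

Routing everything through the hub is what lets agents cross-check each other: no agent ever holds a private side conversation, so every reply is made with the full room state in view.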
- Virtual team UX — agents read and respond to each other, not just to you
- MCP-native — agents access your project via Model Context Protocol, not a hack
- Any model, any provider — select from all models opencode supports (OpenAI, Anthropic, Gemini, Kimi, and more)
- Persistent sessions — room history is saved as JSONL; resume any session with `--resume`
- Global profiles, per-project rooms — define agents once with `crewforge init`, use them across any project
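Because room history is stored as JSONL (one JSON object per line), persisting and resuming a session reduces to appending lines and replaying the file. A minimal sketch, assuming a hypothetical record shape (the real schema may differ):

```typescript
import { appendFileSync, readFileSync } from "node:fs";

// Hypothetical session record; CrewForge's actual field names may differ.
type SessionRecord = { ts: number; from: string; text: string };

// Persist one message as a single JSON line (append-only).
function appendMessage(path: string, rec: SessionRecord): void {
  appendFileSync(path, JSON.stringify(rec) + "\n");
}

// Resume: replay every non-empty line back into memory.
function loadSession(path: string): SessionRecord[] {
  return readFileSync(path, "utf8")
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line) as SessionRecord);
}
```

Append-only JSONL keeps recovery simple: a partially written final line can be discarded while everything before it remains valid.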
```sh
# Rust core
cargo test --manifest-path crewforge-rs/Cargo.toml

# Frontend CLI wrapper
npm test --prefix crewforge-ts

# Run from source
cargo build --manifest-path crewforge-rs/Cargo.toml
npm run build --prefix crewforge-ts
node crewforge-ts/dist/bin/crewforge.js --help
```

| | Single agent | CrewForge room |
|---|---|---|
| Who reviews the answer? | You | The other agents |
| Iteration cost | High — back and forth with you | Low — agents iterate internally |
| Blind spots | One model's perspective | Multiple models cross-checking |
| Project context | Per-session | Persistent, resumable |
MIT © Rexopia
