
dev-council

dev-council is a terminal coding agent for an SDLC-style workflow:

SRS → Milestones → Tech Stack Selection → Code → QA → Deployment

It is built around Ollama-compatible models, automatic skill loading, MCP tools, memory, checkpoints, and context compaction.

What It Does

  • Routes simple requests directly to the coding agent.
  • Routes larger product requirements through the full SDLC pipeline.
  • Prompts for Single LLM or Consensus mode before big pipeline runs.
  • Uses live local Ollama model discovery through ollama list.
  • Pauses at the Tech Stack stage and waits for the user to pick one of 2-3 options.
  • Shows context usage after responses.
  • Supports manual /compact and automatic compaction at 80% context usage.
  • Loads skills from built-ins plus project/user skill folders.
  • Connects MCP servers over stdio, HTTP, or SSE and exposes tools to the agent.

Intent Routing

Normal text input is classified before execution:

  • Simple requests, bug fixes, questions, and single-file changes bypass the pipeline.
  • Big requirements trigger the full pipeline. The heuristic looks for product-building language such as build, create, develop, make a full, implement a system, and longer product descriptions.

Example simple request:

Fix the typo in README.md

Example pipeline request:

Build a full stack SaaS dashboard with auth, admin views, APIs, and deployment setup.
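The routing heuristic described above could be sketched roughly like this. This is an illustrative approximation, not the actual classifier in dev_council.py; the marker list and length threshold are assumptions:

```python
# Hypothetical sketch of intent routing: product-building language or a long
# product description triggers the pipeline; everything else stays simple.
PIPELINE_MARKERS = ("build", "create", "develop", "make a full", "implement a system")

def route(prompt: str, long_threshold: int = 40) -> str:
    """Return "pipeline" for big product requirements, else "simple"."""
    text = prompt.lower()
    if any(marker in text for marker in PIPELINE_MARKERS):
        return "pipeline"
    if len(text.split()) >= long_threshold:  # longer product descriptions
        return "pipeline"
    return "simple"
```

With this sketch, the two examples above route to "simple" and "pipeline" respectively.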

Model Modes

Use /model to switch modes at any time.

Single LLM mode:

/model
1
<model number>

Consensus mode:

/model
2
<number of models>
<comma-separated model numbers>

Config keys:

  • llm_mode: single or consensus
  • active_model: the selected single model
  • consensus_models: selected consensus voters
  • model: the active execution model, kept compatible with existing provider code
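For illustration, a saved configuration using these keys might look like the following. The exact file location and surrounding structure are assumptions; only the four key names come from the list above:

```json
{
  "llm_mode": "consensus",
  "active_model": "llama3.1:8b",
  "consensus_models": ["llama3.1:8b", "qwen2.5-coder:7b", "mistral:7b"],
  "model": "llama3.1:8b"
}
```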

Commands

The public help surface is intentionally small:

  • /model — Switch LLM or consensus models.
  • /compact — Manually trigger context compaction.
  • /skills — List available agent skills.
  • /mcp — List connected MCP servers and discovered tools.
  • /memory — Search or list agent memory.
  • /context — Show current context usage.
  • /pipeline — Run the full SRS → Milestones → Tech Stack → Code → QA → Deployment pipeline.

Additional internal/developer commands still exist for compatibility, but /help only documents implemented user-facing features.

Pipeline

For big requirements, the pipeline runs:

  1. SRS generation into SDLC/srs.md
  2. Milestone task generation into SDLC/tasks.json
  3. Tech stack option generation
  4. User selection of exactly one stack
  5. Code implementation
  6. QA report into SDLC/qa_report.md
  7. Deployment plan into SDLC/deployment_plan.md

Stage artifacts are sanitized before they are written. The generator is instructed to return only the final artifact, and common model prefaces or wrappers are stripped so saved files do not include extra prose such as planning chatter or fenced wrappers. The milestone stage requests raw JSON for tasks.json, and the prompt rendering preserves literal JSON examples without interpreting them as template fields. The built-in task system now uses the same SDLC/tasks.json file, so milestone tasks are stored once instead of being mirrored to a second file.
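The sanitization step can be pictured with a sketch like this. The regexes and preface phrases are illustrative assumptions, not dev-council's actual implementation:

```python
import re

def sanitize_artifact(raw: str) -> str:
    """Strip common model wrappers (prefaces, outer code fences) from a
    generated stage artifact before it is written to disk. Illustrative only."""
    text = raw.strip()
    # Drop a leading preface line such as "Here is the SRS:" before the body.
    text = re.sub(r"^(here is|sure[,!]?|certainly[,!]?)[^\n]*\n+", "",
                  text, flags=re.IGNORECASE)
    # Unwrap a single outer fenced block (e.g. ```markdown ... ```).
    fence = re.match(r"^```[\w-]*\n(.*)\n```$", text, flags=re.DOTALL)
    if fence:
        text = fence.group(1)
    return text.strip()
```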

The Tech Stack stage displays 2-3 options with:

  • name
  • frontend
  • backend
  • database
  • deployment target
  • one-line rationale

The agent does not continue to Code until a stack option is selected.

Skills

Skills are discovered from:

  • built-in SDLC/coding skills
  • .agents/skills/*/SKILL.md
  • .codex/skills/*/SKILL.md
  • .dev-council/skills/*.md
  • user-level equivalents under the home directory

Skill metadata is injected into the system prompt. Full skill content is loaded only when a skill is selected or invoked.
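The discovery order above can be sketched as a glob walk over the project root and the home directory. The patterns mirror the list above; everything else (function name, return shape) is a hypothetical sketch:

```python
from pathlib import Path

def discover_skill_files(project_root: str = ".") -> list[Path]:
    """Collect SKILL.md / skill markdown files from project and user-level
    locations, in the order described above. Illustrative sketch only."""
    roots = [Path(project_root), Path.home()]  # project first, then user-level
    patterns = [
        ".agents/skills/*/SKILL.md",
        ".codex/skills/*/SKILL.md",
        ".dev-council/skills/*.md",
    ]
    found: list[Path] = []
    for root in roots:
        for pattern in patterns:
            found.extend(sorted(root.glob(pattern)))
    return found
```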

List skills:

/skills

MCP

MCP servers are configured through either a project .mcp.json file or the user-level config at:

~/.dev-council/mcp.json

Config loading order:

  1. user config: ~/.dev-council/mcp.json
  2. nearest project config: .mcp.json

If both define the same server name, the project .mcp.json wins.

Connected tools are registered as:

mcp__<server>__<tool>
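The naming scheme is simple enough to express directly; the helper names here are hypothetical, but the `mcp__<server>__<tool>` format comes from the line above:

```python
def mcp_tool_name(server: str, tool: str) -> str:
    """Build the registered name for a tool exposed by an MCP server."""
    return f"mcp__{server}__{tool}"

def parse_mcp_tool_name(name: str) -> tuple[str, str]:
    """Split a registered name back into (server, tool)."""
    prefix, server, tool = name.split("__", 2)
    if prefix != "mcp":
        raise ValueError(f"not an MCP tool name: {name!r}")
    return server, tool
```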

Useful commands:

/mcp
/mcp reload
/mcp add demo <command> [args...]

Project .mcp.json Example

Create .mcp.json in the repo root:

{
  "mcpServers": {
    "git": {
      "type": "stdio",
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "."]
    }
  }
}

Then reload MCP inside the CLI:

/mcp reload
/mcp

Expected output:

  • the git server is listed
  • connected tools appear under the server
  • tool names look like mcp__git__<tool>

User Config Example

To make a server available across projects, put it in ~/.dev-council/mcp.json:

{
  "mcpServers": {
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:/Users/<your-user>/Desktop"
      ]
    }
  }
}

You can also add a stdio server from the CLI:

/mcp add demo uvx mcp-server-git --repository .
/mcp reload

This writes to the user-level MCP config.

Remote MCP Example

HTTP/SSE-style servers can be configured with a URL and optional headers:

{
  "mcpServers": {
    "remote": {
      "type": "sse",
      "url": "http://localhost:8080/sse",
      "headers": {
        "Authorization": "Bearer <token>"
      }
    }
  }
}

Context Compaction

Manual compaction:

/compact

Automatic compaction triggers before the next LLM call when context usage exceeds 80%. The CLI prints a notice like:

⚠️ Context compaction triggered automatically (usage: 84%)

Responses include a footer:

[Context: 67% used | ~14,200 / 21,000 tokens]
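The footer and the 80% trigger can be sketched together. The formatting and the truncation to whole percents are inferred from the examples above; treat this as an approximation of dev-council's behavior, not its exact code:

```python
def compaction_footer(used: int, limit: int,
                      threshold: float = 0.80) -> tuple[str, bool]:
    """Return the context-usage footer and whether auto-compaction should
    fire before the next LLM call. Illustrative sketch only."""
    pct = used / limit
    footer = f"[Context: {int(pct * 100)}% used | ~{used:,} / {limit:,} tokens]"
    return footer, pct > threshold
```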

Install

pip install -r requirements.txt

Run locally:

python dev_council.py

The CLI opens with a large dev-council banner. Press Ctrl+C at any prompt or during a running turn to exit cleanly.

Or install the CLI entrypoint:

pip install .
dev-council

Requirements

  • Python >=3.11, <3.13 per pyproject.toml
  • Ollama installed for local model discovery
  • At least one local Ollama model available through ollama list

Project Layout

dev_council.py      main CLI and pipeline orchestration
agent.py            core agent loop
providers.py        Ollama-compatible model transport
compaction.py       manual and automatic context compaction
context.py          system prompt builder
tools.py            built-in file, shell, web, memory, task, and plan tools
mcp/                MCP client and tool registration
skill/              skill loading and built-in SDLC skills
memory/             persistent memory system
task/               task storage and helpers
checkpoint/         file checkpoints and rewind support
tests/              runtime and integration-style tests