A local MCP server that uses Bayesian estimation to optimize your IDE's context window.
Reduces token usage and cost while preserving response quality — without touching your API keys or sitting in the request path. copt plugs into Cursor, Claude Code, Windsurf, and any other IDE that supports MCP.
```bash
# Install
curl -fsSL https://raw.githubusercontent.com/jmouchawar/copt/main/scripts/install.sh | bash

# Initialize — auto-detects and configures Cursor, Claude Desktop, Windsurf
copt init

# Your IDE will launch copt mcp automatically. To test it manually:
copt mcp
```

`copt init` writes the MCP server entry to your IDE's config file and requires no API key. Your IDE's existing AI credentials are used as-is.
Your IDE's AI calls the `optimize_context` MCP tool before each LLM request. copt breaks the context into semantic chunks (system prompt, current file, error context, conversation history, etc.) and uses a Beta-Bernoulli Bayesian model with Thompson Sampling to select the most relevant subset within the token budget.
- No outbound requests: copt never calls an LLM. The IDE handles all API calls.
- Learns from feedback: `record_feedback` calls update the posteriors; future selections improve.
- Starts conservative: default priors favour including context until evidence says otherwise.
- Conjugate updates: O(1) per observation — no MCMC, no GPU.
| Tool | Description |
|---|---|
| `optimize_context` | Trim messages to the most relevant context chunks |
| `record_feedback` | Record whether a response was helpful, updating Bayesian posteriors |
| `get_status` | Return current posterior means for all (task type, category) pairs |
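On the wire, an IDE invokes these tools with a standard MCP `tools/call` request. A sketch for `record_feedback` might look like the following; the `request_id` and `helpful` argument names are illustrative, as copt's actual tool schema may differ:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "record_feedback",
    "arguments": { "request_id": "abc123", "helpful": true }
  }
}
```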
| Command | Description |
|---|---|
| `copt init` | Initialize config and register copt with detected IDEs |
| `copt mcp` | Start the MCP server (stdio) — normally launched by your IDE |
| `copt status` | Show metrics summary |
| `copt status --posteriors` | Show posterior convergence details |
| `copt config` | Show current configuration |
| `copt config --check` | Exit 0 if initialized, 1 if not |
| `copt reset --confirm` | Clear all learned posteriors and request logs |
| `copt dashboard` | Start the web analytics UI (default http://127.0.0.1:3847) |
| `copt serve` | Start the legacy HTTP proxy (requires API key) |
`copt init` handles this automatically. If you need to set it up manually:

Add to `~/.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "copt": { "command": "copt", "args": ["mcp"] }
  }
}
```

Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS):

```json
{
  "mcpServers": {
    "copt": { "command": "copt", "args": ["mcp"] }
  }
}
```

Or register it with the Claude Code CLI:

```bash
claude mcp add copt -- copt mcp
```

Add to `~/.codeium/windsurf/mcp_settings.json`:

```json
{
  "mcpServers": {
    "copt": { "command": "copt", "args": ["mcp"] }
  }
}
```

`~/.copt/config.yaml` — all fields are optional:
```yaml
max_context_tokens: 8000    # token budget for optimization (0 = no limit)
relevance_threshold: 0.3    # Thompson Sampling minimum score
data_dir: "~/.copt"         # SQLite database location
log_level: "info"
```

The following are only needed for proxy mode (`copt serve`):

```yaml
listen_addr: "127.0.0.1:8090"
upstream_url: "https://api.openai.com"
api_key: "sk-..."           # optional; can also use COPT_API_KEY env var
passthrough: true           # set to false to enable optimization in proxy mode
```

View request metrics, token savings, quality scores, and Bayesian posterior convergence in your browser:
```bash
copt dashboard
# Open http://127.0.0.1:3847
```

Use `--listen` to change the bind address:

```bash
copt dashboard --listen 127.0.0.1:9000
```

The dashboard reads the same SQLite database used by `copt mcp` and `copt serve`, so it works regardless of which mode you use.
If you prefer the transparent HTTP proxy approach — useful for tools that don't support MCP — `copt serve` still works:

```bash
copt serve --api-key sk-...
# Point your tool at http://localhost:8090/v1
```

- Script: `curl -fsSL https://raw.githubusercontent.com/jmouchawar/copt/main/scripts/install.sh | bash`
- Go: `go install github.com/jmouchawar/copt/cmd/copt@latest`
- From source: `git clone https://github.com/jmouchawar/copt && make build`
```bash
make build    # Build binary to ./build/copt
make test     # Run tests
make install  # Install to $GOPATH/bin
```