Production-oriented, local-first AI companion built as a Bun + Turbo monorepo.
Code Name: Titan
By Radoslav Sandov
Companion is designed to run in two realities:
- Local-first for privacy and zero cloud cost
- Hybrid/cloud-first for higher capability and team/enterprise deployment
The codebase is functional but still evolving. This repository now includes a concrete production roadmap, compliance readiness plan, security controls, and extensibility guides in Docs/.
- Architecture and roast: `Docs/ROAST.md`
- Delivery roadmap: `Docs/PRODUCTION_ROADMAP.md`
- Compliance readiness (SOC 2 / ISO 27001 / PCI DSS): `Docs/COMPLIANCE_READINESS.md`
- Security baseline: `Docs/SECURITY.md`
- Verification proof guide: `Docs/PROOF.md`
- Extensibility and pipelines: `Docs/EXTENSIBILITY_GUIDE.md`
- Architecture patterns and diagrams: `Docs/ARCHITECTURE_PATTERNS.md`
- Real usage examples: `Docs/EXAMPLES.md`
- Usage guide: `Docs/USAGE_GUIDE.md`
- Skills guide: `Docs/SKILLS_GUIDE.md`
- Development guide: `Docs/DEVELOPMENT_GUIDE.md`
- Engineering standards: `Docs/ENGINEERING_STANDARDS.md`
- Database architecture and migrations: `Docs/DATABASE_ARCHITECTURE.md`
- Python 3.x
- Bun >= 1.1.0
- Ollama >= 0.17.x for local models
- Optional cloud key(s) for non-local mode (Anthropic configured by default)
```sh
cp .env.example .env
# edit .env as needed
cp companion.yaml.example companion.yaml
# edit companion.yaml as needed
bun install
bun run pull # pulls default local model
ollama pull nomic-embed-text:latest
ollama pull qwen3:1.7b
```

Run services in two terminals:

```sh
bun run server
bun run tui
```

TUI working directory control:
- Type `/wd /absolute/path` in the TUI input to set where tools generate/edit code.
- Type `/wd` to print the currently active working directory.
Configured in `companion.yaml`:
- `local`: fully local model routing
- `balanced`: mixed local/cloud
- `cloud`: cloud-first for maximum capability

Mode behavior is policy-based, not provider-biased:
- `local` uses local aliases only.
- `balanced` uses hybrid alias remapping (local + cloud where configured).
- `cloud` prefers cloud aliases and falls back only when cloud config is missing.
- If cloud credentials fail at runtime (401/403), Companion falls back to the `local` alias for continuity.
Provider-switch guarantee:
- Cloud aliases are config-driven and can point to Anthropic, OpenAI, Gemini, or any supported provider.
- Orchestration logic does not hardcode a specific cloud vendor.
- Runtime auth fallback is applied on provider auth errors so a bad cloud key does not hard-stop a session.
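As an illustration of the config-driven alias idea, a cloud alias block could look roughly like the sketch below. The key names and nesting here are assumptions, not the actual schema — see `companion.yaml.example` and `Docs/CONFIGURATION_GUIDE.md` for the authoritative shape:

```yaml
# Hypothetical alias mapping; field names are illustrative only.
models:
  aliases:
    planner:
      local: "qwen3:1.7b"        # served by Ollama (pulled in Quick Start)
      cloud: "anthropic/<model>" # point at any supported provider instead
```

Because orchestration resolves aliases rather than vendors, swapping Anthropic for OpenAI, Gemini, or Grok is a config edit, not a code change.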
Set the default mode under:

```yaml
mode:
  default: balanced
```

Create a session:

```sh
curl -s -X POST http://localhost:3000/sessions \
  -H "Authorization: Bearer dev-secret" \
  -H "Content-Type: application/json" \
  -d '{"title":"ops","goal":"check system load"}'
```

Send a message:

```sh
curl -s -X POST http://localhost:3000/sessions/<SESSION_ID>/messages \
  -H "Authorization: Bearer dev-secret" \
  -H "Content-Type: application/json" \
  -d '{"content":"What is the current system load?","stream":true}'
```

Run a task in a specific folder (explicit `working_dir`):

```sh
curl -s -X POST http://localhost:3000/sessions/<SESSION_ID>/messages \
  -H "Authorization: Bearer dev-secret" \
  -H "Content-Type: application/json" \
  -d '{
    "content":"Create a Bun + Hono todo app with SQLite and tests in this directory.",
    "working_dir":"/absolute/path/where/code-should-be-generated",
    "stream":true
  }'
```

More examples: `Docs/EXAMPLES.md`.
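Follow-up calls need the `<SESSION_ID>` from the create-session response. A small sketch for capturing it in a script — the `id` field name is an assumption about the response shape, so verify against the actual API output:

```sh
# Sample create-session response (shape assumed; check the real API output).
response='{"id":"sess_123","title":"ops","goal":"check system load"}'

# Extract the id with sed (or use: jq -r '.id' if jq is installed).
session_id=$(printf '%s' "$response" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')
echo "$session_id"   # sess_123
```

The extracted value can then be substituted for `<SESSION_ID>` in the message calls above.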
Read the latest audit events:

```sh
curl -s "http://localhost:3000/audit/events?limit=100" \
  -H "Authorization: Bearer dev-secret"
```

Companion can now propose acquiring a new skill when it detects a repeated missing capability.

Flow:
- The orchestrator evaluates whether existing registered tools/skills can solve the task.
- If not, it asks for confirmation to acquire a new skill.
- On `yes`, Companion scaffolds `skills/<new-skill>/skill.yaml`. The new skill is loaded, registered, and made available to worker agents in the same running session.
- On `no`, the proposal is cancelled and normal execution continues.

Skill tooling included:
- `skill_of_skills`: recommends matching skills for a task.
- `create_skill_template`: scaffolds a new skill from parameters.
- `research_web_resource`: researches a website resource with focused extraction.
- `research_file_source`: researches local/uploaded/link-based file sources.
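For orientation, a scaffolded `skills/<new-skill>/skill.yaml` might look roughly like the sketch below. Every field name here is illustrative rather than the actual schema — inspect a generated skill or `Docs/SKILLS_GUIDE.md` for the real format:

```yaml
# Illustrative only; the scaffold produced by create_skill_template may differ.
name: summarize-csv
description: Summarize a CSV file and report per-column statistics
parameters:
  - name: path
    type: string
    required: true
```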
```text
apps/
  server/    HTTP + WS orchestration API
  tui/       Terminal UX client
packages/
  agents/    orchestration and agent loop
  config/    typed YAML config loader
  core/      shared domain types + event bus
  db/        persistence and migrations
  llm/       provider adapters
  memory/    vector memory and recall
  skills/    skill loading and registration
  tools/     tool registry and sandbox execution
```
- The TUI has been split into modular components for maintainability.
- TUI transport now uses typed SDK/repository layers under `apps/tui/src/sdk` for API and session operations.
- TUI runtime orchestration is separated into controller/facade/factory layers (`hooks`, `services`, `factories`) to prevent `App.tsx` god-object growth.
- The TUI now shows structured active execution state (`agent`, `stage`, `tool`) with explicit status labels instead of dot/ellipsis thinking loaders.
- The TUI Capabilities pane now includes `Audit` events and a braille-shift loading animation for visible operational state.
- The server now applies security response headers by default.
- Startup checks are centralized for sandbox/model readiness.
- The server runtime is now split by concern under `apps/server/src/{bootstrap,constants,middleware,routes,services,ws}`.
- Server session processing now uses dedicated strategy/repository modules for summary model selection and session chat persistence.
- Health and telemetry endpoints are available at `/health` and `/metrics`.
- Persistent audit events are written through `@companion/db` (DB-only runtime mode).
- Mutating session APIs are idempotent when clients provide `x-idempotency-key`.
- An MCP server catalog is supported via `mcp` config and inspectable using the `mcp_servers` tool.
- Vector persistence is managed by `@companion/db` (`createVectorStore`), so memory storage uses the same driver boundary as sessions/messages.
- Shared enum-like constants now exist in `packages/core` for key literals.
- Agent orchestration roles, intent routes, and workflow tracks are externally defined in YAML (`companion.yaml` under `orchestrator.*`) and validated by `@companion/config`.
- Multi-lane workflow orchestration is available for product-delivery and operations tracks.
Sandbox execution mode clarity:
- `docker/docker-compose.yml` sets `COMPANION_SANDBOX_RUNTIME=direct` by default.
- Reason: the server container does not mount a Docker/Podman socket nor ship a container-engine client for nested runtime orchestration.
- This means `run_shell` and `run_tests` execute directly in the server container context when using the default Compose setup.
- To force containerized sandbox execution, run the server on the host (or provide a runtime socket/client in your container setup), build `companion-sandbox:latest`, and set `COMPANION_SANDBOX_RUNTIME=docker` (or `podman`) plus `sandbox.allow_direct_fallback: false`.
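Concretely, a hardened setup might pin the fallback setting as below (this sketch assumes `allow_direct_fallback` sits under a top-level `sandbox` key in `companion.yaml`; verify the nesting against `companion.yaml.example`):

```yaml
# companion.yaml — refuse to silently fall back to host execution
sandbox:
  allow_direct_fallback: false
```

combined with `COMPANION_SANDBOX_RUNTIME=docker` (or `podman`) exported in the server's environment.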
```sh
bun run lint
bun run typecheck
bun run test
```

Current status:
- lint: passing
- typecheck: passing across all workspaces
- test: passing across all workspaces

End-to-end verification matrix:
- Baseline gates: `bun run lint && bun run typecheck && bun run test`
- Mode remapping and loop tests: `bun --cwd packages/agents run test`
- Runtime health (server): `curl -s -H 'Authorization: Bearer dev-secret' http://localhost:3000/health`
Lint policy note: `biome.json` disables `complexity.useLiteralKeys` and `style.noNonNullAssertion` to keep lint actionable for this codebase while preserving strict compile and test gates.
- GitHub issue templates: `.github/ISSUE_TEMPLATE/bug_report.yml`, `.github/ISSUE_TEMPLATE/feature_request.yml`, `.github/ISSUE_TEMPLATE/security_request.yml`
- Dependabot updates: `.github/dependabot.yml`
- CI and dependency review workflows: `.github/workflows/ci.yml`, `.github/workflows/dependency-review.yml`
- Readiness proof workflow: `.github/workflows/proof.yml`
- Git pre-commit gate: `.githooks/pre-commit` (format + lint + typecheck + tests)
- Hook bootstrap script: `scripts/setup-hooks.sh` (wired via `bun run prepare`)
- VS Code standards: `.vscode/settings.json` and `.vscode/extensions.json`
Runtime and compliance posture snapshot:

```sh
bun run proof:runtime
```

Strict runtime gate (fails on warnings):

```sh
bun run proof:runtime -- --strict
```

Provider readiness report:

```sh
bun run proof:providers
```

Database portability and audit retention proof:

```sh
bun run proof:db
```

Strict provider gate (fails if any configured provider is missing keys or unreachable):

```sh
bun run proof:providers -- --strict
```

Strict database proof gate:

```sh
bun run proof:db -- --strict
```

Provider key matrix:
- `ANTHROPIC_API_KEY` for `anthropic`
- `OPENAI_API_KEY` for `openai` aliases
- `GROK_API_KEY` for `grok` aliases (xAI)
- `GEMINI_API_KEY` for `gemini`
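In `.env` form the matrix looks like the sketch below — all values are placeholders, and you only need to set keys for providers you actually route to:

```dotenv
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
GROK_API_KEY=your-xai-key
GEMINI_API_KEY=your-gemini-key
```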
Copilot note:
- GitHub Copilot does not provide a standard static API key flow like the vendors above.
- Use `copilot` only when you have a compatible token/proxy flow; otherwise prefer `ollama`, `anthropic`, `openai`, `gemini`, or `grok`.

Credential acquisition and verification guide: `Docs/PROVIDER_KEYS_GUIDE.md`
Companion supports nearest working-directory overrides so one repo/folder can change behavior without editing root config.
Override search order (nearest parent wins):
1. `companion.override.yaml`
2. `companion.override.yml`
3. `.companion/companion.yaml`

Example override file:

```yaml
mode:
  default: cloud
orchestrator:
  max_rounds: 2
tools:
  web_fetch:
    timeout_seconds: 10
```

Build a standalone TUI executable:
```sh
bun run build:tui:exe
```

Install globally (default target: `~/.local/bin/companion`):

```sh
bun run install:cli
```

Uninstall:

```sh
bun run uninstall:cli
```

After install, run from any folder:

```sh
companion
```

Webhook routes are built into the server (no separate SDK package):
- Slack: `POST /integrations/slack/events`
- Telegram: `POST /integrations/telegram/webhook`
Required config/env:
- Slack: `SLACK_ENABLED=true`, `SLACK_BOT_TOKEN`, `SLACK_SIGNING_SECRET`, and trusted allowlists (`trusted_user_ids`, `trusted_channel_ids`, `trusted_team_ids`)
- Telegram: `TELEGRAM_ENABLED=true`, `TELEGRAM_BOT_TOKEN`, `TELEGRAM_SECRET_TOKEN`, and trusted allowlists (`trusted_user_ids`, `trusted_chat_ids`)
- Optional second-factor-style gate for both: `required_passphrase`
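A minimal `.env` sketch for both integrations — tokens are placeholders, and the snake_case allowlist keys above appear to be config-file settings rather than env vars, so check `Docs/INTEGRATIONS_GUIDE.md` for where they belong:

```dotenv
SLACK_ENABLED=true
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_SIGNING_SECRET=your-signing-secret
TELEGRAM_ENABLED=true
TELEGRAM_BOT_TOKEN=your-bot-token
TELEGRAM_SECRET_TOKEN=your-webhook-secret
```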
Slack URL verification requests are handled automatically (challenge response). Incoming messages create/reuse per-channel sessions and can post assistant replies back using configured bot tokens.
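For background on the signing secret: Slack's documented v0 scheme signs `v0:{timestamp}:{raw body}` with HMAC-SHA256, and servers verify incoming requests against that signature. A standalone sketch with dummy values (this is Slack's published scheme, not Companion-specific code):

```sh
# Compute a Slack-style v0 request signature with dummy inputs.
secret='my-signing-secret'            # SLACK_SIGNING_SECRET (placeholder)
timestamp='1531420618'                # X-Slack-Request-Timestamp header
body='token=abc&type=event_callback'  # raw request body

base="v0:${timestamp}:${body}"
sig="v0=$(printf '%s' "$base" | openssl dgst -sha256 -hmac "$secret" -hex | sed 's/^.* //')"
echo "$sig"   # compare against the X-Slack-Signature header
```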
Detailed guides: `Docs/INTEGRATIONS_GUIDE.md`, `Docs/CONFIGURATION_GUIDE.md`, `Docs/PROVIDER_KEYS_GUIDE.md`, `Docs/USAGE_GUIDE.md`

Integration telemetry endpoints:
- `GET /integrations/telemetry/config`
- `GET /integrations/telemetry/stats`
Integration smoke check:

```sh
bun run webhook:smoke
```

- Build status: https://github.com/thecharge/companion/actions/workflows/ci.yml
- Proof status: https://github.com/thecharge/companion/actions/workflows/proof.yml
- Repo size: https://img.shields.io/github/repo-size/thecharge/companion
- Top language: https://img.shields.io/github/languages/top/thecharge/companion
- Open issues: https://img.shields.io/github/issues/thecharge/companion
- Latest release: https://img.shields.io/github/v/release/thecharge/companion
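The shield URLs above render as badges when embedded as markdown images, for example:

```markdown
![Repo size](https://img.shields.io/github/repo-size/thecharge/companion)
![Top language](https://img.shields.io/github/languages/top/thecharge/companion)
![Open issues](https://img.shields.io/github/issues/thecharge/companion)
![Latest release](https://img.shields.io/github/v/release/thecharge/companion)
```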
Use files under docker/ and configure secrets via env vars.
Sandbox runtime behavior:
- If a container runtime is available but the sandbox image is missing, Companion now falls back to direct host execution when `sandbox.allow_direct_fallback: true`.
- Build the image for full isolation: `podman build -t companion-sandbox:latest docker/sandbox` (or the Docker equivalent).

Recommendation:
- Keep container-first for enterprise compliance, patching, and SBOM workflows.
- Optionally add separate binaries later (`companion-server`, `companion-tui`) for developer UX.
Detailed tradeoffs: Docs/ROAST.md.
MIT (see LICENSE).