Cognis is a Java 21 autonomous agent runtime focused on practical execution: tool use, scheduling, memory, payment guardrails, and auditable operations for mobile-first workflows.
Cognis grew out of ideas from NanoClaw/OpenClaw and evolved into a Java-first MCP + mobile architecture.
Cognis is positioned as a trusted autonomous operator:
- Guardrailed actions (payments policy + confirmation thresholds)
- Accountable execution (audit trail + dashboard metrics)
- Mobile-native interaction (WebSocket protocol with typing/streaming/ack)
- Local-first deployment (Docker, file-backed state, no managed cloud dependency required)
clawmobile (mobile)
|
| WebSocket + HTTP
v
+-----------------------------+
| cognis-app (composition) |
| - GatewayServer (Undertow) |
| - CLI commands |
+--------------+--------------+
|
v
+----------------------------------------------------+
| cognis-core |
| |
| Agent Layer |
| ├── AgentOrchestrator (LLM loop, trace, heartbeat)|
| ├── AgentTool (spawn/await/steer/kill) |
| ├── AgentPool (semaphore concurrency cap)|
| ├── SubagentRegistry (persistent run tracking) |
| ├── TaskQueue (DAG dependency executor) |
| ├── CoordinatorTool (goal → task graph → LLM) |
| ├── TraceContext (traceId/spanId chain) |
| └── ZombieReaper (stale run detection) |
| |
| Infrastructure |
| ├── TopicMessageBus (pub/sub fan-out) |
| ├── SharedMemoryStore (namespaced agent memory) |
| ├── ProviderRouter (LLM provider fallback) |
| ├── ToolRegistry (tool dispatch) |
| └── Workflow/Cron/Payments/Observability |
+----------------------------------------------------+
|
v
File-backed stores
(.cognis/*, memory/*, uploads/*)
Cognis uses a single-orchestrator + async subagent pool model. The parent LLM loop drives orchestration by calling agent tools; the framework handles concurrency, tracing, and fault recovery.
AgentOrchestrator (root)
│ TraceContext.root() → traceId propagated to all descendants
│
├─ AgentTool.spawn("research X") ──► AgentPool.submit() ──► child run (virtual thread)
│ child.traceId == root.traceId
├─ AgentTool.spawn("research Y") ──► AgentPool.submit() ──► child run (virtual thread)
│
├─ AgentTool.await_all([runA, runB]) ← blocks until both complete
│
└─ CoordinatorTool.decompose(goal)
│ planner LLM → JSON task graph [{id, prompt, role, dependsOn[]}]
└─ TaskQueue.submit(tasks)
├─ tasks with no deps → spawned immediately (parallel)
└─ tasks with dependsOn → CompletableFuture.allOf(...).thenCompose(spawn)
zero polling, event-driven chaining
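The dependency chaining in the diagram above can be sketched in plain Java. The names here (MiniTaskGraph, Task, run) are illustrative, not the actual Cognis classes; only the CompletableFuture pattern is the point: tasks with no dependencies start immediately, dependents chain on allOf(...).thenCompose with no polling loop anywhere.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Illustrative sketch of event-driven DAG execution with CompletableFuture.
public class MiniTaskGraph {
    public record Task(String id, List<String> dependsOn) {}

    public static Map<String, CompletableFuture<String>> run(
            List<Task> tasks, Function<Task, CompletableFuture<String>> spawn) {
        Map<String, CompletableFuture<String>> futures = new HashMap<>();
        // Assumes tasks arrive in topological order (Cognis derives one via Kahn's sort).
        for (Task t : tasks) {
            CompletableFuture<?>[] deps = t.dependsOn().stream()
                    .map(futures::get)
                    .toArray(CompletableFuture[]::new);
            futures.put(t.id(), deps.length == 0
                    ? spawn.apply(t)                                        // no deps: start now, in parallel
                    : CompletableFuture.allOf(deps).thenCompose(v -> spawn.apply(t))); // chain, no polling
        }
        return futures;
    }
}
```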
Key properties:
- MAX_DEPTH = 2: child agents can spawn grandchildren; deeper nesting is blocked
- AgentPool: semaphore-bounded; rejects spawns when at capacity (default 10 concurrent)
- SubagentRegistry: all runs persisted to .cognis/subagents/runs.json for audit and recovery
- ZombieReaper: scheduled every 60s; marks RUNNING runs with stale heartbeats as FAILED
- TraceContext: traceId shared across the entire spawn tree; spanId unique per run; emitted in all observability events
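The reject-at-capacity behavior of the semaphore-bounded pool can be sketched with a plain java.util.concurrent.Semaphore. BoundedPool and trySubmit are illustrative names, not the Cognis API:

```java
import java.util.concurrent.Semaphore;

// Illustrative sketch: a semaphore-bounded pool that rejects spawns at capacity
// instead of queueing them. Each accepted run gets its own virtual thread (Java 21+).
public class BoundedPool {
    private final Semaphore permits;

    public BoundedPool(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    /** Try to start a run; returns false (reject) when the pool is at capacity. */
    public boolean trySubmit(Runnable run) {
        if (!permits.tryAcquire()) {
            return false; // at capacity: reject rather than block or queue
        }
        Thread.startVirtualThread(() -> {
            try {
                run.run();
            } finally {
                permits.release(); // free the slot when the run ends
            }
        });
        return true;
    }
}
```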
TopicMessageBus
├── publish("cron.workflow", msg) → fan-out to all "cron.workflow" subscribers (virtual threads)
├── publish("channel.whatsapp", msg)
└── subscribe("channel.whatsapp", handler) → returns subscriptionId for later unsubscribe
Verticals subscribe to topics they care about. Adding a new notification channel (e.g. email, push)
requires only a new subscribe() call — no changes to existing vertical code.
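A minimal version of this topic-based fan-out can be sketched as follows. MiniBus is an illustrative stand-in, not the real TopicMessageBus API; it only demonstrates the subscribe/publish/unsubscribe shape with one virtual thread per delivery:

```java
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Illustrative sketch: topic-based pub/sub with virtual-thread fan-out (Java 21+).
public class MiniBus {
    private record Sub(String id, Consumer<String> handler) {}
    private final Map<String, List<Sub>> topics = new ConcurrentHashMap<>();

    public String subscribe(String topic, Consumer<String> handler) {
        String id = UUID.randomUUID().toString();
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(new Sub(id, handler));
        return id; // keep this id for a later unsubscribe
    }

    public void unsubscribe(String topic, String subscriptionId) {
        List<Sub> subs = topics.get(topic);
        if (subs != null) subs.removeIf(s -> s.id().equals(subscriptionId));
    }

    public void publish(String topic, String message) {
        for (Sub s : topics.getOrDefault(topic, List.of())) {
            Thread.startVirtualThread(() -> s.handler().accept(message)); // fan-out per subscriber
        }
    }
}
```

Adding a new channel is then just another subscribe() call against the topic, with no change to the publishing side.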
Agents within the same spawn tree share facts via SharedMemoryStore without passing raw context strings:
field-intake agent → sharedMemory.write(parentRunId, "beneficiary_count", "47 at Juba site 3")
supply-matching agent ← getSummary(parentRunId) injected into system prompt automatically
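The namespaced write/summary flow above can be sketched as a small key-value store keyed by parent run id. MiniSharedMemory is an illustrative shape, not the actual SharedMemoryStore implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// Illustrative sketch: facts written by one agent become a rendered summary
// that a sibling agent in the same spawn tree can receive in its system prompt.
public class MiniSharedMemory {
    private final Map<String, Map<String, String>> byRun = new ConcurrentHashMap<>();

    public void write(String parentRunId, String key, String value) {
        byRun.computeIfAbsent(parentRunId, id -> new ConcurrentHashMap<>()).put(key, value);
    }

    /** Rendered summary for injection into a sibling agent's system prompt. */
    public String getSummary(String parentRunId) {
        return byRun.getOrDefault(parentRunId, Map.of()).entrySet().stream()
                .map(e -> e.getKey() + ": " + e.getValue())
                .sorted()
                .collect(Collectors.joining("\n"));
    }
}
```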
- cognis-core: domain logic, providers, tools, workflow, payments, observability, gateway primitives
- cognis-cli: command-line shell around the core runtime
- cognis-app: executable app and wiring (providers, tools, gateway, stores)
- cognis-dashboard: React/Vite operations dashboard for metrics + audit trail
- Migration plan: docs/MIGRATION_PLAN.md
- Operations runbook: docs/OPERATIONS.md
- MCP server bootstrap guide: docs/MCP_SERVER.md
- Contributing guide: CONTRIBUTING.md
- Mobile test client (React Native): https://github.com/rmukubvu/clawmobile
- Java 21 runtime (virtual threads for all subagent spawning)
- Multi-provider routing + fallback chains
openrouter, openai, anthropic, bedrock, bedrock_openai, openai_codex, github_copilot
- Multi-agent orchestration
  - AgentTool: spawn, await, await_all, steer, kill, status, create, chat, list
  - CoordinatorTool: decomposes a goal via a planner LLM into a parallel task graph, executed via TaskQueue
  - TaskQueue: DAG dependency resolution (Kahn's topological sort) + CompletableFuture chaining, zero polling
  - AgentPool: semaphore-based concurrency cap (configurable, default 10 concurrent subagent runs)
  - ZombieReaper: background daemon that terminates stalled subagent runs via heartbeat liveness checks
  - TraceContext: traceId/spanId/parentSpanId propagated through every spawn chain and emitted in all audit events
- Shared memory
SharedMemoryStore: namespaced key-value store so parallel subagents share facts without string serialisation
- Pub/sub message bus
TopicMessageBus: topic-based fan-out with virtual-thread listener dispatch and subscribe/unsubscribe
- Tooling
filesystem, shell, web, cron, message, memory, profile, notify, payments, workflow, vision (when configured)
- Typed result handling
AgentResult carries AgentStatus (SUCCESS, MAX_ITERATIONS, TOOL_ERROR); callers no longer string-match timeout messages
- Mobile gateway
- HTTP upload/transcribe/files + WebSocket chat protocol
- Memory and context
- SQLite-backed conversation history (default), session summary, profile, long-term memory
- Payments guardrails
- policy limits, merchant/category allowlists, confirmation threshold, quiet hours, status and ledger tracking
- Observability
- append-only audit events with full trace IDs + derived dashboard metrics
- Dashboard
- KPI cards, execution snapshot, audit filtering, CSV export
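The typed result handling listed above can be sketched as a record plus an exhaustive switch over the status enum. This is an illustrative shape, not the exact Cognis types, but it shows why callers no longer need to string-match messages:

```java
// Illustrative sketch: typed agent results instead of string-matching output text.
public class ResultDemo {
    public enum AgentStatus { SUCCESS, MAX_ITERATIONS, TOOL_ERROR }
    public record AgentResult(AgentStatus status, String output) {}

    /** Exhaustive switch: the compiler rejects an unhandled status. */
    public static String describe(AgentResult r) {
        return switch (r.status()) {
            case SUCCESS -> "done: " + r.output();
            case MAX_ITERATIONS -> "stopped: iteration budget exhausted";
            case TOOL_ERROR -> "failed: a tool call errored";
        };
    }
}
```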
.
├── cognis-app/
├── cognis-cli/
├── cognis-core/
├── cognis-dashboard/
├── docker/
├── docker-compose.yml
├── Dockerfile
└── .env.example
- Java 21
- Maven 3.9+
- Docker + Docker Compose (optional, recommended for quick start)
- Node 20+ (for cognis-dashboard)
- Create environment file:
cd /path/to/cognis
cp .env.example .env

If .env.example does not show in Finder, enable hidden files with Cmd+Shift+. or run ls -la.
- Optional helper: open the local env setup page to generate your .env values.
  - File location: docs/env-setup.html
  - Open it directly in your browser (double-click in Finder or drag into a browser tab).
- Set at least one provider credential in .env:
OPENROUTER_API_KEY=...
# or OPENAI_API_KEY=...
# or ANTHROPIC_API_KEY=...
# or Bedrock via IAM credentials/role:
# AWS_REGION=us-east-1
# AWS_ACCESS_KEY_ID=...
# AWS_SECRET_ACCESS_KEY=...
# AWS_SESSION_TOKEN=... # only for temporary credentials
# AWS_PROFILE=... # optional local profile alternative
# or Bedrock OpenAI-compatible endpoint (bearer token):
# COGNIS_PROVIDER=bedrock_openai
# AWS_BEARER_TOKEN=...
# BEDROCK_OPENAI_API_BASE=https://bedrock-runtime.us-east-1.amazonaws.com/openai/v1

- Start Cognis gateway + MCP server + dashboard:
docker compose up --build

Gateway default URLs:
- HTTP: http://127.0.0.1:8787
- WebSocket: ws://127.0.0.1:8787/ws?client_id=<your-client-id>
- MCP tools endpoint: http://127.0.0.1:8791/mcp/tools
- Operations dashboard: http://127.0.0.1:4173
Persistent data:
./docker-data -> /home/cognis/.cognis inside the container
# Run gateway in background
docker compose up -d --build
# Tail logs
docker compose logs -f cognis cognis-mcp-server cognis-dashboard
# Tail only SMS/MCP flow
docker compose logs -f cognis-mcp-server | rg "twilio|/mcp/call|ERROR|WARN"
# Tail tool + task execution path
docker compose logs -f cognis | rg "task_|tool_|ERROR|WARN"
# Stop
docker compose down
# Run one-off CLI prompt in container
docker compose run --rm cognis agent "hello from docker"

Dashboard audit quick filter:
- Open http://127.0.0.1:4173.
- In Audit Trail, set Scope to "Tool events only".
- Optionally set Type to tool_succeeded and search twilio.send_sms.
End-to-end flow using the ClawMobile app connected to a local Cognis gateway.
- Open chat list and verify Cognis appears.
- Edit/connect the Cognis agent (ws://localhost:8787/ws + client id).
- Open a Cognis conversation with starter prompts.
- Ask for a reminder and receive both ack + follow-up notification.
- Review the cleaner settings menu layout.
- Open Spending Controls from Settings and adjust limits.
cd /path/to/cognis
mvn test
mvn -pl cognis-app -am package
java -jar cognis-app/target/cognis-app-0.1.0-SNAPSHOT.jar gateway --port 8787

CLI examples:
java -jar cognis-app/target/cognis-app-0.1.0-SNAPSHOT.jar onboard
java -jar cognis-app/target/cognis-app-0.1.0-SNAPSHOT.jar status
java -jar cognis-app/target/cognis-app-0.1.0-SNAPSHOT.jar agent "Plan my day"

Cognis now includes a separate Java MCP-style integration server module for external commerce/ride/communications providers.
mvn -pl cognis-mcp-server -am package
java -jar cognis-mcp-server/target/cognis-mcp-server-0.1.0-SNAPSHOT.jar

Default URL: http://127.0.0.1:8791
Endpoints:
- GET /healthz
- GET /mcp/tools
- POST /mcp/call
Docker Compose starts this server automatically as cognis-mcp-server and wires Cognis to it.
In Cognis agent runtime, the generic mcp tool uses COGNIS_MCP_BASE_URL to discover and call MCP tools dynamically.
Recommended skill-driven pattern:
- Use mcp action list_tools to discover capabilities.
- Use profile/memory to persist aliases (for example contacts.wife.sms=+1...).
- Use mcp action call_tool with resolved values (for example twilio.send_sms).
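For clients talking to the MCP endpoint directly, the shape of a POST /mcp/call request can be sketched with java.net.http. The body schema here (tool plus arguments keys) is an assumption for illustration; check docs/MCP_SERVER.md for the server's actual contract:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Illustrative sketch: building (not sending) a JSON request to POST /mcp/call.
// The {"tool": ..., "arguments": ...} body shape is an assumed schema.
public class McpCallRequest {
    public static HttpRequest build(String baseUrl, String tool, String jsonArgs) {
        String body = "{\"tool\":\"" + tool + "\",\"arguments\":" + jsonArgs + "}";
        return HttpRequest.newBuilder(URI.create(baseUrl + "/mcp/call"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }
}
```

Sending it is then a single HttpClient.send(...) call against the base URL from COGNIS_MCP_BASE_URL.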
See docs/MCP_SERVER.md for architecture and provider env vars.
WhatsApp integration is not required to run or test Cognis.
Primary path:
- Use the React Native app (https://github.com/rmukubvu/clawmobile) as the client over WebSocket to ws://127.0.0.1:8787/ws?client_id=<id>.
Quick local validation paths:
- 5-minute smoke test script:
./scripts/smoke-test.sh
- CLI path:
java -jar cognis-app/target/cognis-app-0.1.0-SNAPSHOT.jar agent "hello"
- Raw WebSocket path (no mobile app required):
websocat ws://127.0.0.1:8787/ws?client_id=smoke-test

Then send:
{"type":"message","content":"hello","msg_id":"m1"}
- Persistence check for default SQLite conversation store:
sqlite3 ~/.cognis/workspace/.cognis/conversations.db "select created_at,prompt from conversation_turns order by created_at desc limit 5;"
Smoke test options:
- COGNIS_SMOKE_RUN_CORE_TESTS=0 ./scripts/smoke-test.sh to skip targeted core tests.
- COGNIS_SMOKE_BUILD_APP=0 ./scripts/smoke-test.sh to skip app compile checks.
Docker entrypoint writes /home/cognis/.cognis/config.json from env (unless COGNIS_WRITE_CONFIG=false).
Important environment variables:
- OPENROUTER_API_KEY
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- AWS_REGION (or AWS_DEFAULT_REGION) for Bedrock
- Optional static Bedrock credentials: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
- Optional local profile for Bedrock: AWS_PROFILE
- AWS_BEARER_TOKEN for bedrock_openai
- Optional BEDROCK_OPENAI_API_BASE (default is the region-derived Bedrock /openai/v1 endpoint)
- COGNIS_PROVIDER (default: openrouter)
- COGNIS_MODEL (default: anthropic/claude-opus-4-5)
- WEB_SEARCH_API_KEY (used by the web tool search action)
- COGNIS_GATEWAY_PORT (default: 8787)
- COGNIS_CONVERSATION_STORE (sqlite or file, default: sqlite)
- COGNIS_CONVERSATION_SQLITE_PATH (optional custom SQLite path when the store is sqlite; default: <workspace>/.cognis/conversations.db)
- COGNIS_WRITE_CONFIG (default: true)
See .env.example.
Optional helper:
- Open docs/env-setup.html in a browser to generate a ready-to-paste .env with provider presets.
- Reference: AWS Bedrock OpenAI Chat Completions
- Inspired by James Ward
GET /healthz
Example:
curl http://127.0.0.1:8787/healthz

- POST /upload -> stores file in uploads
- POST /transcribe -> transcribes provided audio file
- GET /files/(unknown) -> reads stored upload
- GET /payments/policy
- PUT /payments/policy (also accepts POST)
- GET /payments/status
Set policy example:
curl -X PUT http://127.0.0.1:8787/payments/policy \
-H 'Content-Type: application/json' \
-d '{
"currency": "USD",
"max_per_tx": 50,
"max_daily": 120,
"max_monthly": 500,
"require_confirmation_over": 20,
"allowed_merchants": ["amazon", "ticketmaster"],
"allowed_categories": ["shopping", "tickets"],
"timezone": "UTC"
}'

- GET /dashboard/summary
- GET /audit/events?limit=100
Example:
curl http://127.0.0.1:8787/dashboard/summary
curl 'http://127.0.0.1:8787/audit/events?limit=50'

Connect:
ws://127.0.0.1:8787/ws?client_id=<client-id>[&token=<token>]
Inbound (client -> server):
- {"type":"ping"}
- {"type":"message","content":"hello","msg_id":"m1","metadata":{"k":"v"}}
Outbound (server -> client):
- {"type":"pong"}
- {"type":"ack","msg_id":"m1"}
- {"type":"typing","chat_id":"<client>","is_typing":true|false}
- {"type":"text_delta","chat_id":"<client>","message_id":"<id>","content":"..."}
- {"type":"message","chat_id":"<client>","id":"<id>","content":"..."} (fallback when no deltas)
- {"type":"notification","chat_id":"<client>","content":"..."}
- {"type":"daily_brief"|"goal_checkin"|"workflow_result","chat_id":"<client>","content":"..."}
Start dashboard:
cd /path/to/cognis/cognis-dashboard
npm install
npm run dev

Open the printed Vite URL (usually http://localhost:5173 or http://localhost:5174).
Optional backend override:
VITE_COGNIS_BASE_URL=http://127.0.0.1:8787 npm run dev

Dashboard reads:
- /dashboard/summary
- /audit/events?limit=300
Under workspace (/home/cognis/.cognis/workspace in Docker by default):
- .cognis/cron/jobs.json
- .cognis/payments/ledger.json
- .cognis/observability/audit-events.json
- memory/memories.json
- .cognis/conversations.db (default conversation history store)
- memory/history.json (only when COGNIS_CONVERSATION_STORE=file)
- memory/session-summary.txt
- profile.json
- uploads/
- Default backend is SQLite (COGNIS_CONVERSATION_STORE=sqlite).
- Optional file backend remains available (COGNIS_CONVERSATION_STORE=file).
- SQLite default path: <workspace>/.cognis/conversations.db.
- There is no automatic import from memory/history.json into SQLite. Keep file mode enabled if you need to continue using existing file history without migration.
- The web tool includes SSRF protections and blocks loopback/private/link-local targets.
- Payment operations are policy-constrained and auditable.
- CORS is limited to local development origins by default (localhost/127.0.0.1 over HTTP).
cd /path/to/cognis
mvn test

Core-only test run:
mvn test -pl cognis-core

Dashboard checks:
cd /path/to/cognis/cognis-dashboard
npm run typecheck
npm run build

A concise positioning statement:
- Trusted autonomous operator with spend guardrails and accountable task execution.
The platform currently exposes the required primitives for:
- measurable execution quality,
- controllable spending,
- auditable outcomes,
- and mobile-native agent interaction.









