Biological memory for AI agents. Semantic search + decay, no API keys needed.
Most AI agents use a flat `memory.md` file. It doesn't scale:

- Loads the whole file into context — bloats every prompt as memory grows
- No semantic recall — `grep` finds keywords, not meaning
- No decay — stale facts from months ago are weighted equally to today's
- No deduplication — the same thing gets stored 10 times in slightly different words
You end up with an ever-growing, expensive-to-query, unreliable mess.
Conch replaces the flat file with a biologically-inspired memory engine:
- Recall by meaning — hybrid BM25 + vector search finds semantically relevant memories, not just keyword matches
- Decay over time — old memories fade unless reinforced; frequently-accessed ones survive longer
- Deduplicate on write — cosine similarity (0.95) detects near-duplicates and reinforces instead of cloning
- No infrastructure — SQLite file, local embeddings (FastEmbed, no API key), zero config
- Scales silently — 10,000 memories in your DB, 5 returned in context. Prompt stays small.
memory.md after 6 months: 4,000 lines, loaded every prompt
Conch after 6 months: 10,000 memories, 5 relevant ones returned per recall
```shell
cargo install conch
```

No Cargo? See the Installation Guide for prebuilt binaries and build-from-source instructions.
```shell
# Store a fact
conch remember "Jared" "works at" "Microsoft"

# Store an episode
conch remember-episode "Deployed v2.0 to production"

# Store an action (executed operation)
conch remember-action "Edited Caddyfile and restarted caddy"

# Store an intent (future plan)
conch remember-intent "Plan to rotate API keys this Friday"

# Recall by meaning (not keyword)
conch recall "where does Jared work?"
# → [fact] Jared works at Microsoft (score: 0.847)

# Run decay maintenance
conch decay

# Database health
conch stats
```

Store → Embed → Search → Decay → Reinforce
- Store — facts (subject-relation-object) or episodes (free text). Embedding generated locally via FastEmbed.
- Search — hybrid BM25 + vector recall, fused via Reciprocal Rank Fusion (RRF), weighted by decayed strength.
- Decay — strength diminishes over time. Facts decay slowly (λ=0.02/day), episodes faster (λ=0.06/day), actions/intents fastest (λ=0.09/day).
- Reinforce — recalled memories get a boost. Frequently accessed ones survive longer.
- Death — memories below strength 0.01 are pruned during decay passes.
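The decay and death rules above can be sketched as a pure function. This is an illustrative model, not Conch's actual source; the λ values and the 0.01 pruning threshold are taken directly from the list above.

```rust
// Exponential decay: strength falls off at a per-type rate λ (per day).
fn decayed_strength(strength: f64, lambda_per_day: f64, age_days: f64) -> f64 {
    strength * (-lambda_per_day * age_days).exp()
}

// Memories below 0.01 are pruned during a decay pass.
fn is_dead(strength: f64) -> bool {
    strength < 0.01
}

fn main() {
    // A fact (λ = 0.02/day) keeps ~55% of its strength after a month...
    let fact = decayed_strength(1.0, 0.02, 30.0);
    // ...while an action (λ = 0.09/day) keeps only ~7%.
    let action = decayed_strength(1.0, 0.09, 30.0);
    println!("fact: {fact:.2}, action: {action:.2}");

    // An unreinforced action crosses the death threshold after ~52 days,
    // since e^(-0.09 × 52) ≈ 0.009 < 0.01.
    assert!(is_dead(decayed_strength(1.0, 0.09, 52.0)));
}
```

Reinforcement works against this curve: each recall boosts strength, so frequently used memories keep outrunning their λ.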
```
score = RRF(BM25_rank, vector_rank) × recency_boost × access_weight × effective_strength
```
- Recency boost — 7-day half-life, floor of 0.3
- Access weighting — log-normalized frequency boost (1.0–2.0×)
- Spreading activation — 1-hop graph traversal through shared subjects/objects
- Temporal co-occurrence — memories created in the same session get context boosts
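A minimal sketch of the scoring formula above, assuming the conventional RRF constant k = 60 (Conch's internal constant may differ) and the stated 7-day recency half-life with a 0.3 floor:

```rust
// Reciprocal Rank Fusion: each retriever contributes 1/(k + rank).
fn rrf(bm25_rank: usize, vector_rank: usize) -> f64 {
    const K: f64 = 60.0; // standard RRF constant; assumed, not confirmed
    1.0 / (K + bm25_rank as f64) + 1.0 / (K + vector_rank as f64)
}

// Recency halves every 7 days but never drops below the 0.3 floor.
fn recency_boost(age_days: f64) -> f64 {
    0.5_f64.powf(age_days / 7.0).max(0.3)
}

fn score(
    bm25_rank: usize,
    vector_rank: usize,
    age_days: f64,
    access_weight: f64,
    effective_strength: f64,
) -> f64 {
    rrf(bm25_rank, vector_rank) * recency_boost(age_days) * access_weight * effective_strength
}

fn main() {
    // A fresh memory ranked #1 by both retrievers outscores a month-old
    // memory ranked #3, even when the older one has higher strength.
    let fresh = score(1, 1, 0.0, 1.0, 0.8);
    let stale = score(3, 3, 30.0, 1.0, 1.0);
    assert!(fresh > stale);
}
```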
- Hybrid search — BM25 + vector semantic search via Reciprocal Rank Fusion
- Biological decay — configurable half-life curves per memory type
- Deduplication — cosine similarity threshold prevents duplicates; reinforces instead
- Graph traversal — spreading activation through shared subjects/objects
- Tags & source tracking — tag memories, track origin via source/session/channel
- MCP support — Model Context Protocol server for direct LLM tool integration
- Local embeddings — FastEmbed (AllMiniLM-L6-V2, 384-dim). No API keys, no network calls
- Single-file SQLite — zero infrastructure. One portable DB file
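The write-time deduplication described above reduces to a cosine check against existing embeddings. A hedged sketch (function names are illustrative, not Conch's API); only the 0.95 threshold comes from the README:

```rust
// Cosine similarity between two embedding vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

const DEDUP_THRESHOLD: f32 = 0.95;

// At or above the threshold, reinforce the existing memory instead of
// inserting a near-duplicate row.
fn is_near_duplicate(new: &[f32], existing: &[f32]) -> bool {
    cosine_similarity(new, existing) >= DEDUP_THRESHOLD
}

fn main() {
    let a = [1.0, 0.0, 0.5];
    let b = [0.9, 0.1, 0.5]; // nearly the same direction → reinforce
    let c = [0.0, 1.0, 0.0]; // unrelated → store as new memory
    assert!(is_near_duplicate(&a, &b));
    assert!(!is_near_duplicate(&a, &c));
}
```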
| Feature | Conch | Mem0 | Zep | Raw Vector DB |
|---|---|---|---|---|
| Biological decay | ✅ | ❌ | ❌ | ❌ |
| Deduplication | Cosine 0.95 | Basic | Basic | Manual |
| Graph traversal | Spreading activation | ❌ | Graph edges | ❌ |
| Local embeddings | FastEmbed (no API) | API required | API required | Varies |
| Infrastructure | SQLite (zero-config) | Cloud/Redis | Postgres | Server required |
| MCP support | Built-in | ❌ | ❌ | ❌ |
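The spreading-activation row above amounts to a 1-hop neighbor lookup over stored facts: recalling one fact also activates facts that share its subject or object. A sketch under that assumption (the `Fact` struct and data are illustrative, not Conch's internal representation):

```rust
#[derive(Clone, PartialEq, Debug)]
struct Fact {
    subject: String,
    relation: String,
    object: String,
}

fn fact(s: &str, r: &str, o: &str) -> Fact {
    Fact { subject: s.into(), relation: r.into(), object: o.into() }
}

// 1-hop spreading activation: any fact sharing a node with the seed.
fn one_hop(seed: &Fact, all: &[Fact]) -> Vec<Fact> {
    all.iter()
        .filter(|f| *f != seed)
        .filter(|f| {
            f.subject == seed.subject
                || f.object == seed.object
                || f.subject == seed.object
                || f.object == seed.subject
        })
        .cloned()
        .collect()
}

fn main() {
    let facts = vec![
        fact("Jared", "works at", "Microsoft"),
        fact("Microsoft", "ships", "Windows"),
        fact("Ada", "wrote", "notes"),
    ];
    // Recalling the Jared fact also activates the Microsoft fact
    // (shared "Microsoft" node); the unrelated fact stays quiet.
    let activated = one_hop(&facts[0], &facts);
    assert_eq!(activated.len(), 1);
    assert_eq!(activated[0].object, "Windows");
}
```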
```shell
conch remember <subject> <relation> <object>   # store a fact
conch remember-episode <text>                  # store an event
conch remember-action <text>                   # store an executed action
conch remember-intent <text>                   # store a future intent/plan
conch recall <query> [--limit N] [--tag T]     # semantic search
conch forget --id <id>                         # delete by ID
conch forget --subject <name>                  # delete by subject
conch forget --older-than <duration>           # prune old (e.g. 30d)
conch decay                                    # run decay maintenance pass
conch stats                                    # database health
conch embed                                    # generate missing embeddings
conch export                                   # JSON dump to stdout
conch import                                   # JSON load from stdin
```
All commands support `--json` and `--quiet`. Database path: `--db <path>` (default `~/.conch/default.db`).
```shell
conch remember "API" "uses" "REST" --tags "architecture,backend"
conch remember-episode "Fixed auth bug" --source "slack" --session-id "abc123"
conch recall "architecture decisions" --tag "architecture"
```

- `conch-core` — Library crate. All logic: storage, search, decay, embeddings.
- `conch` — CLI binary. Clap-based interface to `conch-core`.
- `conch-mcp` — MCP server. Exposes Conch operations as LLM tools via `rmcp`.
```rust
use conch_core::ConchDB;

let db = ConchDB::open("my_agent.db")?;
db.remember_fact("Jared", "works at", "Microsoft")?;
db.remember_episode("Deployed v2.0 to production")?;
let results = db.recall("where does Jared work?", 5)?;
let stats = db.decay()?;
```

```json
{
  "mcpServers": {
    "conch": {
      "command": "conch-mcp",
      "env": { "CONCH_DB": "~/.conch/default.db" }
    }
  }
}
```

MCP tools: `remember_fact`, `remember_episode`, `remember_action`, `remember_intent`, `recall`, `forget`, `decay`, `stats`
If setup is not one-click, it will fail in practice. Use this:
```shell
curl -fsSL https://raw.githubusercontent.com/jlgrimes/conch/master/scripts/openclaw-one-click.sh | bash
```

What this script does automatically:

- Installs `conch` if missing
- Configures a `~/.openclaw/workspace/MEMORY.md` redirect to Conch
- Adds mandatory Conch storage triggers to `AGENTS.md` (idempotent)
- Fixes the OpenClaw gateway PATH so `conch` is discoverable from cron/runtime
- Restarts the gateway service (if present) and runs a remember/recall smoke test
- Agent memory is redirected to Conch
- Runtime can invoke Conch without ENOENT/PATH issues
- Session continuity writes are deterministic via trigger rules
- Smoke test validates write + recall immediately
Tell your OpenClaw agent:

```
Read https://raw.githubusercontent.com/jlgrimes/conch/master/skill/SKILL.md and install conch.
```

Then manually apply the same pieces (MEMORY redirect + AGENTS triggers + gateway PATH + smoke test).
```shell
conch export > backup.json
conch import < backup.json
```

Single SQLite file at `~/.conch/default.db`. Embeddings stored as little-endian `f32` blobs. Timestamps as RFC 3339. Override with `--db <path>` or the `CONCH_DB` env var.
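The blob encoding noted above (little-endian `f32`) means a 384-dim AllMiniLM embedding is exactly 1,536 bytes. A sketch of the round-trip, with illustrative function names:

```rust
// Serialize an embedding to the little-endian f32 blob layout.
fn embedding_to_blob(v: &[f32]) -> Vec<u8> {
    v.iter().flat_map(|f| f.to_le_bytes()).collect()
}

// Deserialize a blob back into an embedding vector.
fn blob_to_embedding(blob: &[u8]) -> Vec<f32> {
    blob.chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}

fn main() {
    let v = vec![0.25_f32; 384];
    let blob = embedding_to_blob(&v);
    assert_eq!(blob.len(), 384 * 4); // 1,536 bytes per embedding
    assert_eq!(blob_to_embedding(&blob), v); // lossless round-trip
}
```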
```shell
cargo build
cargo test
cargo install --path crates/conch-cli
```

- Fork the repository
- Create a feature branch (`git checkout -b feat/my-feature`)
- Run `cargo test` and ensure all tests pass
- Submit a pull request
MIT — see LICENSE.
- Internal dashboard: `dashboard/` (internal tooling)
- Customer-facing app: `customer-app/` (external site for `app.conch.so`)
- Deployment and DNS guide: `docs/customer-app-deploy.md`