High-performance MCP memory server for multi-agent systems — SQLite-backed with hybrid search.
agent-memory-store gives your AI agents a shared, searchable, persistent memory — powered by SQLite with native FTS5 full-text search and optional semantic embeddings. No external services required.
Every time you start a new session with Claude Code, Cursor, or any MCP-compatible agent, it starts from zero. It doesn't know your project uses Fastify instead of Express. It doesn't know you decided on JWT two weeks ago. It doesn't know the staging deploy is on ECS.
agent-memory-store gives agents a shared, searchable memory that survives across sessions. Agents write what they learn, search what they need, and build on each other's work — just like a team with good documentation, except it happens automatically.
Agents read and write chunks through MCP tools. Search combines BM25 ranking (via SQLite FTS5) with semantic vector similarity (via local embeddings), merged through Reciprocal Rank Fusion for best-of-both-worlds retrieval.
```
┌─────────────┐   ┌─────────────┐   ┌─────────────┐
│   Agent A   │   │   Agent B   │   │   Agent C   │
└──────┬──────┘   └──────┬──────┘   └──────┬──────┘
       │                 │                 │
       └─────────────────┼─────────────────┘
                         │  MCP tools
            ┌──────────▼──────────┐
            │ agent-memory-store  │
            │    hybrid search    │
            │   BM25 + semantic   │
            └──────────┬──────────┘
                       │
            ┌──────────▼───────────┐
            │ .agent-memory-store/ │
            │   └── store.db       │
            └──────────────────────┘
```
- Zero-install usage via `npx`
- Hybrid search — BM25 full-text (FTS5) + semantic vector similarity + Reciprocal Rank Fusion
- SQLite-backed — single `store.db` file, WAL mode, native performance
- Local embeddings — 384-dim vectors via `all-MiniLM-L6-v2`, no API keys needed
- Tag and agent filtering — find chunks by who wrote them or what they cover
- TTL-based expiry — chunks auto-delete after a configurable number of days
- Session state — key/value store for pipeline progress, flags, and counters
- MCP-native — works with Claude Code, opencode, Cursor, and any MCP-compatible client
- Zero external database dependencies — uses Node.js built-in SQLite (`node:sqlite`)
Requirements:

- Node.js >= 22.5 (required for native `node:sqlite` with FTS5 support)
No installation needed:
```bash
npx agent-memory-store
```

By default, memory is stored in `.agent-memory-store/store.db` inside the directory where the server starts — so each project gets its own isolated store automatically.
To use a custom path:
```bash
AGENT_STORE_PATH=/your/project/.agent-memory-store npx agent-memory-store
```

Add to your project's `claude.mcp.json` (or `~/.claude/claude.mcp.json` for global):
```json
{
  "mcpServers": {
    "agent-memory-store": {
      "command": "npx",
      "args": ["-y", "agent-memory-store"]
    }
  }
}
```

Or using the Claude Code CLI:
```bash
claude mcp add agent-memory-store -- npx -y agent-memory-store
```

Add to your `opencode.json`:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "agent-memory-store": {
      "type": "local",
      "command": ["npx", "-y", "agent-memory-store"],
      "enabled": true
    }
  }
}
```

Add to your MCP settings file:
```json
{
  "servers": {
    "agent-memory-store": {
      "command": "npx",
      "args": ["-y", "agent-memory-store"]
    }
  }
}
```

If you need to store memory outside the project directory, set `AGENT_STORE_PATH` in the environment block.
Claude Code:

```json
{
  "mcpServers": {
    "agent-memory-store": {
      "command": "npx",
      "args": ["-y", "agent-memory-store"],
      "env": {
        "AGENT_STORE_PATH": "/absolute/path/to/.agent-memory-store"
      }
    }
  }
}
```

opencode:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "agent-memory-store": {
      "type": "local",
      "command": ["npx", "-y", "agent-memory-store"],
      "enabled": true,
      "environment": {
        "AGENT_STORE_PATH": "/absolute/path/to/.agent-memory-store"
      }
    }
  }
}
```

| Variable | Default | Description |
|---|---|---|
| `AGENT_STORE_PATH` | `./.agent-memory-store` | Custom path to the storage directory. Omit to use the project default. |
Add this to your agent's system prompt (or CLAUDE.md / AGENTS.md):
```markdown
## Memory
You have persistent memory via agent-memory-store MCP tools.
**Before acting on any task:**
1. `search_context` with 2–3 queries related to the task. Check for prior decisions, conventions, and relevant outputs.
2. `get_state("project_tags")` to load the tag vocabulary. If empty, this is a new project — ask the user about stack, conventions, and structure, then persist them with `write_context` and `set_state`.
**After completing work:**
1. `write_context` to persist decisions (with rationale), outputs (with file paths), and discoveries (with impact).
2. Use short, lowercase tags consistent with the vocabulary: `auth`, `config`, `decision`, `output`, `discovery`.
3. Set `importance: "critical"` for decisions other agents depend on, `"high"` for outputs, `"medium"` for background context.
**Before every write:**
1. `search_context` for the same topic first. If a chunk exists, `delete_context` it, then write the updated version. One chunk per topic.
**Rules:**
- Never guess a fact that might be in memory — search first, it costs <10ms.
- Never store secrets — write references to where they live, not the values.
- `set_state` is for mutable values (current phase, counters). `write_context` is for searchable knowledge (decisions, outputs). Don't mix them.
- Use `search_mode: "semantic"` when exact terms don't match (e.g., searching "autenticação" when the chunk says "auth").
```

Copy, paste, done. This is enough for any agent to use memory effectively.
Want to go deeper? The `skills/SKILL.md` file is a comprehensive skill that teaches agents advanced patterns: cold-start bootstrap for new projects, multi-agent pipeline handoffs, tag vocabulary management, deduplication workflows, and when to use each search mode. Install it in your project's skill directory if your agents run multi-step pipelines or need to coordinate across sessions.
| Tool | When to use |
|---|---|
| `search_context` | Start of every task — retrieve relevant prior knowledge before acting |
| `write_context` | After decisions, discoveries, or outputs that other agents will need |
| `read_context` | Read a specific chunk by ID |
| `list_context` | Inventory the memory store (metadata only, no body) |
| `delete_context` | Remove outdated or incorrect chunks |
| `get_state` | Read a pipeline variable (progress, flags, counters) |
| `set_state` | Write a pipeline variable |
`search_context` parameters:

| Parameter | Type | Description |
|---|---|---|
| `query` | string | Search query. Use specific, canonical terms. |
| `tags` | string[] (optional) | Narrow to chunks matching any of these tags. |
| `agent` | string (optional) | Narrow to chunks written by a specific agent. |
| `top_k` | number (optional) | Max results to return. Default: 6. |
| `min_score` | number (optional) | Minimum relevance score. Default: 0.1. |
| `search_mode` | string (optional) | `"hybrid"` (default), `"bm25"`, or `"semantic"`. |
Search modes:

| Mode | How it works | Best for |
|---|---|---|
| `hybrid` | BM25 + semantic similarity merged via Reciprocal Rank Fusion | General use (default) |
| `bm25` | FTS5 keyword matching only | Exact term lookups, canonical tags |
| `semantic` | Vector cosine similarity only | Finding conceptually related chunks |
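For example, an agent checking for a prior auth decision before touching login code might call `search_context` with arguments like these (the values are illustrative, the parameter names come from the table above):

```json
{
  "query": "jwt auth decision",
  "tags": ["auth", "decision"],
  "top_k": 5,
  "search_mode": "hybrid"
}
```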
`write_context` parameters:

| Parameter | Type | Description |
|---|---|---|
| `topic` | string | Short, specific title (e.g., "Auth — JWT decision", not "decision"). |
| `content` | string | Chunk body in markdown. Include rationale, not just conclusions. |
| `agent` | string (optional) | Agent ID writing this chunk. |
| `tags` | string[] (optional) | Canonical tags for future retrieval. |
| `importance` | string (optional) | `low` \| `medium` \| `high` \| `critical`. Default: `medium`. |
| `ttl_days` | number (optional) | Auto-expiry in days. Omit for permanent storage. |
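A hypothetical `write_context` call recording that JWT decision could look like this (topic, content, and agent ID are illustrative):

```json
{
  "topic": "Auth — JWT decision",
  "content": "Chose JWT access tokens over server-side sessions because the API serves both web and mobile clients. Rationale and follow-ups live in docs/auth.md.",
  "agent": "backend-agent",
  "tags": ["auth", "decision"],
  "importance": "critical"
}
```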
`get_state` / `set_state` parameters:

| Parameter | Type | Description |
|---|---|---|
| `key` | string | State variable name. |
| `value` | any (`set_state` only) | Any JSON-serializable value. |
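For instance, persisting the tag vocabulary referenced in the memory prompt above is a single `set_state` call:

```json
{
  "key": "project_tags",
  "value": ["auth", "config", "decision", "output", "discovery"]
}
```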
```
src/
  index.js        MCP server — tool registration and transport
  store.js        Public API — searchChunks, writeChunk, readChunk, etc.
  db.js           SQLite layer — node:sqlite with FTS5, WAL mode
  search.js       Hybrid search — FTS5 BM25 + vector similarity + RRF
  embeddings.js   Local embeddings — @huggingface/transformers (all-MiniLM-L6-v2)
  bm25.js         Pure JS BM25 — kept as fallback reference
  migrate.js      Filesystem → SQLite migration (automatic, one-time)
```
All data lives in a single SQLite database at `.agent-memory-store/store.db`:
- `chunks` table — id, topic, agent, tags (JSON), importance, content, embedding (BLOB), timestamps, expiry
- `chunks_fts` — FTS5 virtual table synced via triggers for full-text search
- `state` table — key/value pairs for pipeline variables
WAL mode is enabled for concurrent read performance. No manual flush needed.
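The snippet below is a minimal sketch of that layout using `node:sqlite` — column lists are simplified and the FTS sync triggers are omitted; the authoritative DDL lives in `src/db.js`:

```javascript
import { DatabaseSync } from 'node:sqlite';

const db = new DatabaseSync('.agent-memory-store/store.db');
db.exec('PRAGMA journal_mode = WAL;'); // concurrent readers, no manual flush

db.exec(`
  CREATE TABLE IF NOT EXISTS chunks (
    id         TEXT PRIMARY KEY,
    topic      TEXT NOT NULL,
    agent      TEXT,
    tags       TEXT,            -- JSON array of canonical tags
    importance TEXT,
    content    TEXT NOT NULL,
    embedding  BLOB,            -- 384-dim vector from all-MiniLM-L6-v2
    created_at TEXT,
    expires_at TEXT             -- set when ttl_days is provided
  );

  -- FTS5 index over topic/content, kept in sync with chunks via triggers
  CREATE VIRTUAL TABLE IF NOT EXISTS chunks_fts
    USING fts5(topic, content, content='chunks');

  CREATE TABLE IF NOT EXISTS state (
    key   TEXT PRIMARY KEY,
    value TEXT
  );
`);
```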
- BM25 (FTS5) — SQLite's native full-text search ranks chunks by term frequency and inverse document frequency. Fast, deterministic, great for exact keyword matches.
- Semantic similarity — Query and chunks are embedded into 384-dimensional vectors using `all-MiniLM-L6-v2` (runs locally via ONNX Runtime). Cosine similarity finds conceptually related chunks even when exact terms don't match.
- Reciprocal Rank Fusion — Both ranked lists are merged using RRF with weights (BM25: 0.4, semantic: 0.6). Documents appearing in both lists get boosted.
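As a rough sketch of the fusion step (the `k` constant and function shape are assumptions here — the real implementation lives in `src/search.js`):

```javascript
// Weighted Reciprocal Rank Fusion over two ranked lists of chunk ids.
function fuseResults(bm25Ids, semanticIds, k = 60) {
  const weights = { bm25: 0.4, semantic: 0.6 }; // weights from the section above
  const scores = new Map();

  const accumulate = (ids, weight) => {
    ids.forEach((id, rank) => {
      // Reciprocal rank: earlier positions contribute more, damped by k.
      const contribution = weight / (k + rank + 1);
      scores.set(id, (scores.get(id) ?? 0) + contribution);
    });
  };

  accumulate(bm25Ids, weights.bm25);         // keyword-ranked list
  accumulate(semanticIds, weights.semantic); // vector-ranked list

  // Chunks present in both lists accumulate two contributions and rise to the top.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id, score]) => ({ id, score }));
}
```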
The embedding model (~23 MB) is downloaded automatically on first use and cached in `~/.cache/huggingface/`. If the model fails to load, the system falls back to BM25-only search transparently.
Benchmarked on Apple Silicon (Node v25, darwin arm64, BM25 mode):
| Operation | 1K chunks | 10K chunks | 50K chunks | 100K chunks | 250K chunks |
|---|---|---|---|---|---|
| write | 0.17 ms | 0.19 ms | 0.23 ms | 0.21 ms | 0.25 ms |
| read | 0.01 ms | 0.05 ms | 0.21 ms | 0.22 ms | 0.85 ms |
| search (BM25) | ~5 ms† | ~10 ms† | ~60 ms† | ~110 ms† | ~390 ms† |
| list | 0.2 ms | 0.3 ms | 0.3 ms | 0.3 ms | 1.1 ms |
| state get/set | 0.03 ms | 0.03 ms | 0.07 ms | 0.05 ms | 0.03 ms |
† Search times from isolated run (no model loading interference). During warmup, first queries may be slower.
Key insights:
- list is O(1) in practice — pagination caps results at 100 rows by default, so list time stays flat regardless of corpus size (0.2–1.1 ms at any scale)
- write is stable at ~0.2 ms/op — FTS5 triggers and embedding backfill are non-blocking; inserts stay constant
- read is a single index lookup — sub-millisecond up to 50K chunks, still <1 ms at 250K
- search scales linearly with FTS5 corpus — this is inherent to BM25 full-text scan; for typical agent memory usage (≤25K chunks), search stays under 30 ms
- state ops are O(1) — key/value store backed by a B-tree primary key, constant at all scales
```bash
git clone https://github.com/vbfs/agent-memory-store
cd agent-memory-store
npm install
npm start
```

Run tests:

```bash
npm test
```

Run benchmark:

```bash
node benchmark.js
```

See CONTRIBUTING.md for guidelines.
- `summarize_context` tool — LLM-powered chunk consolidation
- `prune_context` tool — remove chunks by age, agent, or importance
- Hybrid scoring: BM25 + local embedding reranking — shipped in v0.1.0
- SQLite-backed storage — shipped in v0.1.0
- Web UI for browsing the memory store
- Multi-project workspace support
MIT — see LICENSE.