Your AI conversations compile themselves into a searchable knowledge base.
Adapted from Karpathy's LLM Knowledge Base architecture, but instead of clipping web articles, the raw data is your own conversations with Claude Code. When a session ends (or auto-compacts mid-session), Claude Code hooks capture the conversation transcript and spawn a background process that uses the claude CLI to extract the important stuff - decisions, lessons learned, patterns, gotchas - and appends it to a daily log. You then compile those daily logs into structured, cross-referenced knowledge articles organized by concept. Retrieval uses a simple index file instead of RAG - no vector database, no embeddings, just markdown.
This is a Rust rewrite of claude-memory-compiler. Single static binary, no Python runtime, no uv sync.
Install with cargo:

```sh
cargo install claude-journal
```

Or download a prebuilt binary from the latest release:
| Platform | Target |
|---|---|
| macOS (Apple Silicon) | journal-aarch64-apple-darwin.tar.gz |
| macOS (Intel) | journal-x86_64-apple-darwin.tar.gz |
| Linux (ARM64) | journal-aarch64-unknown-linux-gnu.tar.gz |
| Linux (x86_64) | journal-x86_64-unknown-linux-gnu.tar.gz |
Extract the archive and move the binary somewhere on your `$PATH`:
```sh
tar xzf journal-*.tar.gz
sudo mv journal /usr/local/bin/
```

Alternatively, build from source:

```sh
git clone https://github.com/RocketArminek/journal.git
cd journal
cargo install --path .
```

Then run the setup wizard:

```sh
journal init
```

The init wizard will:
- Prompt for knowledge base directory name, timezone, and auto-compile hour
- Create the full directory structure (`daily/`, `knowledge/`, `state/`, etc.)
- Write the AGENTS.md schema into your knowledge base
- Merge SessionStart, SessionEnd, and PreCompact hooks into `.claude/settings.json`
- Update `.gitignore` to exclude generated content
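For orientation, the merged hook entries look roughly like the sketch below. This is an illustration only: the shape follows Claude Code's hooks settings format, but the exact commands `journal init` writes (here assumed to be `journal flush`) may differ, and the SessionStart entry that injects context is omitted for brevity.

```json
{
  "hooks": {
    "SessionEnd": [
      { "hooks": [{ "type": "command", "command": "journal flush" }] }
    ],
    "PreCompact": [
      { "hooks": [{ "type": "command", "command": "journal flush" }] }
    ]
  }
}
```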
From there, your conversations start accumulating. After the configured hour (default 6 PM), the next session flush automatically triggers compilation of that day's logs into knowledge articles.
```
Conversation -> SessionEnd/PreCompact hooks -> flush extracts knowledge
  -> daily/YYYY-MM-DD.md -> compile -> knowledge/concepts/, connections/, qa/
  -> SessionStart hook injects index into next session -> cycle repeats
```
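To make the pipeline concrete, a daily log entry might look something like this. The excerpt is purely hypothetical: the actual entry schema is defined by the AGENTS.md file that `journal init` writes, and every detail below is invented for illustration.

```markdown
<!-- daily/2026-04-01.md (hypothetical excerpt) -->
## 14:32 session
- **Decision:** use `thiserror` for library errors, `anyhow` at the binary boundary
- **Gotcha:** `tokio::select!` drops the losing branch's future, so side effects need care
- **Pattern:** [[newtype-validation]] for parsing config values once at the edge
```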
- Hooks capture conversations automatically (session end + pre-compaction safety net)
- `journal flush` calls the `claude` CLI to decide what's worth saving, and after the configured hour triggers end-of-day compilation automatically
- `journal compile` turns daily logs into organized concept articles with cross-references (triggered automatically or run manually)
- `journal query` answers questions using index-guided retrieval (no RAG needed at personal scale)
- `journal lint` runs 7 health checks (broken links, orphans, contradictions, staleness)
Each new Claude Code session gets the knowledge base index and the last 30 lines of today's daily log injected as context. This means sessions are aware of decisions and patterns from earlier in the day, even before compilation happens. Compiled wiki articles become available after the auto-compile hour.
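The injected context can be approximated with plain shell. This is an illustrative sketch, not the real hook implementation: it builds a throwaway knowledge base in a temp directory and assembles what a new session would receive, the index plus the tail of today's log.

```shell
# Illustrative only -- not the actual SessionStart hook.
KB=$(mktemp -d)
mkdir -p "$KB/daily"
printf '# Index\n- [[rust-error-handling]]\n' > "$KB/index.md"   # 2-line index
seq 1 40 | sed 's/^/note /' > "$KB/daily/$(date +%F).md"         # 40-line daily log

# Context = full index + last 30 lines of today's log.
context=$(cat "$KB/index.md"; tail -n 30 "$KB/daily/$(date +%F).md")
echo "$context" | wc -l   # 2 index lines + 30 log lines = 32
```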
The compile hour (default: 18 = 6 PM) controls when daily logs get synthesized into structured wiki articles. Set it to after your typical workday ends so the compiler has a full day of context to work with — this produces better, more comprehensive articles than compiling after every session. Each compilation calls the claude CLI, so batching also saves tokens.
You can always run `journal compile` manually at any time if you want articles sooner.
```sh
journal compile                              # compile new/changed daily logs
journal compile --all                        # force recompile everything
journal compile --file daily/2026-04-01.md   # compile a specific log
journal compile --dry-run                    # show what would be compiled
journal query "question"                     # ask the knowledge base
journal query "question" --file-back         # ask + save answer as a Q&A article
journal lint                                 # all 7 health checks
journal lint --structural-only               # skip LLM checks (faster, free)
```

The knowledge base is plain markdown with `[[wikilinks]]`, which makes it a natural fit for Obsidian. Open the KB directory as an Obsidian vault to get a navigable knowledge graph, backlink tracking, and full-text search across all your compiled articles.
Karpathy's insight: at personal scale (50-500 articles), the LLM reading a structured index.md outperforms vector similarity. The LLM understands what you're really asking; cosine similarity just finds similar words. RAG becomes necessary at ~2,000+ articles when the index exceeds the context window.
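At that scale, the index the LLM reads might be a single structured file along these lines. The contents are hypothetical, invented only to show the shape; the real index is generated from your compiled articles.

```markdown
# Knowledge Base Index

## Concepts
- [[rust-error-handling]] - thiserror vs anyhow, conversion patterns
- [[claude-code-hooks]] - SessionStart/SessionEnd/PreCompact lifecycle

## Connections
- [[rust-error-handling]] <-> [[claude-code-hooks]] - hook failures surface as exit codes

## Q&A
- [[qa/why-no-rag]] - when an index beats embeddings
```

Because the whole file fits in context, the model can weigh every entry against the question at once, which is what makes it competitive with vector search at this size.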
See AGENTS.md for the complete technical reference: article formats, hook architecture, compilation rules, cross-referencing conventions, and customization options.