
Releases: riponcm/projectmem

v0.1.1 — First Stable Release · Cross-Project Memory Verified

15 May 05:54


projectmem v0.1.1 — First Stable Release

The local-first memory layer for AI coding agents. projectmem captures what your AI learned (and what didn't work) so the next session — and the next project — starts experienced instead of from zero. 100% local. No cloud. No telemetry. No account.

After months of dogfooding, 46 documented lessons, a 22-item polish-pass, four MCP clients verified end-to-end, and three language ecosystems put through a real cross-project memory test cycle — v0.1.1 is the version we're comfortable putting on PyPI as a stable release.

pip install projectmem
cd your-project
pjm init

Two commands and your AI is no longer amnesiac.


What projectmem does, in one sentence

It gives your AI persistent memory — about your bugs, your decisions, the approaches you've already tried, and the libraries that have bitten you before — accessible through 14 native MCP tools that work with Claude Desktop, Cursor, Antigravity, and Codex out of the box.

What's in this release

The intelligence layer

  • Pre-commit warnings — fires before a commit if you're about to repeat a failed approach, touch a high-churn file, or revisit an unresolved issue. The only AI memory tool that prevents mistakes instead of just remembering them.
  • Real-time file watcher — auto-starts on pjm init. Detects rapid edits to the same file (debugging sessions) and logs churn events automatically. Battery-aware, gitignore-aware.
  • Smart context injection (pjm wrap) — wraps your AI agent with a token-budgeted memory block before the session starts. Inject into CLAUDE.md, .cursorrules, or clipboard.
  • Prevention Score (pjm score) — quantifiable A+ → F letter grade backed by debugging hours saved, tokens prevented, and dollars protected. CTO-readable ROI.
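To make the "token-budgeted memory block" idea concrete, here is a minimal sketch of greedy budget packing. This is an illustrative assumption, not projectmem's actual implementation: the function name, the newest-first ordering, and the rough 4-characters-per-token estimate are all hypothetical.

```python
# Hypothetical sketch of token-budgeted context selection, loosely modeled on
# what a wrapper like `pjm wrap` is described as doing. Not projectmem source.

def build_memory_block(entries: list[str], token_budget: int = 1500) -> str:
    """Greedily pack the most recent memory entries into a token budget.

    Assumes `entries` is already sorted newest-first.
    """
    chosen: list[str] = []
    used = 0
    for entry in entries:
        cost = max(1, len(entry) // 4)  # crude chars-to-tokens estimate
        if used + cost > token_budget:
            break  # budget exhausted; older entries are dropped
        chosen.append(entry)
        used += cost
    # Emit oldest-first so the block reads chronologically for the model.
    return "\n".join(reversed(chosen))
```

The design choice worth noting: iterating newest-first but emitting oldest-first means the freshest lessons always survive budget pressure, while the final block still reads in chronological order.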

Cross-project memory — verified across 3 languages

A library gotcha learned in one project surfaces automatically in any other project on the same stack, with source-project attribution. Three wiring gaps closed in this release:

  • Auto-promote wiring restored. Failed attempts and gotcha-prefixed decisions now consistently propagate to the machine-wide store. Wired through storage.append_event so every write surface — MCP, CLI, git hooks — promotes uniformly.
  • Language parity. Self-curating library cache at ~/.projectmem/global/.promotable.json means Go, Rust, Java, Ruby, and mobile projects accumulate cross-project knowledge the same way JavaScript and Python projects already do. A Go user's gin lessons now propagate exactly like a React user's vite ones.
  • Signal filter. Project-setup decisions stay local. Only deliberate lessons — failed attempts, or decisions/notes opening with gotcha: / lesson: / warning: / caution: / pitfall: / avoid: / don't / do not / never / bug: — reach the global store. Signal-to-noise went from 14% to 100% in the real test cycle.
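The signal filter described above can be sketched as a simple prefix check. The prefix list comes straight from this release note; the function name and the event-type strings are assumptions for illustration, not projectmem's internal API.

```python
# Sketch of the promotion filter described in the release notes.
# Event-type names and function signature are hypothetical.

PROMOTABLE_PREFIXES = (
    "gotcha:", "lesson:", "warning:", "caution:",
    "pitfall:", "avoid:", "don't", "do not", "never", "bug:",
)

def is_promotable(event_type: str, text: str) -> bool:
    """Decide whether an event reaches the machine-wide store."""
    if event_type == "failed_attempt":
        return True  # failed attempts always promote
    if event_type in ("decision", "note"):
        # Only deliberately-flagged lessons promote; str.startswith
        # accepts a tuple of prefixes, so one call covers the whole list.
        return text.strip().lower().startswith(PROMOTABLE_PREFIXES)
    return False  # everything else (e.g. project-setup decisions) stays local
```

A filter this strict is what moved the signal-to-noise ratio: routine setup decisions no longer leak into the global store, so everything that does propagate is an intentional lesson.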

End-to-end verification across proj-react → proj-next (JS inheritance with attribution), proj-python (negative — vite gotcha correctly bounded by stack), and proj-go (gin promotion under the new library cache + signal filter).

Cross-client compatibility

Four MCP clients tested end-to-end. 124 of 124 cells green.

| Client | Status | Setup |
| --- | --- | --- |
| Antigravity | ✅ All checks pass | Native — cwd honored, no workaround |
| Claude Desktop | ✅ All checks pass | Use Auto mode (not Plan); pass root via --root in args |
| Cursor | ✅ All checks pass | Same --root workaround for cwd-ignored bug |
| Codex | ✅ All checks pass | TOML config at ~/.codex/config.toml; medium reasoning required |

Each client ships with a documented setup block, captured during real verification.

Operational hardening

  • MCP stdio integrity — write tools no longer corrupt the JSON-RPC stream. Five consecutive log_issue / record_attempt / record_fix / add_decision / add_note calls survive cleanly in every client.
  • MCP project-root discovery — server walks up the tree to find .projectmem/ (like git does for .git/). Plus --root flag and PROJECTMEM_ROOT env var for explicit pinning.
  • No silent issue misattribution — record_attempt after record_fix never latches onto a stale open issue. Marker file + 5-minute time-fence + explicit --issue flag.
  • AI workflow alignment — three surfaces (MCP instructions=, CLAUDE.md, AI_INSTRUCTIONS.md) now mirror each other. No drift between what each client tells the AI.
  • Setup → Maintenance state machine — AI detects placeholder vs real content via concrete signals, populates memory through MCP write tools (not direct file edits).

100% local. No exceptions.

  • Memory lives at <your-project>/.projectmem/ (local to each project) and ~/.projectmem/global/ (machine-wide).
  • Nothing uploads anywhere. No accounts. No telemetry. No "anonymous usage data."
  • Team sharing is explicit: pjm global export > team-gotchas.json → commit → teammate runs pjm global import. You choose what to share and when.

Install

pip install projectmem

Requires Python 3.10+. MIT licensed.

Quick start

cd your-project
pjm init                           # creates .projectmem/, installs git hooks, starts watcher
# Open your AI client (Claude Desktop / Cursor / Antigravity / Codex)
# Ask anything about the project — memory is now wired

For the multi-project / monorepo setup, AI client configuration per tool, and the full feature tour, see the User Guide.

Upgrade notes

This is the first release of projectmem published to PyPI proper, so there's nothing to upgrade from. If you previously installed from TestPyPI or from source, pip install --upgrade projectmem replaces it cleanly. Memory files (.projectmem/events.jsonl and the global store) load without migration.

Acknowledgements

Built in the open, on a single MacBook, over six months of dogfooding. Every bug in the launch matrix was a real bug found by real use — not synthetic test data. Special thanks to the four MCP client teams whose work made this possible: Anthropic (Claude Desktop), Anysphere (Cursor), Google (Antigravity), and OpenAI (Codex).

The 46 lessons logged during this release are publicly tracked in report/LESSONS_AND_FIXES.md. Read them if you want to see how the sausage actually gets made.


Star the repo · Try it now · Read the guide · Report a bug