# langchain-superlocalmemory

LangChain integration for SuperLocalMemory V3 — local-first AI memory with mathematical foundations.
## Features

- `SuperLocalMemoryChatHistory` — Drop-in `BaseChatMessageHistory` for conversation persistence across sessions
- `SuperLocalMemoryRetriever` — `BaseRetriever` for RAG-style memory augmentation with 4-channel retrieval
- Direct Python API — No subprocess calls, no CLI dependency at runtime
- Privacy-first — All data stays on your device (Mode A: zero cloud)
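For readers new to LangChain's history abstraction: the contract that `SuperLocalMemoryChatHistory` implements boils down to an append-only message log keyed by session. Below is a minimal in-memory sketch of that contract in plain Python (no LangChain dependency; the class and field names are illustrative, not the package's internals):

```python
from dataclasses import dataclass


@dataclass
class Message:
    role: str  # "human" or "ai"
    content: str


class ToyChatHistory:
    """In-memory stand-in for a BaseChatMessageHistory-style store."""

    # Shared across instances so two objects with the same session_id
    # see the same log (a rough analogue of on-disk persistence).
    _store: dict[str, list[Message]] = {}

    def __init__(self, session_id: str):
        self.session_id = session_id
        self._store.setdefault(session_id, [])

    @property
    def messages(self) -> list[Message]:
        # Return a copy so callers cannot mutate the log directly.
        return list(self._store[self.session_id])

    def add_user_message(self, text: str) -> None:
        self._store[self.session_id].append(Message("human", text))

    def add_ai_message(self, text: str) -> None:
        self._store[self.session_id].append(Message("ai", text))

    def clear(self) -> None:
        self._store[self.session_id] = []


history = ToyChatHistory("demo")
history.add_user_message("What is JWT?")
history.add_ai_message("A signed token format.")
print(len(history.messages))  # prints 2
```

The real class persists messages to local storage instead of a process-level dict, which is what makes conversations survive across sessions.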
## Installation

```bash
pip install langchain-superlocalmemory
```

Requires SuperLocalMemory V3 to be installed:

```bash
pip install "superlocalmemory[search]"
# or
npm install -g superlocalmemory && slm setup
```

## Chat history

```python
from langchain_superlocalmemory import SuperLocalMemoryChatHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

# Create history for a session
history = SuperLocalMemoryChatHistory(session_id="my-project")

# Use with RunnableWithMessageHistory
chain_with_memory = RunnableWithMessageHistory(
    chain,
    lambda session_id: SuperLocalMemoryChatHistory(session_id=session_id),
    input_messages_key="input",
    history_messages_key="history",
)
```

## Memory retriever

```python
from langchain_superlocalmemory import SuperLocalMemoryRetriever

retriever = SuperLocalMemoryRetriever(k=5, score_threshold=0.3)
docs = retriever.invoke("authentication middleware patterns")
for doc in docs:
    print(f"{doc.page_content} (score: {doc.metadata['score']:.2f})")
```

## Privacy modes

| Mode | Privacy | Performance | Use Case |
|---|---|---|---|
| A | Maximum (zero cloud) | 74.8% LoCoMo | EU AI Act compliant |
| B | High (local Ollama) | Higher | Private + LLM assist |
| C | Standard (cloud LLM) | 87.7% LoCoMo | Maximum accuracy |
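Whatever the mode, the retriever's `k` and `score_threshold` parameters compose the same way: hits scoring below the threshold are dropped, and at most the `k` best-scoring hits are returned. A rough pure-Python sketch of those semantics (an illustration of the parameters, not the package's actual code — e.g. whether the threshold comparison is inclusive is an assumption here):

```python
def filter_hits(
    hits: list[tuple[str, float]], k: int, score_threshold: float
) -> list[tuple[str, float]]:
    """Keep hits scoring at or above the threshold, best first, at most k."""
    kept = [h for h in hits if h[1] >= score_threshold]
    kept.sort(key=lambda h: h[1], reverse=True)
    return kept[:k]


hits = [("auth middleware", 0.82), ("db pooling", 0.12), ("jwt rotation", 0.44)]
print(filter_hits(hits, k=5, score_threshold=0.3))
# → [('auth middleware', 0.82), ('jwt rotation', 0.44)]
```

Mode choice affects how scores are produced (and hence ranking quality), not this filtering contract.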
```python
# EU AI Act compliant (default)
history = SuperLocalMemoryChatHistory(session_id="s1", mode="a")

# Full power
history = SuperLocalMemoryChatHistory(session_id="s1", mode="c")
```

## License

MIT
Part of Qualixar | Author: Varun Pratap Bhardwaj