Merged
23 changes: 22 additions & 1 deletion README.md
@@ -81,6 +81,27 @@ showcasing its capabilities in **information extraction**, **temporal and cross-

## 🚀 Getting Started

### ⭐️ MemOS online API
The easiest way to use MemOS. Equip your agent with memory **in minutes**!

Sign up and get started on the [`MemOS dashboard`](https://memos-dashboard.openmem.net/cn/quickstart/?source=landing).


### Self-Hosted Server
1. Clone the repository and install the dependencies.
```bash
git clone https://github.com/MemTensor/MemOS.git
cd MemOS
pip install -r ./docker/requirements.txt
```

2. Copy `docker/.env.example` to `MemOS/.env` and fill in the required values.
3. Start the service.
```bash
uvicorn memos.api.server_api:app --host 0.0.0.0 --port 8001 --workers 8
```

### Local SDK
Here's a quick example of how to create a **`MemCube`**, load it from a directory, access its memories, and save it.

```python
@@ -102,7 +123,7 @@ for item in mem_cube.act_mem.get_all():
mem_cube.dump("tmp/mem_cube")
```

What about **`MOS`** (Memory Operating System)? It's a higher-level orchestration layer that manages multiple MemCubes and provides a unified API for memory operations. Here's a quick example of how to use MOS:
**`MOS`** (Memory Operating System) is a higher-level orchestration layer that manages multiple MemCubes and provides a unified API for memory operations. Here's a quick example of how to use MOS:

```python
from memos.configs.mem_os import MOSConfig
69 changes: 50 additions & 19 deletions docker/.env.example
@@ -1,29 +1,60 @@
# MemOS Environment Variables Configuration
TZ=Asia/Shanghai

# Path to memory storage (e.g. /tmp/data_test)
MOS_CUBE_PATH=
MOS_CUBE_PATH="/tmp/data_test" # Path to memory storage (e.g. /tmp/data_test)
MOS_ENABLE_DEFAULT_CUBE_CONFIG="true" # Enable default cube config (true/false)

# OpenAI Configuration
OPENAI_API_KEY= # Your OpenAI API key
OPENAI_API_BASE= # OpenAI API base URL (default: https://api.openai.com/v1)
OPENAI_API_KEY="sk-xxx" # Your OpenAI API key
OPENAI_API_BASE="http://xxx" # OpenAI API base URL (default: https://api.openai.com/v1)

# MemOS Feature Toggles
MOS_ENABLE_DEFAULT_CUBE_CONFIG= # Enable default cube config (true/false)
MOS_ENABLE_SCHEDULER= # Enable background scheduler (true/false)
# MemOS Chat Model Configuration
MOS_CHAT_MODEL=gpt-4o-mini
MOS_CHAT_TEMPERATURE=0.8
MOS_MAX_TOKENS=8000
MOS_TOP_P=0.9
MOS_TOP_K=50
MOS_CHAT_MODEL_PROVIDER=openai

# Neo4j Configuration
NEO4J_URI= # Neo4j connection URI (e.g. bolt://localhost:7687)
NEO4J_USER= # Neo4j username
NEO4J_PASSWORD= # Neo4j password
MOS_NEO4J_SHARED_DB= # Shared Neo4j database name (if using multi-db)
# Graph DB Configuration (Neo4j)
NEO4J_BACKEND=xxx
NEO4J_URI=bolt://xxx
NEO4J_USER=xxx
NEO4J_PASSWORD=xxx
MOS_NEO4J_SHARED_DB=xxx
NEO4J_DB_NAME=xxx

# Text memory reorganization
MOS_ENABLE_REORGANIZE=false

# MemOS User Configuration
MOS_USER_ID= # Unique user ID
MOS_SESSION_ID= # Session ID for current chat
MOS_MAX_TURNS_WINDOW= # Max number of turns to keep in memory
MOS_USER_ID=root
MOS_SESSION_ID=default_session
MOS_MAX_TURNS_WINDOW=20

# MemRader Configuration
MEMRADER_MODEL=gpt-4o-mini
MEMRADER_API_KEY=sk-xxx
MEMRADER_API_BASE=http://xxx:3000/v1
MEMRADER_MAX_TOKENS=5000

# Embedding & Rerank Configuration
EMBEDDING_DIMENSION=1024
MOS_EMBEDDER_BACKEND=universal_api
MOS_EMBEDDER_MODEL=bge-m3
MOS_EMBEDDER_API_BASE=http://xxx
MOS_EMBEDDER_API_KEY=EMPTY
MOS_RERANKER_BACKEND=http_bge
MOS_RERANKER_URL=http://xxx
# Ollama Configuration (for embeddings)
# OLLAMA_API_BASE=http://xxx

# Ollama Configuration (for local embedding models)
OLLAMA_API_BASE= # Ollama API base URL (e.g. http://localhost:11434)
# Milvus Configuration (for preference memory)
MILVUS_URI=http://xxx
MILVUS_USER_NAME=xxx
MILVUS_PASSWORD=xxx

# Embedding Configuration
MOS_EMBEDDER_BACKEND= # Embedding backend: openai, ollama, etc.
# Preference Memory Configuration
ENABLE_PREFERENCE_MEMORY=true
RETURN_ORIGINAL_PREF_MEM=true
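For reference, a minimal sketch (not from the repo) of how boolean toggles such as `ENABLE_PREFERENCE_MEMORY` and `MOS_ENABLE_REORGANIZE` might be read at runtime; `env_flag` is a hypothetical helper, and MemOS's actual parsing may differ:

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Interpret an environment variable as a boolean flag."""
    raw = os.getenv(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

# Demo: set one toggle, leave the other unset.
os.environ["ENABLE_PREFERENCE_MEMORY"] = "true"
os.environ.pop("MOS_ENABLE_REORGANIZE", None)

print(env_flag("ENABLE_PREFERENCE_MEMORY"))  # True
print(env_flag("MOS_ENABLE_REORGANIZE"))     # False (unset, default used)
```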
23 changes: 0 additions & 23 deletions evaluation/.env-example
@@ -22,26 +22,3 @@ SUPERMEMORY_API_KEY="sm_xxx"
MEMOBASE_API_KEY="xxx"
MEMOBASE_PROJECT_URL="http://***.***.***.***:8019"

# eval settings
PRE_SPLIT_CHUNK=false

# Configuration Only For Scheduler
# RabbitMQ Configuration
MEMSCHEDULER_RABBITMQ_HOST_NAME=rabbitmq-cn-***.cn-***.amqp-32.net.mq.amqp.aliyuncs.com
MEMSCHEDULER_RABBITMQ_USER_NAME=***
MEMSCHEDULER_RABBITMQ_PASSWORD=***
MEMSCHEDULER_RABBITMQ_VIRTUAL_HOST=memos
MEMSCHEDULER_RABBITMQ_ERASE_ON_CONNECT=true
MEMSCHEDULER_RABBITMQ_PORT=5672

# OpenAI Configuration
MEMSCHEDULER_OPENAI_API_KEY=sk-***
MEMSCHEDULER_OPENAI_BASE_URL=http://***.***.***.***:3000/v1
MEMSCHEDULER_OPENAI_DEFAULT_MODEL=gpt-4o-mini

# Graph DB Configuration
MEMSCHEDULER_GRAPHDBAUTH_URI=bolt://localhost:7687
MEMSCHEDULER_GRAPHDBAUTH_USER=neo4j
MEMSCHEDULER_GRAPHDBAUTH_PASSWORD=***
MEMSCHEDULER_GRAPHDBAUTH_DB_NAME=neo4j
MEMSCHEDULER_GRAPHDBAUTH_AUTO_CREATE=true
5 changes: 1 addition & 4 deletions evaluation/README.md
@@ -16,10 +16,7 @@ This repository provides tools and scripts for evaluating the `LoCoMo`, `LongMem
```

## Configuration

1. Copy the `.env-example` file to `.env`, and fill in the required environment variables according to your environment and API keys.

2. Copy the `configs-example/` directory to a new directory named `configs/`, and modify the configuration files inside it as needed. This directory contains model and API-specific settings.
Copy the `.env-example` file to `.env`, and fill in the required environment variables according to your environment and API keys.
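After copying `.env-example` to `.env`, a quick sanity check can catch keys left blank. This hypothetical snippet is not part of the repo; the variable names come from the example file, and `missing_vars` is an illustrative helper:

```python
import os

# Keys listed in evaluation/.env-example; extend with whatever your setup needs.
REQUIRED = ["SUPERMEMORY_API_KEY", "MEMOBASE_API_KEY", "MEMOBASE_PROJECT_URL"]

def missing_vars(required=REQUIRED):
    """Return names of required environment variables that are unset or blank."""
    return [name for name in required if not os.getenv(name)]

# Demo values: two keys present, one deliberately left unset.
os.environ.update({"SUPERMEMORY_API_KEY": "sm_xxx", "MEMOBASE_API_KEY": "xxx"})
os.environ.pop("MEMOBASE_PROJECT_URL", None)
print(missing_vars())  # ['MEMOBASE_PROJECT_URL']
```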

## Setup MemOS
### Local Server
51 changes: 0 additions & 51 deletions evaluation/configs-example/mem_cube_config.json

This file was deleted.

51 changes: 0 additions & 51 deletions evaluation/configs-example/mos_memos_config.json

This file was deleted.

42 changes: 16 additions & 26 deletions evaluation/scripts/utils/client.py
@@ -250,33 +250,23 @@ def search(self, query, user_id, top_k):
preference_note = json.loads(response.text)["data"]["preference_note"]
for i in text_mem_res:
i.update({"memory": i.pop("memory_value")})
explicit_pref_string = "Explicit Preference:"
implicit_pref_string = "\n\nImplicit Preference:"
explicit_idx = 0
implicit_idx = 0
for pref in pref_mem_res:
if pref["preference_type"] == "explicit_preference":
explicit_pref_string += f"\n{explicit_idx + 1}. {pref['preference']}"
explicit_idx += 1
if pref["preference_type"] == "implicit_preference":
implicit_pref_string += f"\n{implicit_idx + 1}. {pref['preference']}"
implicit_idx += 1

return {
"text_mem": [{"memories": text_mem_res}],
"pref_string": explicit_pref_string + implicit_pref_string + preference_note,
}

explicit_prefs = [
p["preference"]
for p in pref_mem_res
if p.get("preference_type", "") == "explicit_preference"
]
implicit_prefs = [
p["preference"]
for p in pref_mem_res
if p.get("preference_type", "") == "implicit_preference"
]

pref_parts = []
if explicit_prefs:
pref_parts.append(
"Explicit Preference:\n"
+ "\n".join(f"{i + 1}. {p}" for i, p in enumerate(explicit_prefs))
)
if implicit_prefs:
pref_parts.append(
"Implicit Preference:\n"
+ "\n".join(f"{i + 1}. {p}" for i, p in enumerate(implicit_prefs))
)

pref_string = "\n".join(pref_parts) + preference_note

return {"text_mem": [{"memories": text_mem_res}], "pref_string": pref_string}
except Exception as e:
if attempt < max_retries - 1:
time.sleep(2**attempt)
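The refactor replaces manual index bookkeeping with comprehensions and `str.join`, and only emits a section header when that preference type is non-empty. A standalone sketch of the same assembly logic, using made-up sample data in place of a real API response:

```python
# Sample data standing in for the parsed API response (values are illustrative).
pref_mem_res = [
    {"preference_type": "explicit_preference", "preference": "Prefers concise answers"},
    {"preference_type": "implicit_preference", "preference": "Often asks about Python"},
]
preference_note = "\n(Note: preferences are ranked by recency.)"

explicit_prefs = [
    p["preference"] for p in pref_mem_res
    if p.get("preference_type") == "explicit_preference"
]
implicit_prefs = [
    p["preference"] for p in pref_mem_res
    if p.get("preference_type") == "implicit_preference"
]

pref_parts = []
if explicit_prefs:
    pref_parts.append(
        "Explicit Preference:\n"
        + "\n".join(f"{i + 1}. {p}" for i, p in enumerate(explicit_prefs))
    )
if implicit_prefs:
    pref_parts.append(
        "Implicit Preference:\n"
        + "\n".join(f"{i + 1}. {p}" for i, p in enumerate(implicit_prefs))
    )

pref_string = "\n".join(pref_parts) + preference_note
print(pref_string)
```

Unlike the deleted version, an empty `pref_mem_res` now yields no dangling "Explicit Preference:"/"Implicit Preference:" headers, just the note.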
2 changes: 1 addition & 1 deletion src/memos/log.py
@@ -187,7 +187,7 @@ def close(self):
},
"handlers": {
"console": {
"level": "DEBUG",
"level": selected_log_level,
"class": "logging.StreamHandler",
"stream": stdout,
"formatter": "no_datetime",
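The change swaps the hard-coded `"DEBUG"` console level for a `selected_log_level` variable. A self-contained sketch of the pattern with `logging.config.dictConfig`; here `selected_log_level` is simply assigned, standing in for however the project actually derives it:

```python
import logging
import logging.config
from sys import stdout

selected_log_level = "INFO"  # previously hard-coded to "DEBUG"

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "no_datetime": {"format": "%(name)s - %(levelname)s - %(message)s"},
    },
    "handlers": {
        "console": {
            "level": selected_log_level,  # the handler now respects the variable
            "class": "logging.StreamHandler",
            "stream": stdout,
            "formatter": "no_datetime",
        },
    },
    "root": {"handlers": ["console"], "level": "DEBUG"},
}

logging.config.dictConfig(LOGGING)
logger = logging.getLogger("memos.demo")
logger.debug("suppressed: below the console handler's INFO threshold")
logger.info("visible at INFO")
```

With the handler level driven by a variable, operators can quiet console output without editing code.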