Memoh (/ˈmemoʊ/) is an always-on, containerized AI agent orchestrator. Create multiple AI bots, each running in its own isolated container with persistent memory, and interact with them across Telegram, Discord, and other channels. Bots can execute commands, edit files, browse the web, call external tools via MCP, and remember everything — like giving each bot its own computer and brain.
One-click install (requires Docker):
```sh
curl -fsSL https://memoh.sh | sh
```

Silent install with all defaults:

```sh
curl -fsSL ... | sh -s -- -y
```
Or manually:
```sh
git clone --depth 1 https://github.com/memohai/Memoh.git
cd Memoh
cp conf/app.docker.toml config.toml
# Edit config.toml
docker compose up -d
```

Install a specific version:
```sh
curl -fsSL https://memoh.sh | MEMOH_VERSION=v0.6.0 sh
```

Use the CN mirror for slow image pulls:
```sh
curl -fsSL https://memoh.sh | USE_CN_MIRROR=true sh
```

Do not run the whole installer with `sudo`. The installer will use `sudo docker` internally if Docker requires it. On macOS, or if your user is in the `docker` group, `sudo` is not required for Docker either.
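If piping a remote script straight into `sh` makes you uneasy, the same installer can be downloaded and reviewed first. This is a general pattern, not a documented Memoh workflow; the URL and the `-y` (accept all defaults) flag are the ones shown above:

```shell
# Download the installer, inspect it, then run it explicitly.
curl -fsSL https://memoh.sh -o memoh-install.sh
cat memoh-install.sh     # review what it will do before executing
sh memoh-install.sh -y   # -y: silent install with all defaults
```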
Visit http://localhost:8082 after startup. Default login: `admin` / `admin123`
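First startup can take a while (image pulls, migrations). A small wait loop saves refreshing the browser; port 8082 and plain HTTP are assumptions taken from the default above:

```shell
# Poll until the web UI answers on the default port (8082, per this
# README); adjust the URL if you changed it in config.toml.
until curl -sf -o /dev/null http://localhost:8082; do
  echo "waiting for Memoh on :8082 ..."
  sleep 2
done
echo "Memoh is ready: http://localhost:8082"
```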
See DEPLOYMENT.md for custom configuration and production setup.
Documentation entry points:
- About Memoh
- Providers & Models
- Bot Setup
- Sessions & Discuss Mode
- Channels
- Skills
- Supermarket
- Slash Commands
Memoh is built for always-on continuity — an AI that stays online, and a memory that stays yours.
- Lightweight & Fast: Built in Go as home/studio infrastructure; runs efficiently on edge devices.
- Containerized by default: Each bot gets an isolated container with its own filesystem, network, and tools.
- Hybrid split: Cloud inference for frontier model capability, local-first memory and indexing for privacy.
- Multi-user first: Explicit sharing and privacy boundaries across users and bots.
- Full graphical configuration: Configure bots, channels, MCP, skills, and all settings through a modern web UI — no coding required.
- 🤖 Multi-Bot & Multi-User: Create multiple bots that chat privately, in groups, or with each other. Bots distinguish individual users in group chats, remember each person's context, and support cross-platform identity binding.
- 📦 Containerized: Each bot runs in its own isolated containerd container with a dedicated filesystem and network — like having its own computer. Supports snapshots, data export/import, and versioning.
- 🗂️ Persistent File System: Every bot has a writable home directory that survives restarts, upgrades, and migrations. Bots can read, write, and organize files freely; you can browse, upload, download, and edit them visually through the web UI's file manager.
- 🧠 Memory Engineering: LLM-driven fact extraction, hybrid retrieval (dense + sparse + BM25), provider-based long-term memory, memory compaction, and separate session-level context compaction. Pluggable backends: Built-in (off / sparse / dense), Mem0, OpenViking.
- 💬 Broad Channel Coverage: Telegram, Discord, Lark (Feishu), QQ, Matrix, Misskey, DingTalk, WeCom, WeChat, WeChat Official Account, Email (Mailgun / SMTP / Gmail OAuth), and built-in Web UI.
- 🔧 MCP (Model Context Protocol): Full MCP support (HTTP / SSE / Stdio / OAuth). Connect external tool servers for extensibility; each bot manages its own independent MCP connections.
- 🌐 Browser Automation: Headless Chromium/Firefox via Playwright — navigate, click, fill forms, screenshot, read accessibility trees, manage tabs.
- 🎭 Skills, Supermarket & Subagents: Define bot behavior through modular skills, install curated skills and MCP templates from Supermarket, and delegate complex tasks to sub-agents with independent context.
- 💭 Sessions & Discuss Mode: Use chat, discuss, schedule, heartbeat, and subagent sessions with slash-command control and session status inspection.
- ⏰ Automation: Cron-based scheduled tasks and periodic heartbeat for autonomous bot activity.
- 🖥️ Web UI: Modern dashboard (Vue 3 + Tailwind CSS) — streaming chat, tool call visualization, file manager, visual configuration for all settings. Dark/light theme, i18n.
- 🔐 Access Control: Priority-based ACL rules with presets, allow/deny effects, and scope by channel identity, channel type, or conversation.
- 🧪 Multi-Model: OpenAI-compatible, Anthropic, Google, OpenAI Codex, GitHub Copilot, and Edge TTS providers. Per-bot model assignment, provider OAuth, and automatic model import.
- 🎙️ Speech & Transcription: Bots can speak through 10+ TTS providers (Edge, OpenAI, ElevenLabs, Deepgram, Azure, Google, MiniMax, Volcengine, Alibaba, OpenRouter) and listen — voice messages received from Telegram, Discord, etc. are auto-transcribed via STT models (OpenAI / OpenRouter), and bots can transcribe any audio file on demand through a built-in tool.
- 🚀 One-Click Deploy: Docker Compose with automatic migration, containerd setup, and CNI networking.
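To make the MCP bullet concrete: many MCP clients declare a Stdio tool server with a JSON shape like the one below. Memoh configures MCP connections through its web UI, so treat this purely as an illustration of what a server definition carries (a command, arguments, environment), not as Memoh's actual schema; the `filesystem` server name and path are hypothetical.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/bot"],
      "env": {}
    }
  }
}
```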
Memoh ships with a fully self-hosted memory engine out of the box — no external API, no SaaS dependency. Every bot remembers what you've told it across sessions, days, and platforms; in group chats, each user's memories are kept separately so the bot doesn't mix you up with the rest.
Three modes, switchable per bot from the web UI:
| Mode | Backend | When to use |
|---|---|---|
| Off | Plain file storage, no vector search | Small bots, debugging, or when you want minimal moving parts |
| Sparse | Neural sparse vectors via a local model + BM25 | Zero API cost, runs entirely on your machine, strong recall for short factual memories |
| Dense | Embedding model + Qdrant vector DB | Best semantic recall — finds memories by meaning, not just keywords |
Under the hood:
- LLM-driven fact extraction — every conversation turn is parsed, deduplicated, and stored as structured memories rather than raw transcripts.
- Hybrid retrieval — dense vectors, sparse vectors and BM25 are combined and re-ranked, so both "what was that API key" (lexical) and "the project I told you about last week" (semantic) hit reliably.
- Memory compaction — redundant or stale entries are periodically merged by an LLM, keeping the index small and recall sharp.
- Inspect & edit anything — browse, search, manually create/edit memories, rebuild the whole index, and visualize the vector manifold (Top-K distribution & CDF curves) from the web UI.
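To make the hybrid-retrieval point concrete, here is a toy rank-fusion sketch using reciprocal rank fusion (RRF), a common way to merge lexical and semantic rankings. It illustrates the general technique only, not Memoh's actual re-ranking code; the document names and the constant 60 are illustrative:

```shell
# Toy reciprocal rank fusion: each ranking contributes 1/(60+rank)
# per document; the higher fused score wins. Illustrative only.
printf 'doc_a\ndoc_b\ndoc_c\n' > dense.txt   # semantic ranking
printf 'doc_b\ndoc_c\ndoc_a\n' > bm25.txt    # lexical ranking
awk '{ score[$1] += 1 / (60 + FNR) } END { for (d in score) printf "%s %.5f\n", d, score[d] }' \
  dense.txt bm25.txt | sort -k2 -nr > fused.txt
cat fused.txt   # doc_b ranks first: it is near the top of both lists
```

A document that is merely decent in both rankings (doc_b) beats one that tops a single list, which is why the combined index recovers both keyword-ish and meaning-ish queries.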
If you'd rather plug into an existing memory service, Memoh also supports Mem0 (SaaS) and OpenViking (self-hosted or SaaS) as drop-in alternatives — same bot binding, same chat experience, just a different backend.
See the documentation for full setup details.
Screenshots: Chat · Container · Providers · File Manager · Scheduled Tasks · Token Usage
- Twilight AI — a lightweight, idiomatic AI SDK for Go, inspired by the Vercel AI SDK. Provider-agnostic (OpenAI, Anthropic, Google), with first-class streaming, tool calling, MCP support, and embeddings.
- 🌐 Website
- 📚 Documentation — setup, concepts, and guides
- 🤝 Cooperation — business@memoh.net
- 💬 Telegram Group — community chat & support
- 🛒 Supermarket — curated skills & MCP templates
LICENSE: AGPLv3
Made with ❤️ by the MemohAI Team.
Copyright (C) 2026 MemohAI (memoh.ai). All rights reserved.