A multi-platform AI agent bot for Slack, Telegram, and Discord — based on pi-mom, with the goal of merging improvements back upstream.
This project is a forked and extended version of the mom package from badlogic/pi-mono by Mario Zechner, licensed under MIT.
- Original project: pi-mom (22K+ stars)
- Base version: forked from pi-mom v0.57.1 (synchronized with `@mariozechner/*` packages)
- Primary motivation: internal services urgently needed a multi-platform bot — this fork enables rapid iteration while preparing changes to contribute back upstream
| Aspect | Description |
|---|---|
| Current Status | Temporary standalone fork for urgent internal deployment |
| Ultimate Goal | Merge all improvements back into pi-mono monorepo |
| Unique Value | Multi-platform support (Slack + Telegram + Discord) to be contributed upstream |
Our internal services urgently needed a multi-platform bot, and we couldn't wait for upstream release cycles. This fork allows us to:
- Ship fast: Deploy to production immediately while internal demand is high
- Iterate freely: Test multi-platform adapters (Slack, Telegram, Discord) without monorepo constraints
- Contribute back: all work here is intended to be merged into pi-mono — `mama` is not a replacement for `mom`
"This is not a separate product — it's a temporary fork for urgent internal needs, and all improvements will be contributed back to pi-mono."
We actively track the upstream pi-mom and plan to:
- ✅ Submit PRs for platform adapters (Telegram, Discord)
- ✅ Contribute cross-platform abstractions
- ✅ Keep dependencies synchronized with pi-mono releases
- ✅ Document what we learn from production use
- Multi-platform — Slack, Telegram, and Discord adapters out of the box
- Persistent sessions — session behavior is adapted per platform instead of forcing one thread model everywhere
- Concurrent conversations — Slack threads, Discord replies/threads, and Telegram reply chains can run independently
- Sandbox execution — run agent commands on host or inside a Docker container
- Persistent memory — workspace-level and channel-level `MEMORY.md` files
- Skills — drop custom CLI tools into `skills/` directories
- Event system — schedule one-shot or recurring tasks via JSON files
- Multi-provider — configure any provider/model supported by `pi-ai`
| Platform | User Interaction Structure | sessionKey Rule | Default Session Model | Special Handling Needed | Notes |
|---|---|---|---|---|---|
| Slack | channel top-level + thread replies | top-level: `channelId`; thread: `channelId:threadTs` | channel keeps one persistent session; thread forks from channel into its own session | High | channel -> thread inherits context via fork; thread -> channel does not merge back automatically |
| Discord | normal messages, replies, thread channels | `channelId:threadTsOrMsgId` | replies / thread channels naturally map to isolated sessions | Low | no aliasing layer needed; session identity is determined directly from the Discord event |
| Telegram | private chats, group replies | private chat: `chatId`; group reply chain: `chatId:replyToIdOrMsgId` | private chats use one long session; groups split by reply chain | Medium | Telegram has no native thread model; group sessions are modeled from reply chains |
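The key-derivation rules in the table can be sketched roughly as follows. The `IncomingMessage` shape and the `sessionKeyFor` helper are illustrative assumptions for this sketch, not mama's actual internals:

```typescript
// Hypothetical sketch of the sessionKey rules above; names are illustrative.
type Platform = "slack" | "discord" | "telegram";

interface IncomingMessage {
  platform: Platform;
  channelId: string;       // Slack channel / Discord channel / Telegram chat id
  threadId?: string;       // Slack threadTs, Discord thread/msg id, Telegram replyTo/msg id
  isPrivateChat?: boolean; // Telegram only
}

function sessionKeyFor(msg: IncomingMessage): string {
  switch (msg.platform) {
    case "slack":
      // top-level: channelId; thread: channelId:threadTs
      return msg.threadId ? `${msg.channelId}:${msg.threadId}` : msg.channelId;
    case "discord":
      // channelId:threadTsOrMsgId, derived directly from the Discord event
      return msg.threadId ? `${msg.channelId}:${msg.threadId}` : msg.channelId;
    case "telegram":
      // private chat: chatId; group reply chain: chatId:replyToIdOrMsgId
      if (msg.isPrivateChat) return msg.channelId;
      return msg.threadId ? `${msg.channelId}:${msg.threadId}` : msg.channelId;
  }
}

console.log(sessionKeyFor({ platform: "slack", channelId: "C01", threadId: "1700.42" }));
// prints: C01:1700.42
```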
- Node.js >= 20
- One of the platform integrations below
```shell
npm install -g @geminixiang/mama
```

Or run directly after cloning:

```shell
npm install
npm run build
```

- Create a Slack app with Socket Mode enabled (setup guide).
- Add the following OAuth Bot Token Scopes:
  - `app_mentions:read`, `channels:history`, `channels:read`, `chat:write`
  - `files:read`, `files:write`, `groups:history`, `groups:read`
  - `im:history`, `im:read`, `im:write`, `users:read`
  - `assistant:write` — required for native "Thinking" status indicator
- Enable the Home Tab and Agent mode:
- App Home → Show Tabs — toggle Home Tab on
- App Home → Agents & AI Apps — toggle Agent or Assistant on
- Subscribe to Bot Events:
  - `app_home_opened`, `app_mention`
  - `assistant_thread_context_changed`, `assistant_thread_started`
  - `message.channels`, `message.groups`, `message.im`
- Enable Interactivity (Settings → Interactivity & Shortcuts → toggle on).
- Copy the App-Level Token (`xapp-…`) and Bot Token (`xoxb-…`).
Or import this App Manifest directly (Settings → App Manifest → paste JSON):
Example App Manifest:

```json
{
  "display_information": {
    "name": "mama"
  },
  "features": {
    "app_home": {
      "home_tab_enabled": true,
      "messages_tab_enabled": false,
      "messages_tab_read_only_enabled": false
    },
    "bot_user": {
      "display_name": "mama",
      "always_online": false
    }
  },
  "oauth_config": {
    "scopes": {
      "bot": [
        "app_mentions:read",
        "assistant:write",
        "channels:history",
        "channels:read",
        "chat:write",
        "files:read",
        "files:write",
        "groups:history",
        "groups:read",
        "im:history",
        "im:read",
        "im:write",
        "users:read"
      ]
    }
  },
  "settings": {
    "event_subscriptions": {
      "bot_events": [
        "app_home_opened",
        "app_mention",
        "assistant_thread_context_changed",
        "assistant_thread_started",
        "message.channels",
        "message.groups",
        "message.im"
      ]
    },
    "interactivity": {
      "is_enabled": true
    },
    "org_deploy_enabled": false,
    "socket_mode_enabled": true,
    "token_rotation_enabled": false
  }
}
```

```shell
export MOM_SLACK_APP_TOKEN=xapp-...
export MOM_SLACK_BOT_TOKEN=xoxb-...
mama [--sandbox=host|docker:<container>] <working-directory>
```

The bot responds when @mentioned in any channel or via DM.
- Top-level channel messages — share one persistent channel session.
- Thread replies — fork from the channel session into an isolated thread session.
- Thread memory — inherited at fork time only; thread changes do not merge back into the channel automatically.
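The fork-at-thread-start behavior above can be sketched with a minimal in-memory model. The `getSlackSession` helper and `Map`-based store are hypothetical, not mama's real session layer:

```typescript
// Illustrative model: a thread session copies the channel session once at
// creation time; later thread messages never merge back into the channel.
const sessions = new Map<string, string[]>(); // sessionKey -> message history

function getSlackSession(channelId: string, threadTs?: string): string[] {
  const key = threadTs ? `${channelId}:${threadTs}` : channelId;
  if (!sessions.has(key)) {
    // New thread sessions fork (copy) the channel session exactly once.
    const parent = threadTs ? sessions.get(channelId) ?? [] : [];
    sessions.set(key, [...parent]);
  }
  return sessions.get(key)!;
}

getSlackSession("C01").push("channel msg 1");
const thread = getSlackSession("C01", "1700.42"); // forks ["channel msg 1"]
thread.push("thread reply");
console.log(getSlackSession("C01").length); // prints: 1 (channel unchanged)
```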
- Message @BotFather → `/newbot` to create a bot and get the Bot Token.
- Optionally disable privacy mode (`/setprivacy` → Disable) so the bot can read group messages without being `@mentioned`.
```shell
export MOM_TELEGRAM_BOT_TOKEN=123456:ABC-...
mama [--sandbox=host|docker:<container>] <working-directory>
```

- Private chats — every message is forwarded to the bot automatically.
- Group chats — the bot only responds when `@mentioned` by username.
- Reply chains — replying to a previous message continues the same session.
- Say `stop` or `/stop` to cancel a running task.
- Go to the Discord Developer Portal → New Application.
- Under Bot, enable Message Content Intent (required to read message text).
- Under OAuth2 → URL Generator, select scope `bot` plus permissions Send Messages, Read Message History, and Attach Files. Invite the bot to your server with the generated URL.
- Copy the Bot Token.
```shell
export MOM_DISCORD_BOT_TOKEN=MTI...
mama [--sandbox=host|docker:<container>] <working-directory>
```

- Server channels — the bot responds when `@mentioned`.
- DMs — every message is forwarded automatically.
- Threads — messages inside a Discord thread share a single session.
- Reply chains — replying to a message continues that session.
- Say `stop` or `/stop` to cancel a running task.
| Option | Default | Description |
|---|---|---|
| `--sandbox=host` | ✓ | Run commands directly on host |
| `--sandbox=docker:<name>` | | Run commands inside a Docker container |
| `--sandbox=firecracker:<vm-id>:<path>` | | Run commands inside a Firecracker microVM |
| `--download <channel-id>` | | Download channel history to stdout and exit (Slack only) |
```shell
mama --download C0123456789
```

Create `settings.json` in your working directory to override defaults:

```json
{
  "provider": "anthropic",
  "model": "claude-sonnet-4-5",
  "thinkingLevel": "off",
  "sessionScope": "thread",
  "logFormat": "console",
  "logLevel": "info",
  "sentryDsn": "https://examplePublicKey@o0.ingest.sentry.io/0"
}
```

| Field | Default | Description |
|---|---|---|
| `provider` | `anthropic` | AI provider (env: `MOM_AI_PROVIDER`) |
| `model` | `claude-sonnet-4-5` | Model name (env: `MOM_AI_MODEL`) |
| `thinkingLevel` | `off` | `off` / `low` / `medium` / `high` |
| `sessionScope` | `thread` | `thread` (per thread/reply chain) or `channel` |
| `logFormat` | `console` | `console` (colored stdout) or `json` (GCP Cloud Logging) |
| `logLevel` | `info` | `trace` / `debug` / `info` / `warn` / `error` |
| `sentryDsn` | unset | Sentry DSN (preferred over env `SENTRY_DSN`) |
When sentryDsn is set, mama sends Sentry events with sensitive prompt/tool content redacted before upload.
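As a rough illustration of this kind of pre-upload redaction (the `AgentEvent` fields and `redactForSentry` helper are invented for this sketch; mama's actual event schema may differ):

```typescript
// Hypothetical sketch: strip sensitive prompt/tool content before a Sentry
// upload, keeping only the error description. Not mama's real schema.
interface AgentEvent {
  error: string;      // safe to send
  prompt?: string;    // sensitive: user text
  toolInput?: string; // sensitive: tool arguments
  toolOutput?: string; // sensitive: tool results
}

function redactForSentry(ev: AgentEvent): AgentEvent {
  const REDACTED = "[redacted]";
  return {
    error: ev.error,
    prompt: ev.prompt === undefined ? undefined : REDACTED,
    toolInput: ev.toolInput === undefined ? undefined : REDACTED,
    toolOutput: ev.toolOutput === undefined ? undefined : REDACTED,
  };
}

const safe = redactForSentry({ error: "tool failed", prompt: "secret plans" });
console.log(safe.prompt); // prints: [redacted]
```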
Set logFormat: "json" to send structured logs directly to Cloud Logging via API — no Ops Agent or log file configuration needed.
Requirements:

- VM service account has `roles/logging.logWriter`
- `GOOGLE_CLOUD_PROJECT` env var is set

```shell
GOOGLE_CLOUD_PROJECT=<your-project-id> mama <working-directory>
```

`settings.json`:

```json
{
  "logFormat": "json",
  "logLevel": "info"
}
```

Logs appear in Cloud Logging under Log name: `mama`. Console output (stdout) is unaffected and continues to work alongside Cloud Logging.
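A structured JSON log line of the kind Cloud Logging ingests looks roughly like this; the `jsonLogLine` helper and its severity mapping are an illustrative sketch, not mama's actual logger:

```typescript
// Sketch of a Cloud Logging-style structured log line. "severity", "message",
// and "timestamp" follow Cloud Logging's structured-log conventions; mama's
// exact payload may differ.
type Level = "trace" | "debug" | "info" | "warn" | "error";

const SEVERITY: Record<Level, string> = {
  trace: "DEBUG",
  debug: "DEBUG",
  info: "INFO",
  warn: "WARNING",
  error: "ERROR",
};

function jsonLogLine(level: Level, message: string, extra: Record<string, unknown> = {}): string {
  return JSON.stringify({
    severity: SEVERITY[level],
    message,
    timestamp: new Date().toISOString(),
    ...extra,
  });
}

console.log(jsonLogLine("info", "bot started", { channel: "C0123456789" }));
```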
```
<working-directory>/
├── settings.json        # AI provider/model/Sentry config
├── MEMORY.md            # Global memory (all channels)
├── SYSTEM.md            # Installed packages / env changes log
├── skills/              # Global skills (CLI tools)
├── events/              # Scheduled event files
└── <channel-id>/
    ├── MEMORY.md        # Channel-specific memory
    ├── log.jsonl        # Full message history
    ├── attachments/     # Downloaded user files
    ├── scratch/         # Agent working directory
    ├── skills/          # Channel-specific skills
    └── sessions/
        ├── current      # Pointer for the channel-level session
        ├── 2026-04-05T18-04-31-010Z_1d92b3ad.jsonl
        └── <thread-ts>.jsonl   # Fixed-path thread session
```
```shell
# Create a container (mount your working directory to /workspace)
docker run -d --name mama-sandbox \
  -v /path/to/workspace:/workspace \
  alpine:latest sleep infinity

# Start mama with Docker sandbox
mama --sandbox=docker:mama-sandbox /path/to/workspace
```

Firecracker provides lightweight VM isolation with the security benefits of a hypervisor. Unlike Docker containers, Firecracker runs a full Linux kernel, providing stronger isolation.
- SSH access to the Firecracker VM
- SSH key-based authentication configured
- Host workspace must be mounted at `/workspace` inside the VM

```
--sandbox=firecracker:<vm-id>:<host-path>[:<ssh-user>[:<ssh-port>]]
```
| Parameter | Default | Description |
|---|---|---|
| `vm-id` | - | VM identifier (hostname or IP) |
| `host-path` | - | Working directory on the host |
| `ssh-user` | `root` | SSH username |
| `ssh-port` | `22` | SSH port |
```shell
# Basic usage (VM at 192.168.1.100, default ssh user root:22)
mama --sandbox=firecracker:192.168.1.100:/home/user/workspace /home/user/workspace

# Custom SSH user
mama --sandbox=firecracker:192.168.1.100:/home/user/workspace:ubuntu /home/user/workspace

# Custom SSH port
mama --sandbox=firecracker:192.168.1.100:/home/user/workspace:root:2222 /home/user/workspace
```

1. Start a Firecracker VM with your preferred method (fc-agent, firecracker-ctl, or manual).

2. Configure SSH access inside the VM:

   ```shell
   # Inside the VM - allow password-less SSH for mama
   sudo systemctl enable ssh
   sudo sed -i 's/^#*PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
   sudo sed -i 's/^#*PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
   sudo systemctl restart ssh
   ```

3. Mount your workspace at `/workspace` inside the VM:

   ```shell
   # Option A: 9pfs (recommended, from host)
   sudo mount -t 9p -o trans=virtio,version=9p2000.L host0 /workspace

   # Option B: NFS
   sudo mount -t nfs <host-ip>:/path/to/workspace /workspace
   ```

4. Test SSH connectivity from host:

   ```shell
   ssh root@192.168.1.100 "echo works"
   ```
The host path is mounted as /workspace inside the Firecracker VM. All bash commands will execute inside the VM.
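The SSH dispatch can be sketched as follows; `firecrackerSshArgs` is a hypothetical helper that mirrors the option syntax above, not mama's actual implementation:

```typescript
// Illustrative: parse "<vm-id>:<host-path>[:<ssh-user>[:<ssh-port>]]" and
// build the ssh argument vector for running a command in /workspace.
// Execute with: child_process.execFileSync("ssh", args) against a real VM.
function firecrackerSshArgs(spec: string, command: string): string[] {
  const [vmId, hostPath, sshUser = "root", sshPort = "22"] = spec.split(":");
  void hostPath; // the host path is mounted as /workspace inside the VM
  return ["-p", sshPort, `${sshUser}@${vmId}`, `cd /workspace && ${command}`];
}

const args = firecrackerSshArgs("192.168.1.100:/home/user/workspace:ubuntu", "ls");
console.log(args.join(" "));
// prints: -p 22 ubuntu@192.168.1.100 cd /workspace && ls
```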
Drop JSON files into `<working-directory>/events/` to trigger the agent:
```jsonc
// Immediate — triggers as soon as mama sees the file
{"type": "immediate", "channelId": "C0123456789", "text": "New deployment finished"}

// One-shot — triggers once at a specific time
{"type": "one-shot", "channelId": "C0123456789", "text": "Daily standup reminder", "at": "2025-12-15T09:00:00+08:00"}

// Periodic — triggers on a cron schedule
{"type": "periodic", "channelId": "C0123456789", "text": "Check inbox", "schedule": "0 9 * * 1-5", "timezone": "Asia/Taipei"}
```

Create reusable CLI tools by adding a directory with a `SKILL.md`:
```
skills/
└── my-tool/
    ├── SKILL.md   # name + description frontmatter, usage docs
    └── run.sh     # the actual script
```
`SKILL.md` frontmatter:

```markdown
---
name: my-tool
description: Does something useful
---
Usage: {baseDir}/run.sh <args>
```

```shell
npm run dev     # watch mode
npm test        # run tests
npm run build   # production build
```

| Package | mama Version | pi-mom Synced Version |
|---|---|---|
| `@mariozechner/pi-agent-core` | `^0.57.1` | ✅ Synchronized |
| `@mariozechner/pi-ai` | `^0.57.1` | ✅ Synchronized |
| `@mariozechner/pi-coding-agent` | `^0.57.1` | ✅ Synchronized |
| `@anthropic-ai/sandbox-runtime` | `^0.0.40` | |
MIT — see LICENSE.
Note: This project inherits the MIT license from pi-mom and aims to keep its contributions compatible with the upstream ecosystem.