A feature-rich Discord bot for interacting with LLMs (Large Language Models) through thread-based conversations. Supports any OpenAI-compatible API including Ollama, OpenAI, OpenRouter, and more.
- Thread-Based Conversations - Each `/ask` command creates a dedicated thread with full context
- Multi-Provider Support - Works with Ollama, OpenAI, Claude, and any OpenAI-compatible API
- Smart Context Management - Automatic token counting and context window management
- Role-Based Access Control - Discord role-based permissions and model access
- Rate Limiting - Configurable request and token limits per role
- Redis-Backed - Fast, persistent storage with automatic TTL
- Hot-Reload - Update configuration without restarting
- Interactive Buttons - Regenerate, copy, clear context, change settings
- Usage Tracking - Monitor token usage with configurable retention
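The context-management feature can be pictured as trimming the oldest messages once a token budget is exceeded. A rough, self-contained sketch using a chars/4 estimate (the `Message` type and helper names here are hypothetical; the bot counts real tokens with tiktoken-go):

```go
package main

import "fmt"

// Message is a simplified chat message (illustrative shape, not the bot's actual type).
type Message struct {
	Role    string
	Content string
}

// estimateTokens is a crude approximation: roughly 4 characters per token.
// The real bot uses tiktoken-go for exact counts.
func estimateTokens(s string) int {
	return len(s)/4 + 1
}

// trimToBudget drops the oldest messages until the estimated total fits
// within maxTokens, always keeping the most recent message.
func trimToBudget(history []Message, maxTokens int) []Message {
	total := 0
	// Walk backwards from the newest message, accumulating cost.
	for i := len(history) - 1; i >= 0; i-- {
		cost := estimateTokens(history[i].Content)
		if total+cost > maxTokens && i < len(history)-1 {
			return history[i+1:]
		}
		total += cost
	}
	return history
}

func main() {
	history := []Message{
		{"user", "First, a very long question about Docker networking details..."},
		{"assistant", "A long answer that also costs several tokens..."},
		{"user", "Short follow-up"},
	}
	trimmed := trimToBudget(history, 10)
	fmt.Println(len(trimmed)) // the oldest message no longer fits the budget
}
```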
- Docker & Docker Compose (recommended)
- Go 1.23+ (for local development)
- Discord Bot Token (created in the Discord Developer Portal, see below)
- Ollama (optional, for local LLMs)
- Go to https://discord.com/developers/applications
- Click "New Application" and name it
- Go to the "Bot" section → "Add Bot"
- Enable the following intents:
  - Message Content Intent
  - Server Members Intent
- Copy the bot token
- Go to "OAuth2" → "URL Generator"
- Select scopes: `bot`, `applications.commands`
- Select bot permissions (or use the permission integer `274878295040`):
  - Read Messages/View Channels
  - Send Messages
  - Send Messages in Threads
  - Create Public Threads
  - Manage Threads
  - Embed Links
  - Read Message History
  - Use Slash Commands
- Copy the URL and open in browser to invite bot
- Enable Developer Mode in Discord (Settings → Advanced → Developer Mode)
- Right-click your server icon → Copy Server ID
```bash
# Clone or download this repository
cd discord-prompter

# Copy example config
cp config/config.example.yaml config/config.yaml

# Edit config with your guild ID
nano config/config.yaml
# Replace YOUR_GUILD_ID_HERE with your actual guild ID

# Create environment file
cp .env.example .env
nano .env
# Add your DISCORD_TOKEN
```

```bash
# Start services
docker-compose up -d

# View logs
docker-compose logs -f discord-prompter

# Stop services
docker-compose down
```

That's it! The bot should now be online in your Discord server.
Start a conversation:

```
/ask prompt:Explain Docker networking
```

Use a specific model:

```
/ask prompt:Write a Python function model:gpt-4o
```

Use a custom system prompt:

```
/ask prompt:Write a story system_prompt:creative
```

List available models:

```
/models
```

List system prompts:

```
/prompts
```

Check your usage:

```
/usage
```
Once a conversation thread is created, you can:
- Reply normally - Just send messages in the thread
- Regenerate - Re-run the last prompt
- Copy - Copy the response to clipboard
- Clear Context - Reset conversation history
- Settings - Change model or system prompt mid-conversation
`/reload` - Reload configuration without restarting (requires the `reload_config` permission)
```yaml
redis:
  address: "redis:6379"
  password_env: REDIS_PASSWORD

providers:
  - name: ollama-local
    base_url: http://host.docker.internal:11434/v1
    models:
      - id: llama3.2
        display_name: "Llama 3.2"

guilds:
  - id: "YOUR_GUILD_ID"
    enabled_models:
      - ollama-local/llama3.2

rbac:
  roles:
    - discord_role: "Member"
      permissions:
        - use_models
      allowed_models:
        - ollama-local/llama3.2
```

See `config/config.example.yaml` for full configuration options.
```bash
# Required
DISCORD_TOKEN=your_bot_token_here
REDIS_PASSWORD=your_redis_password

# Optional (for cloud LLM providers)
OPENAI_API_KEY=sk-...
OPENROUTER_API_KEY=sk-or-v1-...
ANTHROPIC_API_KEY=sk-ant-...
```

Configure permissions per Discord role:
```yaml
rbac:
  roles:
    - discord_role: "Admin"
      permissions:
        - unlimited_tokens
        - reload_config
      allowed_models: ["*"]    # All models

    - discord_role: "Member"
      permissions:
        - use_models
      allowed_models:
        - ollama-local/llama3.2
```

Available permissions:

- `use_models` - Can use AI models
- `manage_prompts` - Can manage system prompts
- `unlimited_rate` - Bypass rate limits
- `unlimited_tokens` - Bypass token limits
- `reload_config` - Can reload configuration
- `view_all_usage` - Can view all users' usage
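A model key such as `ollama-local/llama3.2` is checked against a role's `allowed_models` list, with `"*"` matching every model. A minimal sketch of that check (hypothetical helper, not the bot's actual RBAC code):

```go
package main

import "fmt"

// allowed reports whether a role may use the given "provider/model" key.
// "*" is the wildcard from allowed_models: ["*"] in the config above.
func allowed(allowedModels []string, model string) bool {
	for _, m := range allowedModels {
		if m == "*" || m == model {
			return true
		}
	}
	return false
}

func main() {
	member := []string{"ollama-local/llama3.2"}
	admin := []string{"*"}

	fmt.Println(allowed(member, "ollama-local/llama3.2")) // true
	fmt.Println(allowed(member, "openai/gpt-4o"))         // false
	fmt.Println(allowed(admin, "openai/gpt-4o"))          // true
}
```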
```yaml
rate_limits:
  default:
    requests_per_minute: 10
    requests_per_hour: 100
  roles:
    Admin:
      requests_per_minute: 0    # 0 = unlimited
```

```yaml
token_limits:
  default:
    tokens_per_period: 50000
    period_hours: 24    # Daily reset
  roles:
    Pro:
      tokens_per_period: 200000
      period_hours: 168    # Weekly reset
```

```bash
# Start Redis
docker run -d -p 6379:6379 redis:7-alpine
```
```bash
# Build and run
make build
make run

# Or just
go run ./cmd/bot --config config/config.yaml
```

```bash
# All tests
make test

# Specific package
go test -v ./internal/config/...

# With coverage
go test -cover ./...
```

```bash
# Method 1: Use /reload command in Discord (requires permission)
/reload

# Method 2: Send SIGHUP signal
docker-compose kill -s SIGHUP discord-prompter
```

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

```bash
ollama pull llama3.2
ollama pull codellama
ollama pull deepseek-coder
```

In `config/config.yaml`:
```yaml
providers:
  - name: ollama-local
    base_url: http://host.docker.internal:11434/v1    # Docker
    # base_url: http://localhost:11434/v1             # Local dev
    api_key_env: ""    # No auth needed
    models:
      - id: llama3.2
        display_name: "Llama 3.2"
        context_window: 8192
```

```
discord-prompter/
├── cmd/bot/              # Main application
├── internal/
│   ├── bot/              # Discord bot logic
│   ├── config/           # Configuration loading
│   ├── conversation/     # Thread management & context
│   ├── llm/              # LLM client & registry
│   ├── rbac/             # Role-based access control
│   ├── ratelimit/        # Rate & token limiting
│   └── storage/          # Redis storage layer
├── config/               # Configuration files
├── scripts/lua/          # Redis Lua scripts
└── docker-compose.yaml   # Docker setup
```
Bot not responding:
- Check `DISCORD_TOKEN` is correct
- Verify the bot has the required intents enabled
- Check the bot has permissions in the channel
Commands not appearing:
- Wait 5-10 minutes for Discord to register commands
- Try kicking and re-inviting the bot
- Check bot has "Use Slash Commands" permission
Rate limit errors:
- Check your role in `config.yaml`
- Verify the role name matches exactly (case-sensitive)
- Admins can bypass with the `unlimited_rate` permission
LLM connection errors:
- Ollama: verify it's running: `curl http://localhost:11434/v1/models`
- Docker: use `host.docker.internal` instead of `localhost`
- API keys: check that the environment variables are set
Redis connection failed:
- Check Redis is running: `docker-compose ps`
- Verify `REDIS_PASSWORD` matches in both services
- Check Redis is healthy: `docker-compose logs redis`
- Use local Ollama for unlimited, fast responses
- Set an appropriate `max_context_tokens` for your models
- Configure `conversation_ttl_hours` to clean up old threads
- Use role-based `token_limits` to manage costs
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Write tests for new features
- Ensure all tests pass: `make test`
- Submit a pull request
MIT License - see LICENSE file for details
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Built with discordgo
- Token counting via tiktoken-go
- Logging with zerolog