Configuration
How to configure GolemCore Bot.
See also: Dashboard, Tools, MCP, Webhooks, Model Routing
There are three main configuration surfaces:
| Surface | File | Best For |
|---|---|---|
| Runtime config | `preferences/runtime-config.json` | LLM providers, model router, tools, security, auto mode, memory |
| User preferences | `preferences/settings.json` | Language, timezone, per-user tier and model overrides, webhook config |
| Spring properties | `application.properties` / env vars | Workspace paths, feature flags, plugin runtime, update behavior |
The bot stores all state under a base path:
- Spring property: `bot.storage.local.base-path`
- Docker or JAR env var: `STORAGE_PATH`
In Docker, you almost always want this mounted:
```shell
docker run -d \
  -e STORAGE_PATH=/app/workspace \
  -v golemcore-bot-data:/app/workspace \
  -p 8080:8080 \
  golemcore-bot:latest
```

The easiest way to configure the bot is via:

`http://localhost:8080/dashboard`
See Dashboard for the UI flow.
Runtime config is persisted to the workspace:
- File: `preferences/runtime-config.json`
- Dashboard API: `GET /api/settings/runtime`, `PUT /api/settings/runtime`

Secrets can be provided as plain strings in JSON and are wrapped internally as `Secret`.
Configure provider credentials under `llm.providers`.

- Provider key is the model prefix in `provider/model`
- `apiType` selects the wire protocol used by the adapter
- Supported `apiType` values: `openai`, `anthropic`, `gemini`
```json
{
  "llm": {
    "providers": {
      "openai": {
        "apiKey": "sk-proj-...",
        "apiType": "openai",
        "baseUrl": null,
        "requestTimeoutSeconds": 300
      },
      "anthropic": {
        "apiKey": "sk-ant-...",
        "apiType": "anthropic"
      },
      "google": {
        "apiKey": "AIza...",
        "apiType": "gemini"
      }
    }
  }
}
```

Notes:

- use lowercase `apiType`
- `baseUrl` is optional
- these same provider profiles are used by the dashboard Model Catalog for live discovery via `/api/models/discover/{provider}`
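Before restarting the bot, a runtime-config fragment can be sanity-checked against these rules with a few lines of standalone Python. This script is illustrative and not part of the bot; it only encodes the supported `apiType` values listed above.

```python
import json

# Supported apiType values, per the runtime config schema above.
SUPPORTED_API_TYPES = {"openai", "anthropic", "gemini"}

def check_providers(config: dict) -> list[str]:
    """Return human-readable problems found under llm.providers."""
    problems = []
    for name, profile in config.get("llm", {}).get("providers", {}).items():
        api_type = profile.get("apiType")
        if api_type not in SUPPORTED_API_TYPES:
            problems.append(f"{name}: unsupported apiType {api_type!r}")
        if not profile.get("apiKey"):
            problems.append(f"{name}: missing apiKey")
    return problems

# "Gemini" (capitalized) trips the lowercase rule from the notes above.
config = json.loads("""
{"llm": {"providers": {
  "openai": {"apiKey": "sk-proj-xxx", "apiType": "openai"},
  "google": {"apiKey": "AIzaxxx", "apiType": "Gemini"}
}}}
""")
print(check_providers(config))
```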
Model selection now has three layers:

- `llm.providers` defines provider profiles and credentials
- `models/models.json` defines model capability metadata
- `modelRouter` maps routing and tier slots to concrete models
```json
{
  "modelRouter": {
    "routingModel": "openai/gpt-5.2-codex",
    "routingModelReasoning": "none",
    "balancedModel": "openai/gpt-5.1",
    "balancedModelReasoning": "none",
    "smartModel": "openai/gpt-5.1",
    "smartModelReasoning": "none",
    "codingModel": "openai/gpt-5.2",
    "codingModelReasoning": "none",
    "deepModel": "openai/gpt-5.2",
    "deepModelReasoning": "none",
    "dynamicTierEnabled": true,
    "temperature": 0.7
  }
}
```

Notes:

- `routingModel` is used for internal routing and classification flows
- `*Reasoning` values depend on the selected model entry in `models/models.json`
- in the dashboard, this is split across LLM Providers, Model Catalog, and Model Router
- `/api/models/available` returns models grouped by provider and filtered to provider profiles that are API-ready
- when a discovered model id conflicts across providers, the catalog can persist a provider-scoped id such as `provider/model`
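Each tier slot pairs a `*Model` key with a `*ModelReasoning` key. A minimal sketch of that naming convention (`resolve_tier` is an illustrative helper, not the bot's API):

```python
def resolve_tier(router: dict, tier: str) -> tuple[str, str]:
    """Return (model id, reasoning) for a tier slot such as 'smart' or 'coding'."""
    model = router[f"{tier}Model"]
    reasoning = router.get(f"{tier}ModelReasoning", "none")
    return model, reasoning

# Subset of the modelRouter block above.
router = {
    "smartModel": "openai/gpt-5.1",
    "smartModelReasoning": "none",
    "codingModel": "openai/gpt-5.2",
    "codingModelReasoning": "none",
}
print(resolve_tier(router, "coding"))
```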
See Model Routing for the full flow.
Core tool enablement flags live under `tools`:

```json
{
  "tools": {
    "filesystemEnabled": true,
    "shellEnabled": true,
    "skillManagementEnabled": true,
    "skillTransitionEnabled": true,
    "tierEnabled": true,
    "goalManagementEnabled": true,
    "shellEnvironmentVariables": []
  }
}
```

Official integrations are now loaded through the plugin runtime and keep their own configuration under:

`preferences/plugins/<owner>/<plugin>.json`
This includes browser, Brave Search, Tavily Search, Firecrawl, Perplexity Sonar, weather, mail, LightRAG, and voice providers.
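The `<owner>/<plugin>` pair maps mechanically onto a settings file relative to the workspace. An illustrative helper:

```python
from pathlib import PurePosixPath

def plugin_config_path(owner: str, plugin: str) -> PurePosixPath:
    """Map an <owner>/<plugin> pair onto its settings file, relative to the workspace."""
    return PurePosixPath("preferences/plugins") / owner / f"{plugin}.json"

print(plugin_config_path("golemcore", "browser"))
```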
The browse tool is provided by the official `golemcore/browser` plugin and uses Playwright.
Configuration lives in `preferences/plugins/golemcore/browser.json`:

```json
{
  "enabled": true,
  "headless": true,
  "timeoutMs": 30000,
  "userAgent": "..."
}
```

Docker requirements:

```shell
docker run -d \
  --shm-size=256m \
  --cap-add=SYS_ADMIN \
  ...
```

Input and tool safety settings live under `security`:
```json
{
  "security": {
    "sanitizeInput": true,
    "detectPromptInjection": true,
    "detectCommandInjection": true,
    "maxInputLength": 10000,
    "allowlistEnabled": true,
    "toolConfirmationEnabled": false,
    "toolConfirmationTimeoutSeconds": 60
  }
}
```

Rate limits live under `rateLimit`:

```json
{
  "rateLimit": {
    "enabled": true,
    "userRequestsPerMinute": 20,
    "userRequestsPerHour": 100,
    "userRequestsPerDay": 500,
    "channelMessagesPerSecond": 30,
    "llmRequestsPerMinute": 60
  }
}
```

Context compaction is configured under `compaction`:

```json
{
  "compaction": {
    "enabled": true,
    "maxContextTokens": 50000,
    "keepLastMessages": 20,
    "preserveTurnBoundaries": true,
    "detailsEnabled": true,
    "detailsMaxItemsPerCategory": 50,
    "summaryTimeoutMs": 15000
  }
}
```

Field notes:

- `preserveTurnBoundaries` keeps compaction split-safe
- `detailsEnabled` persists structured compaction diagnostics
- `detailsMaxItemsPerCategory` caps stored file and tool detail lists
- `summaryTimeoutMs` is the hard timeout for LLM summarization
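A rough mental model for how `maxContextTokens` and `keepLastMessages` interact: once the estimated context exceeds the budget, older messages are summarized while the newest N survive verbatim. This is a simplified sketch; the real compactor also respects turn boundaries, and the token counts here are stand-in estimates, not the bot's tokenizer.

```python
def plan_compaction(messages: list[dict], max_context_tokens: int, keep_last: int):
    """Split history into (to_summarize, to_keep) once the token estimate exceeds the budget."""
    total = sum(m["tokens"] for m in messages)
    if total <= max_context_tokens or len(messages) <= keep_last:
        return [], messages
    return messages[:-keep_last], messages[-keep_last:]

# Eight messages at ~10k tokens each blow a 50k budget; the newest 3 survive verbatim.
history = [{"id": i, "tokens": 10_000} for i in range(8)]
to_summarize, to_keep = plan_compaction(history, max_context_tokens=50_000, keep_last=3)
print(len(to_summarize), len(to_keep))
```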
Per-turn limits live under `turn`:

```json
{
  "turn": {
    "maxLlmCalls": 200,
    "maxToolExecutions": 500,
    "deadline": "PT1H",
    "autoRetryEnabled": true,
    "autoRetryMaxAttempts": 2,
    "autoRetryBaseDelayMs": 600,
    "queueSteeringEnabled": true,
    "queueSteeringMode": "one-at-a-time",
    "queueFollowUpMode": "one-at-a-time"
  }
}
```

Field notes:

- `autoRetry*` controls resilient retries for transient failures
- `queueSteeringEnabled` lets steering messages bypass normal follow-up handling
- `queueSteeringMode` and `queueFollowUpMode` support `one-at-a-time` or `all`
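The exact backoff curve behind `autoRetryBaseDelayMs` is not documented here; the sketch below assumes the common exponential-doubling scheme, purely to illustrate how the two `autoRetry*` numbers interact.

```python
def retry_delays(max_attempts: int, base_delay_ms: int) -> list[int]:
    """Delay before each retry attempt, assuming exponential doubling from the base delay."""
    return [base_delay_ms * (2 ** attempt) for attempt in range(max_attempts)]

# With the defaults above: two retries, at 600ms and 1200ms.
print(retry_delays(max_attempts=2, base_delay_ms=600))
```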
Memory settings live under `memory`:

```json
{
  "memory": {
    "enabled": true,
    "softPromptBudgetTokens": 1800,
    "maxPromptBudgetTokens": 3500,
    "workingTopK": 6,
    "episodicTopK": 8,
    "semanticTopK": 6,
    "proceduralTopK": 4,
    "promotionEnabled": true,
    "promotionMinConfidence": 0.75,
    "decayEnabled": true,
    "decayDays": 30,
    "retrievalLookbackDays": 21,
    "codeAwareExtractionEnabled": true
  }
}
```

Telegram integration is configured under `telegram`:

```json
{
  "telegram": {
    "enabled": false,
    "token": "123456:ABC-DEF...",
    "authMode": "invite_only",
    "allowedUsers": []
  }
}
```

Voice behavior is configured under `voice`:

```json
{
  "voice": {
    "enabled": false,
    "telegramRespondWithVoice": false,
    "telegramTranscribeIncoming": false,
    "sttProvider": "golemcore/elevenlabs",
    "ttsProvider": "golemcore/elevenlabs"
  }
}
```

Notes:

- the `voice` section primarily routes STT and TTS providers and Telegram voice behavior
- provider-specific secrets and endpoints live in plugin settings such as `preferences/plugins/golemcore/elevenlabs.json` and `preferences/plugins/golemcore/whisper.json`
- the dashboard resolves available voice providers from `/api/plugins/voice/providers`
Auto mode is configured under `autoMode`:

```json
{
  "autoMode": {
    "enabled": false,
    "tickIntervalSeconds": 1,
    "taskTimeLimitMinutes": 10,
    "autoStart": true,
    "maxGoals": 3,
    "modelTier": "default",
    "notifyMilestones": true
  }
}
```

Notes:

- auto mode is schedule-driven
- `tickIntervalSeconds` remains in runtime config, but the backend currently polls due schedules every second
- schedules themselves are stored separately in `auto/schedules.json`
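The per-second poll above amounts to asking "which schedules are due now?" on every tick. A minimal sketch of that check; the schedule shape is illustrative, the real entries live in `auto/schedules.json`.

```python
from datetime import datetime, timedelta

def due_schedules(schedules: list[dict], now: datetime) -> list[dict]:
    """Return every schedule whose next_run is at or before now."""
    return [s for s in schedules if s["next_run"] <= now]

now = datetime(2025, 1, 1, 12, 0)
schedules = [
    {"id": "morning-digest", "next_run": now - timedelta(seconds=30)},
    {"id": "weekly-cleanup", "next_run": now + timedelta(hours=1)},
]
print([s["id"] for s in due_schedules(schedules, now)])
```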
RAG is configured through the official `golemcore/lightrag` plugin rather than the core runtime schema.
Store this in `preferences/plugins/golemcore/lightrag.json`:

```json
{
  "enabled": false,
  "url": "http://localhost:9621",
  "apiKey": "",
  "queryMode": "hybrid",
  "timeoutSeconds": 10,
  "indexMinLength": 50
}
```

MCP support is configured under `mcp`:

```json
{
  "mcp": {
    "enabled": true,
    "defaultStartupTimeout": 30,
    "defaultIdleTimeout": 5
  }
}
```

See MCP.
User preferences are stored separately in `preferences/settings.json`.
This includes language, timezone, notifications, per-user tier and model overrides, and webhook configuration.
Model capabilities are stored in `models/models.json`.
On first run, the bot copies a bundled `models.json` into the workspace. The dashboard Model Catalog can edit, reload, and enrich this file with live provider discovery.
Important behavior:

- model ids may be plain (`gpt-5.1`) or provider-scoped (`openai/gpt-5.1`)
- provider-scoped ids are useful when the same raw id exists under multiple provider profiles
- `/api/models/discover/{provider}` reads live models from the configured provider API and helps seed catalog entries
See Model Routing.
Some settings are still controlled via Spring properties, typically via env vars in Docker:
- workspace paths: `STORAGE_PATH`, `TOOLS_WORKSPACE`
- dashboard toggle: `DASHBOARD_ENABLED`
- plugin runtime: `BOT_PLUGINS_ENABLED`, `BOT_PLUGINS_DIRECTORY`, `BOT_PLUGINS_AUTO_START`, `BOT_PLUGINS_AUTO_RELOAD`, `BOT_PLUGINS_POLL_INTERVAL`
- plugin marketplace source: `BOT_PLUGINS_MARKETPLACE_REPOSITORY_DIRECTORY`, `BOT_PLUGINS_MARKETPLACE_REPOSITORY_URL`, `BOT_PLUGINS_MARKETPLACE_BRANCH`
- plugin marketplace HTTP fallback: `BOT_PLUGINS_MARKETPLACE_API_BASE_URL`, `BOT_PLUGINS_MARKETPLACE_RAW_BASE_URL`, `BOT_PLUGINS_MARKETPLACE_REMOTE_CACHE_TTL`
- self-update controls: `BOT_UPDATE_ENABLED`, `UPDATE_PATH`, `BOT_UPDATE_MAX_KEPT_VERSIONS`, `BOT_UPDATE_CHECK_INTERVAL`
- allowed providers in model picker: `BOT_MODEL_SELECTION_ALLOWED_PROVIDERS`
- tool result truncation: `bot.auto-compact.max-tool-result-chars`
- plan mode feature flag: `bot.plan.enabled`
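Most env vars above follow a mechanical transformation of the underlying Spring property: dots and dashes become underscores, uppercased. Note the exceptions: `STORAGE_PATH` and `UPDATE_PATH` are shorter custom aliases that do not follow this pattern. An illustrative sketch of the common case:

```python
def to_env_var(property_name: str) -> str:
    """Dots and dashes become underscores, uppercased - matching the pairs listed above."""
    return property_name.replace(".", "_").replace("-", "_").upper()

print(to_env_var("bot.plugins.auto-start"))
print(to_env_var("bot.update.max-kept-versions"))
```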
The plugin runtime and marketplace are controlled by `bot.plugins.*`.
Important properties:

- `bot.plugins.enabled`
- `bot.plugins.directory`
- `bot.plugins.auto-start`
- `bot.plugins.auto-reload`
- `bot.plugins.poll-interval`
- `bot.plugins.marketplace.repository-directory`
- `bot.plugins.marketplace.repository-url`
- `bot.plugins.marketplace.branch`
- `bot.plugins.marketplace.api-base-url`
- `bot.plugins.marketplace.raw-base-url`
- `bot.plugins.marketplace.remote-cache-ttl`

Behavior:

- if `repository-directory` is configured and present, marketplace metadata and artifacts are read from that repository checkout
- otherwise the backend falls back to the configured remote repository and GitHub HTTP sources
- installed marketplace artifacts are written into the plugin runtime directory and then reloaded into the running bot
Core self-update is controlled by `bot.update.*`:

- `bot.update.enabled`
- `bot.update.updates-path`
- `bot.update.max-kept-versions`
- `bot.update.check-interval`
Default layout:

```
workspace/
├── auto/
├── memory/
├── models/
├── preferences/
├── sessions/
├── skills/
└── usage/
```

System endpoints:

- `GET /api/system/health`
- `GET /api/system/config`
- `GET /api/system/diagnostics`
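The bot lays out this structure itself on first boot, so creating it manually is optional; for pre-seeding a Docker volume, the same layout can be reproduced with a short script (illustrative, using a temp directory):

```python
from pathlib import Path
import tempfile

# Top-level directories from the default layout above.
SUBDIRS = ["auto", "memory", "models", "preferences", "sessions", "skills", "usage"]

def ensure_workspace(base: Path) -> None:
    """Create the default workspace layout if it does not exist yet."""
    for name in SUBDIRS:
        (base / name).mkdir(parents=True, exist_ok=True)

workspace = Path(tempfile.mkdtemp()) / "workspace"
ensure_workspace(workspace)
print(sorted(p.name for p in workspace.iterdir()))
```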
GolemCore Bot -- Apache License 2.0 | GitHub | Issues | Discussions