Configuration

Alex Kuleshov edited this page Mar 9, 2026 · 10 revisions

How to configure GolemCore Bot.

See also: Dashboard, Tools, MCP, Webhooks, Model Routing


Where Configuration Lives

There are three main configuration surfaces:

| Surface | File | Best For |
| --- | --- | --- |
| Runtime config | preferences/runtime-config.json | LLM providers, model router, tools, security, auto mode, memory |
| User preferences | preferences/settings.json | Language, timezone, per-user tier and model overrides, webhook config |
| Spring properties | application.properties / env vars | Workspace paths, feature flags, plugin runtime, update behavior |

Workspace Base Path

The bot stores all state under a base path:

  • Spring property: bot.storage.local.base-path
  • Docker or JAR env var: STORAGE_PATH

In Docker, you will almost always want this path mounted as a volume:

docker run -d \
  -e STORAGE_PATH=/app/workspace \
  -v golemcore-bot-data:/app/workspace \
  -p 8080:8080 \
  golemcore-bot:latest

Dashboard

The easiest way to configure the bot is via:

  • http://localhost:8080/dashboard

See Dashboard for the UI flow.


Runtime Config (preferences/runtime-config.json)

Runtime config is persisted to the workspace:

  • File: preferences/runtime-config.json
  • Dashboard API: GET /api/settings/runtime, PUT /api/settings/runtime

Secrets can be provided as plain strings in JSON and are wrapped internally as Secret.
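The dashboard normally manages this file, but it is plain JSON on disk. The sketch below shows a hypothetical helper, `patch_runtime_config`, that shallow-merges a patch into `preferences/runtime-config.json`; the shallow-merge semantics are an assumption for illustration, and the real PUT endpoint may merge differently.

```python
import json
from pathlib import Path


def patch_runtime_config(workspace: Path, patch: dict) -> dict:
    """Shallow-merge a patch into preferences/runtime-config.json."""
    cfg_path = workspace / "preferences" / "runtime-config.json"
    config = json.loads(cfg_path.read_text()) if cfg_path.exists() else {}
    for section, values in patch.items():
        config.setdefault(section, {}).update(values)
    cfg_path.parent.mkdir(parents=True, exist_ok=True)
    cfg_path.write_text(json.dumps(config, indent=2))
    return config


# Example: enable tool confirmation in the security section.
import tempfile

workspace = Path(tempfile.mkdtemp())
updated = patch_runtime_config(
    workspace, {"security": {"toolConfirmationEnabled": True}}
)
print(updated["security"]["toolConfirmationEnabled"])  # True
```

If the bot is running, prefer the PUT /api/settings/runtime endpoint so the change takes effect without a restart.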

LLM Providers

Configure provider credentials under llm.providers.

  • The provider key is the prefix used in provider/model model references
  • apiType selects the wire protocol used by the adapter
  • Supported apiType values: openai, anthropic, gemini

{
  "llm": {
    "providers": {
      "openai": {
        "apiKey": "sk-proj-...",
        "apiType": "openai",
        "baseUrl": null,
        "requestTimeoutSeconds": 300
      },
      "anthropic": {
        "apiKey": "sk-ant-...",
        "apiType": "anthropic"
      },
      "google": {
        "apiKey": "AIza...",
        "apiType": "gemini"
      }
    }
  }
}

Notes:

  • use lowercase apiType
  • baseUrl is optional
  • these same provider profiles are used by the dashboard Model Catalog for live discovery via /api/models/discover/{provider}
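Since the provider key is just the prefix of a provider/model reference, parsing it is a simple split. This is an illustrative sketch; the fallback provider for plain ids is an assumption, not documented behavior.

```python
def split_model_ref(ref: str, default_provider: str = "openai") -> tuple[str, str]:
    """Split a provider/model reference into (provider key, model id).

    Plain ids (no slash) fall back to a default provider here; that
    default is an assumption for illustration only.
    """
    provider, sep, model = ref.partition("/")
    if not sep:
        return default_provider, ref
    return provider, model


print(split_model_ref("anthropic/claude-sonnet"))  # ('anthropic', 'claude-sonnet')
print(split_model_ref("gpt-5.1"))                  # ('openai', 'gpt-5.1')
```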

Model Configuration

Model selection now has three layers:

  1. llm.providers defines provider profiles and credentials
  2. models/models.json defines model capability metadata
  3. modelRouter maps routing and tier slots to concrete models

{
  "modelRouter": {
    "routingModel": "openai/gpt-5.2-codex",
    "routingModelReasoning": "none",
    "balancedModel": "openai/gpt-5.1",
    "balancedModelReasoning": "none",
    "smartModel": "openai/gpt-5.1",
    "smartModelReasoning": "none",
    "codingModel": "openai/gpt-5.2",
    "codingModelReasoning": "none",
    "deepModel": "openai/gpt-5.2",
    "deepModelReasoning": "none",
    "dynamicTierEnabled": true,
    "temperature": 0.7
  }
}

Notes:

  • routingModel is used for internal routing and classification flows
  • *Reasoning values depend on the selected model entry in models/models.json
  • in the dashboard, this is split across LLM Providers, Model Catalog, and Model Router
  • /api/models/available returns models grouped by provider and filtered to provider profiles that are API-ready
  • when a discovered model id conflicts across providers, the catalog can persist a provider-scoped id such as provider/model

See Model Routing for the full flow.
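The tier slots above follow a uniform naming scheme (`<tier>Model`), so resolving a tier to its model is a dictionary lookup. The helper `model_for_tier` below is hypothetical, and the tier names are inferred from the slot names in the example config:

```python
MODEL_ROUTER = {
    "routingModel": "openai/gpt-5.2-codex",
    "balancedModel": "openai/gpt-5.1",
    "smartModel": "openai/gpt-5.1",
    "codingModel": "openai/gpt-5.2",
    "deepModel": "openai/gpt-5.2",
}


def model_for_tier(router: dict, tier: str) -> str:
    """Resolve a tier name ('routing', 'balanced', ...) to its configured model."""
    key = f"{tier}Model"
    if key not in router:
        raise KeyError(f"unknown tier: {tier}")
    return router[key]


print(model_for_tier(MODEL_ROUTER, "coding"))  # openai/gpt-5.2
```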

Tools

Core tool enablement flags live under tools:

{
  "tools": {
    "filesystemEnabled": true,
    "shellEnabled": true,
    "skillManagementEnabled": true,
    "skillTransitionEnabled": true,
    "tierEnabled": true,
    "goalManagementEnabled": true,
    "shellEnvironmentVariables": []
  }
}

Official integrations are now loaded through the plugin runtime and keep their own configuration under:

  • preferences/plugins/<owner>/<plugin>.json

This includes browser, Brave Search, Tavily Search, Firecrawl, Perplexity Sonar, weather, mail, LightRAG, and voice providers.
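The per-plugin layout makes settings paths predictable from the owner/plugin reference. A minimal sketch of that mapping, with `plugin_config_path` as a hypothetical helper name:

```python
from pathlib import Path


def plugin_config_path(workspace: Path, ref: str) -> Path:
    """Map an '<owner>/<plugin>' reference to its settings file."""
    owner, _, plugin = ref.partition("/")
    return workspace / "preferences" / "plugins" / owner / f"{plugin}.json"


print(plugin_config_path(Path("workspace"), "golemcore/browser"))
# workspace/preferences/plugins/golemcore/browser.json
```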

Browser Tool

The browse tool is provided by the official golemcore/browser plugin and uses Playwright.

Configuration lives in preferences/plugins/golemcore/browser.json:

{
  "enabled": true,
  "headless": true,
  "timeoutMs": 30000,
  "userAgent": "..."
}

Docker requirements:

docker run -d \
  --shm-size=256m \
  --cap-add=SYS_ADMIN \
  ...

Security

Input and tool safety settings live under security:

{
  "security": {
    "sanitizeInput": true,
    "detectPromptInjection": true,
    "detectCommandInjection": true,
    "maxInputLength": 10000,
    "allowlistEnabled": true,
    "toolConfirmationEnabled": false,
    "toolConfirmationTimeoutSeconds": 60
  }
}
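To illustrate how these flags gate incoming text, here is a toy pre-flight check. The actual sanitization and injection detection are internal to the bot; this sketch only mirrors the maxInputLength and sanitizeInput settings and is not the real implementation.

```python
def accept_input(text: str, max_input_length: int = 10000, sanitize: bool = True) -> str:
    """Toy input gate mirroring maxInputLength and sanitizeInput."""
    if len(text) > max_input_length:
        raise ValueError(f"input exceeds {max_input_length} characters")
    if sanitize:
        # Illustrative only: strip NUL bytes and surrounding whitespace.
        text = text.replace("\x00", "").strip()
    return text


print(accept_input("  hello  "))  # hello
```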

Rate Limiting

{
  "rateLimit": {
    "enabled": true,
    "userRequestsPerMinute": 20,
    "userRequestsPerHour": 100,
    "userRequestsPerDay": 500,
    "channelMessagesPerSecond": 30,
    "llmRequestsPerMinute": 60
  }
}

Compaction

{
  "compaction": {
    "enabled": true,
    "maxContextTokens": 50000,
    "keepLastMessages": 20,
    "preserveTurnBoundaries": true,
    "detailsEnabled": true,
    "detailsMaxItemsPerCategory": 50,
    "summaryTimeoutMs": 15000
  }
}

Field notes:

  1. preserveTurnBoundaries keeps compaction split-safe
  2. detailsEnabled persists structured compaction diagnostics
  3. detailsMaxItemsPerCategory caps stored file and tool detail lists
  4. summaryTimeoutMs is the hard timeout for LLM summarization
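To make the keepLastMessages / preserveTurnBoundaries interaction concrete, here is a sketch of splitting history into a summarized head and a kept tail. The function name and the "advance the cut to the next user turn" rule are assumptions for illustration:

```python
def split_for_compaction(messages, keep_last_messages: int = 20,
                         preserve_turn_boundaries: bool = True):
    """Split history into (to_summarize, kept_tail), like keepLastMessages."""
    if keep_last_messages >= len(messages):
        return [], list(messages)
    cut = len(messages) - keep_last_messages
    if preserve_turn_boundaries:
        # Move the cut forward so the kept tail starts on a user turn.
        while cut < len(messages) and messages[cut]["role"] != "user":
            cut += 1
    return messages[:cut], messages[cut:]


history = [{"role": r} for r in
           ("user", "assistant", "user", "assistant", "user", "assistant")]
head, tail = split_for_compaction(history, keep_last_messages=3)
print(len(head), len(tail))  # 4 2
```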

Turn Budget

{
  "turn": {
    "maxLlmCalls": 200,
    "maxToolExecutions": 500,
    "deadline": "PT1H",
    "autoRetryEnabled": true,
    "autoRetryMaxAttempts": 2,
    "autoRetryBaseDelayMs": 600,
    "queueSteeringEnabled": true,
    "queueSteeringMode": "one-at-a-time",
    "queueFollowUpMode": "one-at-a-time"
  }
}

Field notes:

  1. autoRetry* controls resilient retries for transient failures
  2. queueSteeringEnabled lets steering messages bypass normal follow-up handling
  3. queueSteeringMode and queueFollowUpMode support one-at-a-time or all
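As a worked example of autoRetryBaseDelayMs, a doubling backoff starting at 600 ms would produce 600 ms, then 1200 ms, for the two attempts allowed by autoRetryMaxAttempts=2. The exact backoff curve is not documented here, so the doubling schedule below is an assumption:

```python
def retry_delay_ms(attempt: int, base_delay_ms: int = 600) -> int:
    """Assumed exponential backoff: base * 2^(attempt-1)."""
    return base_delay_ms * (2 ** (attempt - 1))


print([retry_delay_ms(a) for a in (1, 2)])  # [600, 1200]
```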

Memory

{
  "memory": {
    "enabled": true,
    "softPromptBudgetTokens": 1800,
    "maxPromptBudgetTokens": 3500,
    "workingTopK": 6,
    "episodicTopK": 8,
    "semanticTopK": 6,
    "proceduralTopK": 4,
    "promotionEnabled": true,
    "promotionMinConfidence": 0.75,
    "decayEnabled": true,
    "decayDays": 30,
    "retrievalLookbackDays": 21,
    "codeAwareExtractionEnabled": true
  }
}

Telegram

{
  "telegram": {
    "enabled": false,
    "token": "123456:ABC-DEF...",
    "authMode": "invite_only",
    "allowedUsers": []
  }
}

Voice

{
  "voice": {
    "enabled": false,
    "telegramRespondWithVoice": false,
    "telegramTranscribeIncoming": false,
    "sttProvider": "golemcore/elevenlabs",
    "ttsProvider": "golemcore/elevenlabs"
  }
}

Notes:

  • the voice section primarily routes STT and TTS providers and Telegram voice behavior
  • provider-specific secrets and endpoints live in plugin settings such as:
    • preferences/plugins/golemcore/elevenlabs.json
    • preferences/plugins/golemcore/whisper.json
  • the dashboard resolves available voice providers from /api/plugins/voice/providers

Auto Mode

{
  "autoMode": {
    "enabled": false,
    "tickIntervalSeconds": 1,
    "taskTimeLimitMinutes": 10,
    "autoStart": true,
    "maxGoals": 3,
    "modelTier": "default",
    "notifyMilestones": true
  }
}

Notes:

  • auto mode is schedule-driven
  • tickIntervalSeconds remains in runtime config, but the backend currently polls due schedules every second
  • schedules themselves are stored separately in auto/schedules.json
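The per-second polling of due schedules can be pictured as a filter over auto/schedules.json. The field names (nextRunAt, enabled) in this sketch are assumptions, since the schedule file format is not documented on this page:

```python
from datetime import datetime, timezone


def due_schedules(schedules, now: datetime):
    """Return schedules whose nextRunAt is due; field names are assumed."""
    due = []
    for s in schedules:
        next_run = datetime.fromisoformat(s["nextRunAt"])
        if s.get("enabled", True) and next_run <= now:
            due.append(s)
    return due


now = datetime(2026, 3, 9, 12, 0, tzinfo=timezone.utc)
schedules = [
    {"id": "daily-report", "nextRunAt": "2026-03-09T11:59:00+00:00"},
    {"id": "weekly-sync", "nextRunAt": "2026-03-10T09:00:00+00:00"},
]
print([s["id"] for s in due_schedules(schedules, now)])  # ['daily-report']
```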

RAG

RAG is configured through the official golemcore/lightrag plugin rather than the core runtime schema.

Store this in preferences/plugins/golemcore/lightrag.json:

{
  "enabled": false,
  "url": "http://localhost:9621",
  "apiKey": "",
  "queryMode": "hybrid",
  "timeoutSeconds": 10,
  "indexMinLength": 50
}

MCP

{
  "mcp": {
    "enabled": true,
    "defaultStartupTimeout": 30,
    "defaultIdleTimeout": 5
  }
}

See MCP.


User Preferences (preferences/settings.json)

User preferences are stored separately in:

  • preferences/settings.json

This includes language, timezone, notifications, per-user tier and model overrides, and webhook configuration.


Models (models/models.json)

Model capabilities are stored in:

  • models/models.json

On first run, the bot copies a bundled models.json into the workspace. The dashboard Model Catalog can edit, reload, and enrich this file with live provider discovery.

Important behavior:

  • model ids may be plain (gpt-5.1) or provider-scoped (openai/gpt-5.1)
  • provider-scoped ids are useful when the same raw id exists under multiple provider profiles
  • /api/models/discover/{provider} reads live models from the configured provider API and helps seed catalog entries

See Model Routing.


Spring Properties (bot.*)

Some settings are still controlled via Spring properties, typically set as environment variables in Docker:

  • workspace paths: STORAGE_PATH, TOOLS_WORKSPACE
  • dashboard toggle: DASHBOARD_ENABLED
  • plugin runtime: BOT_PLUGINS_ENABLED, BOT_PLUGINS_DIRECTORY, BOT_PLUGINS_AUTO_START, BOT_PLUGINS_AUTO_RELOAD, BOT_PLUGINS_POLL_INTERVAL
  • plugin marketplace source: BOT_PLUGINS_MARKETPLACE_REPOSITORY_DIRECTORY, BOT_PLUGINS_MARKETPLACE_REPOSITORY_URL, BOT_PLUGINS_MARKETPLACE_BRANCH
  • plugin marketplace HTTP fallback: BOT_PLUGINS_MARKETPLACE_API_BASE_URL, BOT_PLUGINS_MARKETPLACE_RAW_BASE_URL, BOT_PLUGINS_MARKETPLACE_REMOTE_CACHE_TTL
  • self-update controls: BOT_UPDATE_ENABLED, UPDATE_PATH, BOT_UPDATE_MAX_KEPT_VERSIONS, BOT_UPDATE_CHECK_INTERVAL
  • allowed providers in model picker: BOT_MODEL_SELECTION_ALLOWED_PROVIDERS
  • tool result truncation: bot.auto-compact.max-tool-result-chars
  • plan mode feature flag: bot.plan.enabled
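The env vars above correspond to Spring properties one-to-one. The mapping table below pairs a few of them, based on the lists on this page (the pairings for the plugin and update entries are inferred from the matching property names and should be verified against the source):

```python
# Pairings taken or inferred from the documented env vars and properties.
ENV_TO_PROPERTY = {
    "STORAGE_PATH": "bot.storage.local.base-path",
    "BOT_PLUGINS_ENABLED": "bot.plugins.enabled",
    "BOT_PLUGINS_DIRECTORY": "bot.plugins.directory",
    "BOT_UPDATE_ENABLED": "bot.update.enabled",
    "UPDATE_PATH": "bot.update.updates-path",
}


def property_for_env(var: str) -> str:
    """Look up the Spring property behind a documented env var."""
    return ENV_TO_PROPERTY[var]


print(property_for_env("STORAGE_PATH"))  # bot.storage.local.base-path
```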

Plugin Runtime and Marketplace

The plugin runtime and marketplace are controlled by bot.plugins.*.

Important properties:

  • bot.plugins.enabled
  • bot.plugins.directory
  • bot.plugins.auto-start
  • bot.plugins.auto-reload
  • bot.plugins.poll-interval
  • bot.plugins.marketplace.repository-directory
  • bot.plugins.marketplace.repository-url
  • bot.plugins.marketplace.branch
  • bot.plugins.marketplace.api-base-url
  • bot.plugins.marketplace.raw-base-url
  • bot.plugins.marketplace.remote-cache-ttl

Behavior:

  • if repository-directory is configured and present, marketplace metadata and artifacts are read from that repository checkout
  • otherwise the backend falls back to the configured remote repository and GitHub HTTP sources
  • installed marketplace artifacts are written into the plugin runtime directory and then reloaded into the running bot
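The fallback order described above can be sketched as a small resolver. The function and its return strings are illustrative only; the real backend's source selection is internal:

```python
from pathlib import Path


def marketplace_source(repository_directory, repository_url, api_base_url: str) -> str:
    """Sketch of the documented fallback order for marketplace metadata."""
    # 1. A configured, existing local repository checkout wins.
    if repository_directory and Path(repository_directory).is_dir():
        return f"local checkout: {repository_directory}"
    # 2. Otherwise fall back to the configured remote repository.
    if repository_url:
        return f"remote repository: {repository_url}"
    # 3. Finally, fall back to the GitHub HTTP sources.
    return f"github http api: {api_base_url}"


print(marketplace_source(None, None, "https://api.github.com"))
```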

Self-Update

Core self-update is controlled by bot.update.*.

  • bot.update.enabled
  • bot.update.updates-path
  • bot.update.max-kept-versions
  • bot.update.check-interval

Storage Layout

Default layout:

workspace/
├── auto/
├── memory/
├── models/
├── preferences/
├── sessions/
├── skills/
└── usage/

Diagnostics

  • GET /api/system/health
  • GET /api/system/config
  • GET /api/system/diagnostics
