Use OpenAI directly:
2. Paste in the **OpenAI API Key** field
3. Click **Test** to verify

### Inference

Gopher resolves inference settings per loop, which means the main evolution loop, tools, backtests,
Monte Carlo validation, and compression can each use different providers or models when needed.

**Inference providers:**
- **Gopher Credits**
- **OpenRouter**
- **OpenAI**
- **Ollama (local)**
- **Basilica (OpenAI-compatible)**
- **Custom (OpenAI-compatible)**

**Per-loop overrides:**
You can set overrides per loop (provider, model ID, base URL, API key) in the desktop app under
**Settings → Advanced → Inference Overrides**.

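The per-loop resolution described above can be sketched as a simple override-then-fallback merge. This is an illustrative model only, not Gopher's actual internals; the loop names, default model, and `resolve` helper are assumptions for the example.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class InferenceSettings:
    provider: str
    model: str
    base_url: str
    api_key: Optional[str] = None

# Hypothetical global defaults (placeholder values, not Gopher's real defaults).
DEFAULTS = InferenceSettings(
    provider="openrouter",
    model="example/default-model",
    base_url="https://openrouter.ai/api/v1",
)

# Per-loop overrides, keyed by loop name; any field not overridden
# falls back to the global defaults.
OVERRIDES = {
    "backtest": {
        "provider": "ollama",
        "model": "llama3.1",
        "base_url": "http://localhost:11434/v1",
    },
}

def resolve(loop: str) -> InferenceSettings:
    """Merge a loop's override (if any) on top of the global defaults."""
    merged = {**asdict(DEFAULTS), **OVERRIDES.get(loop, {})}
    return InferenceSettings(**merged)

print(resolve("backtest").provider)    # the backtest loop uses its override
print(resolve("compression").model)    # unlisted loops fall back to defaults
```

The point of the sketch is the fallback order: a loop-specific override always wins, and anything left unspecified inherits the global setting.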
**Base URL conventions:**
- Use the OpenAI-compatible **root** URL only (no `/chat/completions`).
- When using Gopher Credits, the base URL is normalized to `.../api/v1/gopher`.

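To make the first convention concrete, here is a small helper showing the intended cleanup: if a user pastes a full endpoint URL, the `/chat/completions` suffix is stripped so only the root remains. The function name and behavior are illustrative, not Gopher's actual normalization code.

```python
def normalize_base_url(url: str) -> str:
    """Return the OpenAI-compatible root URL, dropping a trailing
    /chat/completions segment if one was pasted by mistake."""
    url = url.rstrip("/")
    suffix = "/chat/completions"
    if url.endswith(suffix):
        url = url[: -len(suffix)]
    return url

print(normalize_base_url("https://api.openai.com/v1/chat/completions"))
# https://api.openai.com/v1
print(normalize_base_url("https://api.openai.com/v1/"))
# https://api.openai.com/v1
```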
**Environment variables:**
- `BART_GOPHER_CODE`
- `BART_GOPHER_API_URL`
- `OPENROUTER_API_KEY`
- `OPENAI_API_KEY`

#### Basilica (OpenAI-compatible)

Basilica deployments expose an OpenAI-compatible API. See:
- https://docs.basilica.ai/introduction
- https://docs.basilica.ai/inference

Setup steps:
1. Deploy a model with Basilica (vLLM or SGLang) and copy the deployment URL.
2. Configure Gopher with:
   - **Base URL**: `${DEPLOYMENT_URL}/v1`
   - **API Key**: `not-needed` (Basilica handles auth for the deployment URL)
3. Example model IDs (Hugging Face IDs):
   - `Qwen/Qwen2.5-7B-Instruct`
   - `meta-llama/Llama-3.1-8B-Instruct`
   - `mistralai/Mistral-7B-Instruct-v0.3`
   - `deepseek-ai/DeepSeek-V2-Chat`

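The steps above configure Gopher, but you can also sanity-check a deployment directly, since it speaks the standard OpenAI chat-completions protocol. The sketch below builds the request an OpenAI-compatible endpoint expects; the deployment URL and prompt are placeholders, and the request is printed rather than sent.

```python
import json

# Placeholder for your actual ${DEPLOYMENT_URL}/v1 from the Basilica dashboard.
base_url = "https://YOUR-DEPLOYMENT-URL/v1"

payload = {
    "model": "Qwen/Qwen2.5-7B-Instruct",  # one of the Hugging Face IDs above
    "messages": [{"role": "user", "content": "Say hello"}],
}

# POST this JSON to f"{base_url}/chat/completions" with the header
# "Authorization: Bearer not-needed" — Basilica authenticates via the
# deployment URL itself, so the key value is unused.
endpoint = f"{base_url}/chat/completions"
print(endpoint)
print(json.dumps(payload, indent=2))
```

If the deployment is healthy, the response follows the standard chat-completions shape, with the reply under `choices[0].message.content`.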
![Settings - General](/img/screenshots/desktop/03-settings-general.png)

![Settings - Advanced (Inference Overrides)](/img/screenshots/desktop/06-settings-advanced.png)

## Model Settings

### Loop Model