
Commit eed9ece

Merge pull request #4 from gopher-lab/docs/inference-basilica-pr
Docs: inference providers, Basilica, and screenshots
2 parents b97fe45 + 5f665a7 commit eed9ece

10 files changed

Lines changed: 87 additions & 1 deletion

File tree

docs/docs/guides/configuration.md

Lines changed: 48 additions & 0 deletions
```diff
@@ -67,6 +67,54 @@ Use OpenAI directly:
 2. Paste in the **OpenAI API Key** field
 3. Click **Test** to verify
 
+### Inference
+
+Gopher resolves inference settings per loop, which means the main evolution loop, tools, backtests,
+Monte Carlo validation, and compression can each use different providers or models when needed.
+
+**Inference providers:**
+- **Gopher Credits**
+- **OpenRouter**
+- **OpenAI**
+- **Ollama (local)**
+- **Basilica (OpenAI-compatible)**
+- **Custom (OpenAI-compatible)**
+
+**Per-loop overrides:**
+You can set overrides per loop (provider, model ID, base URL, API key) in the desktop app:
+**Settings → Advanced → Inference Overrides**.
+
+**Base URL conventions:**
+- Use the OpenAI-compatible **root** URL only (no `/chat/completions`).
+- When using Gopher Credits, the base URL is normalized to `.../api/v1/gopher`.
+
+**Environment variables:**
+- `BART_GOPHER_CODE`
+- `BART_GOPHER_API_URL`
+- `OPENROUTER_API_KEY`
+- `OPENAI_API_KEY`
+
+#### Basilica (OpenAI-compatible)
+
+Basilica deployments expose an OpenAI-compatible API. See:
+- https://docs.basilica.ai/introduction
+- https://docs.basilica.ai/inference
+
+Setup steps:
+1. Deploy a model with Basilica (vLLM or SGLang) and copy the deployment URL.
+2. Configure Gopher with:
+   - **Base URL**: `${DEPLOYMENT_URL}/v1`
+   - **API Key**: `not-needed` (Basilica handles auth for the deployment URL)
+3. Example model IDs (Hugging Face IDs):
+   - `Qwen/Qwen2.5-7B-Instruct`
+   - `meta-llama/Llama-3.1-8B-Instruct`
+   - `mistralai/Mistral-7B-Instruct-v0.3`
+   - `deepseek-ai/DeepSeek-V2-Chat`
+
+![Settings - General](/img/screenshots/desktop/03-settings-general.png)
+
+![Settings - Advanced (Inference Overrides)](/img/screenshots/desktop/06-settings-advanced.png)
+
 ## Model Settings
 
 ### Loop Model
```
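The per-loop resolution this section adds can be sketched as a simple override-merge. The names below (`DEFAULTS`, `OVERRIDES`, `resolve`) are illustrative, not Gopher's actual internals:

```python
import os

# Sketch of per-loop inference resolution: each loop (evolution, tools,
# backtest, monte_carlo, compression) falls back to the global defaults
# unless an override is set. Hypothetical structure, not Gopher's code.

DEFAULTS = {
    "provider": "openrouter",
    "model": "openai/gpt-4o",
    "base_url": "https://openrouter.ai/api/v1",
    "api_key": os.environ.get("OPENROUTER_API_KEY", ""),
}

OVERRIDES = {
    # Example: only the backtest loop uses a local Ollama model.
    "backtest": {
        "provider": "ollama",
        "model": "llama3.1",
        "base_url": "http://localhost:11434/v1",
    },
}

def resolve(loop: str) -> dict:
    """Merge the per-loop override (if any) over the global defaults."""
    return {**DEFAULTS, **OVERRIDES.get(loop, {})}
```

With this shape, `resolve("backtest")` picks up the Ollama override while `resolve("tools")` keeps the global defaults, which matches the "different providers or models per loop" behavior described in the added docs.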

docs/docs/guides/quickstart.md

Lines changed: 8 additions & 0 deletions
```diff
@@ -34,6 +34,7 @@ Use your own API key from a provider:
 |----------|-------------|---------|
 | **OpenRouter** | Access to 100+ models | [openrouter.ai/keys](https://openrouter.ai/keys) |
 | **OpenAI** | GPT-4, GPT-4o models | [platform.openai.com/api-keys](https://platform.openai.com/api-keys) |
+| **Basilica** | OpenAI-compatible deployment URL | [docs.basilica.ai](https://docs.basilica.ai/inference) |
 
 **Gopher Credits** are recommended for simplicity - inference is handled automatically with optimized models.
 
@@ -47,9 +48,16 @@ Use your own API key from a provider:
 - **For Gopher Credits**: Paste your Gopher Key in the **Gopher Key** field
 - **For OpenRouter**: Paste in the **OpenRouter API Key** field
 - **For OpenAI**: Paste in the **OpenAI API Key** field
+- **For Basilica**: Add a custom model with base URL `${DEPLOYMENT_URL}/v1` and API key `not-needed`
 4. Click **Test** to verify the connection
 5. A green checkmark indicates success
 
+Advanced users can set per-loop inference overrides in **Settings → Advanced → Inference Overrides**.
+
+![Main dashboard](/img/screenshots/desktop/01-main-dashboard.png)
+
+![Settings - General](/img/screenshots/desktop/03-settings-general.png)
+
 ### CLI
 
 Run the interactive setup wizard:
```
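For a Basilica-style custom entry, the **Test** step amounts to one chat-completions call against the deployment's root URL. A minimal sketch, assuming a placeholder `DEPLOYMENT_URL` and an example model ID:

```python
import json
import urllib.request

# Hypothetical deployment URL; substitute your own Basilica deployment URL.
DEPLOYMENT_URL = "https://example-deployment.basilica.ai"

def build_chat_request(base_url: str, model: str) -> urllib.request.Request:
    """Build a minimal /chat/completions request from a root base URL."""
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {"model": model, "messages": [{"role": "user", "content": "ping"}]}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer not-needed"},
    )

req = build_chat_request(f"{DEPLOYMENT_URL}/v1", "Qwen/Qwen2.5-7B-Instruct")
# Sending it requires a live deployment and network access:
# with urllib.request.urlopen(req, timeout=30) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

A non-error JSON response with a `choices` array is roughly what the green checkmark in the desktop app indicates.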

docs/docs/guides/troubleshooting.md

Lines changed: 23 additions & 0 deletions
```diff
@@ -94,6 +94,29 @@ Gopher Keys require the Gopher inference API (`gotrader.gopher-ai.com`), not Ope
 
 The `--base-url` flag only applies to non-Gopher keys. Gopher keys always use the Gopher API.
 
+### "Base URL must not include /chat/completions"
+
+Gopher expects OpenAI-compatible **root** URLs only. If you include `/chat/completions`, requests will fail.
+
+**Solution:**
+- Use the root URL (e.g., `https://api.openai.com/v1`).
+- For custom providers, remove `/chat/completions` and keep the base at `/v1`.
+
+### "Gopher API origin override not working"
+
+If you override the Gopher API origin, it must be the **origin only** (no `/chat/completions`).
+
+**Solution:**
+- Set `BART_GOPHER_API_URL` to the API origin (e.g., `https://gotrader.gopher-ai.com`).
+- Restart the app/CLI after changing env vars.
+
+### "Test model fails but inference runs"
+
+The **Test** button validates the selected provider with the resolved inference settings:
+- **Gopher Credits**: Ensure your Gopher Key is set and valid.
+- **OpenRouter/OpenAI/Custom**: Ensure the provider key is set for that provider.
+- **Per-loop overrides**: If you set a per-loop override with a different provider, that provider's key must be configured.
+
 ### "Rate limit exceeded"
 
 You've made too many API requests in a short period.
```
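The two base-URL errors documented in this section reduce to simple string and origin checks. A sketch with illustrative helper names (not Gopher's actual code):

```python
from urllib.parse import urlparse

def normalize_base_url(url: str) -> str:
    """Strip a trailing /chat/completions so only the root URL remains."""
    url = url.rstrip("/")
    suffix = "/chat/completions"
    return url[: -len(suffix)] if url.endswith(suffix) else url

def is_origin_only(url: str) -> bool:
    """True when the URL is scheme + host only (no path, query, or fragment),
    as required for a BART_GOPHER_API_URL override."""
    p = urlparse(url)
    return (bool(p.scheme and p.netloc)
            and p.path in ("", "/")
            and not p.query
            and not p.fragment)
```

So `https://api.openai.com/v1/chat/completions` normalizes back to the `/v1` root, and `https://gotrader.gopher-ai.com` passes the origin-only check while any URL with a path does not.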

docs/docs/reference/models.md

Lines changed: 8 additions & 1 deletion
```diff
@@ -14,6 +14,7 @@ Gopher supports multiple LLM providers and models for strategy generation and ba
 | **OpenRouter** | Access 100+ models with one key | Yes |
 | **OpenAI** | GPT-4, GPT-4o models | Yes |
 | **Ollama** | Local models | No |
+| **Basilica** | OpenAI-compatible deployments | No (deployment URL) |
 | **Custom** | Any OpenAI-compatible API | Varies |
 
 ## Gopher Credits (Recommended)
@@ -161,9 +162,14 @@ Add any OpenAI-compatible API endpoint.
 - **Model ID**: The model identifier
 - **Display Name**: Friendly name
 - **Provider**: Select "Custom"
-- **Base URL**: API endpoint (e.g., `https://api.example.com/v1`)
+- **Base URL**: OpenAI-compatible root URL only (e.g., `https://api.example.com/v1`)
 - **API Key**: Your key for this endpoint
 
+Note: Do not include `/chat/completions` in the base URL. For per-loop overrides, see
+[Configuration → Inference](/guides/configuration#inference).
+
+![Settings - Models](/img/screenshots/desktop/04-settings-models.png)
+
 ### Compatible Services
 
 Many services offer OpenAI-compatible APIs:
@@ -172,6 +178,7 @@ Many services offer OpenAI-compatible APIs:
 - **Anyscale**: [anyscale.com](https://anyscale.com)
 - **Perplexity**: [perplexity.ai](https://perplexity.ai)
 - **Groq**: [groq.com](https://groq.com)
+- **Basilica**: [docs.basilica.ai](https://docs.basilica.ai/inference)
 - **Local LLMs**: LM Studio, vLLM, text-generation-webui
 
 ## Model Selection Tips
```
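A custom-model entry built from the fields this file documents might look like the following sketch. The key names are hypothetical, since the docs describe UI fields rather than a config schema:

```python
import os

# Illustrative custom-model entry mirroring the documented fields
# (Model ID, Display Name, Provider, Base URL, API Key).
custom_model = {
    "model_id": "mistralai/Mistral-7B-Instruct-v0.3",
    "display_name": "Mistral 7B (self-hosted)",
    "provider": "custom",
    # Root URL only, per the note above; never append /chat/completions.
    "base_url": "https://api.example.com/v1",
    "api_key": os.environ.get("CUSTOM_API_KEY", ""),
}

assert "/chat/completions" not in custom_model["base_url"]
```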

0 commit comments
