Bug type
Behavior bug (incorrect output/state without crash)
Summary
The AskOnce feature, listed as "Multi-Model Concurrent Queries", does not work. In the Web UI, the /askonce command returns a response from only one model (usually the currently active one) instead of querying all configured providers simultaneously. The plugin is enabled in the config, but has no user-facing effect.
Steps to reproduce
1. Install OpenClaw Zero Token version 2026.4.5 on Ubuntu 24.04.
2. Configure multiple web models via `./onboard.sh webauth` (e.g., ChatGPT, Claude, Gemini, DeepSeek).
3. Add the AskOnce plugin to `openclaw.json`:

```json
"plugins": {
  "enabled": true,
  "entries": {
    "@openclaw/askonce": {
      "enabled": true
    }
  }
}
```

4. Start the server with `./server.sh start`.
5. In the Web UI (http://127.0.0.1:3001), type `/askonce What is Python?`.
6. Observe that only one model responds instead of all configured ones.
Expected behavior
When using the /askonce command, the system should broadcast the user's prompt to all currently configured and authorized web models (e.g., ChatGPT, Claude, Gemini, DeepSeek, Kimi, Qwen, etc.) and display each model's individual response in the chat, allowing side-by-side comparison.
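For clarity, the expected fan-out behavior can be sketched as below. This is a minimal illustration only, not OpenClaw's actual code: the `Provider`/`askOnce` names and the stub providers are hypothetical stand-ins for however the plugin dispatches to configured web models.

```typescript
// Hypothetical sketch of the expected /askonce behavior: broadcast one
// prompt to every configured provider concurrently and collect all replies.
type Provider = { name: string; query: (prompt: string) => Promise<string> };

async function askOnce(providers: Provider[], prompt: string) {
  // Promise.allSettled fires all queries at once, so one slow or failing
  // model cannot hide the others' responses.
  const results = await Promise.allSettled(
    providers.map(async (p) => ({ model: p.name, reply: await p.query(prompt) }))
  );
  return results.map((r, i) =>
    r.status === "fulfilled"
      ? r.value
      : { model: providers[i].name, reply: `error: ${r.reason}` }
  );
}

// Stub providers standing in for ChatGPT, Claude, etc.
const stubs: Provider[] = ["chatgpt", "claude"].map((name) => ({
  name,
  query: async (q) => `${name} answer to "${q}"`,
}));

askOnce(stubs, "What is Python?").then((replies) =>
  replies.forEach((r) => console.log(`${r.model}: ${r.reply}`))
);
```

Under this reading, the bug is that the command currently routes the prompt through only the active provider instead of performing a broadcast like the one above.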
Actual behavior
Only a single model responds to the /askonce command (usually the one that is currently active or set as default). The other configured models are not queried. No error message is shown — the feature simply does not broadcast the request. The plugin itself is loaded without errors and the server starts normally.
OpenClaw version
OpenClaw Zero Token 2026.4.5
Operating system
Ubuntu 24.04 LTS
Install method
No response
Model
deepseek, gpt-4, sonnet, kimi
Provider / routing chain
deepseek
Config file / key location
No response
Additional provider/model setup details
No response
Logs, screenshots, and evidence
No response
Impact and severity
No response
Additional information
No response