chore: use openai as default model #16
Conversation
Pull request overview
Updates JudgeAgent’s default LLM provider to match the primary testing environment.
Changes:
- Switch `JudgeAgent` default `model_provider` from `claude` to `openai`.
- Update DeepSeek's configured default model identifier.
```python
model_provider: Literal["openai", "deepseek", "gemini", "claude"] = "openai",
ref: str = "HEAD",
```
The default `model_provider` is now `openai`, but the CLI help text still says `claude (default)` (see the `main()` usage message later in this file). Update the help/usage text (and any other in-file documentation referring to the default) so it matches the new default provider.
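One way to avoid this kind of drift is to derive the usage text from a single constant. A minimal sketch, assuming an argparse-based `main()`; the `DEFAULT_PROVIDER` constant and `--model-provider` flag name are hypothetical, not taken from the file under review:

```python
import argparse

# Hypothetical sketch: keep the default in one constant so the help text
# and the function signature cannot disagree again.
DEFAULT_PROVIDER = "openai"
PROVIDERS = ["openai", "deepseek", "gemini", "claude"]

parser = argparse.ArgumentParser(prog="judge_agent")
parser.add_argument(
    "--model-provider",
    choices=PROVIDERS,
    default=DEFAULT_PROVIDER,
    help=f"LLM provider to use (default: {DEFAULT_PROVIDER})",
)

# Parsing with no arguments falls back to the shared default.
args = parser.parse_args([])
print(args.model_provider)  # openai
```

With this shape, changing the default is a one-line edit and the `(default: …)` text in `--help` updates automatically.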
```diff
 "deepseek": {
     "name": "DeepSeek",
-    "default_model": "deepseek/deepseek-chat-v3-0324",
+    "default_model": "deepseek/deepseek-chat",
     "key_env_name": "OPENROUTER_API_KEY",
     "constructor": _openrouter_common,
```
This PR also changes the DeepSeek `default_model` from a pinned version (`deepseek/deepseek-chat-v3-0324`) to the unversioned `deepseek/deepseek-chat`, but the PR title/description only mention switching the default provider to OpenAI. Please either update the PR description to include this change or revert it; if keeping it, consider pinning a specific DeepSeek model version for reproducibility (consistent with the version-pinned Claude entry).
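If the pinned-version route is taken, the entry could look like the sketch below. The dict shape mirrors the diff above; `_openrouter_common` is only referenced in the diff, so a placeholder stands in for it here, and the pinned id shown is simply the previously pinned value, not a recommendation for a specific version:

```python
def _openrouter_common(model: str, api_key: str) -> dict:
    # Placeholder for the constructor named in the diff; the real one
    # presumably builds an OpenRouter-backed client.
    return {"model": model, "api_key": api_key}

PROVIDERS = {
    "deepseek": {
        "name": "DeepSeek",
        # Pinned model id for reproducible runs (the value pinned before
        # this PR), consistent with the version-pinned Claude entry.
        "default_model": "deepseek/deepseek-chat-v3-0324",
        "key_env_name": "OPENROUTER_API_KEY",
        "constructor": _openrouter_common,
    },
}

print(PROVIDERS["deepseek"]["default_model"])
```

Pinning trades automatic upgrades for reproducibility: an unversioned alias like `deepseek/deepseek-chat` can silently change behavior when the upstream default moves.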
Updated the default model provider in JudgeAgent from 'claude' to 'openai' to align with the current primary testing environment.