
Fix Gemini and update the configs to the new settings.json#196

Closed
teron131 wants to merge 2 commits into automazeio:main from teron131:patch-1

Conversation


teron131 commented on Jan 28, 2026

Updated configuration settings for Factory CLI, replacing legacy 'config.json' with 'settings.json' and modifying model display names and properties.

Summary by CodeRabbit

  • Documentation
    • Updated Factory CLI configuration setup and public config schema: field names normalized (camelCase), provider guidance revised, and legacy config handling clarified.
  • New Features
    • Added Gemini 3 Flash as an available model option.



coderabbitai Bot commented Jan 28, 2026

📝 Walkthrough

Walkthrough

Documentation update to FACTORY_SETUP.md: configuration now uses ~/.factory/settings.json with camelCase keys (e.g., customModels, displayName, baseUrl, apiKey), provider values adjusted to generic-chat-completion-api for several entries, and a new "Gemini 3 Flash" model entry added.
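A minimal sketch of the new schema described above (the field names and the "Gemini 3 Flash" model come from the walkthrough; the `baseUrl` and `apiKey` values are placeholders, not taken from the PR):

```json
{
  "customModels": [
    {
      "displayName": "Gemini 3 Flash",
      "model": "gemini-3-flash-preview",
      "baseUrl": "http://localhost:8317/v1",
      "apiKey": "your-api-key",
      "provider": "generic-chat-completion-api"
    }
  ]
}
```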

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Configuration Documentation Update**<br>`FACTORY_SETUP.md` | Reference changed from `~/.factory/config.json` to `~/.factory/settings.json`; keys renamed to camelCase (`custom_models` → `customModels`, `model_display_name` → `displayName`, `base_url` → `baseUrl`, `api_key` → `apiKey`); provider values updated (e.g., `anthropic`/`openai` → `generic-chat-completion-api`) for several entries; added a new model entry for "Gemini 3 Flash"; updated examples, code snippets, and troubleshooting to reflect the new schema and the precedence of settings.json over legacy config.json. |

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

🐇 I nibble bytes in moonlit stash,

CamelCase carrots in a dash,
Gemini sparkles hop on through,
Settings roasted, documentation new,
A joyful hop — the factory's fresh!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped; CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title accurately reflects the main changes: migrating from config.json to settings.json and fixing the Gemini configuration with updated field names. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate; docstring coverage check skipped. |





coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@FACTORY_SETUP.md`:
- Line 379: Update the unclear sentence under "Provider Setting": replace the
awkward phrase so it reads clearly, e.g. clarify that `provider: "openai"`
defaults to the response API and supports only OpenAI models (for example:
'`provider: "openai"` defaults to the response API and supports only OpenAI
models'), and ensure the quoted example `"provider":
"generic-chat-completion-api"` remains as the alternative.
🧹 Nitpick comments (1)
FACTORY_SETUP.md (1)

74-77: Clarify the conflict resolution behavior.

The warning mentions that configurations in both files should not conflict, but it doesn't explain what happens when they do. Consider adding specific guidance on:

  • Which file takes precedence when both exist
  • Whether settings.json completely overrides config.json or merges with it
  • Recommended migration steps for users with existing config.json files
📝 Suggested clarification
 > [!IMPORTANT]
-> `config.json` is considered legacy. While still supported for backwards compatibility, `settings.json` takes priority. Ensure that configurations in both files do not conflict.
+> `config.json` is considered legacy. While still supported for backwards compatibility, `settings.json` takes priority and will completely override `config.json` if both exist. To migrate, rename your `config.json` to `settings.json` or copy your custom models configuration to the new file.
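The migration step in the suggested wording above can be sketched as a couple of shell commands. This is a hypothetical walkthrough, not from the PR: it rehearses the rename in a scratch directory first so nothing is lost; repeat the `cp`/`mv` pair against `~/.factory` once the result looks right.

```shell
# Rehearse the config.json → settings.json migration in a scratch
# directory (stand-in for ~/.factory) before touching the real one.
FACTORY_DIR="$(mktemp -d)"
printf '{"customModels": []}\n' > "$FACTORY_DIR/config.json"

# Keep a backup of the legacy file, then promote it to the new name.
cp "$FACTORY_DIR/config.json" "$FACTORY_DIR/config.json.bak"
mv "$FACTORY_DIR/config.json" "$FACTORY_DIR/settings.json"

# Scratch dir now holds settings.json plus the config.json.bak backup.
ls "$FACTORY_DIR"
```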

Comment thread FACTORY_SETUP.md Outdated
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
FACTORY_SETUP.md (1)

329-338: Mention the newly added Gemini 3 Flash model in the selection list.

You added gemini-3-flash-preview in the config example, but the “Select your model” list omits it. Please add it so the doc stays complete.

✏️ Suggested addition
-   - `gemini-3-pro-preview`, `gemini-3-pro-image-preview`, `gemini-2.5-pro`, etc.
+   - `gemini-3-flash-preview`, `gemini-3-pro-preview`, `gemini-3-pro-image-preview`, `gemini-2.5-pro`, etc.
🤖 Fix all issues with AI agents
In `@FACTORY_SETUP.md`:
- Around line 376-380: The current note incorrectly implies "provider": "openai"
only supports OpenAI models; update the wording to say that "provider": "openai"
targets the OpenAI response API format (and therefore can be used with
non-OpenAI backends that implement that format), while Gemini 3 Pro specifically
requires Antigravity auth and "provider": "generic-chat-completion-api"; also
update the Qwen and GLM entries to either switch their provider value to
"openai" (if they use OpenAI response API) or add a parenthetical clarifying
they are using a compatible response-API implementation so readers won't be
confused.

Comment thread FACTORY_SETUP.md
Comment on lines 376 to 380
> [!IMPORTANT]
> **Gemini 3 Pro Configuration Requirements**:
> - **Authentication**: Gemini 3 Pro models require **Antigravity** authentication (not Gemini CLI auth)
> - **Provider Setting**: Must use `"provider": "openai"` in Factory config (Antigravity uses OpenAI API format)
> - **Provider Setting**: Use `"provider": "generic-chat-completion-api"`. The `"provider": "openai"` setting defaults to the OpenAI response API format and only supports OpenAI models.
> - **Available in**: VibeProxy v1.0.9+ with CLIProxyAPI 6.5.1+


⚠️ Potential issue | 🟡 Minor

Clarify provider guidance to match non‑OpenAI examples.

The note says "provider": "openai" only supports OpenAI models, but the same doc uses "provider": "openai" for Qwen and GLM entries later. This reads as contradictory and could mislead readers. Consider rewording to indicate it targets the OpenAI response API format (and can be used for compatible backends).

✏️ Suggested clarification
-> - **Provider Setting**: Use `"provider": "generic-chat-completion-api"`. The `"provider": "openai"` setting defaults to the OpenAI response API format and only supports OpenAI models.
+> - **Provider Setting**: Use `"provider": "generic-chat-completion-api"`. The `"provider": "openai"` setting defaults to the OpenAI response API format and is intended for OpenAI‑compatible models/endpoints.
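To make the provider distinction concrete, a hypothetical pair of entries under the new schema might look like this (the Gemini model name matches the doc; the second entry's names, the URLs, and the keys are illustrative placeholders):

```json
{
  "customModels": [
    {
      "displayName": "Gemini 3 Pro",
      "model": "gemini-3-pro-preview",
      "provider": "generic-chat-completion-api",
      "baseUrl": "http://localhost:8317/v1",
      "apiKey": "placeholder"
    },
    {
      "displayName": "Example OpenAI-compatible model",
      "model": "example-model",
      "provider": "openai",
      "baseUrl": "http://localhost:8317/v1",
      "apiKey": "placeholder"
    }
  ]
}
```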

teron131 closed this by deleting the head repository on Feb 23, 2026
