feat: dynamic model loading and metadata-driven providers #1093

Open
AAClause wants to merge 9 commits into master from feat/dynamicModels

Conversation

@AAClause commented Mar 14, 2026

  • Add dynamic_model_loader for JSON metadata (model-metadata format)
  • Engines use MODELS_JSON_URL and _postprocess_models for provider-specific logic
  • ProviderAIModel: supported_parameters, reasoning_capable, web_search_capable,
    audio/document/vision from architecture.modality
  • BaseEngine: _filter_params_for_model, get_reasoning_ui_spec, get_audio_output_spec
  • ResponsesAPIEngine base for OpenAI/xAI; migrate xAI to Responses API for web search
  • OpenAI: route gpt-audio + audio to Chat Completions; document support for Responses API
  • stream_options.include_usage: conditional (Chat Completions only; omit for OpenRouter/Responses API)
  • Provider-injected UI specs (ReasoningUISpec, AudioOutputUISpec) via provider_ui_spec
  • Reasoning/audio params: reasoning_params_helper, conversation_params, view_utils
  • DB migrations: 002 audio, 003 reasoning, 004 web_search_mode on messages/blocks
  • Message.reasoning/audio_data/audio_format; content_utils for reasoning display
  • Config: model_metadata_cache_ttl_seconds; show_reasoning_blocks preference
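A minimal sketch of the TTL-cached metadata loading described above (the real code lives in `dynamic_model_loader`; the function shape, the injected `fetch` callable, and the cache layout here are assumptions for illustration, not the PR's actual implementation):

```python
import time

# Hypothetical sketch: cache keyed by URL, entries expire after a TTL
# (mirroring the model_metadata_cache_ttl_seconds config field).
_cache: dict[str, tuple[float, list[dict]]] = {}

def load_models(url, fetch, ttl_seconds=3600, now=time.monotonic):
    """Return model metadata for `url`, refetching only after the TTL expires."""
    entry = _cache.get(url)
    if entry is not None and now() - entry[0] < ttl_seconds:
        return entry[1]  # cache hit: no network call
    models = fetch(url)  # e.g. an HTTP GET of the engine's MODELS_JSON_URL
    _cache[url] = (now(), models)
    return models
```

Engines would then only pay the network cost once per TTL window, with `_postprocess_models` applied to the returned list for provider-specific adjustments.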

Summary by CodeRabbit

  • New Features

    • Reasoning/thinking mode with effort levels and token budgets; web-search tooling for capable models; audio transcription and audio-output support; attachment compatibility checks.
  • Improvements

    • Streamed responses now emit typed chunks (reasoning, citation, content); display formatting separates reasoning from content; richer model capability detection and parameter filtering; centralized UI visibility for advanced parameters.
  • Tests

    • New and updated tests covering reasoning, model loader, streaming chunk handling, and attachment compatibility.

@AAClause changed the title from "feat(dynamicModels): dynamic model loading and metadata-driven providers" to "feat: dynamic model loading and metadata-driven providers" on Mar 14, 2026

coderabbitai bot commented Mar 14, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds reasoning and web-search controls, converts streaming to typed (type, data) tuples, introduces dynamic model metadata loading (MODELS_JSON_URL) and provider UI specs, propagates new generation parameters through models, DB, presenters, views, engines, and updates streaming/formatting helpers and tests.
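A toy consumer for the typed (type, data) tuples mentioned above; the chunk type names ("content", "reasoning", "citation") come from this PR's summary, while the function itself is illustrative:

```python
# Illustrative consumer: route typed stream chunks so reasoning text is
# accumulated separately from displayable content.
def consume_stream(chunks):
    content, reasoning, citations = [], [], []
    for chunk_type, chunk_data in chunks:
        if chunk_type == "content":
            content.append(chunk_data)
        elif chunk_type == "reasoning":
            reasoning.append(chunk_data)
        elif chunk_type == "citation":
            citations.append(chunk_data)
    return "".join(content), "".join(reasoning), citations

stream = [
    ("reasoning", "step 1"),
    ("content", "Hello"),
    ("citation", {"url": "https://example.org"}),
    ("content", " world"),
]
```

This is the shape of change the walkthrough describes: consumers switch from raw text deltas to dispatching on the tuple's first element.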

Changes

  • Streaming & Completion Handler
    File(s): basilisk/completion_handler.py
    Summary: Tracks reasoning state; emits/consumes typed stream chunks such as ("content", …), ("reasoning", …) and ("citation", …).
  • Reasoning & Generation Params (config/models/params)
    File(s): basilisk/config/conversation_profile.py, basilisk/conversation/conversation_model.py, basilisk/conversation/conversation_params.py
    Summary: Adds frequency/presence penalties, seed, top_k, stop and reasoning fields (reasoning_mode, reasoning_budget_tokens, reasoning_effort, reasoning_adaptive) and helpers to sync reasoning params between profile and block; validation updated.
  • Content Utilities & Display
    File(s): basilisk/conversation/content_utils.py, tests/conversation/test_content_utils.py
    Summary: New utilities to parse/format legacy think blocks and display <think> blocks, with corresponding tests.
  • DB Layer (persist/load new params)
    File(s): basilisk/conversation/database/models.py, basilisk/conversation/database/manager.py
    Summary: Adds DB columns (frequency_penalty, presence_penalty, seed, top_k, stop_json, web_search_mode) and modular loaders for blocks, attachments and citations.
  • Provider Model & Dynamic Loading
    File(s): basilisk/provider_ai_model.py, basilisk/provider_engine/dynamic_model_loader.py, basilisk/config/main_config.py
    Summary: ProviderAIModel gains modality/capability flags and supported_parameters; new dynamic model loader with caching and TTL (config field model_metadata_cache_ttl_seconds).
  • Base Engine & Responses API
    File(s): basilisk/provider_engine/base_engine.py, basilisk/provider_engine/responses_api_engine.py, basilisk/provider_engine/provider_ui_spec.py
    Summary: Adds model JSON loading hooks, param filtering, web-search/reasoning/audio UI spec dataclasses, and a ResponsesAPIEngine base with attachment and streaming-tuple handling.
  • Provider Engines (per-provider updates)
    File(s): basilisk/provider_engine/.../*.py
    Summary: Multiple engines (Anthropic, Gemini, DeepSeek, MistralAI, OpenAI, OpenRouter, xAI, Ollama, LegacyOpenAI) updated to use MODELS_JSON_URL/_postprocess_models, emit typed stream tuples, expose get_reasoning_ui_spec/get_web_search_tool_definitions, and adapt completion params/streaming signatures.
  • Presenters & View Helpers
    File(s): basilisk/presenters/*, basilisk/presenters/reasoning_params_helper.py, basilisk/presenters/view_utils.py
    Summary: New ParameterVisibilityState, reasoning-param extraction helpers, view utilities for web search, and presenter changes to propagate reasoning/web_search/audio settings and format responses for display.
  • Views & Accessibility/UI Controls
    File(s): basilisk/views/*, basilisk/views/accessible.py, basilisk/views/int_spin_ctrl.py
    Summary: Refactors the conversation UI into grouped sections with state-driven visibility, adds reasoning/web-search widgets, accessible helpers and an integer spin control, and updates display_new_block to accept show_reasoning_blocks.
  • Attachment & Conversation Services
    File(s): basilisk/services/attachment_service.py, basilisk/services/conversation_service.py
    Summary: Attachment compatibility check generalized (check_model_attachment_compatible); conversation title generation now consumes typed stream tuples and only uses "content" chunks.
  • Tests
    File(s): tests/...
    Summary: Adds/updates tests for content utils, dynamic model loader, reasoning params, attachment compatibility, base engine features, and streaming-tuple expectations; presenter/view tests updated to match new APIs.

Sequence Diagram(s)

sequenceDiagram
    participant View as UI View
    participant Presenter as Presenter
    participant Engine as Provider Engine
    participant Provider as Provider API
    participant Handler as CompletionHandler

    View->>Presenter: submit_message(reasoning_mode, web_search_mode)
    Presenter->>Presenter: build MessageBlock(with reasoning params)
    Presenter->>Engine: completion(new_block, conversation)
    Engine->>Engine: get_model(model_id)
    Engine->>Provider: send request (tools, thinking config)
    Provider-->>Engine: stream events (text / thinking / citations)
    Engine->>Handler: completion_response_with_stream yields typed chunks
    Handler->>Handler: wrap/track reasoning blocks, flush buffers
    Handler-->>Presenter: emit (chunk_type, chunk_data)
    Presenter->>View: display_new_block(show_reasoning_blocks)
sequenceDiagram
    participant Config as Profile/Config
    participant View as ConversationTab
    participant Presenter as BasePresenter
    participant Engine as ProviderEngine

    View->>Presenter: on_account_or_model_change()
    Presenter->>Engine: get_reasoning_ui_spec(model)
    Engine-->>Presenter: ReasoningUISpec(effort_options, flags)
    Presenter->>Presenter: compute ParameterVisibilityState
    Presenter-->>View: ParameterVisibilityState
    View->>View: update_parameter_controls_visibility(state)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • clementb49
🚥 Pre-merge checks: ✅ 3 passed

  • Description Check: ✅ Passed. Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title check: ✅ Passed. The title 'feat: dynamic model loading and metadata-driven providers' is specific and clearly describes the main change: dynamic model loading from JSON metadata and metadata-driven provider behavior.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 94.72%, which is above the required threshold of 80.00%.



coderabbitai bot left a comment
Actionable comments posted: 27

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
tests/test_conversation_profile.py (1)

333-362: 🧹 Nitpick | 🔵 Trivial

Add invalid-value coverage for the new reasoning fields.

Consider adding explicit cases for negative reasoning_budget_tokens
and unsupported reasoning_effort values to guard validation behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_conversation_profile.py` around lines 333 - 362, Add tests that
assert validation rejects invalid values for the new reasoning fields: when
constructing ConversationProfile (the constructor) pass a negative
reasoning_budget_tokens (e.g., -1) and assert it raises ValueError with a
message mentioning reasoning_budget_tokens, and pass an unsupported
reasoning_effort string (e.g., "extreme") and assert it raises ValueError with a
message mentioning reasoning_effort; add these cases near the existing
validation tests that check behavior without a model and with a model so they
exercise the same code paths in ConversationProfile.__init__ / validation logic.
basilisk/conversation/database/models.py (1)

81-123: ⚠️ Potential issue | 🟠 Major

DBMessageBlock is missing persistence for reasoning settings.

MessageBlock now carries reasoning_mode, reasoning_budget_tokens,
reasoning_effort, and reasoning_adaptive, but this ORM model does
not map them. Those values will be dropped on save/load.

🔧 Proposed fix
 class DBMessageBlock(Base):
@@
 	stream: Mapped[bool] = mapped_column(default=False)
+	reasoning_mode: Mapped[bool] = mapped_column(default=False)
+	reasoning_budget_tokens: Mapped[int | None] = mapped_column(
+		default=None
+	)
+	reasoning_effort: Mapped[str | None] = mapped_column(default=None)
+	reasoning_adaptive: Mapped[bool] = mapped_column(default=False)
 	web_search_mode: Mapped[bool] = mapped_column(default=False)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/database/models.py` around lines 81 - 123,
DBMessageBlock is missing ORM fields for reasoning settings so those values are
lost; add mapped_column attributes to the DBMessageBlock class for
reasoning_mode, reasoning_budget_tokens, reasoning_effort, and
reasoning_adaptive (use the same names) with appropriate SQL types and defaults
to match the in-memory MessageBlock shape (e.g., reasoning_mode: Mapped[str |
None] = mapped_column(default=None), reasoning_budget_tokens: Mapped[int | None]
= mapped_column(default=None), reasoning_effort: Mapped[str | None] =
mapped_column(default=None), reasoning_adaptive: Mapped[bool] =
mapped_column(default=False)); ensure these new fields are declared alongside
the other mapped columns in the DBMessageBlock class so they are persisted and
loaded with the model.
basilisk/provider_engine/openrouter_engine.py (1)

120-123: ⚠️ Potential issue | 🟡 Minor

Potential AttributeError if top_provider is None.

If model.get("top_provider") returns None, calling .get("max_completion_tokens") will raise an AttributeError. Consider using a safe fallback:

🛡️ Proposed fix
-						max_output_tokens=model.get("top_provider").get(
-							"max_completion_tokens"
-						)
-						or -1,
+						max_output_tokens=(model.get("top_provider") or {}).get(
+							"max_completion_tokens"
+						) or -1,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/openrouter_engine.py` around lines 120 - 123, The
code accesses model.get("top_provider").get("max_completion_tokens") which will
raise AttributeError if top_provider is None; update the access in
openrouter_engine.py to safely handle a missing top_provider by using a fallback
empty dict or checking for truthiness first (e.g., assign top_provider =
model.get("top_provider") or {} and then read
top_provider.get("max_completion_tokens") or -1) so the expression always
returns an int and never calls .get on None.
basilisk/provider_engine/deepseek_engine.py (1)

69-90: ⚠️ Potential issue | 🟡 Minor

Close the reasoning fence if the stream ends inside reasoning_content.

The closing ``` is only emitted when a later delta.content arrives. If the stream ends after reasoning chunks, the stored text is left malformed and `split_reasoning_and_content()` cannot recover it cleanly.

🧩 Suggested change
 		for chunk in stream:
 			delta = chunk.choices[0].delta
 			if delta:
 				if (
 					hasattr(delta, "reasoning_content")
 					and delta.reasoning_content
 				):
 					if not reasoning_content_tag_sent:
 						reasoning_content_tag_sent = True
 						yield (
 							"content",
 							f"```think\n{delta.reasoning_content}",
 						)
 					else:
 						yield ("content", delta.reasoning_content)
 				if delta.content:
 					if reasoning_content_tag_sent:
 						reasoning_content_tag_sent = False
 						yield ("content", f"\n```\n\n{delta.content}")
 					else:
 						yield ("content", delta.content)
+		if reasoning_content_tag_sent:
+			yield ("content", "\n```")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/deepseek_engine.py` around lines 69 - 90, The stream
handling in deepseek_engine.py leaves an open reasoning fence if the for-loop
over stream ends while reasoning_content_tag_sent is True; update the generator
in the function that iterates "for chunk in stream" (where
reasoning_content_tag_sent is tracked) to emit a closing fence when the stream
completes by checking reasoning_content_tag_sent after the loop and yielding
("content", "\n```") so split_reasoning_and_content() receives a
properly-terminated reasoning block.
basilisk/conversation/database/manager.py (1)

388-401: ⚠️ Potential issue | 🟠 Major

Round-trip the new reasoning request fields on MessageBlock.

ConversationTab._restore_draft_block_reasoning() now reads reasoning_mode, reasoning_adaptive, reasoning_budget_tokens, and reasoning_effort from draft blocks, but this DB layer still only persists web_search_mode. Drafts saved to SQLite will reopen with those reasoning controls reset, changing the next request after a restart.

Also applies to: 691-705

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/database/manager.py` around lines 388 - 401, The
DBMessageBlock instantiation currently only persists web_search_mode so the new
MessageBlock reasoning fields (reasoning_mode, reasoning_adaptive,
reasoning_budget_tokens, reasoning_effort) are lost; update the DBMessageBlock
construction in manager.py (the block that creates DBMessageBlock with
conversation_id, position, model_provider, etc.) to include these four reasoning
fields (pull values from block.reasoning_mode, block.reasoning_adaptive,
block.reasoning_budget_tokens, block.reasoning_effort) and ensure the symmetric
DBMessageBlock -> MessageBlock restore path also reads those fields back; apply
the same additions to the other DBMessageBlock creation site referenced in the
review to keep drafts consistent across restarts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@basilisk/config/conversation_profile.py`:
- Around line 259-262: The current validation in ConversationProfile (checks
around reasoning_budget_tokens and reasoning_effort) only rejects values when
model is None; update the validation so that when a model is present you also
(1) ensure reasoning_budget_tokens is either None or a non-negative integer and
raise a ValueError if negative, and (2) ensure reasoning_effort is either None
or one of the allowed enum/choices and raise a ValueError for unknown values;
replace plain error strings with translatable messages using _("...") and add
preceding # Translators: comments for each user-facing message; refer to the
symbols reasoning_budget_tokens, reasoning_effort and the surrounding validation
block in the ConversationProfile class to locate and implement these checks and
message changes.

In `@basilisk/conversation/content_utils.py`:
- Around line 15-19: The regexes _THINK_BLOCK_PATTERN and
_REASONING_BLOCK_PATTERN currently match think/reasoning blocks anywhere in the
text; change them to only match at the beginning (use explicit start-anchor ^ or
use re.match) so only a leading think/reasoning block is treated as metadata,
and when stripping matched content remove only the matched prefix rather than
all occurrences. Update the other analogous patterns/uses (the similar blocks
around the other occurrences referenced) to follow the same
anchored-match-and-strip-prefix behavior, ensuring functions that call these
patterns (e.g., the loaders/parsers that currently call search()) use match() or
anchored regexes and only drop the matched leading slice.
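The anchored-match-and-strip-prefix behavior asked for above can be sketched as follows (`_THINK_BLOCK_PATTERN` is the PR's name; this particular regex and helper are assumptions, not the project's code):

```python
import re

# Anchor with ^ so only a *leading* <think> block is treated as reasoning
# metadata; <think> occurrences later in the text are left untouched.
_THINK_BLOCK_PATTERN = re.compile(r"^<think>(.*?)</think>\s*", re.DOTALL)

def split_leading_think(text):
    m = _THINK_BLOCK_PATTERN.match(text)  # match() only succeeds at position 0
    if m is None:
        return None, text
    return m.group(1), text[m.end():]  # strip only the matched prefix
```

Using `match()` (or an explicit `^` anchor with no `re.MULTILINE`) instead of `search()` is the key change: `search()` would also fire on a think block quoted mid-document.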

In `@basilisk/conversation/conversation_model.py`:
- Around line 149-153: The fields reasoning_budget_tokens and reasoning_effort
on the Pydantic model allow invalid values; change reasoning_budget_tokens to a
positive-integer constrained type (e.g., Pydantic PositiveInt or conint(gt=0))
so zero/negative values are rejected, and constrain reasoning_effort to a finite
set of allowed values (e.g., a Literal["low","medium","high"] or a small Enum
like ReasoningEffort with explicit members) so arbitrary strings are rejected;
update the field type annotations and imports accordingly and keep defaults as
None where appropriate so invalid payloads never reach provider calls.

In `@basilisk/conversation/conversation_params.py`:
- Around line 26-71: The current functions get_reasoning_params_from_block,
get_audio_params_from_block, get_reasoning_params_from_profile and
get_audio_params_from_profile silently return None for missing attributes
because they use getattr(..., None); change each to explicitly check for missing
attributes against REASONING_PARAM_KEYS / AUDIO_PARAM_KEYS (e.g., loop over
keys, use hasattr or try getattr without a default), collect any missing keys
and raise a clear error (TypeError/ValueError) listing the missing/renamed
fields so syncing fails fast, otherwise build and return the dict of values;
keep function names and keys (REASONING_PARAM_KEYS, AUDIO_PARAM_KEYS) to locate
the code.

In `@basilisk/presenters/base_conversation_presenter.py`:
- Around line 158-182: After the early return when model or engine is None,
remove redundant null checks: set has_model = advanced_mode (no "model is not
None"), keep
state.temperature_visible/top_p_visible/max_tokens_visible/stream_visible using
has_model, and remove the redundant "and engine is not None" from the
controls_visible expression; you can still call
engine.model_supports_web_search(model) and engine.get_reasoning_ui_spec(model)
because engine is guaranteed non-null by the early return. Ensure references
updated: ParameterVisibilityState, has_model, state.temperature_visible,
state.top_p_visible, state.max_tokens_visible, state.stream_visible,
state.web_search_visible, reasoning_spec, and controls_visible.

In `@basilisk/presenters/edit_block_presenter.py`:
- Around line 272-299: The _on_non_stream_finish callback mutates UI and can run
after the dialog is torn down; make the presenter class inherit
DestroyGuardMixin and protect the callback by applying the _guard_destroying
guard to _on_non_stream_finish (use the mixin's _guard_destroying decorator so
the method returns early if the presenter is destroyed), and add any needed
import for DestroyGuardMixin; ensure the same pattern is used for other
completion callbacks in this presenter.
- Around line 219-223: When parsing editor text into reasoning and content,
avoid clearing stored reasoning when the UI is hiding reasoning: call
split_reasoning_and_content_from_display(self.view.response_txt.GetValue()) as
before, but if it returns an empty or None reasoning and the presenter currently
has self.block.response.reasoning set and the view is in the "hide reasoning"
state (e.g., self.view.show_reasoning_blocks is False or equivalent), keep the
existing self.block.response.reasoning instead of overwriting it; then update
self.block.response via self.block.response.model_copy(update={"reasoning":
preserved_reasoning, "content": content}).
- Around line 122-123: The reasoning/audio params are being derived from the
pre-adjustment view state; call or use the compatibility-adjusted model from
ensure_model_compatibility() before resolving options so indices map to the
correct provider values. Specifically, after ensure_model_compatibility()
returns/sets the adjusted model (the value it returns or self.model), pass that
adjusted model into get_reasoning_params_from_view and
get_audio_params_from_view (or translate the selected indices against the
adjusted model's option lists) so the chosen effort/voice values match the
compatibility-adjusted model.

In `@basilisk/provider_engine/base_engine.py`:
- Around line 104-150: The model filter isn't excluding new runtime request keys
(e.g., reasoning, audio, web-search mode), so add those keys to the set returned
by _get_filterable_params (extend _FILTERABLE_PARAMS) so
_filter_params_for_model will drop them when model.supported_parameters doesn't
include them; update _FILTERABLE_PARAMS to include the new keys (e.g.,
"reasoning", "audio", "web_search" or the exact runtime param names used
elsewhere) and/or override _get_filterable_params in provider subclasses to
include provider-specific runtime params, ensuring _filter_params_for_model will
properly filter them.

In `@basilisk/provider_engine/dynamic_model_loader.py`:
- Around line 55-58: Replace the Python 2 style exception clauses "except
TypeError, ValueError:" with the Python 3 compatible form "except (TypeError,
ValueError):" in this module; specifically update the four occurrences found in
dynamic_model_loader.py (the try block that does "return int(val)" and the three
other try/except blocks later in the file) so they use parenthesized exception
tuples to avoid the SyntaxError and allow the module to import.
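For reference, the Python 3 form the fix calls for (the `parse_int` helper is illustrative; the point is the parenthesized exception tuple):

```python
# "except TypeError, ValueError:" is a SyntaxError in Python 3; multiple
# exception types must be written as a parenthesized tuple.
def parse_int(val):
    try:
        return int(val)
    except (TypeError, ValueError):
        return None
```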

In `@basilisk/provider_engine/ollama_engine.py`:
- Around line 151-153: The public method completion_response_with_stream lacks
explicit type hints; update its signature to include precise types for the
stream parameter and the return value to prevent interface drift (e.g., annotate
stream as an AsyncIterator[bytes] or Iterable[str]/typing.TextIO depending on
implementation and return type as an AsyncGenerator[MessageBlock, None] or
Iterator[MessageBlock] as appropriate). Modify the definition of
completion_response_with_stream (and update any imports) so it uses these
explicit typing constructs and ensure new_block remains typed as MessageBlock to
satisfy the public API typing guideline.

In `@basilisk/provider_engine/openai_engine.py`:
- Around line 77-80: The advertised supported audio MIME list in
openai_engine.py includes "audio/mp4" and "audio/webm" but
_to_chat_content_part() relies on _audio_mime_to_format() which only maps to
MP3/WAV and will throw for those types; remove "audio/mp4" and "audio/webm" from
the MIME arrays so only formats that _audio_mime_to_format() can convert (e.g.,
"audio/mpeg", "audio/wav") are advertised, and make the same removal in the
second occurrence later in the file to keep both capability checks consistent
until encoding support is implemented; reference the MIME list definition and
the functions _to_chat_content_part() and _audio_mime_to_format() when making
the change.
- Around line 344-346: The stream handler in openai_engine.py currently ignores
chunks with empty choices (the final usage-only chunk) because of "if not
chunk.choices: continue", which discards token usage; change the logic in the
streaming loop that processes chunks (around the handler that inspects
chunk.choices and appends tokens to new_block) to instead detect when a chunk
has no choices but contains a "usage" field, extract that usage info and apply
it to the corresponding Message/Block (e.g., update new_block.usage or call the
same usage-aggregation method used elsewhere), and only continue when the chunk
truly has nothing relevant; keep existing behavior for normal choice chunks so
streaming text still accumulates as before. Ensure this handles both the final
empty-choices usage chunk and any other chunks that include a "usage" key.
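A sketch of that handling, using dict-shaped chunks for illustration (the real objects are OpenAI SDK `ChatCompletionChunk` instances, and the accumulator names here are assumptions):

```python
# Keep the final usage-only chunk (sent when stream_options.include_usage
# is set) instead of skipping every chunk whose `choices` list is empty.
def accumulate_stream(chunks):
    text_parts, usage = [], None
    for chunk in chunks:
        if not chunk.get("choices"):
            if chunk.get("usage"):
                usage = chunk["usage"]  # usage-only chunk: record token counts
            continue
        delta = chunk["choices"][0].get("delta") or {}
        if delta.get("content"):
            text_parts.append(delta["content"])
    return "".join(text_parts), usage
```

Normal text chunks accumulate exactly as before; only the empty-choices case gains a usage check before `continue`.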

In `@basilisk/provider_engine/provider_ui_spec.py`:
- Line 29: Wrap the user-facing UI label in the i18n translator by changing the
hardcoded effort_label constant to call _("Reasoning effort:") and add a
translators comment above it; specifically update the effort_label variable in
provider_ui_spec.py to use _("...") and add a "# Translators: Label shown next
to the reasoning effort meter" (or equivalent context) comment immediately above
the declaration so gettext tooling can pick it up.

In `@basilisk/provider_engine/responses_api_engine.py`:
- Around line 153-156: The shared Responses API builder incorrectly sets
params["max_tokens"]; change it to set params["max_output_tokens"] so the call
to client.responses.create() uses the correct OpenAI parameter name—update the
assignment where new_block.max_tokens is checked (the code that currently does
params["max_tokens"] = new_block.max_tokens) to params["max_output_tokens"] =
new_block.max_tokens so both OpenAIEngine and XAIEngine inherit the correct
behavior.

In `@basilisk/services/attachment_service.py`:
- Around line 228-236: The has_documents boolean check in attachment_service.py
contains a redundant exclusion: inside the any(...) comprehension that inspects
attachments and their a.mime_type, remove the unnecessary `and not
a.mime_type.startswith("image/")` clause because MIME types starting with
"image/" cannot simultaneously start with "application/" or "text/"; update the
expression used to compute has_documents (the any over attachments/mime_type) to
only check for non-None mime_type and startswith("application/") or
startswith("text/") and keep behavior otherwise unchanged.
- Around line 237-242: The translatable error string returned when has_images is
true but current_model.vision is false (in attachment_service.py around the
block using has_images, current_model, engine.models and vision_models) needs a
translators' context comment; add a "# Translators: ..." comment immediately
above the _(...) string literal to provide context for translators about
selecting a vision model, then keep the existing return False, _( ... ) % ",
".join(vision_models) unchanged.
- Around line 256-264: The inline conditional building the error message using
doc_models is complex and missing a translators comment; refactor by computing
the message in a separate variable (e.g., message) and then return False,
message, and add a "# Translators:" comment immediately above each translatable
string to give context (one comment explaining that the first string includes a
comma-separated list of document-capable model names inserted with %s, and one
for the fallback string). Use the existing doc_models list variable and the same
"%s" interpolation ("%s" % ", ".join(doc_models)) to insert the names.
- Around line 243-250: The code returns a translated message using a nested
ternary without translator comments; update the block where has_audio and
current_model.audio are checked (around variables has_audio,
current_model.audio, audio_models and engine.models) to: build audio_models
list, then use a simple if/else to choose the message instead of an inline
conditional, and place a "# Translators:" comment immediately above each
translatable string passed to _() describing context (e.g. that it's shown when
a model lacks audio support and optionally lists audio-capable models); ensure
the final return remains return False, _(...) with the selected string and
joined audio_models where appropriate.

In `@basilisk/services/conversation_service.py`:
- Around line 255-260: The streaming loop in conversation_service.py assumes
each yielded chunk from engine.completion_response_with_stream(response,
new_block=new_block) is a 2-tuple and unpacks it directly, which can raise on
malformed/legacy chunks; update the loop to defensively validate the chunk shape
before unpacking (e.g., check isinstance(chunk, (list,tuple)) and len(chunk) >=
2), extract chunk_type and chunk_data only after validation, and only append to
content_parts when chunk_type == "content" and isinstance(chunk_data, str),
while safely skipping or logging unexpected chunk formats to avoid breaking
title generation.
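The defensive unpacking described above might look like this (names are illustrative, not the service's actual code):

```python
# Validate chunk shape before unpacking so one malformed/legacy chunk
# cannot break title generation; unexpected shapes are logged and skipped.
def collect_content(chunks, log=lambda msg: None):
    content_parts = []
    for chunk in chunks:
        if not isinstance(chunk, (list, tuple)) or len(chunk) < 2:
            log(f"skipping malformed chunk: {chunk!r}")
            continue
        chunk_type, chunk_data = chunk[0], chunk[1]
        if chunk_type == "content" and isinstance(chunk_data, str):
            content_parts.append(chunk_data)
    return "".join(content_parts)
```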

In `@basilisk/views/base_conversation.py`:
- Around line 490-492: The method get_effective_show_reasoning_blocks currently
hardcodes True so the new show_reasoning_blocks preference can't disable
reasoning output; update get_effective_show_reasoning_blocks to return the
actual preference instead of always True (e.g., read the attribute or setting
like self.show_reasoning_blocks or via the existing preference accessor such as
self.get_pref("show_reasoning_blocks", True) or
self.conversation.show_reasoning_blocks) so the flag added in this PR
(show_reasoning_blocks) can hide reasoning blocks when set False.
- Around line 698-706: The except clause using "except IndexError, TypeError:"
is Python2 syntax and must be changed to Python3 multi-exception form; update
the exception handler around the selection logic in the block that uses
self.reasoning_effort_choice.GetSelection() and state.effort_options (inside the
if guarded by state.reasoning_effort_visible and state.effort_display) to catch
both exceptions using a parenthesized tuple (e.g., except (IndexError,
TypeError):) so old_val falls back to state.effort_options[-1] on either error.

In `@basilisk/views/conversation_tab.py`:
- Around line 485-487: The draft restore routine no longer restores the
tools/search toggle; update the sequence in _restore_draft_block to also reapply
web_search_mode by either adding a call like
self._restore_draft_block_web_search_mode(draft_block) after
_restore_draft_block_reasoning or by having
_restore_draft_block_params/_restore_draft_block_reasoning set
draft_block.web_search_mode; implement a helper named
_restore_draft_block_web_search_mode that reads draft_block.web_search_mode and
sets the corresponding attribute on the conversation/block UI state so reopening
a saved draft preserves the web search mode.
- Around line 525-535: The code currently maps an unknown saved
draft_block.reasoning_effort to the last option; instead, only change the UI
selection if the saved value is supported by the provider spec: retrieve options
via self.current_engine.get_reasoning_ui_spec(self.current_model), compute val =
draft_block.reasoning_effort.lower(), and if val is in options call
self.reasoning_effort_choice.SetSelection(options.index(val)); otherwise leave
the existing/default selection untouched (do not set selection to
len(options)-1).
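The selection rule can be sketched as a pure function (names are illustrative; the real code drives `self.reasoning_effort_choice`):

```python
def select_saved_effort(saved_effort, options, current_index):
    """Return the choice index to use: the saved value's index when the
    provider spec supports it, otherwise the existing selection."""
    if saved_effort is None:
        return current_index
    val = saved_effort.lower()
    if val in options:
        return options.index(val)
    # Unknown saved value: leave the existing/default selection untouched
    # rather than mapping it to the last option.
    return current_index
```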

In `@basilisk/views/edit_block_dialog.py`:
- Around line 228-248: The _load_account_and_model method currently swallows all
exceptions with a bare except Exception and logger.debug; update it to catch
specific expected errors (e.g., AttributeError, KeyError, StopIteration, and any
known errors from config.accounts().get_accounts_by_provider or
engine.get_model) so genuine bugs aren’t hidden, and for any unexpected
exception either re-raise or log at a higher level (logger.warning or
logger.error) including exc_info=True; keep a small fallback except Exception
only to add context before re-raising if absolutely needed.
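A sketch of the narrower handling, with the config/engine lookups injected as plain callables (an assumption for illustration; the real method calls `config.accounts().get_accounts_by_provider` and `engine.get_model`):

```python
import logging

logger = logging.getLogger(__name__)


def load_account_and_model(get_accounts, get_model, provider, model_id):
    """Catch only expected lookup failures; surface anything else."""
    try:
        account = get_accounts(provider)[0]
        return account, get_model(model_id)
    except (AttributeError, KeyError, IndexError, StopIteration):
        # Expected "not configured / not found" cases: quiet fallback.
        logger.debug("account/model lookup failed", exc_info=True)
        return None, None
    except Exception:
        # Unexpected: log loudly and re-raise instead of hiding the bug.
        logger.error("unexpected error loading account/model", exc_info=True)
        raise
```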

In `@basilisk/views/history_msg_text_ctrl.py`:
- Around line 212-216: Docstring says "If None, use config default" but the code
sets show_reasoning_blocks = True; change the None handling to read the actual
config default instead of hardcoding True. Replace the line "if
show_reasoning_blocks is None: show_reasoning_blocks = True" with code that
reads the default from your settings (e.g., show_reasoning_blocks =
self.config.get("show_reasoning_blocks") or similar existing config accessor)
and fall back to True only if the config value is missing; keep references to
the show_reasoning_blocks variable and update any docstring if you instead
choose the alternative.
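The None-handling can be sketched like this, treating the config as a plain mapping (the real accessor may differ):

```python
def resolve_show_reasoning_blocks(explicit, config):
    """Honour an explicit value; otherwise read the config default and
    fall back to True only when the config key is missing."""
    if explicit is not None:
        return explicit
    value = config.get("show_reasoning_blocks")
    return True if value is None else value
```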

In `@tests/conversation/test_model.py`:
- Around line 53-67: Update the test_message_block_reasoning_fields test to also
set web_search_mode=True on the MessageBlock and assert that
block.web_search_mode is True; locate the test function
(test_message_block_reasoning_fields) creating MessageBlock and add the
web_search_mode=True argument to its constructor and a corresponding assertion
for the web_search_mode attribute to ensure the regression is covered.

---

Outside diff comments:
In `@basilisk/conversation/database/manager.py`:
- Around line 388-401: The DBMessageBlock instantiation currently only persists
web_search_mode so the new MessageBlock reasoning fields (reasoning_mode,
reasoning_adaptive, reasoning_budget_tokens, reasoning_effort) are lost; update
the DBMessageBlock construction in manager.py (the block that creates
DBMessageBlock with conversation_id, position, model_provider, etc.) to include
these four reasoning fields (pull values from block.reasoning_mode,
block.reasoning_adaptive, block.reasoning_budget_tokens, block.reasoning_effort)
and ensure the symmetric DBMessageBlock -> MessageBlock restore path also reads
those fields back; apply the same additions to the other DBMessageBlock creation
site referenced in the review to keep drafts consistent across restarts.
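The save/restore symmetry the comment asks for can be demonstrated with a stdlib-only round trip (the real code goes through the SQLAlchemy models, not raw SQL; table and column names mirror the review's field names):

```python
import sqlite3


def save_block(conn, block):
    # Persist all four reasoning fields, not just web_search_mode.
    conn.execute(
        "INSERT INTO message_blocks (reasoning_mode, reasoning_adaptive,"
        " reasoning_budget_tokens, reasoning_effort) VALUES (?, ?, ?, ?)",
        (
            block["reasoning_mode"],
            block["reasoning_adaptive"],
            block["reasoning_budget_tokens"],
            block["reasoning_effort"],
        ),
    )


def load_block(conn):
    # Symmetric restore path: read the same four fields back.
    row = conn.execute(
        "SELECT reasoning_mode, reasoning_adaptive,"
        " reasoning_budget_tokens, reasoning_effort FROM message_blocks"
    ).fetchone()
    return {
        "reasoning_mode": bool(row[0]),
        "reasoning_adaptive": bool(row[1]),
        "reasoning_budget_tokens": row[2],
        "reasoning_effort": row[3],
    }
```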

In `@basilisk/conversation/database/models.py`:
- Around line 81-123: DBMessageBlock is missing ORM fields for reasoning
settings so those values are lost; add mapped_column attributes to the
DBMessageBlock class for reasoning_mode, reasoning_budget_tokens,
reasoning_effort, and reasoning_adaptive (use the same names) with appropriate
SQL types and defaults to match the in-memory MessageBlock shape (e.g.,
reasoning_mode: Mapped[str | None] = mapped_column(default=None),
reasoning_budget_tokens: Mapped[int | None] = mapped_column(default=None),
reasoning_effort: Mapped[str | None] = mapped_column(default=None),
reasoning_adaptive: Mapped[bool] = mapped_column(default=False)); ensure these
new fields are declared alongside the other mapped columns in the DBMessageBlock
class so they are persisted and loaded with the model.

In `@basilisk/provider_engine/deepseek_engine.py`:
- Around line 69-90: The stream handling in deepseek_engine.py leaves an open
reasoning fence if the for-loop over stream ends while
reasoning_content_tag_sent is True; update the generator in the function that
iterates "for chunk in stream" (where reasoning_content_tag_sent is tracked) to
emit a closing fence when the stream completes by checking
reasoning_content_tag_sent after the loop and yielding ("content", "\n```") so
split_reasoning_and_content() receives a properly-terminated reasoning block.
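The fence-closing pattern can be sketched as a generator over a simplified delta stream (`chunks` of `("reasoning", text)` / `("content", text)` pairs stands in for the provider stream):

```python
def stream_with_closed_fence(chunks):
    """Yield ("content", text) pairs, closing any reasoning fence that is
    still open when the provider stream ends."""
    reasoning_content_tag_sent = False
    for kind, text in chunks:
        if kind == "reasoning":
            if not reasoning_content_tag_sent:
                yield ("content", "```think\n")
                reasoning_content_tag_sent = True
            yield ("content", text)
        else:
            if reasoning_content_tag_sent:
                yield ("content", "\n```\n")
                reasoning_content_tag_sent = False
            yield ("content", text)
    # Stream ended while still inside the fence: emit the closing fence so
    # split_reasoning_and_content() sees a properly terminated block.
    if reasoning_content_tag_sent:
        yield ("content", "\n```")
```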

In `@basilisk/provider_engine/openrouter_engine.py`:
- Around line 120-123: The code accesses
model.get("top_provider").get("max_completion_tokens") which will raise
AttributeError if top_provider is None; update the access in
openrouter_engine.py to safely handle a missing top_provider by using a fallback
empty dict or checking for truthiness first (e.g., assign top_provider =
model.get("top_provider") or {} and then read
top_provider.get("max_completion_tokens") or -1) so the expression always
returns an int and never calls .get on None.
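The suggested fallback in isolation (assuming `model` is the parsed metadata dict):

```python
def max_completion_tokens(model):
    """Return an int even when top_provider is missing or null."""
    top_provider = model.get("top_provider") or {}
    return top_provider.get("max_completion_tokens") or -1
```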

In `@tests/test_conversation_profile.py`:
- Around line 333-362: Add tests that assert validation rejects invalid values
for the new reasoning fields: when constructing ConversationProfile (the
constructor) pass a negative reasoning_budget_tokens (e.g., -1) and assert it
raises ValueError with a message mentioning reasoning_budget_tokens, and pass an
unsupported reasoning_effort string (e.g., "extreme") and assert it raises
ValueError with a message mentioning reasoning_effort; add these cases near the
existing validation tests that check behavior without a model and with a model
so they exercise the same code paths in ConversationProfile.__init__ /
validation logic.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 4c6a07df-64a0-4cfc-ba5e-26d92814d897

📥 Commits

Reviewing files that changed from the base of the PR and between 1860e0f and df6f76e.

⛔ Files ignored due to path filters (3)
  • basilisk/res/alembic/versions/002_add_audio_to_messages.py is excluded by !basilisk/res/**
  • basilisk/res/alembic/versions/003_add_reasoning_to_messages.py is excluded by !basilisk/res/**
  • basilisk/res/alembic/versions/004_add_web_search_mode_to_message_blocks.py is excluded by !basilisk/res/**
📒 Files selected for processing (47)
  • basilisk/completion_handler.py
  • basilisk/config/conversation_profile.py
  • basilisk/config/main_config.py
  • basilisk/conversation/content_utils.py
  • basilisk/conversation/conversation_model.py
  • basilisk/conversation/conversation_params.py
  • basilisk/conversation/database/manager.py
  • basilisk/conversation/database/models.py
  • basilisk/presenters/attachment_panel_presenter.py
  • basilisk/presenters/base_conversation_presenter.py
  • basilisk/presenters/conversation_presenter.py
  • basilisk/presenters/conversation_profile_presenter.py
  • basilisk/presenters/edit_block_presenter.py
  • basilisk/presenters/reasoning_params_helper.py
  • basilisk/presenters/view_utils.py
  • basilisk/provider_ai_model.py
  • basilisk/provider_engine/anthropic_engine.py
  • basilisk/provider_engine/base_engine.py
  • basilisk/provider_engine/deepseek_engine.py
  • basilisk/provider_engine/dynamic_model_loader.py
  • basilisk/provider_engine/gemini_engine.py
  • basilisk/provider_engine/legacy_openai_engine.py
  • basilisk/provider_engine/mistralai_engine.py
  • basilisk/provider_engine/ollama_engine.py
  • basilisk/provider_engine/openai_engine.py
  • basilisk/provider_engine/openrouter_engine.py
  • basilisk/provider_engine/provider_ui_spec.py
  • basilisk/provider_engine/responses_api_engine.py
  • basilisk/provider_engine/xai_engine.py
  • basilisk/services/attachment_service.py
  • basilisk/services/conversation_service.py
  • basilisk/views/base_conversation.py
  • basilisk/views/conversation_profile_dialog.py
  • basilisk/views/conversation_tab.py
  • basilisk/views/edit_block_dialog.py
  • basilisk/views/history_msg_text_ctrl.py
  • tests/conversation/test_content_utils.py
  • tests/conversation/test_model.py
  • tests/presenters/test_attachment_panel_presenter.py
  • tests/presenters/test_conversation_presenter.py
  • tests/provider_engine/__init__.py
  • tests/provider_engine/test_base_engine.py
  • tests/provider_engine/test_dynamic_model_loader.py
  • tests/provider_engine/test_reasoning_params.py
  • tests/services/test_attachment_service.py
  • tests/services/test_conversation_service.py
  • tests/test_conversation_profile.py

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@basilisk/conversation/database/manager.py`:
- Line 790: The code currently runs split_reasoning_and_content on every message
(reasoning, content = split_reasoning_and_content(db_msg.content)), which can
mutate user messages containing literal ```think blocks; change it to only parse
assistant messages by checking the message role before calling the parser (e.g.,
if db_msg.role == "assistant": reasoning, content =
split_reasoning_and_content(db_msg.content) else: reasoning = None; content =
db_msg.content), keeping the same variables (split_reasoning_and_content,
db_msg.content, db_msg.role) so downstream code is unchanged.
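The role guard can be sketched with the parser injected as a callable (an assumption for illustration; the real code calls the module-level `split_reasoning_and_content`):

```python
def parse_message(role, content, split_reasoning_and_content):
    """Only parse assistant messages; user text containing a literal
    ```think block must pass through untouched."""
    if role == "assistant":
        return split_reasoning_and_content(content)
    return None, content
```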

In `@docs/branch-cleanup-review.md`:
- Line 64: Update the phrase "reasoning-storage specific code" to use a
hyphenated compound modifier: change it to "reasoning-storage-specific code" in
the docs line that currently reads "**Clean:** No token-usage or
reasoning-storage specific code" so the compound adjective is grammatically
correct.
- Around line 66-73: Update the "Recommended Actions" and "Complexity Note" so
they are consistent: either remove the suggestion to delete migrations 002 and
003 under the "feat/dynamicModels" bullet (if those migrations were already
removed) or change the "Recommended Actions" to reflect the chosen pragmatic
alternative and explain why; specifically edit the "Recommended Actions"
section's "feat/dynamicModels" line and the "Complexity Note" paragraph so both
consistently reference keeping or removing migrations 002 and 003 and clarify
the final decision regarding reasoning/audio removal from models/manager.
- Around line 22-29: Update the "Model and Manager Contamination" section to
state that DBMessage and Message no longer include the fields reasoning,
audio_data, and audio_format; remove or replace the text claiming these fields
still exist and that splitting requires coordinated changes, and instead note
that the cleanup is complete and no coordinated split is required for these
fields. Refer to DBMessage, Message, and the field names reasoning, audio_data,
and audio_format when making the edit so readers can verify the change.
- Around line 9-20: Rewrite the migration section to remove temporal ambiguity
by explicitly separating completed changes from outstanding tasks: add a
"Completed" bullet listing that feat/dynamicModels no longer contains migrations
002 and 003, and a "Remaining" bullet or "To do" listing the target state and
migration chain (feat/dynamicModels → 004 down_revision=001;
feat/reasoning-storage → 003 down_revision=004;
feat/token-usage-block-properties → 005 down_revision=003; feat/audio-output →
002 down_revision=005 and chain 001→004→003→005→002). Use past tense for
completed items and present/imperative for remaining actions so readers can
clearly see what was done versus what remains.
- Around line 14-20: The migration ordering shown (Migration chain: 001 → 004 →
003 → 005 → 002) is confusing because numbers are non-sequential; update the
document to either renumber the migrations to reflect the actual application
order (e.g., reorder the migration IDs so feat/dynamicModels,
feat/reasoning-storage, feat/token-usage-block-properties, feat/audio-output
appear in sequential numeric order) or add a clear, prominent note above the
list explaining why IDs are intentionally non-sequential (reference the branch
names feat/dynamicModels, feat/reasoning-storage,
feat/token-usage-block-properties, feat/audio-output and the specific migration
IDs 001, 004, 003, 005, 002) so readers understand the rationale.

In `@tests/ipc/test_ipc_architecture.py`:
- Around line 136-146: Replace the hard-coded sleep literals (0.15, 0.08, 0.3)
with UPPER_SNAKE_CASE constants (for example SHORT_DELAY = 0.15,
INTER_SIGNAL_DELAY = 0.08, CALLBACK_WAIT = 0.3) and use those constants in the
test code where time.sleep(...) is called (the block creating FocusSignal and
calling ipc.send_signal and the similar block at 159-169); update imports if
needed and ensure constants are defined at module scope so both tests reference
the same values for easy tuning.
🪄 Autofix (Beta)

❌ Autofix failed (check again to retry)


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 213dd3d9-e553-4deb-b1c7-fa655eac47a0

📥 Commits

Reviewing files that changed from the base of the PR and between df6f76e and 71d4afa.

⛔ Files ignored due to path filters (1)
  • basilisk/res/alembic/versions/004_add_web_search_mode_to_message_blocks.py is excluded by !basilisk/res/**
📒 Files selected for processing (4)
  • basilisk/conversation/database/manager.py
  • basilisk/conversation/database/models.py
  • docs/branch-cleanup-review.md
  • tests/ipc/test_ipc_architecture.py

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@basilisk/provider_engine/openrouter_engine.py`:
- Around line 113-127: The code assumes model["architecture"]["modality"] is
always a string when building ProviderAIModel (see the modality assignment and
ProviderAIModel construction), which could crash if architecture or modality
becomes null; update the extraction to defensively handle missing/null values
(e.g., obtain architecture = model.get("architecture") or {} and set modality =
(architecture.get("modality") or "") or explicitly coerce to string) and then
use "image" in modality, "audio" in modality, "file" in modality when
constructing ProviderAIModel to avoid TypeError if modality is None.
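The defensive extraction as a standalone sketch (flag names are illustrative; the real values feed the `ProviderAIModel` constructor):

```python
def modality_flags(model):
    """Derive capability flags, tolerating null/missing architecture or
    modality in the provider metadata."""
    architecture = model.get("architecture") or {}
    modality = architecture.get("modality") or ""
    return {
        "vision": "image" in modality,
        "audio": "audio" in modality,
        "document": "file" in modality,
    }
```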

In `@basilisk/provider_engine/responses_api_engine.py`:
- Around line 157-162: The code mutates the caller's kwargs by using
kwargs.pop("web_search_mode"); change this to read web_search_mode =
kwargs.get("web_search_mode", False) so you do not modify the dict in get_params
(or the function containing the shown block), and keep the existing checks using
model_supports_web_search(model) and get_web_search_tool_definitions(model) to
populate params["tools"] when appropriate; then in completion() explicitly
filter out "web_search_mode" before calling params.update(kwargs) (e.g., create
a shallow copy or a filtered dict) so the original kwargs remains unchanged.
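The non-mutating read can be sketched like this, with the capability checks injected as callables (an assumption; the real method calls `model_supports_web_search` and `get_web_search_tool_definitions` directly):

```python
def get_params(model_supports_web_search, get_tools, model, kwargs):
    """Build params from kwargs without mutating the caller's dict; the
    caller filters "web_search_mode" out before params.update(kwargs)."""
    params = {}
    # kwargs.get instead of kwargs.pop: the caller's dict stays intact.
    web_search_mode = kwargs.get("web_search_mode", False)
    if web_search_mode and model_supports_web_search(model):
        params["tools"] = get_tools(model)
    return params
```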
- Around line 189-215: Add the missing return type annotation to
completion_response_with_stream so it matches the subclass (Generator[tuple[str,
Any], None, None]) and refine event handling by explicitly handling/ignoring
normal lifecycle events instead of logging them as unexpected: keep the current
branches for ResponseReasoningTextDeltaEvent, ResponseTextDeltaEvent and
ResponseCompletedEvent, add explicit no-op branches for ResponseCreatedEvent and
ResponseInProgressEvent (and any other known lifecycle events in this module)
that just continue, and only log a warning in the final else for truly unknown
event types; reference completion_response_with_stream and the event classes
ResponseReasoningTextDeltaEvent, ResponseTextDeltaEvent, ResponseCompletedEvent,
ResponseCreatedEvent, ResponseInProgressEvent when making the changes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 9c98cb5a-dcbd-41fd-ba62-ac433e14dc03

📥 Commits

Reviewing files that changed from the base of the PR and between 71d4afa and 0dc11ba.

📒 Files selected for processing (3)
  • basilisk/provider_engine/deepseek_engine.py
  • basilisk/provider_engine/openrouter_engine.py
  • basilisk/provider_engine/responses_api_engine.py

- Replace hardcoded model lists with JSON from model-metadata URLs
- Add DynamicModelLoader; engines fetch models from provider metadata
- Add Responses API engine base; refactor OpenAI/Anthropic/xAI/Mistral/Gemini/DeepSeek
- Add image_output/audio model metadata (ProviderAIModel) for capability detection
- Block image-generation models in UI
- Add reasoning chunk handling and content_utils for legacy ```think
- Add web_search_mode migration and parameter visibility state
- Add reasoning_params_helper for view-to-block param extraction
@AAClause AAClause force-pushed the feat/dynamicModels branch from 0dc11ba to e34b0e6 Compare March 15, 2026 06:25
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
basilisk/presenters/edit_block_presenter.py (1)

151-158: ⚠️ Potential issue | 🟠 Major

Forward web_search_mode when regenerating.

The temp block stores web_search_mode, but engines like AnthropicEngine.completion() read that switch from kwargs, not from new_block. Because this start_completion() call omits it, regenerate silently disables web-search tooling even when the control is enabled.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/edit_block_presenter.py` around lines 151 - 158, The
start_completion call in edit_block_presenter.py omits the
temp_block.web_search_mode flag so engines (e.g., AnthropicEngine.completion)
that expect web_search_mode in kwargs will have web search tooling disabled on
regenerate; update the completion_handler.start_completion invocation to pass
web_search_mode=temp_block.web_search_mode (or forward kwargs including
web_search_mode) alongside the existing parameters (engine, system_message,
conversation, new_block, stream, stop_block_index) so the engine receives the
web_search_mode switch.
basilisk/provider_engine/legacy_openai_engine.py (1)

217-223: ⚠️ Potential issue | 🟠 Major

The completion_response_with_stream method drops the final usage-only chunk.

When stream_options.include_usage is enabled, OpenAI's final streamed chunk contains usage data with choices=[]. The current code skips this chunk entirely with the continue statement at line 219, so the token usage is never extracted or set on new_block. The docstring claims this usage is captured, but the implementation does not extract it.

To fix, check if chunk.usage exists after the if not chunk.choices block and persist it to new_block before continuing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/legacy_openai_engine.py` around lines 217 - 223, In
completion_response_with_stream, the loop over stream currently skips chunks
with empty chunk.choices (dropping the final usage-only chunk); instead after
the "if not chunk.choices" check, inspect chunk.usage and, when present, set the
usage onto the response/new_block (the same object you populate later) before
continuing so token usage from the final OpenAI stream chunk is captured; update
references to stream, chunk, and new_block accordingly so usage is persisted
even when choices is empty.
basilisk/provider_engine/openai_engine.py (1)

426-430: ⚠️ Potential issue | 🟡 Minor

Use a context manager for file handling.

The file is opened without a with statement. If transcriptions.create() raises an exception, the file handle will leak.

Suggested fix
-		file = open(audio_file_path, "rb")
-		transcription = self.client.audio.transcriptions.create(
-			model="whisper-1", file=file, response_format=response_format
-		)
-		file.close()
-		return transcription
+		with open(audio_file_path, "rb") as file:
+			return self.client.audio.transcriptions.create(
+				model="whisper-1", file=file, response_format=response_format
+			)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/openai_engine.py` around lines 426 - 430, Replace
the manual open/close with a context manager to avoid leaking the file handle:
in openai_engine.py change the code around audio_file_path and transcription
creation (the block that calls self.client.audio.transcriptions.create with
model="whisper-1" and response_format=response_format) to use with
open(audio_file_path, "rb") as file: and call
self.client.audio.transcriptions.create(...) inside that block so the file is
automatically closed even if transcriptions.create raises.
♻️ Duplicate comments (12)
tests/conversation/test_model.py (1)

53-67: ⚠️ Potential issue | 🟡 Minor

Extend this test to cover web_search_mode as well.

The reasoning-field test is good, but it still misses the newly added sibling flag and can miss regressions there.

✅ Suggested test extension
 		block = MessageBlock(
 			request=user_message,
 			model=ai_model,
 			reasoning_mode=True,
 			reasoning_budget_tokens=16000,
 			reasoning_effort="medium",
 			reasoning_adaptive=False,
+			web_search_mode=True,
 		)
@@
 		assert block.reasoning_effort == "medium"
 		assert block.reasoning_adaptive is False
+		assert block.web_search_mode is True
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/conversation/test_model.py` around lines 53 - 67, Extend the existing
test_message_block_reasoning_fields to also instantiate MessageBlock with the
sibling web-search flags and assert their values; specifically, when creating
the MessageBlock (in test_message_block_reasoning_fields) pass
web_search_mode=True along with web_search_budget_tokens (e.g., 8000),
web_search_effort (e.g., "low"), and web_search_adaptive=True, then add
assertions that block.web_search_mode is True and that
block.web_search_budget_tokens, block.web_search_effort, and
block.web_search_adaptive equal the values provided to detect regressions in the
web search fields.
basilisk/conversation/content_utils.py (1)

15-19: ⚠️ Potential issue | 🟠 Major

Only parse a leading reasoning block and strip just that prefix.

Current search() + sub() behavior can capture/remove non-leading think blocks from normal content.

🔧 Proposed fix
-_THINK_BLOCK_PATTERN = re.compile(r"```think\s*\n(.*?)\n```\s*", re.DOTALL)
+_THINK_BLOCK_PATTERN = re.compile(
+	r"^```think\s*\n(.*?)\n```\s*", re.DOTALL
+)
 _REASONING_BLOCK_PATTERN = re.compile(
-	rf"{re.escape(START_BLOCK_REASONING)}\s*\n(.*?)\n{re.escape(END_REASONING)}\s*",
+	rf"^{re.escape(START_BLOCK_REASONING)}\s*\n(.*?)\n{re.escape(END_REASONING)}\s*",
 	re.DOTALL,
 )
@@
-	match = _THINK_BLOCK_PATTERN.search(text)
+	match = _THINK_BLOCK_PATTERN.match(text)
@@
-	content = (_THINK_BLOCK_PATTERN.sub("", text) or "").strip()
+	content = text[match.end() :].strip()
@@
-	match = _REASONING_BLOCK_PATTERN.search(text)
+	match = _REASONING_BLOCK_PATTERN.match(text)
@@
-	content = (_REASONING_BLOCK_PATTERN.sub("", text) or "").strip()
+	content = text[match.end() :].strip()

Also applies to: 36-41, 69-74

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/content_utils.py` around lines 15 - 19, The current
regex and removal logic for _THINK_BLOCK_PATTERN and _REASONING_BLOCK_PATTERN
can match non-leading blocks; update both patterns to require start-of-string
(prefix with ^) and replace .search()/sub() usage with .match() and slicing so
only a leading block is stripped: change _THINK_BLOCK_PATTERN and
_REASONING_BLOCK_PATTERN to use r"^..." anchors, call match =
_THINK_BLOCK_PATTERN.match(text) (and similarly for _REASONING_BLOCK_PATTERN),
and compute content = text[match.end():].strip() instead of using .sub() so only
the leading reasoning/think block prefix is removed.
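The anchoring behaviour in the diff can be shown with a self-contained version of the think-block splitter (a simplified sketch of the real `split_reasoning_and_content`):

```python
import re

# Anchored pattern: only a think block at the very start of the text is
# treated as reasoning; the same fence later in normal content is kept.
_THINK = re.compile(r"^```think\s*\n(.*?)\n```\s*", re.DOTALL)


def split_leading_think(text):
    match = _THINK.match(text)  # match(), not search(): leading only
    if not match:
        return None, text
    # Strip just the matched prefix instead of sub()-ing everywhere.
    return match.group(1), text[match.end():].strip()
```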
basilisk/services/conversation_service.py (1)

255-260: ⚠️ Potential issue | 🟡 Minor

Defensively validate stream chunk shape before unpacking.

This loop assumes every chunk is a 2-item tuple; malformed/legacy chunks can still break title generation.

🛡️ Suggested hardening
 				for chunk in engine.completion_response_with_stream(
 					response, new_block=new_block
 				):
-					chunk_type, chunk_data = chunk
+					if isinstance(chunk, str):
+						content_parts.append(chunk)
+						continue
+					if (
+						not isinstance(chunk, (tuple, list))
+						or len(chunk) < 2
+					):
+						continue
+					chunk_type, chunk_data = chunk[0], chunk[1]
 					if chunk_type == "content" and isinstance(chunk_data, str):
 						content_parts.append(chunk_data)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/services/conversation_service.py` around lines 255 - 260, The loop
over engine.completion_response_with_stream(response, new_block=new_block)
assumes each chunk unpacks to (chunk_type, chunk_data); change it to defensively
validate chunk shape before unpacking by checking that chunk is a sequence
(tuple/list) with length >=2 (or use a try/except around unpacking), skip or log
malformed chunks, and only assign chunk_type and chunk_data and append to
content_parts when chunk_type == "content" and isinstance(chunk_data, str);
reference engine.completion_response_with_stream, variables chunk, chunk_type,
chunk_data, content_parts, response, and new_block when making the change.
basilisk/config/conversation_profile.py (1)

259-263: ⚠️ Potential issue | 🟠 Major

Add bounds validation for reasoning_budget_tokens when a model is set.

This block only validates the no-model case. With ai_model_info present, negative token budgets currently pass through unchecked.

🔧 Proposed validation addition
 	if self.ai_model_info is None:
 		...
 		if self.reasoning_effort is not None:
 			raise ValueError("Reasoning effort must be None without model")
+	if (
+		self.reasoning_budget_tokens is not None
+		and self.reasoning_budget_tokens < 0
+	):
+		raise ValueError("Reasoning budget tokens must be >= 0")
 	return self
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/config/conversation_profile.py` around lines 259 - 263, When
ai_model_info is present, add validation to reject negative
reasoning_budget_tokens: inside the same validation block that checks
self.reasoning_budget_tokens and self.reasoning_effort (the code referencing
ai_model_info), if self.reasoning_budget_tokens is not None ensure it's an int
and self.reasoning_budget_tokens >= 0 (otherwise raise ValueError with a clear
message); keep the existing None checks for the no-model branch intact and apply
this new bounds check alongside those checks for reasoning_effort and
ai_model_info.
basilisk/conversation/conversation_params.py (1)

19-29: ⚠️ Potential issue | 🟠 Major

Fail fast when expected reasoning attributes are missing.

Using getattr(..., None) here can silently hide model/profile shape drift and propagate invalid sync state.

🔧 Proposed hardening
+def _extract_params(source: object, keys: tuple[str, ...]) -> dict[str, Any]:
+	missing = [key for key in keys if not hasattr(source, key)]
+	if missing:
+		raise ValueError(
+			f"Missing expected reasoning params: {', '.join(missing)}"
+		)
+	return {key: getattr(source, key) for key in keys}
+
 def get_reasoning_params_from_block(block: Any) -> dict[str, Any]:
@@
-	return {k: getattr(block, k, None) for k in REASONING_PARAM_KEYS}
+	return _extract_params(block, REASONING_PARAM_KEYS)
@@
 def get_reasoning_params_from_profile(profile: Any) -> dict[str, Any]:
@@
-	return {k: getattr(profile, k, None) for k in REASONING_PARAM_KEYS}
+	return _extract_params(profile, REASONING_PARAM_KEYS)
Based on learnings: Conversation and message handling should use Pydantic models (`Conversation`, `MessageBlock`, `Message`) with proper validation.

Also applies to: 31-40

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/conversation_params.py` around lines 19 - 29,
get_reasoning_params_from_block currently uses getattr(..., None) which hides
missing/shape-drift attributes; change it to validate and fail fast: for each
key in REASONING_PARAM_KEYS call getattr(block, key) (no default) or check
result and if any are missing/None raise a clear ValueError/AttributeError
listing the missing keys and include the block type in the message; prefer using
the Pydantic MessageBlock/Conversation model (validate instance or convert via
.dict()) before extraction so model validation enforces shapes; apply the same
hardening to the other extractor(s) referenced alongside this function.
basilisk/services/attachment_service.py (1)

239-250: ⚠️ Potential issue | 🟡 Minor

Add translator context before these new attachment errors.

These _() strings are all user-facing, but this helper still adds them
without # Translators: comments. Please add context above each message,
especially the %s variants that interpolate compatible model names.

Suggested fix
 		if has_images and not current_model.vision:
 			vision_models = [m.name or m.id for m in engine.models if m.vision]
+			# Translators: Error shown when the selected model cannot accept image attachments.
+			# %s is a comma-separated list of compatible model names.
 			return False, _(
 				"The selected model does not support images. "
 				"Please select a vision model instead (%s)."
 			) % ", ".join(vision_models)

Apply the same pattern to the audio and document branches below.

As per coding guidelines: Use `_("string")` for all user-facing text and use `# Translators:` comments before translatable strings to provide context for translators.

Also applies to: 259-264

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/services/attachment_service.py` around lines 239 - 250, Add
translator context comments above each user-facing translatable string in the
attachment validation branches: place a "# Translators: ..." comment immediately
before the _("The selected model does not support images. ... (%s)") in the
image branch and before both _("The selected model does not support audio.
Please select an audio-capable model instead (%s)") and _("The selected model
does not support audio.") in the audio branch, and do the same for the
document-related messages referenced at 259-264; for the "%s" variants note in
the comment that the placeholder will be replaced with a comma-separated list of
compatible model names (e.g., "comma-separated list of compatible model names").
basilisk/presenters/edit_block_presenter.py (3)

131-148: ⚠️ Potential issue | 🟠 Major

Resolve reasoning params against the compatibility-adjusted model.

ensure_model_compatibility() can replace the selected model, but get_reasoning_params_from_view() is still resolving values from the raw view state here. Pass the adjusted model (and engine) into the helper so provider-specific option mapping matches the model that will actually be requested.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/edit_block_presenter.py` around lines 131 - 148, The
reasoning params are being derived from the raw view instead of the
compatibility-adjusted model; update the call to get_reasoning_params_from_view
so it accepts and uses the adjusted model (and engine if applicable) returned by
ensure_model_compatibility before building temp_block. Specifically, after
ensure_model_compatibility resolves/returns the final model (and engine), call
get_reasoning_params_from_view(self.view, model=model, engine=engine) (or the
equivalent parameter names) so provider-specific mappings match the actual model
used when constructing MessageBlock/AIModelInfo; keep the rest of temp_block
construction (including view_get_web_search_value) unchanged.

223-232: ⚠️ Potential issue | 🟠 Major

Keep structured reasoning in response.reasoning instead of re-embedding it into content.

_on_non_stream_finish() rebuilds the editor text from new_block.response.content, so providers that already populate response.reasoning show no reasoning here. save_block() then serializes edited reasoning back into content while leaving response.reasoning stale. Render from response.reasoning/response.content, and save both fields separately.

Also applies to: 290-297

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/edit_block_presenter.py` around lines 223 - 232, The code
currently rebuilds editor text from new_block.response.content and then
re-parses editor text with split_reasoning_and_content_from_display, which
embeds reasoning back into content and leaves response.reasoning stale; update
_on_non_stream_finish (and the similar block around lines ~290-297) to render
the editor from response.reasoning and response.content separately (i.e., if
response.reasoning exists, prepend the ```think``` section from
response.reasoning, else use response.content), and change save_block() logic to
set both self.block.response = self.block.response.model_copy(update={"content":
content_only, "reasoning": reasoning_only}) based on
split_reasoning_and_content_from_display instead of stuffing reasoning into
content; ensure you use split_reasoning_and_content_from_display only to extract
reasoning/content on save and preserve response.reasoning when rendering.

244-249: ⚠️ Potential issue | 🟠 Major

Guard the async completion callbacks against dialog teardown.

These handlers all mutate controls from CompletionHandler callbacks, but none are wrapped with _guard_destroying. A late completion after the dialog closes can still touch dead widgets. Import _guard_destroying and decorate the UI-touching callbacks.

As per coding guidelines: Use DestroyGuardMixin and decorate presenter callbacks with @_guard_destroying to guard against post-destroy calls.

Also applies to: 251-261, 273-314

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/edit_block_presenter.py` around lines 244 - 249, Import
and use the dialog destruction guard and decorate UI-mutating completion
callbacks: add "from .destroy_guard import _guard_destroying" (or the existing
mixin import) and annotate _on_regenerate_start with `@_guard_destroying`; do the
same for all other CompletionHandler/UI-touching presenter callbacks in this
file (the callbacks in the ranges indicated: the handlers around 251-261 and
273-314). Also ensure the presenter class uses DestroyGuardMixin (or the
appropriate mixin providing _guard_destroying) so the decorator works.
basilisk/provider_engine/dynamic_model_loader.py (1)

55-58: ⚠️ Potential issue | 🔴 Critical

Fix the Python 3 except syntax before this module can import.

except TypeError, ValueError: is a SyntaxError in Python 3 (multiple exception types must be parenthesized), so this new module will fail to import as written. Apply the same fix to all four occurrences.

🐛 Proposed fix
-		except TypeError, ValueError:
+		except (TypeError, ValueError):
Verification script:

#!/bin/bash
python -m py_compile basilisk/provider_engine/dynamic_model_loader.py || true
rg -n 'except TypeError, ValueError:' basilisk/provider_engine/dynamic_model_loader.py

Also applies to: 65-68, 72-75, 134-137

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/dynamic_model_loader.py` around lines 55 - 58, The
module uses Python 2 except syntax "except TypeError, ValueError:" which is a
SyntaxError in Python 3; update each occurrence in dynamic_model_loader.py to
use the tuple form "except (TypeError, ValueError):" (there are four occurrences
around the int(val) conversion blocks mentioned) so the module can import; after
making the changes, run a quick py_compile check to verify the file parses
cleanly.
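For reference, the valid Python 3 form catches both types via a parenthesized tuple:

```python
def to_int_or_none(val):
    """Coerce to int, returning None on bad input (Python 3 tuple except)."""
    try:
        return int(val)
    except (TypeError, ValueError):
        return None
```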
basilisk/presenters/conversation_presenter.py (1)

184-200: ⚠️ Potential issue | 🟠 Major

Resolve reasoning params against the compatibility-adjusted model.

ensure_model_compatibility() can replace the selected model, but get_reasoning_params_from_view() is still resolving values from the raw view state here. Pass the adjusted model (and engine) into the helper so provider-specific option mapping matches the model that will actually be requested.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/conversation_presenter.py` around lines 184 - 200, The
reasoning params are being derived from the raw view state instead of the
compatibility-adjusted model returned by ensure_model_compatibility(); update
the call to get_reasoning_params_from_view to accept and use the adjusted model
(and engine if applicable) so provider-specific option mapping matches the model
that will actually be requested—i.e., after calling
ensure_model_compatibility(), pass that adjusted model (and engine identifier)
into get_reasoning_params_from_view before constructing the MessageBlock.
basilisk/views/base_conversation.py (1)

705-705: ⚠️ Potential issue | 🔴 Critical

Critical: Invalid Python 3 exception syntax causes SyntaxError.

The syntax except IndexError, TypeError: is Python 2 style and will cause a SyntaxError that prevents the module from importing. Multiple exceptions must be wrapped in parentheses in Python 3.

Suggested fix
-			except IndexError, TypeError:
+			except (IndexError, TypeError):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/views/base_conversation.py` at line 705, The except clause using
Python 2 syntax (`except IndexError, TypeError:`) is invalid in Python 3;
replace it with a tuple-based except such as `except (IndexError, TypeError):`
or `except (IndexError, TypeError) as e:` if the exception object is referenced
later. Update the offending clause (the `except IndexError, TypeError:` line) to
use the Python 3 form so the module can import correctly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@basilisk/conversation/database/manager.py`:
- Line 392: Replace the getattr usage for web_search_mode with direct attribute
access: wherever getattr(block, "web_search_mode", False) is used (e.g., in the
code creating/reading MessageBlock fields), change it to block.web_search_mode;
update both occurrences mentioned so the code reads block.web_search_mode (using
the MessageBlock type's declared default) to be more idiomatic and avoid
redundant fallback logic.

In `@basilisk/presenters/conversation_presenter.py`:
- Around line 176-182: The presenter currently calls wx.MessageBox directly (in
conversation_presenter.py within the block that checks getattr(model,
"image_output", False)), which breaks MVP; change this to call a view method
(e.g. self.view.show_error(...) or self.view.show_enhanced_error(...)) passing
the same localized message and title (e.g. _("Image generation models are not
supported yet.") and _("Not Supported")), keep the subsequent return None
behavior, and remove the direct wx dependency/import from the presenter so all
UI/dialog handling is delegated to the view.

In `@basilisk/presenters/edit_block_presenter.py`:
- Around line 118-124: The presenter currently calls wx.MessageBox directly when
getattr(model, "image_output", False) is True; move that UI handling into the
view by replacing the wx.MessageBox call with a call to a view method (e.g.,
view.show_error(...) or view.show_enhanced_error(...)) and remove any wx
usage/imports from the presenter; keep the same control flow (return False)
after invoking the view method and ensure the view method displays the "Image
generation models are not supported yet." / "Not Supported" message and
appropriate icon.

In `@basilisk/presenters/reasoning_params_helper.py`:
- Around line 45-58: The isinstance checks around GetValue() are unnecessary for
wx controls; update the assignments in reasoning_params_helper to directly use
the control return values: set result["reasoning_mode"] =
bool(view.reasoning_mode.GetValue()), for reasoning_adaptive set
result["reasoning_adaptive"] = bool(view.reasoning_adaptive.GetValue()) when the
attribute exists, and for reasoning_budget_tokens set
result["reasoning_budget_tokens"] = int(view.reasoning_budget_spin.GetValue())
when that attribute exists; keep the hasattr guards so missing controls yield
the same defaults (False or None) but remove the redundant isinstance(...)
checks.
- Around line 76-82: The current logic falls back to options[-1] when effort_idx
is invalid, which can silently change the user's saved setting; update the block
in reasoning_params_helper.py so result["reasoning_effort"] is only set when
effort_idx is an int and 0 <= effort_idx < len(options) (i.e., remove the elif
fallback to the last option), use effort_val only from that valid branch, and
leave result["reasoning_effort"] unchanged otherwise; reference the existing
symbols effort_idx, options, effort_val, and result["reasoning_effort"] when
making this change.
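The guarded lookup can be sketched like this; the names follow the prompt above, and the `current` fallback parameter is an assumption:

```python
def resolve_reasoning_effort(effort_idx, options, current=None):
    # Only accept a selection index that is actually within range;
    # otherwise leave the saved value untouched instead of silently
    # falling back to the last option.
    if isinstance(effort_idx, int) and 0 <= effort_idx < len(options):
        return options[effort_idx]
    return current
```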

In `@basilisk/provider_engine/dynamic_model_loader.py`:
- Around line 183-190: The ProviderAIModel instantiation in
dynamic_model_loader.py is hardcoding max_temperature=2.0 which overrides the
dataclass default; remove the hardcoded max_temperature from the
ProviderAIModel(...) call so the dataclass default (1.0) is used, and if any
provider needs a different cap apply those adjustments inside the existing
_postprocess_models(models, provider_name, ...) function (or set
provider-specific limits there) instead of in the generic loader.

In `@basilisk/provider_engine/gemini_engine.py`:
- Around line 98-100: The method model_supports_web_search in gemini_engine.py
is a redundant override that only calls super() and should be removed to avoid
dead code; delete the model_supports_web_search method (or, if you intend to
keep it as explicit documentation/placeholder, add a comment indicating that
purpose) so the class relies on the base implementation and reduce maintenance
overhead.
- Line 87: The constant MODELS_JSON_URL in gemini_engine.py is longer than 80
characters; split the string across two (or more) adjacent string literals
inside parentheses (like the xAI engine does) so Python will implicitly
concatenate them and preserve the exact URL value, e.g., break after a logical
path boundary (".../SigmaNight/" and "model-metadata/...") and assign the
concatenated result back to MODELS_JSON_URL.
- Around line 306-325: The completion_response_with_stream method currently only
yields ("content", chunk.text); update it to iterate chunk.parts on each
GenerateContentResponse (from completion_response_with_stream) and for each
part: if part.thought is true yield ("reasoning", part.text) else yield
("content", part.text); keep the fallback of using chunk.text if parts is empty
to preserve existing behavior; reference completion_response_with_stream,
GenerateContentResponse, MessageBlock, and part.thought when making the change
so the thinking portions created by _build_thinking_config are emitted to the UI
as "reasoning".

In `@basilisk/provider_engine/openai_engine.py`:
- Line 111: The constant MODELS_JSON_URL exceeds the 80-character line length;
split the URL literal across two or more lines using a parenthesized
implicit-concatenation or explicit concatenation so the assignment to
MODELS_JSON_URL still produces the same full URL (e.g., break after
"SigmaNight/" so the symbol MODELS_JSON_URL is assigned by combining the two
pieces), keeping whitespace and quotes correct and not changing the URL value.
- Around line 248-266: Validate audio MIME support explicitly by calling
_audio_mime_to_format(mime) and if it returns None raise a clear ValueError
about unsupported audio formats (mention accepted formats like mp3/wav); then
separately check url.startswith("data:") and only parse and append the
input_audio dict to parts (using parts.append with keys "type":"input_audio" and
"input_audio":{ "data":..., "format": audio_format }) when it's a data URL,
otherwise raise a ValueError stating non-data audio URLs are not supported for
gpt-audio. Ensure you reference _audio_mime_to_format, audio_format,
url.startswith, and parts.append when making these fixes.
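A sketch of the two-step validation; the MIME-to-format table and helper names are assumptions modeled on the prompt above:

```python
# Hypothetical MIME table; the real engine may accept more formats.
_AUDIO_FORMATS = {"audio/mpeg": "mp3", "audio/mp3": "mp3", "audio/wav": "wav"}

def _audio_mime_to_format(mime):
    return _AUDIO_FORMATS.get(mime)

def build_input_audio_part(mime, url):
    audio_format = _audio_mime_to_format(mime)
    if audio_format is None:
        raise ValueError(
            f"Unsupported audio format: {mime} (accepted: mp3, wav)"
        )
    if not url.startswith("data:"):
        raise ValueError("Non-data audio URLs are not supported for gpt-audio")
    data = url.split(",", 1)[1]  # strip the data-URL header
    return {
        "type": "input_audio",
        "input_audio": {"data": data, "format": audio_format},
    }
```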

In `@basilisk/services/attachment_service.py`:
- Around line 251-258: The branch incorrectly treats vision as a fallback for
document support: update the check so it only blocks when has_documents is true
and current_model.document is false (remove the current_model.vision condition),
and change the doc_models list comprehension to list only models with
document=True (use [m.name or m.id for m in engine.models if m.document]) so
models with only vision=True are not treated as document-capable; keep
references to current_model, engine.models, and doc_models when making the
change.

In `@basilisk/views/edit_block_dialog.py`:
- Around line 268-270: The current code falls back to the last option when
self.block.reasoning_effort isn't present in options, which can silently change
the saved effort; change the logic in the block where val =
self.block.reasoning_effort is computed so that: if val is in options use its
index, else if a configured default (e.g. self.default_reasoning_effort or a
provider default lookup) exists in options use that index, otherwise call
self.reasoning_effort_choice.SetSelection(-1) or leave it unselected instead of
selecting len(options)-1; update the selection call that uses
self.reasoning_effort_choice.SetSelection(idx) accordingly so it no longer
defaults to the last option.

In `@tests/provider_engine/test_reasoning_params.py`:
- Around line 30-38: The test is vacuous because the fixture ProviderAIModel
does not advertise "top_p" and the test never sets a top_p value, so
_filter_params_for_model never has a top_p to strip; update the test to either
(A) add "top_p" to the fixture's supported_parameters for the ProviderAIModel
(id="claude-sonnet-4-6") or (B) set a top_p in the params passed to
_filter_params_for_model before toggling reasoning, then call
_filter_params_for_model and assert that with reasoning=True the returned dict
no longer contains "top_p" while with reasoning=False it does; reference
ProviderAIModel and _filter_params_for_model to locate places to change.

---

Outside diff comments:
In `@basilisk/presenters/edit_block_presenter.py`:
- Around line 151-158: The start_completion call in edit_block_presenter.py
omits the temp_block.web_search_mode flag so engines (e.g.,
AnthropicEngine.completion) that expect web_search_mode in kwargs will have web
search tooling disabled on regenerate; update the
completion_handler.start_completion invocation to pass
web_search_mode=temp_block.web_search_mode (or forward kwargs including
web_search_mode) alongside the existing parameters (engine, system_message,
conversation, new_block, stream, stop_block_index) so the engine receives the
web_search_mode switch.

In `@basilisk/provider_engine/legacy_openai_engine.py`:
- Around line 217-223: In completion_response_with_stream, the loop over stream
currently skips chunks with empty chunk.choices (dropping the final usage-only
chunk); instead after the "if not chunk.choices" check, inspect chunk.usage and,
when present, set the usage onto the response/new_block (the same object you
populate later) before continuing so token usage from the final OpenAI stream
chunk is captured; update references to stream, chunk, and new_block accordingly
so usage is persisted even when choices is empty.

In `@basilisk/provider_engine/openai_engine.py`:
- Around line 426-430: Replace the manual open/close with a context manager to
avoid leaking the file handle: in openai_engine.py change the code around
audio_file_path and transcription creation (the block that calls
self.client.audio.transcriptions.create with model="whisper-1" and
response_format=response_format) to use with open(audio_file_path, "rb") as
file: and call self.client.audio.transcriptions.create(...) inside that block so
the file is automatically closed even if transcriptions.create raises.

---

Duplicate comments:
In `@basilisk/config/conversation_profile.py`:
- Around line 259-263: When ai_model_info is present, add validation to reject
negative reasoning_budget_tokens: inside the same validation block that checks
self.reasoning_budget_tokens and self.reasoning_effort (the code referencing
ai_model_info), if self.reasoning_budget_tokens is not None ensure it's an int
and self.reasoning_budget_tokens >= 0 (otherwise raise ValueError with a clear
message); keep the existing None checks for the no-model branch intact and apply
this new bounds check alongside those checks for reasoning_effort and
ai_model_info.

In `@basilisk/conversation/content_utils.py`:
- Around line 15-19: The current regex and removal logic for
_THINK_BLOCK_PATTERN and _REASONING_BLOCK_PATTERN can match non-leading blocks;
update both patterns to require start-of-string (prefix with ^) and replace
.search()/sub() usage with .match() and slicing so only a leading block is
stripped: change _THINK_BLOCK_PATTERN and _REASONING_BLOCK_PATTERN to use
r"^..." anchors, call match = _THINK_BLOCK_PATTERN.match(text) (and similarly
for _REASONING_BLOCK_PATTERN), and compute content = text[match.end():].strip()
instead of using .sub() so only the leading reasoning/think block prefix is
removed.
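A sketch of the anchored form; the exact fence syntax for reasoning blocks is an assumption:

```python
import re

# Anchored at start of string so only a *leading* think block is stripped.
_THINK_BLOCK_PATTERN = re.compile(r"^```think\n(.*?)\n```\n?", re.DOTALL)

def strip_leading_think(text):
    match = _THINK_BLOCK_PATTERN.match(text)
    if match:
        return match.group(1), text[match.end():].strip()
    return None, text
```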

In `@basilisk/conversation/conversation_params.py`:
- Around line 19-29: get_reasoning_params_from_block currently uses getattr(...,
None) which hides missing/shape-drift attributes; change it to validate and fail
fast: for each key in REASONING_PARAM_KEYS call getattr(block, key) (no default)
or check result and if any are missing/None raise a clear
ValueError/AttributeError listing the missing keys and include the block type in
the message; prefer using the Pydantic MessageBlock/Conversation model (validate
instance or convert via .dict()) before extraction so model validation enforces
shapes; apply the same hardening to the other extractor(s) referenced alongside
this function.

In `@basilisk/presenters/conversation_presenter.py`:
- Around line 184-200: The reasoning params are being derived from the raw view
state instead of the compatibility-adjusted model returned by
ensure_model_compatibility(); update the call to get_reasoning_params_from_view
to accept and use the adjusted model (and engine if applicable) so
provider-specific option mapping matches the model that will actually be
requested—i.e., after calling ensure_model_compatibility(), pass that adjusted
model (and engine identifier) into get_reasoning_params_from_view before
constructing the MessageBlock.

In `@basilisk/presenters/edit_block_presenter.py`:
- Around line 131-148: The reasoning params are being derived from the raw view
instead of the compatibility-adjusted model; update the call to
get_reasoning_params_from_view so it accepts and uses the adjusted model (and
engine if applicable) returned by ensure_model_compatibility before building
temp_block. Specifically, after ensure_model_compatibility resolves/returns the
final model (and engine), call get_reasoning_params_from_view(self.view,
model=model, engine=engine) (or the equivalent parameter names) so
provider-specific mappings match the actual model used when constructing
MessageBlock/AIModelInfo; keep the rest of temp_block construction (including
view_get_web_search_value) unchanged.
- Around line 223-232: The code currently rebuilds editor text from
new_block.response.content and then re-parses editor text with
split_reasoning_and_content_from_display, which embeds reasoning back into
content and leaves response.reasoning stale; update _on_non_stream_finish (and
the similar block around lines ~290-297) to render the editor from
response.reasoning and response.content separately (i.e., if response.reasoning
exists, prepend the ```think``` section from response.reasoning, else use
response.content), and change save_block() logic to set both self.block.response
= self.block.response.model_copy(update={"content": content_only, "reasoning":
reasoning_only}) based on split_reasoning_and_content_from_display instead of
stuffing reasoning into content; ensure you use
split_reasoning_and_content_from_display only to extract reasoning/content on
save and preserve response.reasoning when rendering.
- Around line 244-249: Import and use the dialog destruction guard and decorate
UI-mutating completion callbacks: add "from .destroy_guard import
_guard_destroying" (or the existing mixin import) and annotate
_on_regenerate_start with `@_guard_destroying`; do the same for all other
CompletionHandler/UI-touching presenter callbacks in this file (the callbacks in
the ranges indicated: the handlers around 251-261 and 273-314). Also ensure the
presenter class uses DestroyGuardMixin (or the appropriate mixin providing
_guard_destroying) so the decorator works.

In `@basilisk/provider_engine/dynamic_model_loader.py`:
- Around line 55-58: The module uses Python 2 except syntax "except TypeError,
ValueError:" which is a SyntaxError in Python 3; update each occurrence in
dynamic_model_loader.py to use the tuple form "except (TypeError, ValueError):"
(there are four occurrences around the int(val) conversion blocks mentioned) so
the module can import; after making the changes, run a quick py_compile check to
verify the file parses cleanly.

In `@basilisk/services/attachment_service.py`:
- Around line 239-250: Add translator context comments above each user-facing
translatable string in the attachment validation branches: place a "#
Translators: ..." comment immediately before the _("The selected model does not
support images. ... (%s)") in the image branch and before both _("The selected
model does not support audio. Please select an audio-capable model instead
(%s)") and _("The selected model does not support audio.") in the audio branch,
and do the same for the document-related messages referenced at 259-264; for the
"%s" variants note in the comment that the placeholder will be replaced with a
comma-separated list of compatible model names (e.g., "comma-separated list of
compatible model names").

In `@basilisk/services/conversation_service.py`:
- Around line 255-260: The loop over
engine.completion_response_with_stream(response, new_block=new_block) assumes
each chunk unpacks to (chunk_type, chunk_data); change it to defensively
validate chunk shape before unpacking by checking that chunk is a sequence
(tuple/list) with length >=2 (or use a try/except around unpacking), skip or log
malformed chunks, and only assign chunk_type and chunk_data and append to
content_parts when chunk_type == "content" and isinstance(chunk_data, str);
reference engine.completion_response_with_stream, variables chunk, chunk_type,
chunk_data, content_parts, response, and new_block when making the change.
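A defensive loop over the streamed chunks might look like this; the chunk shape follows the prompt above, and everything else is a stand-in:

```python
def collect_content(chunks):
    content_parts = []
    for chunk in chunks:
        # Validate shape before unpacking; skip malformed chunks.
        if not isinstance(chunk, (tuple, list)) or len(chunk) < 2:
            continue
        chunk_type, chunk_data = chunk[0], chunk[1]
        if chunk_type == "content" and isinstance(chunk_data, str):
            content_parts.append(chunk_data)
    return "".join(content_parts)
```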

In `@basilisk/views/base_conversation.py`:
- Line 705: The except clause using Python 2 syntax (`except IndexError,
TypeError:`) is invalid in Python 3; replace it with a tuple-based except such
as `except (IndexError, TypeError):` or `except (IndexError, TypeError) as e:`
if the exception object is referenced later. Update the offending clause (the
`except IndexError, TypeError:` line) to use the Python 3 form so the module can
import correctly.

In `@tests/conversation/test_model.py`:
- Around line 53-67: Extend the existing test_message_block_reasoning_fields to
also instantiate MessageBlock with the sibling web-search flags and assert their
values; specifically, when creating the MessageBlock (in
test_message_block_reasoning_fields) pass web_search_mode=True along with
web_search_budget_tokens (e.g., 8000), web_search_effort (e.g., "low"), and
web_search_adaptive=True, then add assertions that block.web_search_mode is True
and that block.web_search_budget_tokens, block.web_search_effort, and
block.web_search_adaptive equal the values provided to detect regressions in the
web search fields.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 6cfe063a-a296-401b-b433-d78315a0cbba

📥 Commits

Reviewing files that changed from the base of the PR and between 0dc11ba and e34b0e6.

⛔ Files ignored due to path filters (1)
  • basilisk/res/alembic/versions/004_add_web_search_mode_to_message_blocks.py is excluded by !basilisk/res/**
📒 Files selected for processing (47)
  • basilisk/completion_handler.py
  • basilisk/config/conversation_profile.py
  • basilisk/config/main_config.py
  • basilisk/conversation/content_utils.py
  • basilisk/conversation/conversation_model.py
  • basilisk/conversation/conversation_params.py
  • basilisk/conversation/database/manager.py
  • basilisk/conversation/database/models.py
  • basilisk/presenters/attachment_panel_presenter.py
  • basilisk/presenters/base_conversation_presenter.py
  • basilisk/presenters/conversation_presenter.py
  • basilisk/presenters/conversation_profile_presenter.py
  • basilisk/presenters/edit_block_presenter.py
  • basilisk/presenters/reasoning_params_helper.py
  • basilisk/presenters/view_utils.py
  • basilisk/provider_ai_model.py
  • basilisk/provider_engine/anthropic_engine.py
  • basilisk/provider_engine/base_engine.py
  • basilisk/provider_engine/deepseek_engine.py
  • basilisk/provider_engine/dynamic_model_loader.py
  • basilisk/provider_engine/gemini_engine.py
  • basilisk/provider_engine/legacy_openai_engine.py
  • basilisk/provider_engine/mistralai_engine.py
  • basilisk/provider_engine/ollama_engine.py
  • basilisk/provider_engine/openai_engine.py
  • basilisk/provider_engine/openrouter_engine.py
  • basilisk/provider_engine/provider_ui_spec.py
  • basilisk/provider_engine/responses_api_engine.py
  • basilisk/provider_engine/xai_engine.py
  • basilisk/services/attachment_service.py
  • basilisk/services/conversation_service.py
  • basilisk/views/base_conversation.py
  • basilisk/views/conversation_profile_dialog.py
  • basilisk/views/conversation_tab.py
  • basilisk/views/edit_block_dialog.py
  • basilisk/views/history_msg_text_ctrl.py
  • tests/conversation/test_content_utils.py
  • tests/conversation/test_model.py
  • tests/presenters/test_attachment_panel_presenter.py
  • tests/presenters/test_conversation_presenter.py
  • tests/provider_engine/__init__.py
  • tests/provider_engine/test_base_engine.py
  • tests/provider_engine/test_dynamic_model_loader.py
  • tests/provider_engine/test_reasoning_params.py
  • tests/services/test_attachment_service.py
  • tests/services/test_conversation_service.py
  • tests/test_conversation_profile.py

max_tokens=block.max_tokens,
top_p=block.top_p,
stream=block.stream,
web_search_mode=getattr(block, "web_search_mode", False),

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify if web_search_mode is a declared field on MessageBlock
ast-grep --pattern 'class MessageBlock($_) {
  $$$
  web_search_mode$_
  $$$
}'
# Fallback: search for field declaration
rg -n 'web_search_mode' basilisk/conversation/conversation_model.py

Repository: SigmaNight/basiliskLLM

Length of output: 115


Use direct attribute access for web_search_mode.

MessageBlock declares web_search_mode: bool = Field(default=False), making the getattr() pattern at lines 392 and 696 unnecessary. Direct access via block.web_search_mode is cleaner and more idiomatic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/database/manager.py` at line 392, Replace the getattr
usage for web_search_mode with direct attribute access: wherever getattr(block,
"web_search_mode", False) is used (e.g., in the code creating/reading
MessageBlock fields), change it to block.web_search_mode; update both
occurrences mentioned so the code reads block.web_search_mode (using the
MessageBlock type's declared default) to be more idiomatic and avoid redundant
fallback logic.

Comment on lines +176 to +182
if getattr(model, "image_output", False) is True:
wx.MessageBox(
_("Image generation models are not supported yet."),
_("Not Supported"),
wx.OK | wx.ICON_INFORMATION,
)
return None

🛠️ Refactor suggestion | 🟠 Major

Move this dialog handling back into the view.

Calling wx.MessageBox from a presenter breaks the MVP boundary and bypasses the view error/display helpers. Use a view method such as show_error() or show_enhanced_error() here instead, and drop the direct wx dependency from this presenter.

As per coding guidelines: Presenter modules should avoid import wx where possible; delegate all wx calls to the view, and use view.show_error() or view.show_enhanced_error() instead of wx.MessageBox.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/conversation_presenter.py` around lines 176 - 182, The
presenter currently calls wx.MessageBox directly (in conversation_presenter.py
within the block that checks getattr(model, "image_output", False)), which
breaks MVP; change this to call a view method (e.g. self.view.show_error(...) or
self.view.show_enhanced_error(...)) passing the same localized message and title
(e.g. _("Image generation models are not supported yet.") and _("Not
Supported")), keep the subsequent return None behavior, and remove the direct wx
dependency/import from the presenter so all UI/dialog handling is delegated to
the view.

Comment on lines +118 to +124
if getattr(model, "image_output", False) is True:
wx.MessageBox(
_("Image generation models are not supported yet."),
_("Not Supported"),
wx.OK | wx.ICON_INFORMATION,
)
return False

🛠️ Refactor suggestion | 🟠 Major

Move this dialog handling back into the view.

Calling wx.MessageBox from a presenter breaks the MVP boundary and bypasses the view error/display helpers. Use a view method such as show_error() or show_enhanced_error() here instead, and drop the direct wx dependency from this presenter.

As per coding guidelines: Presenter modules should avoid import wx where possible; delegate all wx calls to the view, and use view.show_error() or view.show_enhanced_error() instead of wx.MessageBox.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/edit_block_presenter.py` around lines 118 - 124, The
presenter currently calls wx.MessageBox directly when getattr(model,
"image_output", False) is True; move that UI handling into the view by replacing
the wx.MessageBox call with a call to a view method (e.g., view.show_error(...)
or view.show_enhanced_error(...)) and remove any wx usage/imports from the
presenter; keep the same control flow (return False) after invoking the view
method and ensure the view method displays the "Image generation models are not
supported yet." / "Not Supported" message and appropriate icon.
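Both presenter findings above come down to the same delegation pattern. A minimal sketch, assuming a `show_error(message, title=...)` helper on the view (Basilisk's actual view API may differ):

```python
# Sketch of the presenter/view split the comments recommend.
# FakeView stands in for the wx view; show_error() is an assumed
# view-layer helper, not necessarily Basilisk's exact signature.


class FakeView:
	def __init__(self):
		self.errors = []

	def show_error(self, message, title=""):
		# In the real view this is where wx.MessageBox would live;
		# the presenter itself never touches wx.
		self.errors.append((message, title))


class Presenter:
	def __init__(self, view):
		self.view = view

	def check_model(self, model):
		if getattr(model, "image_output", False) is True:
			self.view.show_error(
				"Image generation models are not supported yet.",
				title="Not Supported",
			)
			return False
		return True
```

With this split, the presenter stays wx-free and the dialog behavior can be unit-tested through a fake view.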

Comment on lines +45 to +58
val = view.reasoning_mode.GetValue()
result["reasoning_mode"] = bool(val) if isinstance(val, bool) else False

if hasattr(view, "reasoning_adaptive"):
val = view.reasoning_adaptive.GetValue()
result["reasoning_adaptive"] = (
bool(val) if isinstance(val, bool) else False
)

if hasattr(view, "reasoning_budget_spin"):
val = view.reasoning_budget_spin.GetValue()
result["reasoning_budget_tokens"] = (
val if isinstance(val, int) else None
)

🧹 Nitpick | 🔵 Trivial

Defensive type checks are overly cautious for wx controls.

GetValue() on wx.CheckBox always returns bool and on wx.SpinCtrl always returns int. The isinstance checks add noise without providing real safety.

♻️ Suggested simplification
 	val = view.reasoning_mode.GetValue()
-	result["reasoning_mode"] = bool(val) if isinstance(val, bool) else False
+	result["reasoning_mode"] = bool(val)

 	if hasattr(view, "reasoning_adaptive"):
 		val = view.reasoning_adaptive.GetValue()
-		result["reasoning_adaptive"] = (
-			bool(val) if isinstance(val, bool) else False
-		)
+		result["reasoning_adaptive"] = bool(val)

 	if hasattr(view, "reasoning_budget_spin"):
 		val = view.reasoning_budget_spin.GetValue()
-		result["reasoning_budget_tokens"] = (
-			val if isinstance(val, int) else None
-		)
+		result["reasoning_budget_tokens"] = val
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/reasoning_params_helper.py` around lines 45-58. The
isinstance checks around GetValue() are unnecessary for wx controls; update the
assignments in reasoning_params_helper to directly use the control return
values: set result["reasoning_mode"] = bool(view.reasoning_mode.GetValue()), for
reasoning_adaptive set result["reasoning_adaptive"] =
bool(view.reasoning_adaptive.GetValue()) when the attribute exists, and for
reasoning_budget_tokens set result["reasoning_budget_tokens"] =
int(view.reasoning_budget_spin.GetValue()) when that attribute exists; keep the
hasattr guards so missing controls yield the same defaults (False or None) but
remove the redundant isinstance(...) checks.

Comment on lines +76 to +82
if isinstance(effort_idx, int) and 0 <= effort_idx < len(options):
effort_val = options[effort_idx]
elif options:
effort_val = options[-1]
result["reasoning_effort"] = (
effort_val if isinstance(effort_val, str) else None
)

🧹 Nitpick | 🔵 Trivial

Fallback to last option may silently change user intent.

When effort_idx is invalid but options exist, this falls back to the last option (line 79). If a saved effort value is no longer supported by the provider spec, this silently mutates the setting rather than preserving the current/default selection.

Consider only setting the value if the index is valid:

-			if isinstance(effort_idx, int) and 0 <= effort_idx < len(options):
-				effort_val = options[effort_idx]
-			elif options:
-				effort_val = options[-1]
-			result["reasoning_effort"] = (
-				effort_val if isinstance(effort_val, str) else None
-			)
+			if isinstance(effort_idx, int) and 0 <= effort_idx < len(options):
+				result["reasoning_effort"] = options[effort_idx]
+			# else: leave as None (default) rather than assuming last option
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/reasoning_params_helper.py` around lines 76-82. The
current logic falls back to options[-1] when effort_idx is invalid, which can
silently change the user's saved setting; update the block in
reasoning_params_helper.py so result["reasoning_effort"] is only set when
effort_idx is an int and 0 <= effort_idx < len(options) (i.e., remove the elif
fallback to the last option), use effort_val only from that valid branch, and
leave result["reasoning_effort"] unchanged otherwise; reference the existing
symbols effort_idx, options, effort_val, and result["reasoning_effort"] when
making this change.
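The behavior recommended above can be sketched as a small helper: an invalid saved index resolves to None (the default) rather than snapping to the last option. `resolve_effort` and its arguments are illustrative names, not Basilisk code:

```python
# Sketch: resolve a saved effort index against the provider's current
# options without silently mutating the user's setting.


def resolve_effort(effort_idx, options):
	if isinstance(effort_idx, int) and 0 <= effort_idx < len(options):
		return options[effort_idx]
	# Stale or missing index: preserve the default instead of
	# guessing the last option.
	return None
```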

super().models
log.debug("Getting openAI models")
# See <https://platform.openai.com/docs/models>
MODELS_JSON_URL = "https://raw.githubusercontent.com/SigmaNight/model-metadata/master/data/openai.json"

🧹 Nitpick | 🔵 Trivial

Line exceeds 80-character limit.

Suggested fix
-	MODELS_JSON_URL = "https://raw.githubusercontent.com/SigmaNight/model-metadata/master/data/openai.json"
+	MODELS_JSON_URL = (
+		"https://raw.githubusercontent.com/SigmaNight/model-metadata/"
+		"master/data/openai.json"
+	)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/openai_engine.py` at line 111. The constant
MODELS_JSON_URL exceeds the 80-character line length; split the URL literal
across two or more lines using a parenthesized implicit-concatenation or
explicit concatenation so the assignment to MODELS_JSON_URL still produces the
same full URL (e.g., break after "SigmaNight/" so the symbol MODELS_JSON_URL is
assigned by combining the two pieces), keeping whitespace and quotes correct and
not changing the URL value.

Comment on lines +248 to +266
elif mime.startswith("audio/"):
audio_format = _audio_mime_to_format(mime)
if audio_format and url.startswith("data:"):
_, _, data = url.partition(",")
if data:
parts.append(
{
"type": "input_audio",
"input_audio": {
"data": data,
"format": audio_format,
},
}
)
else:
raise ValueError(
f"Audio format {mime} not supported for gpt-audio. "
"Use mp3 or wav."
)

⚠️ Potential issue | 🟡 Minor

Audio format validation could be more explicit.

When `_audio_mime_to_format` returns None (for unsupported formats like mp4/webm), the code falls through to raise an error. However, the check `if audio_format and url.startswith("data:")` silently skips non-data-URL audio attachments even for supported formats.

Suggested fix to handle all cases explicitly
 			elif mime.startswith("audio/"):
 				audio_format = _audio_mime_to_format(mime)
-				if audio_format and url.startswith("data:"):
+				if not audio_format:
+					raise ValueError(
+						f"Audio format {mime} not supported for gpt-audio. "
+						"Use mp3 or wav."
+					)
+				if not url.startswith("data:"):
+					raise ValueError(
+						"Audio attachments must be base64 data URLs for Chat Completions."
+					)
+				_, _, data = url.partition(",")
+				if data:
+					parts.append(
+						{
+							"type": "input_audio",
+							"input_audio": {
+								"data": data,
+								"format": audio_format,
+							},
+						}
+					)
-					_, _, data = url.partition(",")
-					if data:
-						parts.append(
-							{
-								"type": "input_audio",
-								"input_audio": {
-									"data": data,
-									"format": audio_format,
-								},
-							}
-						)
-				else:
-					raise ValueError(
-						f"Audio format {mime} not supported for gpt-audio. "
-						"Use mp3 or wav."
-					)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/openai_engine.py` around lines 248-266. Validate
audio MIME support explicitly by calling _audio_mime_to_format(mime) and if it
returns None raise a clear ValueError about unsupported audio formats (mention
accepted formats like mp3/wav); then separately check url.startswith("data:")
and only parse and append the input_audio dict to parts (using parts.append with
keys "type":"input_audio" and "input_audio":{ "data":..., "format": audio_format
}) when it's a data URL, otherwise raise a ValueError stating non-data audio
URLs are not supported for gpt-audio. Ensure you reference
_audio_mime_to_format, audio_format, url.startswith, and parts.append when
making these fixes.

Comment on lines +251 to +258
if (
has_documents
and not current_model.document
and not current_model.vision
):
doc_models = [
m.name or m.id for m in engine.models if m.document or m.vision
]

⚠️ Potential issue | 🟠 Major

Do not treat vision as a fallback for document support.

This branch now accepts document attachments on any model with
vision=True, even when the new document capability is false. That
lets .pdf/text/* attachments through on image-only models and pushes
the failure to the provider call instead of blocking it here. If a
provider truly wants vision to imply document support, encode that in the
model metadata rather than hard-coding it in the generic service.

Suggested fix
-		if (
-			has_documents
-			and not current_model.document
-			and not current_model.vision
-		):
-			doc_models = [
-				m.name or m.id for m in engine.models if m.document or m.vision
-			]
+		if has_documents and not current_model.document:
+			doc_models = [
+				m.name or m.id for m in engine.models if m.document
+			]
 			return False, _(
 				"The selected model does not support documents. "
 				"Please select a document-capable model instead (%s)."
 			) % ", ".join(doc_models) if doc_models else _(
 				"The selected model does not support documents."
 			)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/services/attachment_service.py` around lines 251-258. The branch
incorrectly treats vision as a fallback for document support: update the check
so it only blocks when has_documents is true and current_model.document is false
(remove the current_model.vision condition), and change the doc_models list
comprehension to list only models with document=True (use [m.name or m.id for m
in engine.models if m.document]) so models with only vision=True are not treated
as document-capable; keep references to current_model, engine.models, and
doc_models when making the change.

Comment on lines +268 to +270
val = self.block.reasoning_effort.lower()
idx = options.index(val) if val in options else len(options) - 1
self.reasoning_effort_choice.SetSelection(idx)

🧹 Nitpick | 🔵 Trivial

Fallback to last option may silently mutate saved effort.

Same issue as in reasoning_params_helper.py: when the saved reasoning_effort isn't in the current provider's options, this selects the last available option rather than preserving the default.

 		val = self.block.reasoning_effort.lower()
-		idx = options.index(val) if val in options else len(options) - 1
-		self.reasoning_effort_choice.SetSelection(idx)
+		if val in options:
+			self.reasoning_effort_choice.SetSelection(options.index(val))
+		# else: leave at default selection
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/views/edit_block_dialog.py` around lines 268-270. The current code
falls back to the last option when self.block.reasoning_effort isn't present in
options, which can silently change the saved effort; change the logic in the
block where val = self.block.reasoning_effort is computed so that: if val is in
options use its index, else if a configured default (e.g.
self.default_reasoning_effort or a provider default lookup) exists in options
use that index, otherwise call self.reasoning_effort_choice.SetSelection(-1) or
leave it unselected instead of selecting len(options)-1; update the selection
call that uses self.reasoning_effort_choice.SetSelection(idx) accordingly so it
no longer defaults to the last option.

Comment on lines +30 to +38
return ProviderAIModel(
id="claude-sonnet-4-6",
name="Claude Sonnet 4.6",
context_window=200000,
max_output_tokens=64000,
reasoning=False,
reasoning_capable=True,
supported_parameters=["temperature", "max_tokens", "include_reasoning"],
)

⚠️ Potential issue | 🟡 Minor

This top_p assertion is vacuous right now.

The test never proves that reasoning mode strips top_p: the fixture model
does not advertise top_p support, and the block never sets a top_p
value, so _filter_params_for_model() removes that key before the
reasoning branch is exercised. This will still pass if the
reasoning-specific omission regresses.

Suggested fix
 	return ProviderAIModel(
 		id="claude-sonnet-4-6",
 		name="Claude Sonnet 4.6",
 		context_window=200000,
 		max_output_tokens=64000,
 		reasoning=False,
 		reasoning_capable=True,
-		supported_parameters=["temperature", "max_tokens", "include_reasoning"],
+		supported_parameters=[
+			"temperature",
+			"max_tokens",
+			"top_p",
+			"include_reasoning",
+		],
 	)
@@
 	return MessageBlock(
 		request=Message(role=MessageRoleEnum.USER, content="Test"),
 		model=AIModelInfo(
 			provider_id="anthropic", model_id="claude-sonnet-4-6"
 		),
 		temperature=0.7,
+		top_p=0.95,
 		max_tokens=4096,
 		stream=False,
 		reasoning_mode=True,

Also applies to: 44-55, 93-98

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/provider_engine/test_reasoning_params.py` around lines 30-38. The
test is vacuous because the fixture ProviderAIModel does not advertise "top_p"
and the test never sets a top_p value, so _filter_params_for_model never has a
top_p to strip; update the test to either (A) add "top_p" to the fixture's
supported_parameters for the ProviderAIModel (id="claude-sonnet-4-6") or (B) set
a top_p in the params passed to _filter_params_for_model before toggling
reasoning, then call _filter_params_for_model and assert that with
reasoning=True the returned dict no longer contains "top_p" while with
reasoning=False it does; reference ProviderAIModel and _filter_params_for_model
to locate places to change.

- Replace General group with Output, Reasoning, Tools
- Add create_settings_section() to share layout across tab, edit block, profile
- Move Apply profile to account row; Submit/Stop/Record after all settings
- Use model section (label + list) instead of Model group box
- Fix edit block dialog to load web_search_mode from block
- Advanced mode: only hide temperature and top_p; keep max tokens, web search, reasoning, stream visible
…d, top_k, stop)

- Add frequency_penalty, presence_penalty, seed, top_k, stop to MessageBlock,
  ConversationProfile, DB schema, and all provider engines
- Parse default_parameters from model JSON for UI defaults
- Add IntSpinCtrl (FloatSpin-based) and AccessibleWithHelp for screen-reader
  support on output controls
- Add "Reset to model defaults" context menu on Output group
- Reorder settings: reasoning, tools, output (output last before Submit)

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
basilisk/presenters/conversation_profile_presenter.py (1)

60-74: ⚠️ Potential issue | 🟠 Major

Don't mutate the original profile until validation passes.

This method edits self.profile in place before entering the validation
block. If validation fails, the dialog returns None but the caller is still
left with a partially updated ConversationProfile.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/conversation_profile_presenter.py` around lines 60-74.
Do not mutate self.profile in-place before validation: create a temporary
profile instance (e.g., call ConversationProfile.model_construct() or clone
existing self.profile into temp_profile), apply changes to temp_profile using
the same helper methods (_apply_account_and_model, _apply_generation_params,
_apply_reasoning_params) but targeting temp_profile, then call
ConversationProfile.model_validate(temp_profile); if validation passes assign
temp_profile to self.profile and return it, otherwise log the ValidationError
and return None so the original self.profile remains unchanged.
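One way to realize the validate-then-commit pattern, sketched here with plain dataclasses rather than the project's pydantic `ConversationProfile` (the `validate()` rules are illustrative):

```python
# Sketch: apply edits to a candidate copy, validate, and only then
# commit to self.profile. On failure the original is untouched.
from dataclasses import dataclass, replace


@dataclass
class Profile:
	name: str
	max_tokens: int


def validate(profile):
	# Illustrative rule; the real project validates via pydantic.
	if profile.max_tokens <= 0:
		raise ValueError("max_tokens must be positive")


class ProfilePresenter:
	def __init__(self, profile):
		self.profile = profile

	def apply_changes(self, **updates):
		# Build a candidate first; self.profile stays intact if
		# validation rejects the edits.
		candidate = replace(self.profile, **updates)
		try:
			validate(candidate)
		except ValueError:
			return None
		self.profile = candidate
		return candidate
```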
basilisk/provider_engine/openai_engine.py (1)

83-90: ⚠️ Potential issue | 🔴 Critical

Don't keep the parser choice in shared engine state.

_last_used_chat_completions is written in completion() and read later in both response handlers. If the same engine instance handles overlapping requests, a later call can flip this flag before the earlier response is parsed, sending that response down the wrong parser.

Also applies to: 328-382, 384-413
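A sketch of the per-request alternative: the parser choice travels with each response instead of living on the engine instance, so overlapping requests cannot clobber one another. All names here (`completion`, `parse`, the parser helpers) are illustrative, not the real engine API:

```python
# Sketch: return the parser flag alongside the raw response so each
# response is parsed with the flag chosen at request time.


def completion(prompt, use_chat_completions):
	raw = {"prompt": prompt}
	# Pair the flag with the response it belongs to.
	return raw, use_chat_completions


def parse(raw, used_chat_completions):
	if used_chat_completions:
		return _parse_chat(raw)
	return _parse_responses(raw)


def _parse_chat(raw):
	return ("chat", raw["prompt"])


def _parse_responses(raw):
	return ("responses", raw["prompt"])
```

Because the flag is part of each request's result, parsing the first response after a second request was issued still selects the right parser.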

♻️ Duplicate comments (17)
basilisk/config/conversation_profile.py (1)

274-277: ⚠️ Potential issue | 🟠 Major

Reasoning params still skip validation when a model exists.

The validator only rejects reasoning_budget_tokens and
reasoning_effort in the no-model branch. Negative budgets or unknown effort
strings still pass as soon as ai_model_info is set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/config/conversation_profile.py` around lines 274-277. The
validator for ConversationProfile currently only rejects reasoning_budget_tokens
and reasoning_effort when ai_model_info is None, letting invalid values pass
when a model exists; update the validation logic (in the same validator that
references ai_model_info, reasoning_budget_tokens, reasoning_effort) to always
validate these fields: enforce that reasoning_budget_tokens is either None or a
non-negative integer and that reasoning_effort is either None or one of the
allowed effort strings (e.g., "low","medium","high" or the project's canonical
set); keep the existing checks that both must be None when ai_model_info is None
but add the positive/enum checks for the model-present branch so invalid budgets
or unknown effort strings are rejected regardless of ai_model_info.
basilisk/presenters/edit_block_presenter.py (4)

284-303: ⚠️ Potential issue | 🟠 Major

Guard async completion callback against dialog teardown.

_on_non_stream_finish() updates controls and may trigger audio after async completion; it should be destroy-guarded.

Based on learnings: Applies to basilisk/presenters/*.py : Presenters should inherit DestroyGuardMixin and decorate callbacks with _guard_destroying to guard against post-destroy calls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/edit_block_presenter.py` around lines 284-303. The
presenter method _on_non_stream_finish can run after the view/dialog is torn
down; update the presenter class to inherit DestroyGuardMixin and protect async
callbacks by decorating _on_non_stream_finish with `@_guard_destroying` so it
no-ops if the presenter is being destroyed; locate the presenter class in this
file, add DestroyGuardMixin to its base classes, import the mixin if missing,
and add the `@_guard_destroying` decorator above the _on_non_stream_finish
definition (apply same pattern to other async callback methods in this presenter
as needed).

131-132: ⚠️ Potential issue | 🟠 Major

Resolve reasoning params against the compatibility-adjusted model.

ensure_model_compatibility() may swap the model, but reasoning params are resolved without passing that adjusted model, which can map UI selections to wrong provider values.

Suggested fix
-		reasoning_params = get_reasoning_params_from_view(self.view)
+		reasoning_params = get_reasoning_params_from_view(
+			self.view, engine=self.view.current_engine, model=model
+		)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/edit_block_presenter.py` around lines 131-132. The
reasoning params are being derived from the raw view without using the
potentially swapped model from ensure_model_compatibility(); call
ensure_model_compatibility() first to obtain the compatibility-adjusted model
and then resolve reasoning params against that model (i.e., pass the adjusted
model into get_reasoning_params_from_view or update
get_reasoning_params_from_view to accept a model argument), and likewise ensure
view_get_web_search_value is interpreted in the context of the adjusted model so
UI selections map to the correct provider values.

226-235: ⚠️ Potential issue | 🟠 Major

Do not clear stored reasoning when reasoning blocks are hidden.

When the editor hides reasoning, reparsing visible text can drop previously saved reasoning on OK.

Suggested fix
 		if self.block.response:
 			text = self.view.response_txt.GetValue()
 			reasoning, content = split_reasoning_and_content_from_display(text)
+			if (
+				reasoning is None
+				and not self.view.get_effective_show_reasoning_blocks()
+			):
+				reasoning, _ = split_reasoning_and_content(
+					self.block.response.content
+				)
 			combined = (
 				f"```think\n{reasoning}\n```\n\n{content}"
 				if reasoning
 				else content
 			)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/edit_block_presenter.py` around lines 226-235. The
current logic in edit_block_presenter.py clears stored reasoning whenever the
visible editor text contains no reasoning; to fix this, when
split_reasoning_and_content_from_display(text) returns no reasoning, retrieve
the existing reasoning from the stored block response (e.g., call
split_reasoning_and_content_from_display(self.block.response.content) or
otherwise parse self.block.response.content) and use that existing reasoning
when building combined so stored reasoning is preserved; update the code around
the combined construction and the model_copy call (the block that assigns
self.block.response = self.block.response.model_copy(update={"content":
combined})) to merge existing reasoning with the new content only when visible
reasoning is absent.

14-15: ⚠️ Potential issue | 🟠 Major

Move message-box handling out of the presenter.

wx.MessageBox in this presenter bypasses the view error surface and keeps a direct wx dependency in MVP logic.

Suggested fix
-import wx
@@
-		if getattr(model, "image_output", False) is True:
-			wx.MessageBox(
-				_("Image generation models are not supported yet."),
-				_("Not Supported"),
-				wx.OK | wx.ICON_INFORMATION,
-			)
+		if getattr(model, "image_output", False) is True:
+			self.view.show_error(
+				_("Image generation models are not supported yet."),
+				title=_("Not Supported"),
+			)
 			return False
As per coding guidelines: Presenter modules should avoid `import wx` where possible; delegate all wx calls to the view and use `view.show_error()`/`view.show_enhanced_error()` instead of `wx.MessageBox`.

Also applies to: 118-124

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/edit_block_presenter.py` around lines 14-15. The
presenter currently imports wx and calls wx.MessageBox directly; remove the wx
dependency from edit_block_presenter.py and delegate all message-box usage to
the view by replacing any wx.MessageBox calls (including those around lines
118-124) with calls to the view API (e.g., view.show_error(message) or
view.show_enhanced_error(message, details, style)). Update the
EditBlockPresenter methods that call wx.MessageBox to accept/use the view
instance to surface errors, forward the original message and any detail/flags to
the appropriate view method, and delete the top-level "import wx" from the
module.
basilisk/conversation/conversation_model.py (1)

151-153: ⚠️ Potential issue | 🟠 Major

Constrain reasoning fields to prevent invalid request payloads.

reasoning_budget_tokens accepts non-positive integers and reasoning_effort accepts arbitrary strings.

Suggested fix
-from typing import Any
+from typing import Any, Literal
@@
-	reasoning_budget_tokens: int | None = Field(default=None)
-	reasoning_effort: str | None = Field(default=None)
+	reasoning_budget_tokens: int | None = Field(default=None, ge=1)
+	reasoning_effort: Literal["low", "medium", "high"] | None = (
+		Field(default=None)
+	)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/conversation_model.py` around lines 151-153. The
reasoning fields allow invalid values; restrict them by changing
reasoning_budget_tokens to a positive integer type or add a Field constraint
(e.g., ge=1) so it cannot accept non-positive values, and constrain
reasoning_effort to a fixed set of allowed values (use a Literal or Enum like
"low"/"medium"/"high" or the project-approved effort levels) instead of
arbitrary strings; update validation on the reasoning_budget_tokens and
reasoning_effort fields in the model where they are declared to enforce these
constraints (adjust any related serialization/deserialization/tests
accordingly).
basilisk/provider_engine/anthropic_engine.py (1)

208-210: ⚠️ Potential issue | 🟠 Major

Do not gate web-search tools on generic truthiness.

web_search_mode is mode-based; truthy non-enabled values can still inject tools.

Suggested fix
-		web_search = kwargs.pop("web_search_mode", False)
-		if web_search and model and self.model_supports_web_search(model):
+		web_search_mode = kwargs.pop("web_search_mode", False)
+		web_search_enabled = web_search_mode in (True, "on", "auto")
+		if web_search_enabled and self.model_supports_web_search(model):
 			tools.extend(self.get_web_search_tool_definitions(model))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/anthropic_engine.py` around lines 208-210. The
code currently uses generic truthiness for web_search from kwargs which can
allow unintended values to enable tools; change the handling of web_search_mode
so you pop with a neutral default (e.g., None) and only enable web search when
the mode explicitly matches the allowed enabling values (e.g., web_search ==
"enabled" or web_search is True, or web_search in {"enabled","on","search"} as
your app defines); then keep the existing model check
self.model_supports_web_search(model) and only call
tools.extend(self.get_web_search_tool_definitions(model)) when the explicit-mode
check passes. Ensure you update the use sites: the kwargs.pop("web_search_mode",
...) and the conditional that references web_search.
basilisk/provider_engine/openai_engine.py (1)

55-81: ⚠️ Potential issue | 🟠 Major

Don't advertise audio formats that _audio_mime_to_format() still rejects.

audio/mp4 and audio/webm now pass the attachment compatibility check, but _to_chat_content_part() can only encode MIME types that _audio_mime_to_format() maps. Those attachments will therefore fail later during request construction.

🚫 Safe short-term fix
 		"audio/mpeg",
 		"audio/wav",
-		"audio/mp4",
-		"audio/webm",
 	}

Also applies to: 248-266

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/openai_engine.py` around lines 55-81. The
supported_attachment_formats set currently lists "audio/mp4" and "audio/webm"
but those MIME types are not handled by _audio_mime_to_format(), causing
failures later in _to_chat_content_part(); either remove "audio/mp4" and
"audio/webm" from supported_attachment_formats or add mappings for those MIME
types inside _audio_mime_to_format() (and update any dependent encoding logic
used by _to_chat_content_part()) so advertised formats and actual encoder
mappings stay consistent.
basilisk/provider_engine/base_engine.py (1)

104-117: ⚠️ Potential issue | 🟠 Major

Filter after provider mapping misses max_output_tokens.

_filter_params_for_model() only knows about max_tokens, but ResponsesAPIEngine now converts that field to max_output_tokens before calling this helper. Unsupported token caps therefore still leak through on the OpenAI/xAI Responses API path.

Also applies to: 152-177

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/base_engine.py` around lines 104-117. The filter
list in base_engine uses _FILTERABLE_PARAMS and only contains "max_tokens", but
ResponsesAPIEngine maps that to "max_output_tokens" before calling
_filter_params_for_model, so unsupported token-cap fields leak through; update
_FILTERABLE_PARAMS (or _get_filterable_params()) to include "max_output_tokens"
(and any other provider-mapped aliases) so that _filter_params_for_model() will
correctly filter both "max_tokens" and "max_output_tokens" when invoked by
ResponsesAPIEngine and other subclasses.
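A sketch of alias-aware filtering, where a parameter survives if the model supports it under any provider-mapped name. The alias table and function names are illustrative:

```python
# Sketch: filter request params against a model's supported list,
# treating provider-mapped aliases (max_tokens <-> max_output_tokens)
# as equivalent so the Responses API path is filtered too.

_PARAM_ALIASES = {
	"max_tokens": {"max_tokens", "max_output_tokens"},
}


def filter_params_for_model(params, supported):
	out = {}
	supported = set(supported)
	for key, value in params.items():
		# A key passes if it, or any alias group containing it,
		# intersects the model's supported parameters.
		names = {key}
		for canonical, aliases in _PARAM_ALIASES.items():
			if key in aliases:
				names |= aliases | {canonical}
		if names & supported:
			out[key] = value
	return out
```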
basilisk/presenters/conversation_presenter.py (1)

13-14: 🛠️ Refactor suggestion | 🟠 Major

Keep the unsupported-model dialog in the view.

The presenter now imports wx just to call wx.MessageBox(). That reintroduces UI logic into the presenter and bypasses the view's standardized error helpers; route this through self.view.show_error() or show_enhanced_error() and drop the direct wx dependency.

♻️ Suggested refactor
-import wx
-
 ...
 		if getattr(model, "image_output", False) is True:
-			wx.MessageBox(
-				_("Image generation models are not supported yet."),
-				_("Not Supported"),
-				wx.OK | wx.ICON_INFORMATION,
-			)
+			self.view.show_error(
+				_("Image generation models are not supported yet."),
+				_("Not Supported"),
+			)
 			return None

As per coding guidelines: Presenter modules should avoid import wx where possible; delegate all wx calls to the view, and use view.show_error() or view.show_enhanced_error() instead of wx.MessageBox.

Also applies to: 176-182

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/presenters/conversation_presenter.py` around lines 13 - 14, The
presenter currently imports wx and calls wx.MessageBox directly (in
conversation_presenter and also around the block at lines ~176-182), which
pushes UI logic into the presenter; remove the wx import and replace each
wx.MessageBox(...) call with a call to the view's error helper (e.g.,
self.view.show_error(...) or self.view.show_enhanced_error(...)) while
preserving the message text and dialog flags/purpose, and ensure any return/flow
logic that relied on the MessageBox value adapts to the view method's return
semantics; update conversation_presenter to use only
self.view.show_error/show_enhanced_error and drop the wx dependency.
basilisk/views/base_conversation.py (2)

780-782: ⚠️ Potential issue | 🟠 Major

Respect the show_reasoning_blocks preference here.

Every new display path in this PR now goes through this helper, but it always returns True. That makes the new preference ineffective and reasoning blocks can never be hidden.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/views/base_conversation.py` around lines 780 - 782, The
get_effective_show_reasoning_blocks method currently always returns True,
ignoring the user's show_reasoning_blocks preference; change it so it reads and
returns the actual preference (e.g., return self.show_reasoning_blocks or the
conversation/user preference accessor used elsewhere) while keeping a sensible
default (True) if the attribute or setting is missing—update
get_effective_show_reasoning_blocks to check the show_reasoning_blocks
preference and return that value instead of unconditionally True.
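
A minimal sketch of the preference-respecting helper, assuming the setting is an attribute on whatever settings object the view already reads; the real accessor path in `base_conversation.py` may differ.

```python
def get_effective_show_reasoning_blocks(settings) -> bool:
    """Return the user's show_reasoning_blocks preference, defaulting to
    True when the attribute is missing (sketch; the real accessor reads the
    app config object used elsewhere in the view)."""
    return bool(getattr(settings, "show_reasoning_blocks", True))
```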

990-998: ⚠️ Potential issue | 🔴 Critical

Fix the Python 2 multi-exception handler.

except IndexError, TypeError: is a SyntaxError in Python 3, so this module cannot import until it is rewritten as a parenthesized tuple.

🔧 Minimal fix
-			except IndexError, TypeError:
+			except (IndexError, TypeError):

🏁 Verification script
#!/bin/bash
set -e
python -m py_compile basilisk/views/base_conversation.py
rg -n 'except [^:]+, [^:]+' basilisk/views/base_conversation.py
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/views/base_conversation.py` around lines 990 - 998, The except
clause using Python 2 syntax must be converted to a Python 3 multi-exception
tuple: replace the line "except IndexError, TypeError:" with a parenthesized
tuple form so the block catches both IndexError and TypeError; update the
handler around self.reasoning_effort_choice.GetSelection() and
state.effort_options (i.e., the try/except that sets old_val) to use "except
(IndexError, TypeError):" so the module imports under Python 3.
basilisk/views/conversation_tab.py (2)

520-530: ⚠️ Potential issue | 🟠 Major

Don't coerce an unknown saved effort to the last option.

If a saved reasoning_effort is no longer supported by the current provider spec, selecting len(options) - 1 silently changes the draft's semantics on load. Only update the choice when the saved value still exists.

🎛️ Safer selection logic
 			val = draft_block.reasoning_effort.lower()
-			idx = options.index(val) if val in options else len(options) - 1
-			self.reasoning_effort_choice.SetSelection(idx)
+			if val in options:
+				self.reasoning_effort_choice.SetSelection(
+					options.index(val)
+				)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/views/conversation_tab.py` around lines 520 - 530, The code
currently maps an unknown draft_block.reasoning_effort to the last option which
silently mutates semantics; change the logic in the block that reads
self.current_engine/self.current_model and calls
engine.get_reasoning_ui_spec(model) so that you only call
self.reasoning_effort_choice.SetSelection(idx) when the saved val is present in
options (i.e., if val in options then idx = options.index(val) and set
selection), and otherwise leave the choice untouched (do not fallback to
len(options)-1 or mutate the draft); keep using spec.effort_options when
available.

480-482: ⚠️ Potential issue | 🟠 Major

Finish restoring draft state before recomputing visibility.

_restore_draft_block() still drops draft_block.web_search_mode, and the only update_parameter_controls_visibility() call runs inside _restore_draft_block_model() before the reasoning flags are restored. Reopening a saved draft can therefore lose its tools state and leave the reasoning-dependent widgets out of sync until the user toggles something.

🧩 Suggested restore order
 		self._restore_draft_block_model(draft_block)
 		self._restore_draft_block_params(draft_block)
 		self._restore_draft_block_reasoning(draft_block)
+		self._restore_draft_block_web_search_mode(draft_block)
+		self.update_parameter_controls_visibility()
 ...
-			self.update_parameter_controls_visibility()
 		except Exception:
 			log.debug("Could not restore draft model selection", exc_info=True)
+
+	def _restore_draft_block_web_search_mode(
+		self, draft_block: MessageBlock
+	) -> None:
+		if hasattr(self, "web_search_mode"):
+			self.web_search_mode.SetValue(bool(draft_block.web_search_mode))

Also applies to: 499-499

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/views/conversation_tab.py` around lines 480 - 482, The draft
restoration currently calls _restore_draft_block_model() which triggers
update_parameter_controls_visibility() before reasoning flags and
draft_block.web_search_mode are restored, causing tool/reasoning widgets to be
out of sync; change the order so that
_restore_draft_block_reasoning(draft_block) (and restore
draft_block.web_search_mode) and _restore_draft_block_params(draft_block) run
before any call to update_parameter_controls_visibility(), or move the
visibility recomputation out of _restore_draft_block_model() and invoke
update_parameter_controls_visibility() once after _restore_draft_block_model(),
_restore_draft_block_params(), and _restore_draft_block_reasoning() have all
completed; apply the same ordering fix to the other occurrence referenced around
the second block.
basilisk/provider_engine/gemini_engine.py (1)

197-225: ⚠️ Potential issue | 🟠 Major

Emit Gemini thought parts as reasoning chunks.

_build_thinking_config() turns reasoning on, but the stream handler still forwards only chunk.text. That means the UI never receives typed ("reasoning", ...) events from Gemini and cannot render reasoning blocks correctly.

🧠 Suggested stream handling
 		for chunk in stream:
-			chunk_text = chunk.text
-			if chunk_text:
-				yield ("content", chunk_text)
+			parts = getattr(chunk, "parts", None) or []
+			if parts:
+				for part in parts:
+					text = getattr(part, "text", None)
+					if not text:
+						continue
+					if getattr(part, "thought", False):
+						yield ("reasoning", text)
+					else:
+						yield ("content", text)
+				continue
+			chunk_text = chunk.text
+			if chunk_text:
+				yield ("content", chunk_text)

Also applies to: 314-333

basilisk/provider_engine/dynamic_model_loader.py (2)

197-205: ⚠️ Potential issue | 🟠 Major

Don't force every metadata-backed model to the same temperature cap.

Hardcoding max_temperature=2.0 in the generic loader removes any stricter default/provider cap that should come from the model definition or _postprocess_models(). That can expose values the upstream API rejects.

🎛️ Safer change
 			ProviderAIModel(
 				id=model_id,
 				name=item.get("name"),
 				description=item.get("description"),
 				context_window=_get_context_length(item),
 				max_output_tokens=_get_max_completion_tokens(item),
-				max_temperature=2.0,
 				default_temperature=default_temperature,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/dynamic_model_loader.py` around lines 197 - 205, The
loader is forcibly setting max_temperature=2.0 for every ProviderAIModel which
overrides any stricter model/provider caps; instead, read max_temperature from
the model metadata (e.g., item.get("max_temperature")) or leave it unset/None so
that ProviderAIModel construction and _postprocess_models() can apply the
correct cap and provider defaults; update the ProviderAIModel creation in
dynamic_model_loader.py (the models.append(...) call) to use the
metadata-derived value or omit the hardcoded 2.0 and ensure downstream code
(_postprocess_models, ProviderAIModel) enforces provider limits.

55-58: ⚠️ Potential issue | 🔴 Critical

Fix the Python 2 multi-exception handlers.

except TypeError, ValueError: is a SyntaxError in Python 3, so this module will not import until all four handlers use a parenthesized tuple.

🔧 Minimal fix
-	except TypeError, ValueError:
+	except (TypeError, ValueError):

🏁 Verification script
#!/bin/bash
set -e
python -m py_compile basilisk/provider_engine/dynamic_model_loader.py
rg -n 'except [^:]+, [^:]+' basilisk/provider_engine/dynamic_model_loader.py

Also applies to: 65-68, 72-75, 134-137

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/provider_engine/dynamic_model_loader.py` around lines 55 - 58,
Replace the Python 2 style multi-exception handlers with parenthesized tuples in
basilisk.provider_engine.dynamic_model_loader: change all occurrences of "except
TypeError, ValueError:" (and similar comma-separated forms) to "except
(TypeError, ValueError):" so the module imports under Python 3; update the four
handlers noted (around the int conversion and the other three locations) and run
python -m py_compile to verify no syntax errors remain.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@basilisk/conversation/database/manager.py`:
- Around line 698-699: The except clause in manager.py uses Python 2 comma
syntax ("except TypeError, ValueError:") which is invalid in Python 3; update
that handler to use the tuple form or separate handlers — i.e., change the
clause to catch both exceptions using a tuple (except (TypeError, ValueError):)
or add two explicit except TypeError: and except ValueError: blocks in the same
try/except area so the module imports cleanly; locate the offending clause in
the surrounding try/except in conversation/database/manager.py and apply one of
these fixes.

In `@basilisk/presenters/conversation_profile_presenter.py`:
- Around line 91-139: The problem is that _apply_generation_params and
_apply_reasoning_params only test for model existence, so hidden/unsupported
fields (top_k, stop, seed, stream_mode, frequency/presence penalties,
reasoning_budget_tokens, reasoning_effort, etc.) can be persisted even when the
new model doesn't support them; update both methods to check the selected
model's capability flags before assigning each profile field (e.g., use
model.supports_top_k / supports_stop_sequences / supports_seed /
supports_streaming / supports_frequency_penalty / supports_presence_penalty and
for reasoning use model.supports_reasoning or model.reasoning_capabilities) and
explicitly set the unsupported profile attributes (self.profile.top_k,
self.profile.stop, self.profile.seed, self.profile.stream_mode,
self.profile.frequency_penalty, self.profile.presence_penalty,
self.profile.reasoning_budget_tokens, self.profile.reasoning_effort, etc.) to
None when the model lacks the capability so only supported parameters are
persisted.

In `@basilisk/provider_ai_model.py`:
- Around line 128-129: get_default_param currently assumes
self.extra_info["default_parameters"] is a dict and calls .get(), which will
raise if the metadata is malformed; update get_default_param to validate that
extra_info is present and that default_parameters is a dict before using it
(e.g., check isinstance(self.extra_info, dict) and
isinstance(self.extra_info.get("default_parameters"), dict)), fall back to an
empty dict when not, and then return defaults.get(param, fallback) so malformed
metadata doesn't raise in get_default_param.
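
A sketch of the defensive lookup described above; the names follow the review text, while the real method lives on `ProviderAIModel` and may have a different signature.

```python
def get_default_param(extra_info, param, fallback=None):
    """Defensive lookup: tolerate absent or malformed metadata instead of
    raising when default_parameters is not a dict."""
    if not isinstance(extra_info, dict):
        return fallback
    defaults = extra_info.get("default_parameters")
    if not isinstance(defaults, dict):
        return fallback
    return defaults.get(param, fallback)
```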

In `@basilisk/provider_engine/anthropic_engine.py`:
- Around line 342-346: The Message model lacks a reasoning field so the
non-streamed `reasoning` is being dropped; update the response construction
where `new_block.response = Message(...)` (Message, MessageRoleEnum.ASSISTANT,
content, reasoning, citations) to merge `reasoning` into the message body:
create a `combined_content` (e.g., prepend the reasoning with a clear fenced
delimiter such as "```think\n{reasoning}\n```\n\n{content}") only if `reasoning`
is present, otherwise use `content`, and pass that as `content=combined_content` to
Message, removing the `reasoning` argument so `citations` and role remain
unchanged.
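
A sketch of the merge helper, assuming the fenced ```think delimiter suggested above; the exact delimiter is a convention the project would need to standardize alongside its reasoning-display parser.

```python
def merge_reasoning_into_content(reasoning, content):
    """Fold non-streamed reasoning into the message body, or return the
    content unchanged when no reasoning is present."""
    if not reasoning:
        return content
    return f"```think\n{reasoning}\n```\n\n{content}"
```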

In `@basilisk/provider_engine/base_engine.py`:
- Around line 72-84: The models attribute should not be a cached_property
because that prevents subsequent calls from hitting the URL-level TTL cache;
change the decorator on models from `@cached_property` to a normal `@property` so
each access will call _load_models_from_json_url and allow that function's
model_metadata_cache_ttl_seconds-based caching to refresh; keep the existing
logic using MODELS_JSON_URL and _postprocess_models/_load_models_from_json_url
intact and raise NotImplementedError if MODELS_JSON_URL is unset.

In `@basilisk/provider_engine/dynamic_model_loader.py`:
- Around line 193-195: Guard the numeric conversion of
default_parameters.temperature so a non-numeric value doesn't raise and abort
parsing: in the code path using _get_default_params(item) and def_temp =
default_params.get("temperature"), replace the direct float(...) call with a
safe parse (wrap float(def_temp) in a try/except catching ValueError/TypeError)
and fall back to 1.0 (and optionally log a warning) so
parse_model_metadata()/load_models_from_url() continue rather than aborting on
bad metadata.
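
A sketch of the guard described above, with the 1.0 fallback and optional warning; the function name is illustrative, as the real code inlines this in the loader.

```python
import logging

log = logging.getLogger(__name__)


def parse_default_temperature(default_params: dict, fallback: float = 1.0) -> float:
    """Safely coerce default_parameters.temperature, falling back instead of
    aborting metadata parsing on a non-numeric value."""
    raw = default_params.get("temperature")
    if raw is None:
        return fallback
    try:
        return float(raw)
    except (TypeError, ValueError):
        log.warning("ignoring non-numeric default temperature: %r", raw)
        return fallback
```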

In `@basilisk/provider_engine/mistralai_engine.py`:
- Around line 164-167: You resolved the model into the local variable via model
= self.get_model(new_block.model.model_id) but the API params still use
new_block.model.model_id; update the params construction to use the resolved
model identifier (e.g., model.model_id or the appropriate attribute on the
object returned by get_model) so the request uses the compatibility-remapped
model; change the "model" value in params to reference the resolved model
variable and keep get_messages(new_block, ...) unchanged.

In `@basilisk/provider_engine/openai_engine.py`:
- Around line 171-179: The get_audio_output_spec function is incorrectly using
model.audio (input modality) to decide TTS UI; change its guard to check the
output capability (model.audio_output) and return AudioOutputUISpec when
audio_output is truthy (keep default_voice and _AUDIO_VOICES). Also update the
analogous check elsewhere in this file that mirrors this logic (the other
audio-output helper around the later block) to use model.audio_output instead of
model.audio so output-only models expose TTS controls and routing.

In `@basilisk/views/accessible.py`:
- Around line 16-34: The public API is missing docstrings causing D107/D102
failures: add a concise module/class docstring at the top of the class in
accessible.py and provide short docstrings for the public methods __init__,
GetName, and GetHelpText describing their purpose, parameters
(win/name/help_text/childId) and return values (tuple[int, str]) so the linter
stops flagging D107/D102; ensure docstrings follow the project's style (one-line
summary and optional details) and place them immediately inside the class and
each method respectively.

In `@basilisk/views/base_conversation.py`:
- Around line 866-876: The current logic maps an unknown saved
profile.reasoning_effort to the last option, which can silently change the
user's setting; instead only set the UI selection when the saved value is
supported. Modify the block that reads profile.reasoning_effort (using
self.current_engine, self.current_model, engine.get_reasoning_ui_spec,
spec.effort_options) so it computes options as before but only calls
self.reasoning_effort_choice.SetSelection(idx) if val is in options (i.e., find
idx = options.index(val) when val exists); if not present, do nothing and leave
the current/default selection unchanged.

In `@basilisk/views/conversation_tab.py`:
- Around line 503-509: _restore_draft_block_params only re-applies temperature,
max_tokens, top_p and stream; you need to also restore the other saved
generation params from the draft_block (frequency_penalty, presence_penalty,
seed, top_k, stop). Update _restore_draft_block_params to set the corresponding
UI controls from draft_block.frequency_penalty, draft_block.presence_penalty,
draft_block.seed, draft_block.top_k and draft_block.stop (e.g., set the matching
spinner/text control values such as frequency_penalty_spinner.SetValue(...),
presence_penalty_spinner.SetValue(...), seed control.SetValue(...),
top_k_spinner.SetValue(...), and stop control.SetValue(...)) so reopening a
draft preserves all generation settings.

In `@basilisk/views/edit_block_dialog.py`:
- Around line 249-255: The editable field is being populated with the formatted
display (format_response_for_display) which mixes reasoning and content and
later gets saved verbatim; change the dialog to bind the editor (response_txt)
to the raw response content instead of the display string by replacing
response_txt.SetValue(display) with response_txt.SetValue(content) and keep
reasoning stored in self.block.response.reasoning (or, alternatively, implement
a parser in the OK/save handler to split the formatted text back into
response.reasoning and response.content before writing). Ensure the code paths
using get_effective_show_reasoning_blocks and format_response_for_display are
only used for non-editable previews, and that the save/OK routine updates
self.block.response.content directly from response_txt.GetValue() without
clobbering structured reasoning unless explicitly parsed.

In `@tests/presenters/test_edit_block_presenter.py`:
- Around line 66-76: Add assertions that the full generation params returned by
mock_view.get_generation_params_from_view are preserved when saving: after
calling the code that triggers save_block, fetch the payload passed to the
mocked save_block (e.g., via mock_repo.save_block.assert_called_once_with(...)
or inspect mock_repo.save_block.call_args) and assert that keys
"frequency_penalty", "presence_penalty", "seed", "top_k", and "stop" (and their
expected values: 0.0, 0.0, None, None, None) are present and equal to the
originals; reference the mock_view.get_generation_params_from_view return dict
and the save_block call to locate where to add these assertions.
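
An illustrative shape for the requested assertions: capture the payload handed to the mocked `save_block` and check the extra generation params round-trip. The fixture and keyword names are assumptions; the real test uses its own `mock_view`/`mock_repo` fixtures.

```python
from unittest.mock import MagicMock

# Expected generation params, as listed in the review comment.
expected = {
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "seed": None,
    "top_k": None,
    "stop": None,
}

mock_repo = MagicMock()
mock_repo.save_block(params=expected)  # stands in for the presenter's save path

# Inspect the call and assert every param survived unchanged.
_, saved_kwargs = mock_repo.save_block.call_args
saved_params = saved_kwargs["params"]
for key, value in expected.items():
    assert saved_params[key] == value
```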

---

Outside diff comments:
In `@basilisk/presenters/conversation_profile_presenter.py`:
- Around line 60-74: Do not mutate self.profile in-place before validation:
create a temporary profile instance (e.g., call
ConversationProfile.model_construct() or clone existing self.profile into
temp_profile), apply changes to temp_profile using the same helper methods
(_apply_account_and_model, _apply_generation_params, _apply_reasoning_params)
but targeting temp_profile, then call
ConversationProfile.model_validate(temp_profile); if validation passes assign
temp_profile to self.profile and return it, otherwise log the ValidationError
and return None so the original self.profile remains unchanged.

---

Duplicate comments:
In `@basilisk/config/conversation_profile.py`:
- Around line 274-277: The validator for ConversationProfile currently only
rejects reasoning_budget_tokens and reasoning_effort when ai_model_info is None,
letting invalid values pass when a model exists; update the validation logic (in
the same validator that references ai_model_info, reasoning_budget_tokens,
reasoning_effort) to always validate these fields: enforce that
reasoning_budget_tokens is either None or a non-negative integer and that
reasoning_effort is either None or one of the allowed effort strings (e.g.,
"low","medium","high" or the project's canonical set); keep the existing checks
that both must be None when ai_model_info is None but add the positive/enum
checks for the model-present branch so invalid budgets or unknown effort strings
are rejected regardless of ai_model_info.

In `@basilisk/conversation/conversation_model.py`:
- Around line 151-153: The reasoning fields allow invalid values; restrict them
by changing reasoning_budget_tokens to a positive integer type or add a Field
constraint (e.g., ge=1) so it cannot accept non-positive values, and constrain
reasoning_effort to a fixed set of allowed values (use a Literal or Enum like
"low"/"medium"/"high" or the project-approved effort levels) instead of
arbitrary strings; update validation on the reasoning_budget_tokens and
reasoning_effort fields in the model where they are declared to enforce these
constraints (adjust any related serialization/deserialization/tests
accordingly).
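
A plain-Python statement of these constraints; the real model would enforce them declaratively with pydantic (`Field(ge=1)`, `Literal["low", "medium", "high"]`), and the allowed-effort set here is an assumption pending the project's canonical list.

```python
_ALLOWED_EFFORTS = {"low", "medium", "high"}


def validate_reasoning_fields(budget_tokens, effort) -> None:
    """Raise ValueError on a non-positive budget or unknown effort string;
    None is allowed for both fields."""
    if budget_tokens is not None and (
        not isinstance(budget_tokens, int) or budget_tokens < 1
    ):
        raise ValueError("reasoning_budget_tokens must be a positive integer")
    if effort is not None and effort not in _ALLOWED_EFFORTS:
        raise ValueError("reasoning_effort must be one of low/medium/high")
```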

In `@basilisk/presenters/conversation_presenter.py`:
- Around line 13-14: The presenter currently imports wx and calls wx.MessageBox
directly (in conversation_presenter and also around the block at lines
~176-182), which pushes UI logic into the presenter; remove the wx import and
replace each wx.MessageBox(...) call with a call to the view's error helper
(e.g., self.view.show_error(...) or self.view.show_enhanced_error(...)) while
preserving the message text and dialog flags/purpose, and ensure any return/flow
logic that relied on the MessageBox value adapts to the view method's return
semantics; update conversation_presenter to use only
self.view.show_error/show_enhanced_error and drop the wx dependency.

In `@basilisk/presenters/edit_block_presenter.py`:
- Around line 284-303: The presenter method _on_non_stream_finish can run after
the view/dialog is torn down; update the presenter class to inherit
DestroyGuardMixin and protect async callbacks by decorating
_on_non_stream_finish with `@_guard_destroying` so it no-ops if the presenter is
being destroyed; locate the presenter class in this file, add DestroyGuardMixin
to its base classes, import the mixin if missing, and add the `@_guard_destroying`
decorator above the _on_non_stream_finish definition (apply same pattern to
other async callback methods in this presenter as needed).
- Around line 131-132: The reasoning params are being derived from the raw view
without using the potentially swapped model from ensure_model_compatibility();
call ensure_model_compatibility() first to obtain the compatibility-adjusted
model and then resolve reasoning params against that model (i.e., pass the
adjusted model into get_reasoning_params_from_view or update
get_reasoning_params_from_view to accept a model argument), and likewise ensure
view_get_web_search_value is interpreted in the context of the adjusted model so
UI selections map to the correct provider values.
- Around line 226-235: The current logic in edit_block_presenter.py clears
stored reasoning whenever the visible editor text contains no reasoning; to fix
this, when split_reasoning_and_content_from_display(text) returns no reasoning,
retrieve the existing reasoning from the stored block response (e.g., call
split_reasoning_and_content_from_display(self.block.response.content) or
otherwise parse self.block.response.content) and use that existing reasoning
when building combined so stored reasoning is preserved; update the code around
the combined construction and the model_copy call (the block that assigns
self.block.response = self.block.response.model_copy(update={"content":
combined})) to merge existing reasoning with the new content only when visible
reasoning is absent.
- Around line 14-15: The presenter currently imports wx and calls wx.MessageBox
directly; remove the wx dependency from edit_block_presenter.py and delegate all
message-box usage to the view by replacing any wx.MessageBox calls (including
those around lines 118-124) with calls to the view API (e.g.,
view.show_error(message) or view.show_enhanced_error(message, details, style)).
Update the EditBlockPresenter methods that call wx.MessageBox to accept/use the
view instance to surface errors, forward the original message and any
detail/flags to the appropriate view method, and delete the top-level "import
wx" from the module.

In `@basilisk/provider_engine/anthropic_engine.py`:
- Around line 208-210: The code currently uses generic truthiness for web_search
from kwargs which can allow unintended values to enable tools; change the
handling of web_search_mode so you pop with a neutral default (e.g., None) and
only enable web search when the mode explicitly matches the allowed enabling
values (e.g., web_search == "enabled" or web_search is True, or web_search in
{"enabled","on","search"} as your app defines); then keep the existing model
check self.model_supports_web_search(model) and only call
tools.extend(self.get_web_search_tool_definitions(model)) when the explicit-mode
check passes. Ensure you update the use sites: the kwargs.pop("web_search_mode",
...) and the conditional that references web_search.
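
A sketch of the explicit-mode gate; the enabling set is an assumption, as the project defines the canonical mode values.

```python
def web_search_requested(kwargs: dict) -> bool:
    """Pop web_search_mode with a neutral default and enable only on
    explicitly allowed values, never on generic truthiness."""
    mode = kwargs.pop("web_search_mode", None)
    return mode is True or mode in {"enabled", "on", "search"}
```

Note that `1` is deliberately rejected even though it is truthy (and `1 == True`): the `is True` identity check plus string membership avoids the accidental-enable path the comment flags.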

In `@basilisk/provider_engine/base_engine.py`:
- Around line 104-117: The filter list in base_engine uses _FILTERABLE_PARAMS
and only contains "max_tokens", but ResponsesAPIEngine maps that to
"max_output_tokens" before calling _filter_params_for_model, so unsupported
token-cap fields leak through; update _FILTERABLE_PARAMS (or
_get_filterable_params()) to include "max_output_tokens" (and any other
provider-mapped aliases) so that _filter_params_for_model() will correctly
filter both "max_tokens" and "max_output_tokens" when invoked by
ResponsesAPIEngine and other subclasses.

In `@basilisk/provider_engine/dynamic_model_loader.py`:
- Around line 197-205: The loader is forcibly setting max_temperature=2.0 for
every ProviderAIModel which overrides any stricter model/provider caps; instead,
read max_temperature from the model metadata (e.g., item.get("max_temperature"))
or leave it unset/None so that ProviderAIModel construction and
_postprocess_models() can apply the correct cap and provider defaults; update
the ProviderAIModel creation in dynamic_model_loader.py (the models.append(...)
call) to use the metadata-derived value or omit the hardcoded 2.0 and ensure
downstream code (_postprocess_models, ProviderAIModel) enforces provider limits.
- Around line 55-58: Replace the Python 2 style multi-exception handlers with
parenthesized tuples in basilisk.provider_engine.dynamic_model_loader: change
all occurrences of "except TypeError, ValueError:" (and similar comma-separated
forms) to "except (TypeError, ValueError):" so the module imports under Python
3; update the four handlers noted (around the int conversion and the other three
locations) and run python -m py_compile to verify no syntax errors remain.

In `@basilisk/provider_engine/openai_engine.py`:
- Around line 55-81: The supported_attachment_formats set currently lists
"audio/mp4" and "audio/webm" but those MIME types are not handled by
_audio_mime_to_format(), causing failures later in _to_chat_content_part();
either remove "audio/mp4" and "audio/webm" from supported_attachment_formats or
add mappings for those MIME types inside _audio_mime_to_format() (and update any
dependent encoding logic used by _to_chat_content_part()) so advertised formats
and actual encoder mappings stay consistent.

In `@basilisk/views/base_conversation.py`:
- Around line 780-782: The get_effective_show_reasoning_blocks method currently
always returns True, ignoring the user's show_reasoning_blocks preference;
change it so it reads and returns the actual preference (e.g., return
self.show_reasoning_blocks or the conversation/user preference accessor used
elsewhere) while keeping a sensible default (True) if the attribute or setting
is missing—update get_effective_show_reasoning_blocks to check the
show_reasoning_blocks preference and return that value instead of
unconditionally True.
- Around line 990-998: The except clause using Python 2 syntax must be converted
to a Python 3 multi-exception tuple: replace the line "except IndexError,
TypeError:" with a parenthesized tuple form so the block catches both IndexError
and TypeError; update the handler around
self.reasoning_effort_choice.GetSelection() and state.effort_options (i.e., the
try/except that sets old_val) to use "except (IndexError, TypeError):" so the
module imports under Python 3.

In `@basilisk/views/conversation_tab.py`:
- Around line 520-530: The code currently maps an unknown
draft_block.reasoning_effort to the last option which silently mutates
semantics; change the logic in the block that reads
self.current_engine/self.current_model and calls
engine.get_reasoning_ui_spec(model) so that you only call
self.reasoning_effort_choice.SetSelection(idx) when the saved val is present in
options (i.e., if val in options then idx = options.index(val) and set
selection), and otherwise leave the choice untouched (do not fallback to
len(options)-1 or mutate the draft); keep using spec.effort_options when
available.
- Around line 480-482: The draft restoration currently calls
_restore_draft_block_model() which triggers
update_parameter_controls_visibility() before reasoning flags and
draft_block.web_search_mode are restored, causing tool/reasoning widgets to be
out of sync; change the order so that
_restore_draft_block_reasoning(draft_block) (and restore
draft_block.web_search_mode) and _restore_draft_block_params(draft_block) run
before any call to update_parameter_controls_visibility(), or move the
visibility recomputation out of _restore_draft_block_model() and invoke
update_parameter_controls_visibility() once after _restore_draft_block_model(),
_restore_draft_block_params(), and _restore_draft_block_reasoning() have all
completed; apply the same ordering fix to the other occurrence referenced around
the second block.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 53145390-043f-413b-9254-a239b55c902b

📥 Commits

Reviewing files that changed from the base of the PR and between e34b0e6 and 1bb2560.

⛔ Files ignored due to path filters (1)
  • basilisk/res/alembic/versions/006_add_generation_params_to_message_blocks.py is excluded by !basilisk/res/**
📒 Files selected for processing (25)
  • basilisk/config/conversation_profile.py
  • basilisk/conversation/conversation_model.py
  • basilisk/conversation/database/manager.py
  • basilisk/conversation/database/models.py
  • basilisk/presenters/base_conversation_presenter.py
  • basilisk/presenters/conversation_presenter.py
  • basilisk/presenters/conversation_profile_presenter.py
  • basilisk/presenters/edit_block_presenter.py
  • basilisk/provider_ai_model.py
  • basilisk/provider_engine/anthropic_engine.py
  • basilisk/provider_engine/base_engine.py
  • basilisk/provider_engine/dynamic_model_loader.py
  • basilisk/provider_engine/gemini_engine.py
  • basilisk/provider_engine/legacy_openai_engine.py
  • basilisk/provider_engine/mistralai_engine.py
  • basilisk/provider_engine/openai_engine.py
  • basilisk/provider_engine/responses_api_engine.py
  • basilisk/views/accessible.py
  • basilisk/views/base_conversation.py
  • basilisk/views/conversation_profile_dialog.py
  • basilisk/views/conversation_tab.py
  • basilisk/views/edit_block_dialog.py
  • basilisk/views/int_spin_ctrl.py
  • tests/presenters/test_conversation_profile_presenter.py
  • tests/presenters/test_edit_block_presenter.py

Comment on lines +698 to +699
except TypeError, ValueError:
pass
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the invalid syntax pattern is present/removed.
rg -n 'except\s+TypeError,\s*ValueError' basilisk/conversation/database/manager.py

Repository: SigmaNight/basiliskLLM

Length of output: 102


🏁 Script executed:

#!/bin/bash
# Check for other Python 2-style exception syntax patterns in the file
rg -n 'except\s+\w+,\s*\w+' basilisk/conversation/database/manager.py

# Also check context around line 698
sed -n '690,710p' basilisk/conversation/database/manager.py | cat -n

Repository: SigmaNight/basiliskLLM

Length of output: 893


🏁 Script executed:

#!/bin/bash
# Check for Python 2-style exception syntax across the entire codebase
rg -n 'except\s+\w+,\s+\w+' --type py .

Repository: SigmaNight/basiliskLLM

Length of output: 913


Fix invalid Python 3 exception syntax.

except TypeError, ValueError: uses Python 2 comma-separated syntax and is invalid in Python 3. This will cause a SyntaxError at module import.

Suggested fix
-			except TypeError, ValueError:
+			except (TypeError, ValueError):
 				pass
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@basilisk/conversation/database/manager.py` around lines 698 - 699, The except
clause in manager.py uses Python 2 comma syntax ("except TypeError,
ValueError:") which is invalid in Python 3; update that handler to use the tuple
form or separate handlers — i.e., change the clause to catch both exceptions
using a tuple (except (TypeError, ValueError):) or add two explicit except
TypeError: and except ValueError: blocks in the same try/except area so the
module imports cleanly; locate the offending clause in the surrounding
try/except in conversation/database/manager.py and apply one of these fixes.
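For reference, the Python 3 tuple form catches both exception types in a single handler. A minimal standalone illustration (the function name is invented for the example, not taken from the PR):

```python
def coerce_int(value):
	"""Return value as int, or None if conversion fails."""
	try:
		return int(value)
	# Python 3 requires a parenthesized tuple to catch multiple exception
	# types in one clause; the Python 2 comma syntax is a SyntaxError.
	except (TypeError, ValueError):
		return None
```

`int(None)` raises `TypeError` and `int("abc")` raises `ValueError`; both are absorbed by the one clause.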

- Add "Refresh models" to model list context menu and F5 shortcut
- Add invalidate_model_cache() for dynamic model loader
- Add preferences SpinCtrl for cache TTL (1–168 hours)
- Extend config TTL: default 7h, min 1h, max 7 days
- Clamp legacy TTL values below 1h on config load
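The TTL bounds described in these commit bullets could look roughly like this. The constant values (default 7 h, min 1 h, max 7 days) come from the commit summary; the function and constant names are assumptions for illustration, not the PR's actual config code.

```python
# Hypothetical sketch of clamping model_metadata_cache_ttl_seconds on load.
MIN_TTL_SECONDS = 1 * 3600        # 1 hour
MAX_TTL_SECONDS = 7 * 24 * 3600   # 7 days
DEFAULT_TTL_SECONDS = 7 * 3600    # 7 hours

def clamp_model_metadata_ttl(ttl_seconds):
	"""Clamp a (possibly legacy) TTL into the supported [1h, 7d] range."""
	if ttl_seconds is None:
		return DEFAULT_TTL_SECONDS
	# Legacy configs may hold values below 1 hour; pull them up to the
	# minimum rather than rejecting the config outright.
	return max(MIN_TTL_SECONDS, min(ttl_seconds, MAX_TTL_SECONDS))
```

The preferences SpinCtrl (1–168 hours) then only ever writes values inside this range; clamping on load covers configs written by older versions.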
@AAClause AAClause force-pushed the feat/dynamicModels branch from 01401a7 to 6d0674f Compare March 15, 2026 11:34
- Add Message.reasoning field and DB persistence (save/load)
- Anthropic 4.6: adaptive effort (low/medium/high, max for Opus)
- Gemini 2.5/3: update thinkingBudget and thinkingLevel options
- OpenRouter: get_reasoning_ui_spec per provider (effort vs budget)
…elpers

- Add created, default_temperature, web_search_capable, default_parameters
- Use shared helpers from dynamic_model_loader (context_length, modality flags, etc.)
- Fix release date sort for OpenRouter models
- Sort by None/Name/Release date/Max output/Context window via context menu
- Reverse order toggle
- Preferences: default sort and reverse in Conversation group
- Account: override with Default (use preferences), None, or explicit sort
- Effective sort: account override > preference default
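The "effective sort" resolution in the last bullet can be sketched in a few lines. The string values used here ("default", "none", explicit sort keys) are assumptions matching the bullet's description, not verified enum names from the PR.

```python
# Hypothetical sketch: account-level override wins over the preference
# default; "default" means "defer to preferences", while "none" and
# explicit sort keys are used as-is.
def effective_sort(account_override, preference_default):
	if account_override == "default":
		return preference_default
	return account_override
```

So an account set to "default" follows whatever the preferences say, while "none" explicitly disables sorting even if the preference default is, say, release date.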
…s default

- Right-click account combo in conversation tab shows Edit account and Set as default
- Edit account opens EditAccountDialog; disabled for ENV_VAR accounts
- Set as default persists default account; disabled when already default
- Extract _refresh_account_combo() for reuse in ConversationTab
- Add context menu on account list in Manage accounts dialog
@AAClause AAClause force-pushed the feat/dynamicModels branch from 522adf4 to c04b481 Compare March 15, 2026 13:45
@coderabbitai

coderabbitai bot commented Mar 15, 2026

Note

Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Autofix skipped. No unresolved CodeRabbit review comments with fix instructions found.
