
feat: add reasoning effort parameter #41

Open
billybonks wants to merge 2 commits into main from feat/add-reasoning-effort-param

Conversation

@billybonks (Contributor)

Currently the clients don't use the same parameter key for reasoning. That's not ideal, but it's something we could normalize on to reduce complexity.

The OpenAI Responses API adapter is also brittle because it reimplements chat so that it can use a different method on the client.
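One shape such normalization could take (a hypothetical sketch, not this PR's code; the module name, method names, and the Anthropic budget values are all illustrative): each adapter translates a shared `reasoning_effort` option into its provider's native key.

```ruby
# Hypothetical sketch of normalizing a shared reasoning_effort option
# across providers; names and budget values are illustrative.
module ReasoningMapper
  # OpenAI-style: pass the effort level through under a `reasoning` key.
  def self.for_openai(effort)
    effort.nil? ? {} : { reasoning: { effort: effort } }
  end

  # Anthropic-style: translate the level into a thinking token budget.
  BUDGETS = { "low" => 1024, "medium" => 8192, "high" => 32_768 }.freeze

  def self.for_anthropic(effort)
    return {} if effort.nil?

    { thinking: { type: "enabled", budget_tokens: BUDGETS.fetch(effort) } }
  end
end
```

With a mapper like this, each client is still called with exactly what its API expects, and only the adapter layer knows about the shared option.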


@managerbot-app (bot) left a comment

Reviewed Changes

Managerbot reviewed 13 out of 13 changed files in this pull request and generated 8 comments.

Files not reviewed (0)
Comments suppressed due to low confidence (3)
  • lib/llm_gateway/adapters/adapter.rb:81 (null_guard): The base `normalize_reasoning_effort` simply returns the value as-is without any validation. For OpenAI's chat completions adapter (which uses the default implementation), this means any arbitrary value will be passed through. Consider adding validation or documenting the accepted values.
  • test/adapter_reasoning_test.rb:84 (test_completeness): There's no test for the `ClaudeCode` adapter with an invalid `reasoning_effort` string. Since both `Claude::MessagesAdapter` and `ClaudeCode::MessagesAdapter` include `AnthropicReasoning`, the error behavior is shared, but a test for the `ClaudeCode` path would improve coverage symmetry.
  • lib/llm_gateway/adapters/anthropic_reasoning.rb:12 (are_you_sure): The `private` keyword inside a module used as a mixin affects all methods defined after it in the module. When included into a class, these methods will become private instance methods of that class. Ensure this is intentional — if `normalize_reasoning_effort` and `reasoning_effort_parameter` need to be private in the including class, this is correct, but it may be surprising to future maintainers.
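The mixin-visibility behavior the bot flags can be demonstrated in isolation (a standalone sketch with illustrative names, not the gem's actual module):

```ruby
# Demonstrates the review note: `private` inside a mixin makes every
# method defined after it a private instance method of any including class.
module Reasoning
  def effort_param(level)
    normalize(level) # public entry point may still call the private helper
  end

  private

  def normalize(level)
    { effort: level }
  end
end

class Adapter
  include Reasoning
end
```

Here `Adapter.new.effort_param("low")` works, but calling `normalize` from outside raises `NoMethodError` — which is fine if that's the intent, and confusing if a future caller expects the helper to be public.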

end

- def chat(messages, response_format: { type: "text" }, tools: nil, system: [], max_completion_tokens: 4096)
+ def chat(messages, response_format: { type: "text" }, tools: nil, system: [], max_completion_tokens: 4096, thinking: {})

The default value for `thinking` is `{}` (empty hash). The check `LlmGateway::Utils.present?(thinking)` on the next line treats `{}` as not present, so the behavior is correct as written. Still, using `nil` as the default would be more semantically clear and consistent with how `reasoning_effort: nil` is used in the other clients.

end

- def chat(messages, response_format: { type: "text" }, tools: nil, system: [], max_completion_tokens: 4096)
+ def chat(messages, response_format: { type: "text" }, tools: nil, system: [], max_completion_tokens: 4096, thinking: {})

Same concern as claude.rb: the default value for thinking is {} instead of nil. Using nil as the default would be more idiomatic and consistent with the other clients in this patch.

def normalize_reasoning_effort(reasoning_effort)
return nil if reasoning_effort.nil?

budget_tokens = if reasoning_effort.is_a?(Integer)

When reasoning_effort is an Integer, there is no validation that the value is within a reasonable range (e.g., Anthropic's API requires a minimum of 1024 budget tokens). Consider adding a minimum/maximum bounds check for the integer path.
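A bounds check along these lines would address it (a sketch, not the PR's code; 1024 is Anthropic's documented minimum thinking budget, while the ceiling chosen here is purely illustrative):

```ruby
# Hypothetical bounds check for the integer path. MIN_BUDGET matches
# Anthropic's documented 1024-token minimum; MAX_BUDGET is illustrative.
MIN_BUDGET = 1024
MAX_BUDGET = 128_000

def validated_budget(tokens)
  unless tokens.between?(MIN_BUDGET, MAX_BUDGET)
    raise ArgumentError,
          "budget_tokens must be between #{MIN_BUDGET} and #{MAX_BUDGET}, got #{tokens}"
  end

  tokens
end
```

Raising locally surfaces a misconfigured budget at the adapter boundary instead of as an opaque API error.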

def normalize_reasoning_effort(reasoning_effort)
return nil if reasoning_effort.nil?

{ effort: reasoning_effort }

The normalize_reasoning_effort wraps the value as { effort: reasoning_effort } without validating that reasoning_effort is one of the accepted values ('low', 'medium', 'high'). Consider adding validation similar to AnthropicReasoning to surface errors early rather than letting the API return an error.
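Early validation could look like this (a sketch of the suggestion, not the gem's code; the `low`/`medium`/`high` set matches OpenAI's documented effort levels):

```ruby
# Sketch of validating reasoning_effort before wrapping it, so a typo
# fails fast in the adapter rather than as a provider API error.
VALID_EFFORTS = %w[low medium high].freeze

def normalize_reasoning_effort(reasoning_effort)
  return nil if reasoning_effort.nil?

  unless VALID_EFFORTS.include?(reasoning_effort)
    raise ArgumentError,
          "reasoning_effort must be one of #{VALID_EFFORTS.join(', ')}, " \
          "got #{reasoning_effort.inspect}"
  end

  { effort: reasoning_effort }
end
```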

output: [ { id: "msg_123", type: "message", role: "assistant", content: [ { type: "output_text", text: "hello", annotations: [] } ] } ]
}.freeze

class AnthropicClient

The mock client classes (AnthropicClient, OpenAiChatClient, etc.) are defined as inner classes of the test but they duplicate the real client interfaces. If the real client signatures change, these mocks won't automatically reflect that. Consider using Mocha's mock or stub directly on instances of the actual client classes to keep tests closer to the real implementation.
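In plain Ruby the same idea can be shown by stubbing an instance of the real class rather than writing a parallel mock class (illustrative class and return values; with Mocha this would be the one-liner `client.stubs(:chat).returns(...)`):

```ruby
# Illustrative: stub one instance of the real client class instead of
# maintaining a hand-written duplicate mock class. Class name and
# return values here are placeholders, not the gem's real client.
class AnthropicClient
  def chat(messages, thinking: nil)
    raise "would call the live API"
  end
end

client = AnthropicClient.new
client.define_singleton_method(:chat) do |messages, thinking: nil|
  { role: "assistant", content: "stubbed", thinking: thinking }
end
```

Because the stub lives on an instance of the real class, a renamed or removed method on the real client is far more likely to surface as a test failure than it would be with a free-standing mock class.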

@billybonks force-pushed the feat/add-reasoning-effort-param branch 3 times, most recently from cb88751 to 7c661a2 on March 25, 2026 14:08
Built with the expectation that we will need multiple options to map between providers.

The actual clients are called with what the API expects, and the adapter maps between them.

Also refactors the option params slightly so they're less verbose and pass through the mapper into the API.
@billybonks force-pushed the feat/add-reasoning-effort-param branch from 7c661a2 to 35f0f8e on March 25, 2026 14:09