Summary
The OpenAI Chat Completions instrumentation in both the `openai` and `ruby-openai` integrations does not capture several parameters that are critical for reasoning models and web search, because the `METADATA_FIELDS` lists have not been updated for newer API additions. Users of reasoning models (o1, o3, o4-mini) and web-search-enabled completions get incomplete observability — these parameters are silently dropped from span metadata.
What is missing
The `METADATA_FIELDS` constant in both Chat Completions instrumentations currently includes:

```ruby
METADATA_FIELDS = %i[
  model frequency_penalty logit_bias logprobs max_tokens n
  presence_penalty response_format seed service_tier stop
  stream stream_options temperature top_p top_logprobs
  tools tool_choice parallel_tool_calls user functions function_call
].freeze
```
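To see why unlisted parameters vanish, here is a minimal sketch of the whitelist-slicing pattern such instrumentation typically uses (the method name and the abbreviated field list are hypothetical, not the SDK's actual code):

```ruby
# Abbreviated metadata field list (illustrative, not the full SDK constant).
FIELDS = %i[model max_tokens temperature].freeze

# Keep only whitelisted request parameters as span metadata.
def extract_metadata(params)
  params.slice(*FIELDS)
end

request = { model: "o3", max_completion_tokens: 1024, reasoning_effort: "high" }
extract_metadata(request)
# => { model: "o3" } — the reasoning parameters are silently dropped
```

Any key absent from the list never reaches the span, which is exactly the failure mode described above.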
The following parameters accepted by the upstream OpenAI Chat Completions API are not captured:
| Parameter | Why it matters |
| --- | --- |
| `max_completion_tokens` | Required for reasoning models (o1, o3, o4-mini) — replaces `max_tokens`, which doesn't work for these models. Without this, users can't see the output token limit in their traces. |
| `reasoning_effort` | Controls reasoning depth (`low`/`medium`/`high`) for reasoning models. Directly affects cost, latency, and output quality — essential for observability. |
| `web_search_options` | Configures web search during generation (OpenAI's built-in web search tool for Chat Completions). Without this, users can't distinguish web-search-enabled generations from regular ones. |
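If the fix is simply to append the missing symbols, the expanded constant might look like this (a sketch; the final naming and ordering are up to the maintainers):

```ruby
# Existing fields plus the three Chat Completions parameters currently dropped.
METADATA_FIELDS = %i[
  model frequency_penalty logit_bias logprobs max_tokens n
  presence_penalty response_format seed service_tier stop
  stream stream_options temperature top_p top_logprobs
  tools tool_choice parallel_tool_calls user functions function_call
  max_completion_tokens reasoning_effort web_search_options
].freeze
```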
Inconsistency with Responses API
The Responses API instrumentation in this SDK already captures the `reasoning` parameter (which contains effort and other reasoning config). The Chat Completions API uses different parameter names (`max_completion_tokens`, `reasoning_effort`) for equivalent functionality, but these are not captured.
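The naming difference can be summarized with two illustrative request hashes (plain hashes, not real client calls):

```ruby
# Responses API: reasoning config nested under a single :reasoning key,
# which the Responses instrumentation already lists in its metadata fields.
responses_request = { model: "o3", reasoning: { effort: "high" } }

# Chat Completions API: the same intent expressed as flat keys, neither of
# which appears in the Chat METADATA_FIELDS list today.
chat_request = { model: "o3", reasoning_effort: "high", max_completion_tokens: 1024 }

responses_request[:reasoning][:effort] == chat_request[:reasoning_effort] # => true
```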
Braintrust docs status
`not_found` — The Braintrust docs at https://www.braintrust.dev/docs/instrument/wrap-providers do not mention `max_completion_tokens`, `reasoning_effort`, reasoning models, or `web_search_options`.
Upstream sources
- `OpenAI::Resources::Chat::Completions#create` accepts `max_completion_tokens`, `reasoning_effort`, `prediction`, `audio`, and `web_search_options` parameters
- `max_completion_tokens` and `reasoning_effort` are documented as the primary configuration parameters for reasoning models
Local repo files inspected
- `lib/braintrust/contrib/openai/instrumentation/chat.rb` (lines 27–32) — `METADATA_FIELDS` missing `max_completion_tokens`, `reasoning_effort`, `web_search_options`
- `lib/braintrust/contrib/ruby_openai/instrumentation/chat.rb` (lines 32–37) — same `METADATA_FIELDS` list, same omissions
- `lib/braintrust/contrib/openai/instrumentation/responses.rb` (lines 26–31) — Responses API already captures the `reasoning` parameter, showing the pattern exists
- `lib/braintrust/contrib/support/openai.rb` (lines 14–68) — usage token parser already handles `*_tokens_details` including `reasoning_tokens`, so metrics capture reasoning token usage but metadata doesn't capture the reasoning configuration
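That asymmetry can be illustrated with a simplified version of the token-parsing pattern (the hash shape mirrors the OpenAI usage payload; the parsing loop is a sketch, not the SDK's actual implementation):

```ruby
# Example usage payload, shaped like an OpenAI Chat Completions response.
usage = {
  "prompt_tokens" => 20,
  "completion_tokens" => 180,
  "total_tokens" => 200,
  "completion_tokens_details" => { "reasoning_tokens" => 150 }
}

# Flatten top-level counts and *_tokens_details sub-counts into one metrics hash.
metrics = {}
usage.each do |key, value|
  if key.end_with?("_tokens_details") && value.is_a?(Hash)
    value.each { |detail_key, count| metrics[detail_key] = count }
  elsif value.is_a?(Integer)
    metrics[key] = value
  end
end

metrics["reasoning_tokens"] # => 150
```

So the metrics side already surfaces how many reasoning tokens were spent, while the metadata side never records the `reasoning_effort` setting that produced them.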