
[BOT ISSUE] OpenAI Responses API instrumentation uses wrong token-limit parameter name and is missing several metadata fields #145

@braintrust-bot

Description


Summary

The Responses API instrumentation in both the openai and ruby-openai integrations has an incorrect parameter name (max_tokens instead of max_output_tokens) and is missing several parameters that are part of the stable upstream API. Users who set max_output_tokens, service_tier, include, text, or background on their Responses API calls get no visibility into these values in their Braintrust span metadata.

What is missing

The METADATA_FIELDS constant in both Responses API instrumentations currently includes:

```ruby
METADATA_FIELDS = %i[
  model instructions modalities tools parallel_tool_calls
  tool_choice temperature max_tokens top_p frequency_penalty
  presence_penalty seed user metadata store response_format
  reasoning previous_response_id truncation
].freeze
```
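To illustrate the effect, here is a minimal sketch assuming the instrumentation builds span metadata by slicing request params with this constant (the slice-based extraction is an assumption for illustration, not the SDK's actual code):

```ruby
# Abridged field list from the constant above (illustrative only)
CURRENT_FIELDS = %i[model max_tokens].freeze

params = {
  model: "gpt-4.1",
  max_output_tokens: 256, # the key Responses API users actually pass
  service_tier: "flex"
}

# Any key not in the list is silently dropped from span metadata
captured = params.slice(*CURRENT_FIELDS)
captured # => { model: "gpt-4.1" }
```

Because `max_tokens` is never present in Responses API calls, the token limit and the service tier both vanish from the trace.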

Wrong parameter name

| Current field | Correct field | Why it matters |
| --- | --- | --- |
| `max_tokens` | `max_output_tokens` | The Responses API uses `max_output_tokens`, not `max_tokens`. The SDK captures a field that doesn't exist in the Responses API and misses the one users actually pass. |

Missing parameters

| Parameter | Why it matters |
| --- | --- |
| `service_tier` | Controls processing priority (`auto`, `default`, `flex`, `priority`). Already captured in the Chat Completions instrumentation, so its absence here is an inconsistency within the SDK. |
| `include` | Array controlling what additional data is returned (e.g., `web_search_call.action.sources`, `code_interpreter_call.outputs`). Important for understanding what data the user requested. |
| `text` | Text response format configuration for structured outputs; the Responses API's newer alternative to `response_format`. |
| `background` | Boolean enabling async/background response execution. Changes how the response is processed, so users need to see it in traces to understand response behavior. |
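Taken together, a corrected constant might look like the following. This is a sketch of the proposed fix; final field ordering and naming are for the maintainers to decide:

```ruby
# Proposed replacement: max_tokens -> max_output_tokens, plus the four
# missing Responses API parameters appended at the end.
METADATA_FIELDS = %i[
  model instructions modalities tools parallel_tool_calls
  tool_choice temperature max_output_tokens top_p frequency_penalty
  presence_penalty seed user metadata store response_format
  reasoning previous_response_id truncation
  service_tier include text background
].freeze
```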

Braintrust docs status

not_found — The Braintrust docs at https://www.braintrust.dev/docs/instrument/wrap-providers do not mention `max_output_tokens`, `service_tier`, `include`, `text`, or `background` for the Responses API.

Upstream sources

Local repo files inspected

  • `lib/braintrust/contrib/openai/instrumentation/responses.rb` (lines 26–31) — `METADATA_FIELDS` uses `max_tokens` (wrong) and is missing `max_output_tokens`, `service_tier`, `include`, `text`, `background`
  • `lib/braintrust/contrib/ruby_openai/instrumentation/responses.rb` (lines 32–37) — identical `METADATA_FIELDS`, same issues
  • `lib/braintrust/contrib/openai/instrumentation/chat.rb` (lines 27–32) — Chat Completions already captures `service_tier`, showing this is an oversight in the Responses instrumentation
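A regression test along these lines could lock the fix in. This is a hypothetical sketch using Minitest (which ships with Ruby); the constant is stubbed locally here so the sketch is self-contained, whereas a real test would reference the constant from the instrumentation module:

```ruby
require "minitest/autorun"

# Stub of the corrected constant (abridged); a real test would use
# the SDK's own METADATA_FIELDS from the Responses instrumentation.
METADATA_FIELDS = %i[
  model instructions max_output_tokens
  service_tier include text background
].freeze

class ResponsesMetadataFieldsTest < Minitest::Test
  def test_uses_responses_api_token_limit_name
    assert_includes METADATA_FIELDS, :max_output_tokens
    refute_includes METADATA_FIELDS, :max_tokens
  end

  def test_captures_new_responses_parameters
    %i[service_tier include text background].each do |field|
      assert_includes METADATA_FIELDS, field
    end
  end
end
```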
