[bot] aws-sdk-bedrockruntime gem not instrumented #152

@braintrust-bot

Description

Summary

The aws-sdk-bedrockruntime gem is the official AWS SDK for invoking foundation models on Amazon Bedrock. Despite ~16.6M downloads, regular releases (v1.75.0, April 2026), and a standardized converse/converse_stream API surface across all Bedrock-hosted models (Claude, Titan, Llama, Mistral, Cohere, etc.), the gem is not instrumented by this SDK.

The SDK instruments standalone provider clients for OpenAI (openai, ruby-openai) and Anthropic (anthropic), but has no instrumentation for the AWS Bedrock Runtime client, which is one of the most widely used ways to invoke generative AI models in Ruby production environments.

What is missing

No instrumentation exists for any aws-sdk-bedrockruntime execution surface. Key APIs that should be instrumented:

Converse API (unified chat)

  • client.converse(...) — Unified chat API with structured messages, system prompts, tool use (tool_config, tool_use, tool_result), inference config (temperature, max_tokens, top_p, stop_sequences), and guardrails. Returns structured ConverseResponse with usage metrics (input/output tokens), stop reason, and model-specific metadata. Works identically across all Bedrock-hosted models.
  • client.converse_stream(...) — Streaming variant with event-based output (content_block_delta, content_block_start, content_block_stop, message_start, message_stop, metadata). Includes token usage and latency metrics in the stream metadata event.
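To illustrate, a minimal sketch of how converse instrumentation could hook these calls, using the Module#prepend pattern common to this kind of tracing integration. StubClient here is a stand-in for Aws::BedrockRuntime::Client (so the example runs without the gem or AWS credentials), and the response hash mirrors the standardized ConverseResponse shape; the span structure is hypothetical, not the SDK's actual format:

```ruby
# Prepend-based tracing hook for converse. A real integration would patch
# Aws::BedrockRuntime::Client and emit spans to Braintrust; here we collect
# them in memory against a stand-in client.
module ConverseTracing
  def self.spans
    @spans ||= []
  end

  def converse(params = {})
    span = { name: "bedrock.converse", model_id: params[:model_id] }
    response = super
    # Converse responses expose token usage and stop reason in a
    # model-agnostic shape, which is what makes this API easy to instrument.
    span[:usage] = response[:usage]
    span[:stop_reason] = response[:stop_reason]
    ConverseTracing.spans << span
    response
  end
end

# Stand-in for Aws::BedrockRuntime::Client, returning a canned response
# shaped like a ConverseResponse.
class StubClient
  def converse(params = {})
    { output: { message: { role: "assistant", content: [{ text: "hi" }] } },
      stop_reason: "end_turn",
      usage: { input_tokens: 12, output_tokens: 3, total_tokens: 15 } }
  end
  prepend ConverseTracing
end

client = StubClient.new
client.converse(model_id: "anthropic.claude-3-haiku-20240307-v1:0",
                messages: [{ role: "user", content: [{ text: "Hello" }] }])
puts ConverseTracing.spans.first[:usage][:total_tokens] # 15
```

Because converse_stream delivers usage in a trailing metadata event rather than on the return value, a streaming hook would additionally need to wrap the event handler to close the span when that event arrives.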

Invoke Model API (raw model invocation)

  • client.invoke_model(...) — Raw model invocation with provider-specific request body. Returns raw response body. Used for embeddings (e.g., Titan Embeddings, Cohere Embed) and other model-specific APIs.
  • client.invoke_model_with_response_stream(...) — Streaming variant of raw invocation.
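Unlike converse, invoke_model takes an opaque provider-specific JSON body and returns a raw JSON body, so instrumentation has to parse per provider. A sketch of the request/response shapes for a Titan Embeddings call (field names such as inputText, embedding, and inputTextTokenCount are illustrative of the Titan format, not verified here; the model ID is a placeholder):

```ruby
require "json"

# Provider-specific request body, as would be passed to
# client.invoke_model(model_id: "amazon.titan-embed-text-v2:0", body: ...).
request_body = JSON.generate({ inputText: "hello world" })

# Illustrative raw response body, as might come back from response.body.read.
raw_response = JSON.generate({
  embedding: [0.12, -0.07, 0.33],
  inputTextTokenCount: 2
})

# Instrumentation would parse the raw body to recover embedding dimensions
# and token counts for span metrics.
parsed = JSON.parse(raw_response)
puts parsed["embedding"].length    # 3
puts parsed["inputTextTokenCount"] # 2
```

This per-provider parsing burden is exactly why the issue treats converse as the primary instrumentation target and invoke_model as secondary.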

Async Invoke API

  • client.start_async_invoke(...) — Asynchronous model invocation for long-running tasks.
  • client.get_async_invoke(...) — Check status of async invocation.

Expected instrumentation

The converse/converse_stream APIs are the primary instrumentation target due to their standardized request/response shapes:

Converse spans should capture:

  • Input: messages array, system prompts, tool configuration
  • Metadata: model_id, inference_config (temperature, max_tokens, top_p, stop_sequences), guardrail_config, additional_model_request_fields, provider, endpoint
  • Metrics: usage (input_tokens, output_tokens, total_tokens), latency metrics from stream metadata
  • Output: response messages, stop_reason, additional_model_response_fields
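The field list above can be sketched as a mapping from a converse call's params and response onto a span hash. The converse_span helper and the span layout are hypothetical; the response hash follows the standardized ConverseResponse shape, and the model ID is a placeholder:

```ruby
# Hypothetical helper: build a span hash from a converse request/response
# pair, capturing the input, metadata, metrics, and output fields listed
# above.
def converse_span(model_id:, params:, response:)
  {
    input: { messages: params[:messages],
             system: params[:system],
             tool_config: params[:tool_config] },
    metadata: { model_id: model_id,
                inference_config: params[:inference_config],
                provider: "bedrock" },
    metrics: response[:usage],
    output: { message: response.dig(:output, :message),
              stop_reason: response[:stop_reason] }
  }
end

span = converse_span(
  model_id: "meta.llama3-8b-instruct-v1:0",
  params: { messages: [{ role: "user", content: [{ text: "Hi" }] }],
            inference_config: { max_tokens: 256, temperature: 0.2 } },
  response: { output: { message: { role: "assistant",
                                   content: [{ text: "Hello!" }] } },
              stop_reason: "end_turn",
              usage: { input_tokens: 5, output_tokens: 4, total_tokens: 9 } }
)
puts span[:metrics][:total_tokens] # 9
puts span[:output][:stop_reason]   # end_turn
```

The same mapping applies to invoke_model spans, except that input and output stay opaque (raw bodies) and metadata adds content_type and accept.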

Invoke model spans (for embeddings) should capture:

  • Input: request body (model-specific)
  • Metadata: model_id, content_type, accept, provider
  • Output: response body metadata

Braintrust docs status

supported — Braintrust lists AWS Bedrock as a supported AI provider at https://www.braintrust.dev/docs. The Braintrust proxy supports Bedrock routing. However, there is no Ruby SDK auto-instrumentation for the aws-sdk-bedrockruntime gem — the SDK integrations page at https://www.braintrust.dev/docs/integrations/sdk-integrations does not list it as a supported Ruby library.

Upstream sources

Local repo files inspected

  • lib/braintrust/contrib/ — contains only openai/, ruby_openai/, ruby_llm/, anthropic/, and rails/ directories. No bedrock or aws directory.
  • lib/braintrust/contrib.rb — registers only 4 integrations: OpenAI, RubyOpenAI, RubyLLM, Anthropic. No Bedrock integration.
  • Appraisals — no appraisal scenarios for aws-sdk-bedrockruntime
  • braintrust.gemspec — no mention of aws or bedrock
  • Grep for bedrock across entire codebase returns zero matches
