Summary
The SDK provides usage extractors for OpenAI (extract_openai_usage) and Anthropic (extract_anthropic_usage) but has no equivalent for the AWS Bedrock Converse API. Bedrock uses camelCase field names (inputTokens, outputTokens, totalTokens) that neither existing extractor can parse. Braintrust documents Bedrock as a supported cloud provider, and the Go SDK already traces Converse, ConverseStream, and InvokeModel calls.
What is missing
AWS Bedrock's Converse API returns a usage object with camelCase field names:
{
  "output": {
    "message": {
      "role": "assistant",
      "content": [{"text": "Response text"}]
    }
  },
  "stopReason": "end_turn",
  "usage": {
    "inputTokens": 30,
    "outputTokens": 628,
    "totalTokens": 658
  },
  "metrics": {
    "latencyMs": 1275
  }
}
Key differences from OpenAI and Anthropic formats:
- Field naming: camelCase (inputTokens, outputTokens, totalTokens) — neither prompt_tokens/completion_tokens (OpenAI) nor input_tokens/output_tokens (Anthropic)
- Latency metrics: metrics.latencyMs provides server-side latency — useful for span timing
- Cache tokens for Claude models: when using Claude via Bedrock, responses include cacheReadInputTokenCount and cacheWriteInputTokenCount in the usage object
- Stop reason at root: stopReason is at the response root level with Bedrock-specific values (end_turn, tool_use, max_tokens, stop_sequence, guardrail_intervened, content_filtered)
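If the SDK ever maps these root-level stopReason values onto a common finish reason, the mapping could look like the sketch below. The target strings (OpenAI-style "stop", "length", etc.) are assumptions for illustration, not part of the SDK.

```rust
/// Hypothetical normalization of Bedrock stopReason values to
/// OpenAI-style finish reasons. The target strings are assumed,
/// not taken from the SDK.
pub fn normalize_stop_reason(stop_reason: &str) -> &'static str {
    match stop_reason {
        // Normal completion, including stop-sequence hits.
        "end_turn" | "stop_sequence" => "stop",
        // Output truncated by the maxTokens limit.
        "max_tokens" => "length",
        // Model requested a tool invocation.
        "tool_use" => "tool_calls",
        // Blocked or redacted output.
        "guardrail_intervened" | "content_filtered" => "content_filter",
        // Pass unknown values through conservatively.
        _ => "other",
    }
}
```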
Passing a Bedrock Converse response through extract_openai_usage() returns empty metrics because it looks for usage.prompt_tokens / usage.completion_tokens (snake_case), which don't exist in Bedrock's camelCase format. Similarly, extract_anthropic_usage() looks for usage.input_tokens (snake_case), which also doesn't match.
Braintrust docs status
Supported — Braintrust's Bedrock integration page documents tracing support: "Converse, ConverseStream, and InvokeModel calls are traced." The Go SDK provides Bedrock Runtime middleware that captures token usage including cache tokens for Claude models. The Rust SDK has no Bedrock support.
Local files inspected
- src/extractors.rs — only extract_openai_usage() and extract_anthropic_usage() exist; no Bedrock extractor
- src/types.rs — UsageMetrics struct could represent Bedrock token data if mapped, but no mapping exists
- src/stream.rs — stream aggregator only parses OpenAI Chat Completions chunk format; no Bedrock ConverseStream support
- src/lib.rs — public API exports; no Bedrock references
- Full codebase grep for "bedrock", "inputTokens", "outputTokens", "converse", "latencyMs" — zero results
Upstream sources
- AWS Bedrock Converse API reference (usage): https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html
Relationship to existing issues
- Gemini: Gemini uses usageMetadata with promptTokenCount/candidatesTokenCount. Bedrock uses usage with inputTokens/outputTokens — a different provider with a different field naming convention and additional fields like latencyMs and cache tokens.
- Cohere: Cohere nests counts in a usage.billed_units/usage.tokens structure. Bedrock uses flat camelCase fields directly in usage.
- async-openai Rust crate: this issue is about adding an extractor for the Bedrock response format, independent of any specific Rust Bedrock client.