fix: extract Anthropic cache_creation_input_tokens without TTL breakdown #5606

colegottdank wants to merge 1 commit into `main`
Conversation
The `AnthropicUsageProcessor` was only extracting cache write tokens from the TTL breakdown (`ephemeral_5m_input_tokens`, `ephemeral_1h_input_tokens`) and completely ignored the overall `cache_creation_input_tokens` field. When Anthropic's API returns `cache_creation_input_tokens` without the TTL breakdown (older API versions, certain response types), the cache write tokens were lost, causing:

- Incorrect cost calculations (undercharging)
- Missing cache write data in the dashboard
- "cache read without cache creation" display issues

Fixes:

1. `AnthropicUsageProcessor` now extracts `cache_creation_input_tokens` and uses it as a fallback when the TTL breakdown is not available
2. Fixed the `toOpenai.ts` non-streaming transformation, which incorrectly fell back to `cachedTokens` (read) instead of `cacheWriteTokens` (write)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
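The fallback described above can be sketched as follows. This is an illustration, not the repo's actual code: the `AnthropicUsage` interface and `extractCacheWriteTokens` helper are hypothetical names, though the field names mirror Anthropic's documented usage object.

```typescript
// Hypothetical shape of Anthropic's usage object; field names follow the
// API's usage fields, but this interface is an illustration, not the repo's type.
interface AnthropicUsage {
  input_tokens: number;
  output_tokens: number;
  cache_read_input_tokens?: number;
  cache_creation_input_tokens?: number;
  cache_creation?: {
    ephemeral_5m_input_tokens?: number;
    ephemeral_1h_input_tokens?: number;
  };
}

// Prefer the TTL breakdown when present; otherwise fall back to the
// aggregate cache_creation_input_tokens so cache writes are never dropped.
function extractCacheWriteTokens(usage: AnthropicUsage): number {
  const breakdown =
    (usage.cache_creation?.ephemeral_5m_input_tokens ?? 0) +
    (usage.cache_creation?.ephemeral_1h_input_tokens ?? 0);
  if (breakdown > 0) return breakdown;
  return usage.cache_creation_input_tokens ?? 0;
}
```

With only the aggregate field present (the failing case before this PR), the breakdown sums to zero and the aggregate is used instead of being silently dropped.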
Claude finished @colegottdank's task (View job)

**Code Review Complete.** Score: 8/10. Good implementation with solid test coverage. Minor suggestions for improvement; ready to merge.
Overall Assessment: This is a well-implemented fix that addresses a real customer issue with appropriate testing. The code follows repository patterns and maintains backward compatibility while fixing the underlying problem.
Summary
Fixes Anthropic prompt caching issue where cache write tokens were not being extracted when the TTL breakdown was not provided.
Root cause: The `AnthropicUsageProcessor` was only looking at `cache_creation.ephemeral_5m_input_tokens` and `cache_creation.ephemeral_1h_input_tokens`, completely ignoring the main `cache_creation_input_tokens` field. When Anthropic's API returns the total without the TTL breakdown (older API responses, certain scenarios), the cache write tokens were lost.

Impact:

- Incorrect cost calculations (undercharging)
- Missing cache write data in the dashboard
- "cache read without cache creation" display issues
Changes
- `packages/cost/usage/anthropicUsageProcessor.ts`: now extracts `cache_creation_input_tokens` and uses it as a fallback when the TTL breakdown is not available
- `packages/llm-mapper/transform/providers/anthropic/response/toOpenai.ts`: fixed the fallback from `cachedTokens` (read) to `cacheWriteTokens` (write) in the non-streaming transformation
- Added 2 new test cases to verify the fix
Test plan
🤖 Generated with Claude Code