Draft
97 commits
91d4e2e
feat: generate SSE parser
Vladislav0Art Mar 19, 2026
a7a0068
feat: create tests for SSE parser (spec's example 3 is incorrect)
Vladislav0Art Mar 19, 2026
ace712a
fix: remove example 3 from sse parser tests
Vladislav0Art Mar 19, 2026
17c8897
feat: add comments to implementation of `SseParser`
Vladislav0Art Mar 19, 2026
1970d53
refactor
Vladislav0Art Mar 19, 2026
89a7f73
feat: add TODO in `SseParserTest`
Vladislav0Art Mar 22, 2026
a1fa4d9
feat: introduce `LLMTracingAdapter.handleStreamingEvent` method
Vladislav0Art Mar 22, 2026
aca23ba
feat: remove Tracy request param from adapter's `getSpanName`
Vladislav0Art Mar 22, 2026
836378c
feat: use `getSpanName` in Ktor and OkHttp interceptors
Vladislav0Art Mar 22, 2026
aedbba2
feat: identify streaming response by response body content type
Vladislav0Art Mar 22, 2026
4552ceb
feat: write tests with OpenAI SSE events
Vladislav0Art Mar 23, 2026
6dbfa47
feat: add explanation comments in `OpenTelemetryOkHttpInterceptor`
Vladislav0Art Mar 23, 2026
fb191a2
fix: correctly trace completion event of OpenAI Responses API
Vladislav0Art Mar 23, 2026
cb063cb
feat: rename Tracy SSE events count attribute key
Vladislav0Art Mar 23, 2026
d14c800
refactor: remove commented code
Vladislav0Art Mar 23, 2026
dd3afe3
refactor: imports
Vladislav0Art Mar 23, 2026
3c9cd9c
refactor: return `Result<Unit>` from SSE event handling methods
Vladislav0Art Mar 23, 2026
7a56f47
fix: avoid allocations of `AttributeKey`
Vladislav0Art Mar 23, 2026
70f9ea4
feat: unconditionally set content type in `registerResponse`
Vladislav0Art Mar 23, 2026
6cf0068
refactor: make `SseCapturingSource` private in `OpenTelemetryOkHttpIn…
Vladislav0Art Mar 23, 2026
677fa3d
fix: close SSE parser in Ktor interceptor
Vladislav0Art Mar 23, 2026
e18977f
fix: return index 0 in Chat Completions SSE event handling
Vladislav0Art Mar 23, 2026
94ec8d2
feat(`LLMTracingAdapter`): warn on SSE event parsing failure
Vladislav0Art Mar 23, 2026
c659c39
feat: remove TODO
Vladislav0Art Mar 23, 2026
5b719b3
fix: correctly trace assistant content for chat completions SSE events
Vladislav0Art Mar 23, 2026
7e4389e
feat: rename `gen_ai.response.streaming` -> `gen_ai.response.sse.stre…
Vladislav0Art Mar 23, 2026
e266704
feat: update KDoc of `registerResponseStreamEvent`
Vladislav0Art Mar 25, 2026
e45d61e
feat: call logger's debug instead of warn
Vladislav0Art Mar 25, 2026
a485d3d
refactor: fix mime type name in comments
Vladislav0Art Mar 25, 2026
69fbbe1
refactor: comment (color -> colon) in SseParser.kt
Vladislav0Art Mar 25, 2026
edf5f27
refactor: mention variable type `Long`
Vladislav0Art Mar 25, 2026
d7d5a83
refactor(`OpenAIApiUtils`): convert to `jsonPrimitive` and extract va…
Vladislav0Art Mar 25, 2026
4457403
refactor: fix usage example in `LLMTracingAdapter`'s KDoc
Vladislav0Art Mar 25, 2026
a4fc174
fix: prefix attribute `gen_ai.response.sse.streaming` with `tracy` in…
Vladislav0Art Mar 25, 2026
7a175f5
refactor(`AGENTS.md`): update comment about `LLMTracingAdapter`
Vladislav0Art Mar 25, 2026
6c15c2f
feat: change plugin name in `createClientPlugin` call
Vladislav0Art Mar 30, 2026
9cd1142
refactor: move SSE tracing in Ktor plugin into separate function
Vladislav0Art Mar 30, 2026
1829acd
feat: update Anthropic span name in accordance with GenAI
Vladislav0Art Mar 30, 2026
692a997
feat: add `asString` to `TracyHttpUrl`
Vladislav0Art Mar 30, 2026
18b34a9
feat: create `sseHandlingUnsupported` function
Vladislav0Art Mar 30, 2026
3b99009
refactor: remove redundant `?.`
Vladislav0Art Mar 30, 2026
fb90bc3
feat: use `sseHandlingUnsupported` in Anthropic adapter and Gemini ha…
Vladislav0Art Mar 30, 2026
c31dd93
feat: print warning about unsupported SSE event handling only once pe…
Vladislav0Art Mar 30, 2026
9264178
feat: call `getResponseBodyAttributes` for any non-SSE response type
Vladislav0Art Mar 30, 2026
85e9321
feat: re-implement `TracyHttpUrl.asString`
Vladislav0Art Mar 30, 2026
ed1625a
refactor: imports
Vladislav0Art Mar 30, 2026
9c762af
fix: check for nullness of `mimeType` in `LLMTracingAdapter`
Vladislav0Art Mar 30, 2026
023618b
feat(`VideosOpenAIApiEndpointHandler`): mark stream tracing as unsupp…
Vladislav0Art Mar 30, 2026
aff96b1
refactor: comment wording
Vladislav0Art Mar 30, 2026
d8ca934
refactor: comment wording
Vladislav0Art Mar 30, 2026
e5f1724
fix: set `retryValue` to `null` when dispatching
Vladislav0Art Mar 30, 2026
fd2636b
fix: use streaming UTF-8 CharsetDecoder in Ktor SSE tracing to handle…
Copilot Mar 30, 2026
95c4664
feat(`SseParser`): dispatch final event if any on close
Vladislav0Art Mar 30, 2026
5c140ef
fix: set `endOfInput` as `originalBody.isClosedForRead`
Vladislav0Art Mar 30, 2026
630ab43
refactor: imports
Vladislav0Art Mar 30, 2026
ff430c3
refactor: move call of `traceServerSentEvents` into `transformRespons…
Vladislav0Art Mar 30, 2026
3b0fae8
feat(test-utils): add okhttp/serialization/mocking deps to `jvmTestFi…
Vladislav0Art Mar 17, 2026
9a9dbfb
fix: fix description in Javadoc of `patchOpenAICompatibleClient`
Vladislav0Art Mar 17, 2026
5781265
feat: create `TestMode` enum class
Vladislav0Art Mar 17, 2026
07bff8d
feat: create `ResponseSanitizer` interface
Vladislav0Art Mar 17, 2026
a629a2f
feat: create `OpenAISanitizer` class (rn, no-op)
Vladislav0Art Mar 17, 2026
4600fa2
feat: create fixture recording/mocking functionality
Vladislav0Art Mar 17, 2026
37ab6f3
feat: select mock server depending on test mode & record responses on…
Vladislav0Art Mar 17, 2026
2fe6e4c
refactor: optimize imports
Vladislav0Art Mar 17, 2026
fe2d63a
feat(`RecordingInterceptor`): record non-successful responses
Vladislav0Art Mar 17, 2026
1579d6e
feat: save fixture responses with custom per-test-case tags
Vladislav0Art Mar 17, 2026
da8a1fb
feat: distinguish parameterized tests & collect responses in per-test…
Vladislav0Art Mar 17, 2026
f3028ff
feat: rename test case
Vladislav0Art Mar 17, 2026
cf17a54
feat: set correct fixture tags for test cases
Vladislav0Art Mar 17, 2026
8f618a5
feat: generate `FixtureNamingTest` test suite
Vladislav0Art Mar 17, 2026
9526334
fix: correctly count dispatched requests per fixture tag
Vladislav0Art Mar 17, 2026
7df8222
feat: create `isMockMode` function
Vladislav0Art Apr 2, 2026
a9352ac
feat: write mock response large/non-JSON bodies in files (WIP)
Vladislav0Art Apr 2, 2026
a01ed7c
fix: close original bodies in capturing response bodies
Vladislav0Art Apr 2, 2026
9991440
fix(Anthropic): fix original response body in test case
Vladislav0Art Apr 2, 2026
9c197c2
fix: do on-close operations when source gets closed
Vladislav0Art Apr 2, 2026
f034ad4
refactor: move tests that required litellm into separate test suite
Vladislav0Art Apr 2, 2026
34c1a63
refactor
Vladislav0Art Apr 2, 2026
a52748f
feat: save fixtures under `containingTestSuiteName/fixtureTag` instea…
Vladislav0Art Apr 2, 2026
c2b1d18
feat: add `openai` tag to `AdditionalAttributesOpenAIEndpointHandlerT…
Vladislav0Art Apr 2, 2026
eac9d86
feat: prefix fixture body filename with 'body'
Vladislav0Art Apr 2, 2026
f5e09b8
feat: assume mock mode is on in image tests
Vladislav0Art Apr 2, 2026
6ec3b52
refactor: wrap recorder params into `Record` data class
Vladislav0Art Apr 7, 2026
3fe3913
refactor(`ResponseSanitizer`): rename contentType -> mimeType
Vladislav0Art Apr 7, 2026
fd9ba41
refactor: log instead of printing
Vladislav0Art Apr 7, 2026
7978ff3
feat(`OpenAISanitizer`): drop non-deterministic/unwanted headers
Vladislav0Art Apr 7, 2026
0044cc6
feat(`OpenAIClientTest`): use `BaseOpenAITracingTest` as base test class
Vladislav0Art Apr 7, 2026
232a7cb
feat(`OpenAIClientTest`): record fixtures
Vladislav0Art Apr 7, 2026
e679c5d
feat(`ImagesCreateOpenAIApiEndpointHandlerTest`): record fixtures
Vladislav0Art Apr 7, 2026
1d70c96
feat(`ImagesCreateEditOpenAIApiEndpointHandlerTest`): record fixtures
Vladislav0Art Apr 7, 2026
65b1048
feat(`ResponsesOpenAIApiEndpointHandlerTest`): record fixtures
Vladislav0Art Apr 7, 2026
1185077
feat(`ChatCompletionsOpenAIApiEndpointHandlerTest`): record fixtures
Vladislav0Art Apr 7, 2026
217733f
feat(`AdditionalAttributesOpenAIEndpointHandlerTest`): record fixture…
Vladislav0Art Apr 7, 2026
da5f711
refactor: remove prints
Vladislav0Art Apr 7, 2026
05331e0
feat: add Gradle task to record fixtures for OpenAI
Vladislav0Art Apr 7, 2026
475b886
feat: update AGENTS.md
Vladislav0Art Apr 7, 2026
8d1bc8c
feat: generate GH workflow yml config to collect test fixtures (untes…
Vladislav0Art Apr 7, 2026
170 changes: 170 additions & 0 deletions .github/workflows/update-fixtures.yml
@@ -0,0 +1,170 @@
name: Update Test Fixtures

on:
  # Manual trigger
  workflow_dispatch:
    inputs:
      providers:
        description: 'Providers to update (comma-separated: openai,anthropic,gemini or "all")'
        required: true
        default: 'openai'
        type: string

  # Scheduled weekly updates (every Monday at 2 AM UTC)
  schedule:
    - cron: '0 2 * * 1'

permissions:
  contents: write
  pull-requests: write

jobs:
  update-openai-fixtures:
    if: |
      (github.event_name == 'schedule') ||
      (github.event_name == 'workflow_dispatch' && (github.event.inputs.providers == 'all' || contains(github.event.inputs.providers, 'openai')))
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Java
        uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
          cache: gradle

      - name: Setup Gradle
        uses: gradle/actions/setup-gradle@v4

      - name: Record OpenAI Fixtures
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          ./gradlew :tracing:openai:recordFixtures --no-daemon

      - name: Check for changes
        id: check_changes
        run: |
          if git diff --quiet tracing/openai/src/jvmTest/resources/fixtures/; then
            echo "has_changes=false" >> "$GITHUB_OUTPUT"
            echo "No changes detected in OpenAI fixtures"
          else
            echo "has_changes=true" >> "$GITHUB_OUTPUT"
            echo "Changes detected in OpenAI fixtures"
          fi

      - name: Create Pull Request
        if: steps.check_changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          set -euo pipefail

          # Configure git
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

          # Create branch name with timestamp
          BRANCH_NAME="fixtures/update-openai-$(date +%Y-%m-%d-%H%M%S)"

          # Create and checkout new branch
          git checkout -b "$BRANCH_NAME"

          # Add and commit changes
          git add tracing/openai/src/jvmTest/resources/fixtures/
          git commit -m "chore: update OpenAI test fixtures

          Automated update of OpenAI test fixtures from real API endpoints.

          Changes:
          - Refreshed API response schemas
          - Sanitized non-deterministic data (timestamps, IDs, etc.)

          Triggered by: ${{ github.event_name }}
          Workflow run: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"

          # Push branch
          git push origin "$BRANCH_NAME"

          # Create PR using gh CLI
          gh pr create \
            --title "chore: Update OpenAI test fixtures" \
            --body "$(cat <<'EOF'
          ## Summary
          Automated update of OpenAI test fixtures from real API endpoints.

          ## Changes
          - Refreshed API response schemas to match current OpenAI API
          - Sanitized non-deterministic data (timestamps, IDs, LLM outputs)
          - All tests pass with updated fixtures

          ## Testing
          Tests were run in both RECORD mode (against real API) and MOCK mode (against fixtures) to ensure correctness.

          ## Checklist
          - [ ] Review sanitized responses for any leaked sensitive data
          - [ ] Verify tests pass in CI with new fixtures
          - [ ] Confirm fixture schemas match current API documentation

          ---
          🤖 Generated by [GitHub Actions](${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }})
          EOF
          )" \
            --base main \
            --head "$BRANCH_NAME" \
            --label "automated" \
            --label "fixtures"

  update-anthropic-fixtures:
    if: |
      (github.event_name == 'workflow_dispatch' && (github.event.inputs.providers == 'all' || contains(github.event.inputs.providers, 'anthropic')))
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Java
        uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
          cache: gradle

      - name: Setup Gradle
        uses: gradle/actions/setup-gradle@v4

      - name: Record Anthropic Fixtures
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          echo "Anthropic fixture recording not yet implemented"
          # ./gradlew :tracing:anthropic:recordFixtures --no-daemon

  update-gemini-fixtures:
    if: |
      (github.event_name == 'workflow_dispatch' && (github.event.inputs.providers == 'all' || contains(github.event.inputs.providers, 'gemini')))
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Java
        uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
          cache: gradle

      - name: Setup Gradle
        uses: gradle/actions/setup-gradle@v4

      - name: Record Gemini Fixtures
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
        run: |
          echo "Gemini fixture recording not yet implemented"
          # ./gradlew :tracing:gemini:recordFixtures --no-daemon
46 changes: 45 additions & 1 deletion AGENTS.md
@@ -76,13 +76,57 @@ tracy/
4. **plugin/**: Kotlin compiler plugins for `@Trace` annotation processing
5. **examples/**: Reference implementations (keep in sync with API changes)

## Test Fixture Recording

Tracy uses a dual-mode testing system to avoid calling real LLM endpoints in CI:

**Mock Mode (Default)**
- Tests run against a mock HTTP server using pre-recorded fixtures
- Fast, offline, no API keys required
- Fixtures stored in `tracing/{provider}/src/jvmTest/resources/fixtures/`

**Record Mode**
- Tests call real LLM endpoints and record sanitized responses as fixtures
- Automatically sanitizes non-deterministic data (IDs, timestamps, AI outputs)
- Used to update fixtures when API schemas change

**Recording Fixtures:**

```bash
# Record OpenAI fixtures
export OPENAI_API_KEY=sk-...
./gradlew :tracing:openai:recordFixtures

# Run tests in mock mode (default)
./gradlew :tracing:openai:test

# Run tests in record mode manually
./gradlew :tracing:openai:test -Dtracy.test.mode=record
```
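
The mode switch above can be pictured as a small helper. This is an illustrative sketch only — the actual `TestMode` enum and property handling live in `test-utils` and may differ in naming and defaults:

```kotlin
// Hypothetical sketch; the real TestMode enum in test-utils may differ.
enum class TestMode { MOCK, RECORD }

// Resolve the mode from the `tracy.test.mode` system property,
// falling back to mock mode (the default) when it is unset.
fun resolveTestMode(): TestMode =
    when (System.getProperty("tracy.test.mode")?.lowercase()) {
        "record" -> TestMode.RECORD
        else -> TestMode.MOCK
    }
```

A test base class can then pick a mock server or a recording interceptor based on the resolved mode.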

**Automated Updates:**
- GitHub Actions workflow runs weekly to update fixtures automatically
- Creates PR with updated fixtures for review
- Manual trigger available via workflow_dispatch
- See `.github/workflows/update-fixtures.yml`

**Fixture Sanitization:**
Each provider has a custom sanitizer that removes:
- **IDs**: `id`, `request_id`, `organization_id`, etc. → `"sanitized-*"`
- **Timestamps**: `created`, `created_at` → fixed timestamp
- **AI Content**: Assistant messages, tool arguments → generic placeholders
- **Rate Limit Headers**: Removed entirely
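
The sanitization rules above can be sketched roughly as follows. The class and method names here are illustrative — the actual contract is defined by the `ResponseSanitizer` interface in `test-utils`:

```kotlin
import kotlinx.serialization.json.*

// Illustrative sketch only; the real ResponseSanitizer interface may differ.
class ExampleSanitizer {
    private val idKeys = setOf("id", "request_id", "organization_id")
    private val timestampKeys = setOf("created", "created_at")

    // Recursively replace non-deterministic fields with stable placeholders.
    fun sanitize(json: JsonObject): JsonObject = buildJsonObject {
        for ((key, value) in json) {
            when {
                key in idKeys -> put(key, JsonPrimitive("sanitized-$key"))
                key in timestampKeys -> put(key, JsonPrimitive(0L)) // fixed timestamp
                value is JsonObject -> put(key, sanitize(value))
                else -> put(key, value)
            }
        }
    }
}
```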

## Adding a New Provider

Use existing providers as reference.

**Steps:**
1. Create `tracing/{provider}/` module, register it in `settings.gradle.kts`
-2. Extend `LLMTracingAdapter(genAISystem)` — override `getRequestBodyAttributes`, `getResponseBodyAttributes`, `getSpanName`, `isStreamingRequest`, `handleStreaming`
+2. Extend `LLMTracingAdapter(genAISystem)` — override `getRequestBodyAttributes`, `getResponseBodyAttributes`, `getSpanName`, and abstract `registerResponseStreamEvent`
3. If multiple distinct API endpoints exist, implement `EndpointApiHandler` per endpoint and delegate from the adapter
4. Add a public `instrument(client)` function — use `patchOpenAICompatibleClient()` for OpenAI-compatible SDKs, or reflection + `patchInterceptors()` for others (see `GeminiClient.kt`)
5. Write tests extending `BaseAITracingTest`, tag with `@Tag("{provider}")`
6. Create a `ResponseSanitizer` implementation for the provider in `test-utils`
7. Add `recordFixtures` Gradle task (see `tracing/openai/build.gradle.kts`)
8. Create fixtures directory: `src/jvmTest/resources/fixtures/`
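
Under the assumptions of the steps above, a new adapter skeleton might look like the sketch below. The provider name is hypothetical, and the signatures mirror those of `AnthropicLLMTracingAdapter` shown in this PR, so they may drift from the real API:

```kotlin
// Hypothetical skeleton for a new provider ("Acme"); mirrors the
// AnthropicLLMTracingAdapter shown in this PR — not a definitive implementation.
class AcmeLLMTracingAdapter : LLMTracingAdapter(genAISystem = "acme") {

    override fun getRequestBodyAttributes(span: Span, request: TracyHttpRequest) {
        // map request JSON fields -> gen_ai.request.* span attributes
    }

    override fun getResponseBodyAttributes(span: Span, response: TracyHttpResponse) {
        // map response JSON fields -> gen_ai.response.* span attributes
    }

    override fun getSpanName() = "Acme-generation"

    override fun registerResponseStreamEvent(
        span: Span,
        url: TracyHttpUrl,
        event: SseEvent,
        index: Long
    ): Result<Unit> = sseHandlingUnsupported() // until SSE tracing is implemented
}
```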
@@ -6,11 +6,11 @@
package org.jetbrains.ai.tracy.examples.clients

import org.jetbrains.ai.tracy.anthropic.adapters.AnthropicLLMTracingAdapter
-import org.jetbrains.ai.tracy.core.OpenTelemetryOkHttpInterceptor
+import org.jetbrains.ai.tracy.core.interceptors.OpenTelemetryOkHttpInterceptor
import org.jetbrains.ai.tracy.core.TracingManager
import org.jetbrains.ai.tracy.core.configureOpenTelemetrySdk
import org.jetbrains.ai.tracy.core.exporters.ConsoleExporterConfig
-import org.jetbrains.ai.tracy.core.instrument
+import org.jetbrains.ai.tracy.core.interceptors.instrument
import org.jetbrains.ai.tracy.gemini.adapters.GeminiLLMTracingAdapter
import org.jetbrains.ai.tracy.openai.adapters.OpenAILLMTracingAdapter
import com.openai.models.ChatModel
@@ -5,9 +5,16 @@

package org.jetbrains.ai.tracy.anthropic.adapters

import io.opentelemetry.api.trace.Span
import io.opentelemetry.sdk.trace.ReadableSpan
import io.opentelemetry.semconv.incubating.GenAiIncubatingAttributes.*
import kotlinx.serialization.json.*
import mu.KotlinLogging
import org.jetbrains.ai.tracy.core.adapters.LLMTracingAdapter
import org.jetbrains.ai.tracy.core.adapters.LLMTracingAdapter.Companion.PayloadType
import org.jetbrains.ai.tracy.core.adapters.handlers.sse.sseHandlingUnsupported
import org.jetbrains.ai.tracy.core.adapters.media.*
import org.jetbrains.ai.tracy.core.http.parsers.SseEvent
import org.jetbrains.ai.tracy.core.http.protocol.TracyHttpRequest
import org.jetbrains.ai.tracy.core.http.protocol.TracyHttpResponse
import org.jetbrains.ai.tracy.core.http.protocol.TracyHttpUrl
@@ -16,10 +23,6 @@ import org.jetbrains.ai.tracy.core.policy.ContentKind
import org.jetbrains.ai.tracy.core.policy.contentTracingAllowed
import org.jetbrains.ai.tracy.core.policy.orRedactedInput
import org.jetbrains.ai.tracy.core.policy.orRedactedOutput
import io.opentelemetry.api.trace.Span
import io.opentelemetry.semconv.incubating.GenAiIncubatingAttributes.*
import kotlinx.serialization.json.*
import mu.KotlinLogging

/**
* Tracing adapter for Anthropic Claude API.
@@ -52,8 +55,8 @@ class AnthropicLLMTracingAdapter : LLMTracingAdapter(genAISystem = GenAiSystemIn
        val body = request.body.asJson()?.jsonObject ?: return

        body["temperature"]?.jsonPrimitive?.doubleOrNull?.let { span.setAttribute(GEN_AI_REQUEST_TEMPERATURE, it) }
-        body["model"]?.jsonPrimitive?.let { span.setAttribute(GEN_AI_REQUEST_MODEL, it.content) }
-        body["max_tokens"]?.jsonPrimitive?.intOrNull?.let { span.setAttribute(GEN_AI_REQUEST_MAX_TOKENS, it.toLong()) }
+        body["model"]?.jsonPrimitive?.content?.let { span.setAttribute(GEN_AI_REQUEST_MODEL, it) }
+        body["max_tokens"]?.jsonPrimitive?.longOrNull?.let { span.setAttribute(GEN_AI_REQUEST_MAX_TOKENS, it) }

        // metadata
        body["metadata"]?.jsonObject?.let { metadata ->
@@ -127,10 +130,28 @@ class AnthropicLLMTracingAdapter : LLMTracingAdapter(genAISystem = GenAiSystemIn
        val body = response.body.asJson()?.jsonObject ?: return

        body["id"]?.let { span.setAttribute(GEN_AI_RESPONSE_ID, it.jsonPrimitive.content) }
-        body["type"]?.let { span.setAttribute(GEN_AI_OUTPUT_TYPE, it.jsonPrimitive.content) }
+        body["type"]?.jsonPrimitive?.content?.let {
+            span.setAttribute(GEN_AI_OUTPUT_TYPE, it)
+            span.setAttribute(GEN_AI_OPERATION_NAME, it)
+        }
        body["role"]?.let { span.setAttribute("gen_ai.response.role", it.jsonPrimitive.content) }
        body["model"]?.let { span.setAttribute(GEN_AI_RESPONSE_MODEL, it.jsonPrimitive.content) }

+        // update the span name to follow GenAI Anthropic Conventions
+        // convention: `{gen_ai.operation.name} {gen_ai.request.model}`
+        // see: https://opentelemetry.io/docs/specs/semconv/gen-ai/anthropic/#spans
+        val spanName = run {
+            val type = body["type"]?.jsonPrimitive?.contentOrNull
+            val model = (span as? ReadableSpan)
+                ?.attributes
+                ?.get(GEN_AI_REQUEST_MODEL)
+                ?: body["model"]
+            if (type != null && model != null) "$type $model" else null
+        }
+        if (spanName != null) {
+            span.updateName(spanName)
+        }
+
        // collecting response messages
        body["content"]?.let {
            for ((index, message) in it.jsonArray.withIndex()) {
@@ -204,11 +225,26 @@ class AnthropicLLMTracingAdapter : LLMTracingAdapter(genAISystem = GenAiSystemIn
        span.populateUnmappedAttributes(body, mappedAttributes, PayloadType.RESPONSE)
    }

-    override fun getSpanName(request: TracyHttpRequest) = "Anthropic-generation"

-    // streaming is not supported
-    override fun isStreamingRequest(request: TracyHttpRequest) = false
-    override fun handleStreaming(span: Span, url: TracyHttpUrl, events: String) = Unit
+    /**
+     * Sets a default span name to **"Anthropic-generation"**.
+     *
+     * This name will be overridden in [getResponseBodyAttributes] to follow GenAI Conventions for Anthropic:
+     * ```
+     * {gen_ai.operation.name} {gen_ai.request.model}
+     * ```
+     *
+     * See [GenAI Anthropic Spans](https://opentelemetry.io/docs/specs/semconv/gen-ai/anthropic/#spans)
+     */
+    override fun getSpanName() = "Anthropic-generation"
+
+    override fun registerResponseStreamEvent(
+        span: Span,
+        url: TracyHttpUrl,
+        event: SseEvent,
+        index: Long
+    ): Result<Unit> {
+        return sseHandlingUnsupported()
+    }

/**
* Parses content of the `messages` field when its type is
@@ -6,9 +6,9 @@
package org.jetbrains.ai.tracy.anthropic.clients

import org.jetbrains.ai.tracy.anthropic.adapters.AnthropicLLMTracingAdapter
-import org.jetbrains.ai.tracy.core.OpenTelemetryOkHttpInterceptor
+import org.jetbrains.ai.tracy.core.interceptors.OpenTelemetryOkHttpInterceptor
import org.jetbrains.ai.tracy.core.TracingManager
-import org.jetbrains.ai.tracy.core.patchOpenAICompatibleClient
+import org.jetbrains.ai.tracy.core.interceptors.patchOpenAICompatibleClient
import com.anthropic.client.AnthropicClient

/**
@@ -7,7 +7,7 @@ package org.jetbrains.ai.tracy.anthropic

import org.jetbrains.ai.tracy.anthropic.clients.instrument
import org.jetbrains.ai.tracy.core.TracingManager
-import org.jetbrains.ai.tracy.core.patchOpenAICompatibleClient
+import org.jetbrains.ai.tracy.core.interceptors.patchOpenAICompatibleClient
import org.jetbrains.ai.tracy.core.policy.ContentCapturePolicy
import com.anthropic.core.JsonString
import com.anthropic.core.JsonValue
@@ -419,10 +419,14 @@ class AnthropicTracingTest : BaseAnthropicTracingTest() {
            }
            """.trimIndent().toResponseBody("application/json".toMediaTypeOrNull())

-            response.newBuilder()
+            val newResponse = response.newBuilder()
                .body(errorBody)
                .code(529)
                .build()
+            // close the original response body
+            response.body.close()
+
+            return@Interceptor newResponse
        }

        patchOpenAICompatibleClient(