---
title: "OpenTelemetry"
icon: "tower-broadcast"
---

PromptLayer natively supports [OpenTelemetry (OTEL)](https://opentelemetry.io/), the industry-standard observability framework. You can send traces from **any** OpenTelemetry-compatible SDK or Collector directly to PromptLayer — no PromptLayer SDK required.

This is ideal when:

- Your framework isn't listed on the [Integrations](/languages/integrations) page
- You already have an OpenTelemetry pipeline and want to add PromptLayer as a destination
- You want vendor-neutral instrumentation

<Note>
If you're using a supported framework like the [Vercel AI SDK](/languages/integrations#vercel-ai-sdk), [OpenAI Agents SDK](/languages/integrations#openai-agents-sdk), or [Claude Code](/languages/integrations#claude-code), see the [Integrations](/languages/integrations) page for framework-specific setup — those integrations handle the OTEL configuration for you.
</Note>

## How It Works

PromptLayer exposes an [OTLP/HTTP endpoint](/reference/otlp-ingest-traces) at:

```
https://api.promptlayer.com/v1/traces
```

Any OpenTelemetry SDK or Collector can export traces to this endpoint. Spans that include [GenAI semantic convention](https://opentelemetry.io/docs/specs/semconv/gen-ai/) attributes are automatically converted into PromptLayer request logs.

## Setup

Configure your OpenTelemetry SDK to export traces to PromptLayer using the OTLP/HTTP exporter.

<CodeGroup>
```python Python
# Install required packages:
# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

exporter = OTLPSpanExporter(
    endpoint="https://api.promptlayer.com/v1/traces",
    headers={"X-API-KEY": "your-promptlayer-api-key"},
)

provider = TracerProvider(
    resource=Resource.create({"service.name": "my-llm-app"})
)
provider.add_span_processor(BatchSpanProcessor(exporter))

# Register the provider globally so instrumentation libraries pick it up
trace.set_tracer_provider(provider)

# Use the tracer to create spans
tracer = trace.get_tracer("my-llm-app")
```

```javascript JavaScript
// Install required packages:
// npm install @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http

import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  // Sets the service.name resource attribute
  serviceName: "my-llm-app",
  traceExporter: new OTLPTraceExporter({
    url: "https://api.promptlayer.com/v1/traces",
    headers: {
      "X-API-KEY": process.env.PROMPTLAYER_API_KEY,
    },
  }),
});

sdk.start();

// Shut down before exit to flush remaining spans
process.on("beforeExit", async () => {
  await sdk.shutdown();
});
```
</CodeGroup>
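
To sanity-check the pipeline, you can emit a test span and flush it before the process exits. Here is a minimal sketch building on the Python setup above; since `BatchSpanProcessor` exports asynchronously, short-lived scripts should call `force_flush()` (or shut the provider down) before exiting:

```python
# Emit a test span through the tracer configured above
with tracer.start_as_current_span("otel-smoke-test") as span:
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.provider.name", "openai")

# Flush pending spans so they reach PromptLayer before the script exits
provider.force_flush()
```

If the export succeeds, the span should appear as a request log in your PromptLayer dashboard.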

## GenAI Semantic Conventions

Spans that use [GenAI semantic conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/) are automatically parsed into PromptLayer request logs. Add these attributes to your LLM call spans (an example follows the table):

| Attribute | Description |
|---|---|
| `gen_ai.request.model` | Model name (e.g. `gpt-4`, `claude-sonnet-4-20250514`) |
| `gen_ai.provider.name` | Provider (e.g. `openai`, `anthropic`) |
| `gen_ai.operation.name` | Operation type (`chat`, `text_completion`, `embeddings`) |
| `gen_ai.usage.input_tokens` | Input token count |
| `gen_ai.usage.output_tokens` | Output token count |
| `gen_ai.input.messages` | Request messages |
| `gen_ai.output.messages` | Response messages |
| `gen_ai.request.temperature` | Temperature parameter |
| `gen_ai.request.max_tokens` | Max tokens parameter |
| `gen_ai.response.finish_reasons` | Finish reasons |
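
Here is a minimal sketch of an instrumented chat call in Python. The attribute names come from the table above; serializing the message lists as JSON strings is an assumption, since OTEL span attributes only accept primitive values and arrays of primitives:

```python
import json

from opentelemetry import trace

tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("chat gpt-4") as span:
    # Request-side attributes, set before the call
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.request.temperature", 0.7)
    span.set_attribute("gen_ai.request.max_tokens", 256)
    span.set_attribute("gen_ai.input.messages", json.dumps([
        {"role": "user", "content": "Write a haiku about tracing."},
    ]))

    # ... make your LLM call here ...

    # Response-side attributes, filled in from the provider's response
    span.set_attribute("gen_ai.output.messages", json.dumps([
        {"role": "assistant", "content": "Spans drift through the night..."},
    ]))
    span.set_attribute("gen_ai.usage.input_tokens", 12)
    span.set_attribute("gen_ai.usage.output_tokens", 17)
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
```
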
### Event-Based Conventions

PromptLayer also supports the newer [event-based GenAI semantic conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-events/), where message content is sent as span events rather than span attributes. This format is used by frameworks like [LiveKit](https://docs.livekit.io/) and newer versions of OpenTelemetry GenAI instrumentation.

The following event types are recognized:

| Event Name | Description |
|---|---|
| `gen_ai.system.message` | System message |
| `gen_ai.user.message` | User message |
| `gen_ai.assistant.message` | Assistant message (including tool calls) |
| `gen_ai.tool.message` | Tool/function result message |
| `gen_ai.choice` | Model response/choice |

Event attributes like `gen_ai.system.message.content`, `gen_ai.user.message.content`, and tool call data are automatically extracted and mapped to PromptLayer request logs.

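For example, with the Python SDK you can attach these events with `span.add_event`. This is a sketch using the `.content` attribute keys named above; the `gen_ai.choice.content` key on the response event is an assumption for illustration:

```python
from opentelemetry import trace

tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("chat gpt-4") as span:
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.provider.name", "openai")

    # Input messages sent as span events
    span.add_event(
        "gen_ai.system.message",
        {"gen_ai.system.message.content": "You are a helpful assistant."},
    )
    span.add_event(
        "gen_ai.user.message",
        {"gen_ai.user.message.content": "What is OTLP?"},
    )

    # ... make your LLM call here ...

    # Model response as a span event (attribute key assumed for illustration)
    span.add_event(
        "gen_ai.choice",
        {"gen_ai.choice.content": "OTLP is the OpenTelemetry protocol."},
    )
```
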
<Note>
When both attribute-based messages (`gen_ai.input.messages`) and event-based messages are present on the same span, attribute-based messages take priority.
</Note>
| 122 | + |
| 123 | +## Linking to Prompt Templates |
| 124 | + |
| 125 | +You can associate OTEL spans with prompt templates in your PromptLayer workspace by setting custom span attributes: |
| 126 | + |
| 127 | +| Attribute | Type | Description | |
| 128 | +|---|---|---| |
| 129 | +| `promptlayer.prompt.name` | string | Name of the prompt template | |
| 130 | +| `promptlayer.prompt.id` | integer | ID of the prompt template (alternative to `name`) | |
| 131 | +| `promptlayer.prompt.version` | integer | Specific version number (optional) | |
| 132 | +| `promptlayer.prompt.label` | string | Label to resolve version (e.g. `production`) | |
| 133 | + |
<CodeGroup>
```python Python
from opentelemetry import trace

tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("llm-call") as span:
    # Link this span to a prompt template
    span.set_attribute("promptlayer.prompt.name", "my-prompt")
    span.set_attribute("promptlayer.prompt.label", "production")

    # Add GenAI attributes
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.provider.name", "openai")

    # ... make your LLM call ...
```

```javascript JavaScript
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("my-llm-app");

tracer.startActiveSpan("llm-call", (span) => {
  // Link this span to a prompt template
  span.setAttribute("promptlayer.prompt.name", "my-prompt");
  span.setAttribute("promptlayer.prompt.label", "production");

  // Add GenAI attributes
  span.setAttribute("gen_ai.request.model", "gpt-4");
  span.setAttribute("gen_ai.provider.name", "openai");

  // ... make your LLM call ...

  span.end();
});
```
</CodeGroup>
| 172 | + |
| 173 | +## Using an OpenTelemetry Collector |
| 174 | + |
| 175 | +If you're already running an [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/), you can add PromptLayer as an additional exporter in your Collector config: |
| 176 | + |
| 177 | +```yaml |
| 178 | +exporters: |
| 179 | + otlphttp/promptlayer: |
| 180 | + endpoint: "https://api.promptlayer.com" |
| 181 | + headers: |
| 182 | + X-API-Key: "${PROMPTLAYER_API_KEY}" |
| 183 | + |
| 184 | +service: |
| 185 | + pipelines: |
| 186 | + traces: |
| 187 | + exporters: [otlphttp/promptlayer] |
| 188 | +``` |
| 189 | +
|
| 190 | +This lets you fan out traces to PromptLayer alongside your existing observability backends (Datadog, New Relic, Jaeger, etc.) without changing your application code. |
| 191 | +
|
## Content Types

The endpoint accepts both binary protobuf (`application/x-protobuf`, recommended) and JSON (`application/json`) encodings. Both support `Content-Encoding: gzip`.
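
For example, you can post a gzipped OTLP/JSON payload directly, with no OpenTelemetry SDK at all. This is a minimal sketch in Python following the standard OTLP/JSON schema (hex-encoded IDs, stringified nanosecond timestamps); the span content is illustrative only:

```python
import gzip
import json
import os
import time

import requests  # pip install requests

now_ns = time.time_ns()

# A single resource span in the OTLP/JSON encoding
payload = {
    "resourceSpans": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "my-llm-app"}},
        ]},
        "scopeSpans": [{
            "scope": {"name": "manual-otlp"},
            "spans": [{
                "traceId": os.urandom(16).hex(),
                "spanId": os.urandom(8).hex(),
                "name": "llm-call",
                "kind": 3,  # SPAN_KIND_CLIENT
                "startTimeUnixNano": str(now_ns),
                "endTimeUnixNano": str(now_ns + 1_000_000),
                "attributes": [
                    {"key": "gen_ai.request.model", "value": {"stringValue": "gpt-4"}},
                    {"key": "gen_ai.provider.name", "value": {"stringValue": "openai"}},
                ],
            }],
        }],
    }],
}

response = requests.post(
    "https://api.promptlayer.com/v1/traces",
    headers={
        "X-API-KEY": os.environ["PROMPTLAYER_API_KEY"],
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",
    },
    data=gzip.compress(json.dumps(payload).encode("utf-8")),
)
response.raise_for_status()
```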

## Next Steps

- [OTLP Ingest Traces API Reference](/reference/otlp-ingest-traces) — full endpoint documentation
- [Integrations](/languages/integrations) — framework-specific setups (Vercel AI SDK, OpenAI Agents, Claude Code)
- [Traces](/running-requests/traces) — PromptLayer SDK native tracing with `@traceable` and `wrapWithSpan`