Commit 6caa67b

Merge pull request #245 from MagnivOrg/docs/reorganize-otel-traces
Update docs for PRs merged March 18-22
2 parents: f6d4c57 + ab0ee9e

12 files changed: 591 additions & 131 deletions

features/evaluations/programmatic.mdx

Lines changed: 5 additions & 0 deletions
@@ -451,6 +451,7 @@ Reference a prompt template stored in the Prompt Registry:
         "max_tokens": 500
       }
     },
+    "chat_history_source": "chat_messages_column", // Optional: Dataset column containing chat history (list of {role, content} messages) to append to the prompt
     "verbose": false, // Optional: Include detailed response info
     "return_template_only": false // Optional: Return template without executing
   },
@@ -504,6 +505,10 @@ Define a prompt template directly in the configuration without saving it to the
 You must provide exactly one of `template` (registry reference) or `inline_template` (inline content) in the configuration. They are mutually exclusive.
 </Info>
 
+<Info>
+**Chat History Source**: For chat-type prompts, you can use `chat_history_source` to specify a dataset column containing a list of chat messages (each with `role` and `content` fields). These messages are appended to the end of the prompt template before execution, allowing you to test prompts with different conversation histories. The column value should be a JSON array of message objects, e.g. `[{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi there!"}]`.
+</Info>
+
 #### ENDPOINT
 Calls a custom API endpoint with data from previous columns.

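To make the new `chat_history_source` field concrete, here is a sketch of the two data shapes involved, written as Python literals (names are illustrative and the surrounding config fields are abbreviated; see the full column configuration in the diff above):

```python
# Value stored in the dataset column "chat_messages_column":
# a JSON array of {role, content} message objects, appended to the
# end of the prompt template before execution.
chat_history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"},
    {"role": "user", "content": "Can you summarize our conversation?"},
]

# Matching fragment of the PROMPT_TEMPLATE column configuration
# (other fields omitted; see the diff above for the full shape).
column_config = {
    "chat_history_source": "chat_messages_column",
    "verbose": False,
    "return_template_only": False,
}
```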
features/prompt-history/traces.mdx

Lines changed: 0 additions & 123 deletions
This file was deleted.

languages/integrations.mdx

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ icon: 'handshake'
 
 PromptLayer works seamlessly with many popular LLM frameworks and abstractions.
 
-Don't see the integration you are looking for? [Email us!](mailto:hello@promptlayer.com) 👋
+Don't see your framework listed? You can send traces from **any** OpenTelemetry-compatible tool using the [OpenTelemetry](/languages/opentelemetry) page, or [email us!](mailto:hello@promptlayer.com)
 
 ## LiteLLM
 
languages/mcp.mdx

Lines changed: 4 additions & 4 deletions
@@ -73,15 +73,15 @@ For clients that support stdio transport (e.g. Claude Desktop, Cursor), you can
 
 ## Available Tools
 
-The MCP server exposes 36 tools covering all major PromptLayer features:
+The MCP server exposes 34 tools covering all major PromptLayer features:
 
 | Category | Tools |
 |---|---|
 | **Prompt Templates** | `get-prompt-template`, `get-prompt-template-raw`, `list-prompt-templates`, `publish-prompt-template`, `list-prompt-template-labels`, `create-prompt-label`, `move-prompt-label`, `delete-prompt-label`, `get-snippet-usage` |
-| **Request Logs** | `search-request-logs`, `get-request` |
+| **Request Logs** | `get-request`, `search-request-logs`, `get-trace` |
 | **Tracking** | `log-request`, `create-spans-bulk` |
-| **Datasets** | `list-datasets`, `create-dataset-group`, `create-dataset-version-from-file`, `create-dataset-version-from-filter-params` |
-| **Evaluations** | `list-evaluations`, `create-report`, `run-report`, `get-report`, `get-report-score`, `update-report-score-card`, `delete-reports-by-name` |
+| **Datasets** | `list-datasets`, `get-dataset-rows`, `create-dataset-group`, `create-dataset-version-from-file`, `create-dataset-version-from-filter-params` |
+| **Evaluations** | `list-evaluations`, `get-evaluation-rows`, `create-report`, `run-report`, `get-report`, `get-report-score`, `update-report-score-card`, `delete-reports-by-name` |
 | **Agents** | `list-workflows`, `create-workflow`, `patch-workflow`, `run-workflow`, `get-workflow-version-execution-results`, `get-workflow` |
 | **Folders** | `create-folder`, `edit-folder`, `get-folder-entities`, `move-folder-entities`, `delete-folder-entities`, `resolve-folder-id` |
languages/opentelemetry.mdx

Lines changed: 200 additions & 0 deletions
@@ -0,0 +1,200 @@
---
title: "OpenTelemetry"
icon: "tower-broadcast"
---

PromptLayer natively supports [OpenTelemetry (OTEL)](https://opentelemetry.io/), the industry-standard observability framework. You can send traces from **any** OpenTelemetry-compatible SDK or Collector directly to PromptLayer — no PromptLayer SDK required.

This is ideal when:

- Your framework isn't listed on the [Integrations](/languages/integrations) page
- You already have an OpenTelemetry pipeline and want to add PromptLayer as a destination
- You want vendor-neutral instrumentation

<Note>
If you're using a supported framework like the [Vercel AI SDK](/languages/integrations#vercel-ai-sdk), [OpenAI Agents SDK](/languages/integrations#openai-agents-sdk), or [Claude Code](/languages/integrations#claude-code), see the [Integrations](/languages/integrations) page for framework-specific setup — those integrations handle the OTEL configuration for you.
</Note>

## How It Works

PromptLayer exposes an [OTLP/HTTP endpoint](/reference/otlp-ingest-traces) at:

```
https://api.promptlayer.com/v1/traces
```

Any OpenTelemetry SDK or Collector can export traces to this endpoint. Spans that include [GenAI semantic convention](https://opentelemetry.io/docs/specs/semconv/gen-ai/) attributes are automatically converted into PromptLayer request logs.

## Setup

Configure your OpenTelemetry SDK to export traces to PromptLayer using the OTLP/HTTP exporter.

<CodeGroup>
```python Python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Install required packages:
# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http

exporter = OTLPSpanExporter(
    endpoint="https://api.promptlayer.com/v1/traces",
    headers={"X-API-KEY": "your-promptlayer-api-key"},
)

provider = TracerProvider(
    resource=Resource.create({"service.name": "my-llm-app"})
)
provider.add_span_processor(BatchSpanProcessor(exporter))

# Use the tracer to create spans
tracer = provider.get_tracer("my-llm-app")
```

```javascript JavaScript
// Install required packages:
// npm install @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http @opentelemetry/resources

import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { resourceFromAttributes } from "@opentelemetry/resources";

const sdk = new NodeSDK({
  serviceName: "my-llm-app",
  resource: resourceFromAttributes({
    "service.name": "my-llm-app",
  }),
  traceExporter: new OTLPTraceExporter({
    url: "https://api.promptlayer.com/v1/traces",
    headers: {
      "X-API-Key": process.env.PROMPTLAYER_API_KEY,
    },
  }),
});

sdk.start();

// Shut down before exit to flush remaining spans
process.on("beforeExit", async () => {
  await sdk.shutdown();
});
```
</CodeGroup>

## GenAI Semantic Conventions

Spans that use [GenAI semantic conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/) are automatically parsed into PromptLayer request logs. Add these attributes to your LLM call spans:

| Attribute | Description |
|---|---|
| `gen_ai.request.model` | Model name (e.g. `gpt-4`, `claude-sonnet-4-20250514`) |
| `gen_ai.provider.name` | Provider (e.g. `openai`, `anthropic`) |
| `gen_ai.operation.name` | Operation type (`chat`, `text_completion`, `embeddings`) |
| `gen_ai.usage.input_tokens` | Input token count |
| `gen_ai.usage.output_tokens` | Output token count |
| `gen_ai.input.messages` | Request messages |
| `gen_ai.output.messages` | Response messages |
| `gen_ai.request.temperature` | Temperature parameter |
| `gen_ai.request.max_tokens` | Max tokens parameter |
| `gen_ai.response.finish_reasons` | Finish reasons |

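For illustration, a hand-instrumented chat span carrying these attributes could look like the sketch below. The attribute names come from the table above; serializing the message payloads as JSON strings is an assumption (OTEL span attribute values are scalars or arrays of scalars), so adjust to however your instrumentation emits them:

```python
import json

from opentelemetry import trace

tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("chat gpt-4") as span:
    # Request-side attributes (names from the table above)
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.request.temperature", 0.7)
    span.set_attribute("gen_ai.request.max_tokens", 500)
    # Messages JSON-encoded as strings (an assumption, see note above)
    span.set_attribute(
        "gen_ai.input.messages",
        json.dumps([{"role": "user", "content": "Hello"}]),
    )

    # ... make your LLM call here ...
    completion = "Hi there!"  # placeholder for the real model output

    # Response-side attributes
    span.set_attribute(
        "gen_ai.output.messages",
        json.dumps([{"role": "assistant", "content": completion}]),
    )
    span.set_attribute("gen_ai.usage.input_tokens", 9)
    span.set_attribute("gen_ai.usage.output_tokens", 3)
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
```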
### Event-Based Conventions

PromptLayer also supports the newer [event-based GenAI semantic conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-events/) where message content is sent as span events rather than span attributes. This format is used by frameworks like [LiveKit](https://docs.livekit.io/) and newer versions of OpenTelemetry GenAI instrumentation.

The following event types are recognized:

| Event Name | Description |
|---|---|
| `gen_ai.system.message` | System message |
| `gen_ai.user.message` | User message |
| `gen_ai.assistant.message` | Assistant message (including tool calls) |
| `gen_ai.tool.message` | Tool/function result message |
| `gen_ai.choice` | Model response/choice |

Event attributes like `gen_ai.system.message.content`, `gen_ai.user.message.content`, and tool call data are automatically extracted and mapped to PromptLayer request logs.

<Note>
When both attribute-based messages (`gen_ai.input.messages`) and event-based messages are present on the same span, attribute-based messages take priority.
</Note>

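As a sketch of the event-based shape, the same conversation can be attached as span events. The event names match the table above and the `*.content` keys follow the naming mentioned in the paragraph above; the attribute key on the `gen_ai.choice` event is illustrative:

```python
from opentelemetry import trace

tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("chat gpt-4") as span:
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.provider.name", "openai")

    # Message content rides on span events instead of span attributes.
    span.add_event(
        "gen_ai.system.message",
        {"gen_ai.system.message.content": "You are a helpful assistant."},
    )
    span.add_event(
        "gen_ai.user.message",
        {"gen_ai.user.message.content": "Hello"},
    )

    # ... make your LLM call here ...

    # Attribute key on the choice event is an assumption
    span.add_event(
        "gen_ai.choice",
        {"gen_ai.choice.content": "Hi there!"},
    )
```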
## Linking to Prompt Templates

You can associate OTEL spans with prompt templates in your PromptLayer workspace by setting custom span attributes:

| Attribute | Type | Description |
|---|---|---|
| `promptlayer.prompt.name` | string | Name of the prompt template |
| `promptlayer.prompt.id` | integer | ID of the prompt template (alternative to `name`) |
| `promptlayer.prompt.version` | integer | Specific version number (optional) |
| `promptlayer.prompt.label` | string | Label to resolve version (e.g. `production`) |

<CodeGroup>
```python Python
from opentelemetry import trace

tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("llm-call") as span:
    # Link this span to a prompt template
    span.set_attribute("promptlayer.prompt.name", "my-prompt")
    span.set_attribute("promptlayer.prompt.label", "production")

    # Add GenAI attributes
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.provider.name", "openai")

    # ... make your LLM call ...
```

```javascript JavaScript
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("my-llm-app");

tracer.startActiveSpan("llm-call", (span) => {
  // Link this span to a prompt template
  span.setAttribute("promptlayer.prompt.name", "my-prompt");
  span.setAttribute("promptlayer.prompt.label", "production");

  // Add GenAI attributes
  span.setAttribute("gen_ai.request.model", "gpt-4");
  span.setAttribute("gen_ai.provider.name", "openai");

  // ... make your LLM call ...

  span.end();
});
```
</CodeGroup>

## Using an OpenTelemetry Collector

If you're already running an [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/), you can add PromptLayer as an additional exporter in your Collector config:

```yaml
exporters:
  otlphttp/promptlayer:
    endpoint: "https://api.promptlayer.com"
    headers:
      X-API-Key: "${PROMPTLAYER_API_KEY}"

service:
  pipelines:
    traces:
      exporters: [otlphttp/promptlayer]
```

This lets you fan out traces to PromptLayer alongside your existing observability backends (Datadog, New Relic, Jaeger, etc.) without changing your application code.

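For example, fanning out to an existing backend alongside PromptLayer is just a second entry in the pipeline's `exporters` list. A sketch, assuming an OTLP gRPC backend at `my-backend:4317` (`otlp/existing` is a stand-in name) and an `otlp` receiver already defined elsewhere in the config:

```yaml
exporters:
  otlp/existing:                  # stand-in for the exporter you already run
    endpoint: "my-backend:4317"
  otlphttp/promptlayer:
    endpoint: "https://api.promptlayer.com"
    headers:
      X-API-Key: "${PROMPTLAYER_API_KEY}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/existing, otlphttp/promptlayer]
```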
## Content Types

The endpoint accepts both binary protobuf (`application/x-protobuf`, recommended) and JSON (`application/json`) encodings. Both support `Content-Encoding: gzip`.

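For a quick smoke test of the JSON encoding without an OTEL SDK, you can POST a minimal OTLP/JSON payload directly. A sketch, assuming the standard OTLP/JSON schema and the `requests` library; the span data, IDs, and timings are placeholder values:

```python
import time

import requests  # any HTTP client works; requests is assumed here

now_ns = time.time_ns()

# A minimal OTLP/JSON payload with a single span. Field names follow
# the standard OTLP/JSON schema; trace/span IDs are hex placeholders.
payload = {
    "resourceSpans": [{
        "resource": {
            "attributes": [
                {"key": "service.name", "value": {"stringValue": "my-llm-app"}}
            ]
        },
        "scopeSpans": [{
            "spans": [{
                "traceId": "5b8efff798038103d269b633813fc60c",
                "spanId": "eee19b7ec3c1b174",
                "name": "chat gpt-4",
                "kind": 3,  # SPAN_KIND_CLIENT
                "startTimeUnixNano": str(now_ns - 1_000_000_000),
                "endTimeUnixNano": str(now_ns),
                "attributes": [
                    {"key": "gen_ai.request.model", "value": {"stringValue": "gpt-4"}}
                ],
            }]
        }],
    }]
}

resp = requests.post(
    "https://api.promptlayer.com/v1/traces",
    json=payload,
    headers={"X-API-Key": "your-promptlayer-api-key"},
)
print(resp.status_code)
```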
## Next Steps

- [OTLP Ingest Traces API Reference](/reference/otlp-ingest-traces) — full endpoint documentation
- [Integrations](/languages/integrations) — framework-specific setups (Vercel AI SDK, OpenAI Agents, Claude Code)
- [Traces](/running-requests/traces) — PromptLayer SDK native tracing with `@traceable` and `wrapWithSpan`

mint.json

Lines changed: 3 additions & 1 deletion
@@ -34,7 +34,8 @@
       "languages/javascript",
       "languages/rest-api",
       "languages/mcp",
-      "languages/integrations"
+      "languages/integrations",
+      "languages/opentelemetry"
     ]
   },
   {

@@ -154,6 +155,7 @@
     "group": "Tracking",
     "pages": [
       "reference/get-request",
+      "reference/get-trace",
       "reference/search-request-logs",
       "reference/log-request",
       "reference/track-prompt",
