diff --git a/features/evaluations/programmatic.mdx b/features/evaluations/programmatic.mdx
index 935c997..6a346ab 100644
--- a/features/evaluations/programmatic.mdx
+++ b/features/evaluations/programmatic.mdx
@@ -451,6 +451,7 @@ Reference a prompt template stored in the Prompt Registry:
"max_tokens": 500
}
},
+ "chat_history_source": "chat_messages_column", // Optional: Dataset column containing chat history (list of {role, content} messages) to append to the prompt
"verbose": false, // Optional: Include detailed response info
"return_template_only": false // Optional: Return template without executing
},
@@ -504,6 +505,10 @@ Define a prompt template directly in the configuration without saving it to the
You must provide exactly one of `template` (registry reference) or `inline_template` (inline content) in the configuration. They are mutually exclusive.
+
+**Chat History Source**: For chat-type prompts, you can use `chat_history_source` to specify a dataset column containing a list of chat messages (each with `role` and `content` fields). These messages are appended to the end of the prompt template before execution, allowing you to test prompts with different conversation histories. The column value should be a JSON array of message objects, e.g. `[{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi there!"}]`.
+
+
#### ENDPOINT
Calls a custom API endpoint with data from previous columns.
diff --git a/features/prompt-history/traces.mdx b/features/prompt-history/traces.mdx
deleted file mode 100644
index 8bf6788..0000000
--- a/features/prompt-history/traces.mdx
+++ /dev/null
@@ -1,123 +0,0 @@
----
-title: "Traces"
-icon: "diagram-project"
----
-
-Traces are a powerful feature in PromptLayer that allow you to monitor and analyze the execution flow of your applications, including LLM requests. Built on OpenTelemetry, Traces provide detailed insights into function calls, their durations, inputs, and outputs.
-
-## Overview
-
-Traces in PromptLayer offer a comprehensive view of your application's performance and behavior. They allow you to:
-
-- Visualize the execution flow of your functions
-- Track LLM requests and their associated metadata
-- Measure function durations and identify performance bottlenecks
-- Inspect function inputs and outputs for debugging
-
-**Note:** The left menu in the PromptLayer UI only shows root spans, which represent the entry function of your program. While your program is running, you might not see all spans in the UI immediately, even though child spans are being sent to the backend. The root span, along with all its child spans, will only appear in the UI once the program completes. This behavior is particularly noticeable in long-running programs or those with complex execution flows.
-
-
-
-## Automatic LLM Request Tracing
-
-When you initialize the PromptLayer class with `enable_tracing` set to `True`, PromptLayer will automatically track any LLM calls made using the PromptLayer library. This allows you to capture detailed information about your LLM requests, including:
-
-- Model used
-- Input prompts
-- Generated responses
-- Request duration
-- Associated metadata
-
-
-```python Python
-from promptlayer import PromptLayer
-
-# Initialize PromptLayer with tracing enabled
-pl_client = PromptLayer(enable_tracing=True)
-```
-
-```javascript JavaScript
-import { PromptLayer } from "promptlayer";
-
-// Initialize PromptLayer with tracing enabled
-const promptlayer = new PromptLayer({
- apiKey: process.env.PROMPTLAYER_API_KEY,
- enableTracing: true,
- workspaceId: YOUR_WORKSPACE_ID,
-});
-```
-
-
-Once PromptLayer is initialized with tracing enabled, you can use the `run()` method to execute prompts. All LLM calls made through this method will be automatically traced, providing detailed insights into your prompt executions.
-
-
-```python Python
-response = pl_client.run(
- prompt_name="simple-greeting",
- input_variables={
- "name": "Alice"
- },
- metadata={
- "user_id": "12345"
- }
-)
-
-print(response)
-```
-
-```javascript JavaScript
-async function runPrompt() {
- try {
- const response = await promptlayer.run({
- promptName: "simple-greeting",
- inputVariables: {
- name: "Alice"
- },
- metadata: {
- user_id: "12345"
- }
- });
-
- console.log(response);
- } catch (error) {
- console.error("Error running prompt:", error);
- }
-}
-
-runPrompt();
-```
-
-
-## Custom Function Tracing
-
-In addition to automatic LLM request tracing, you can also use the `traceable` decorator (for Python) or `wrapWithSpan` (for JavaScript) to explicitly track span data for additional functions. This allows you to gather detailed information about function executions.
-
-
-```python Python
-# Use the @pl_client.traceable() decorator to trace a function
-@pl_client.traceable()
-def greet(name):
- return f"Hello, {name}!"
-
-# Use the decorator with custom attributes
-@pl_client.traceable(attributes={"function_type": "math"})
-def calculate_sum(a, b):
- return a + b
-
-result1 = greet("Alice")
-print(result1)
-
-result2 = calculate_sum(5, 3)
-print(result2)
-```
-
-```javascript JavaScript
-// Define and wrap a function with PromptLayer tracing
-const greet = promptlayer.wrapWithSpan('greet', (name: string): string => {
- return `Hello, ${name}!`;
-});
-
-const result = greet("Alice");
-console.log(result);
-```
-
diff --git a/languages/integrations.mdx b/languages/integrations.mdx
index 519b439..777f0f9 100644
--- a/languages/integrations.mdx
+++ b/languages/integrations.mdx
@@ -5,7 +5,7 @@ icon: 'handshake'
PromptLayer works seamlessly with many popular LLM frameworks and abstractions.
-Don't see the integration you are looking for? [Email us!](mailto:hello@promptlayer.com) 👋
+Don't see your framework listed? You can send traces from **any** OpenTelemetry-compatible tool by following the [OpenTelemetry](/languages/opentelemetry) guide, or [email us!](mailto:hello@promptlayer.com) 👋
## LiteLLM
diff --git a/languages/mcp.mdx b/languages/mcp.mdx
index f25ea5e..1596622 100644
--- a/languages/mcp.mdx
+++ b/languages/mcp.mdx
@@ -73,15 +73,15 @@ For clients that support stdio transport (e.g. Claude Desktop, Cursor), you can
## Available Tools
-The MCP server exposes 36 tools covering all major PromptLayer features:
+The MCP server exposes 39 tools covering all major PromptLayer features:
| Category | Tools |
|---|---|
| **Prompt Templates** | `get-prompt-template`, `get-prompt-template-raw`, `list-prompt-templates`, `publish-prompt-template`, `list-prompt-template-labels`, `create-prompt-label`, `move-prompt-label`, `delete-prompt-label`, `get-snippet-usage` |
-| **Request Logs** | `search-request-logs`, `get-request` |
+| **Request Logs** | `get-request`, `search-request-logs`, `get-trace` |
| **Tracking** | `log-request`, `create-spans-bulk` |
-| **Datasets** | `list-datasets`, `create-dataset-group`, `create-dataset-version-from-file`, `create-dataset-version-from-filter-params` |
-| **Evaluations** | `list-evaluations`, `create-report`, `run-report`, `get-report`, `get-report-score`, `update-report-score-card`, `delete-reports-by-name` |
+| **Datasets** | `list-datasets`, `get-dataset-rows`, `create-dataset-group`, `create-dataset-version-from-file`, `create-dataset-version-from-filter-params` |
+| **Evaluations** | `list-evaluations`, `get-evaluation-rows`, `create-report`, `run-report`, `get-report`, `get-report-score`, `update-report-score-card`, `delete-reports-by-name` |
| **Agents** | `list-workflows`, `create-workflow`, `patch-workflow`, `run-workflow`, `get-workflow-version-execution-results`, `get-workflow` |
| **Folders** | `create-folder`, `edit-folder`, `get-folder-entities`, `move-folder-entities`, `delete-folder-entities`, `resolve-folder-id` |
diff --git a/languages/opentelemetry.mdx b/languages/opentelemetry.mdx
new file mode 100644
index 0000000..6341c03
--- /dev/null
+++ b/languages/opentelemetry.mdx
@@ -0,0 +1,200 @@
+---
+title: "OpenTelemetry"
+icon: "tower-broadcast"
+---
+
+PromptLayer natively supports [OpenTelemetry (OTEL)](https://opentelemetry.io/), the industry-standard observability framework. You can send traces from **any** OpenTelemetry-compatible SDK or Collector directly to PromptLayer — no PromptLayer SDK required.
+
+This is ideal when:
+
+- Your framework isn't listed on the [Integrations](/languages/integrations) page
+- You already have an OpenTelemetry pipeline and want to add PromptLayer as a destination
+- You want vendor-neutral instrumentation
+
+
+If you're using a supported framework like the [Vercel AI SDK](/languages/integrations#vercel-ai-sdk), [OpenAI Agents SDK](/languages/integrations#openai-agents-sdk), or [Claude Code](/languages/integrations#claude-code), see the [Integrations](/languages/integrations) page for framework-specific setup — those integrations handle the OTEL configuration for you.
+
+
+## How It Works
+
+PromptLayer exposes an [OTLP/HTTP endpoint](/reference/otlp-ingest-traces) at:
+
+```
+https://api.promptlayer.com/v1/traces
+```
+
+Any OpenTelemetry SDK or Collector can export traces to this endpoint. Spans that include [GenAI semantic convention](https://opentelemetry.io/docs/specs/semconv/gen-ai/) attributes are automatically converted into PromptLayer request logs.
+
+## Setup
+
+Configure your OpenTelemetry SDK to export traces to PromptLayer using the OTLP/HTTP exporter.
+
+
+```python Python
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
+
+# Install required packages:
+# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
+
+exporter = OTLPSpanExporter(
+ endpoint="https://api.promptlayer.com/v1/traces",
+ headers={"X-API-KEY": "your-promptlayer-api-key"},
+)
+
+provider = TracerProvider(
+ resource=Resource.create({"service.name": "my-llm-app"})
+)
+provider.add_span_processor(BatchSpanProcessor(exporter))
+
+# Use the tracer to create spans
+tracer = provider.get_tracer("my-llm-app")
+```
+
+```javascript JavaScript
+// Install required packages:
+// npm install @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http
+
+import { NodeSDK } from "@opentelemetry/sdk-node";
+import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
+
+const sdk = new NodeSDK({
+  serviceName: "my-llm-app", // sets the service.name resource attribute
+ traceExporter: new OTLPTraceExporter({
+ url: "https://api.promptlayer.com/v1/traces",
+ headers: {
+ "X-API-Key": process.env.PROMPTLAYER_API_KEY,
+ },
+ }),
+});
+
+sdk.start();
+
+// Shut down before exit to flush remaining spans
+process.on("beforeExit", async () => {
+ await sdk.shutdown();
+});
+```
+
+
+## GenAI Semantic Conventions
+
+Spans that use [GenAI semantic conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/) are automatically parsed into PromptLayer request logs. Add these attributes to your LLM call spans:
+
+| Attribute | Description |
+|---|---|
+| `gen_ai.request.model` | Model name (e.g. `gpt-4`, `claude-sonnet-4-20250514`) |
+| `gen_ai.provider.name` | Provider (e.g. `openai`, `anthropic`) |
+| `gen_ai.operation.name` | Operation type (`chat`, `text_completion`, `embeddings`) |
+| `gen_ai.usage.input_tokens` | Input token count |
+| `gen_ai.usage.output_tokens` | Output token count |
+| `gen_ai.input.messages` | Request messages |
+| `gen_ai.output.messages` | Response messages |
+| `gen_ai.request.temperature` | Temperature parameter |
+| `gen_ai.request.max_tokens` | Max tokens parameter |
+| `gen_ai.response.finish_reasons` | Finish reasons |
+
+### Event-Based Conventions
+
+PromptLayer also supports the newer [event-based GenAI semantic conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-events/) where message content is sent as span events rather than span attributes. This format is used by frameworks like [LiveKit](https://docs.livekit.io/) and newer versions of OpenTelemetry GenAI instrumentation.
+
+The following event types are recognized:
+
+| Event Name | Description |
+|---|---|
+| `gen_ai.system.message` | System message |
+| `gen_ai.user.message` | User message |
+| `gen_ai.assistant.message` | Assistant message (including tool calls) |
+| `gen_ai.tool.message` | Tool/function result message |
+| `gen_ai.choice` | Model response/choice |
+
+Event attributes like `gen_ai.system.message.content`, `gen_ai.user.message.content`, and tool call data are automatically extracted and mapped to PromptLayer request logs.
+
+
+When both attribute-based messages (`gen_ai.input.messages`) and event-based messages are present on the same span, attribute-based messages take priority.
+
+
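+For example, with the OpenTelemetry Python API you can attach these events to a span as shown below (a minimal sketch; the attribute keys on `gen_ai.choice` vary by instrumentation and are illustrative here):
+
+```python Python
+from opentelemetry import trace
+
+tracer = trace.get_tracer("my-llm-app")
+
+with tracer.start_as_current_span("llm-call") as span:
+    span.set_attribute("gen_ai.request.model", "gpt-4")
+    span.set_attribute("gen_ai.provider.name", "openai")
+
+    # Message content is carried on span events, not span attributes
+    span.add_event(
+        "gen_ai.system.message",
+        {"gen_ai.system.message.content": "You are a helpful assistant."},
+    )
+    span.add_event(
+        "gen_ai.user.message",
+        {"gen_ai.user.message.content": "Hello"},
+    )
+
+    # ... make your LLM call ...
+
+    # Record the model's response as a gen_ai.choice event
+    span.add_event("gen_ai.choice", {"content": "Hi there!"})
+```
+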
+## Linking to Prompt Templates
+
+You can associate OTEL spans with prompt templates in your PromptLayer workspace by setting custom span attributes:
+
+| Attribute | Type | Description |
+|---|---|---|
+| `promptlayer.prompt.name` | string | Name of the prompt template |
+| `promptlayer.prompt.id` | integer | ID of the prompt template (alternative to `name`) |
+| `promptlayer.prompt.version` | integer | Specific version number (optional) |
+| `promptlayer.prompt.label` | string | Label to resolve version (e.g. `production`) |
+
+
+```python Python
+from opentelemetry import trace
+
+tracer = trace.get_tracer("my-llm-app")
+
+with tracer.start_as_current_span("llm-call") as span:
+ # Link this span to a prompt template
+ span.set_attribute("promptlayer.prompt.name", "my-prompt")
+ span.set_attribute("promptlayer.prompt.label", "production")
+
+ # Add GenAI attributes
+ span.set_attribute("gen_ai.request.model", "gpt-4")
+ span.set_attribute("gen_ai.provider.name", "openai")
+
+ # ... make your LLM call ...
+```
+
+```javascript JavaScript
+import { trace } from "@opentelemetry/api";
+
+const tracer = trace.getTracer("my-llm-app");
+
+tracer.startActiveSpan("llm-call", (span) => {
+ // Link this span to a prompt template
+ span.setAttribute("promptlayer.prompt.name", "my-prompt");
+ span.setAttribute("promptlayer.prompt.label", "production");
+
+ // Add GenAI attributes
+ span.setAttribute("gen_ai.request.model", "gpt-4");
+ span.setAttribute("gen_ai.provider.name", "openai");
+
+ // ... make your LLM call ...
+
+ span.end();
+});
+```
+
+
+## Using an OpenTelemetry Collector
+
+If you're already running an [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/), you can add PromptLayer as an additional exporter in your Collector config:
+
+```yaml
+exporters:
+ otlphttp/promptlayer:
+ endpoint: "https://api.promptlayer.com"
+ headers:
+ X-API-Key: "${PROMPTLAYER_API_KEY}"
+
+service:
+ pipelines:
+ traces:
+ exporters: [otlphttp/promptlayer]
+```
+
+This lets you fan out traces to PromptLayer alongside your existing observability backends (Datadog, New Relic, Jaeger, etc.) without changing your application code.
+
+## Content Types
+
+The endpoint accepts both binary protobuf (`application/x-protobuf`, recommended) and JSON (`application/json`) encodings. Both support `Content-Encoding: gzip`.
+
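+For a quick end-to-end test without an SDK, you can POST a minimal OTLP/JSON payload directly (a sketch; the trace and span IDs below are arbitrary hex strings, and timestamps are Unix nanoseconds):
+
+```bash
+curl -X POST https://api.promptlayer.com/v1/traces \
+  -H "X-API-KEY: your-promptlayer-api-key" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "resourceSpans": [{
+      "resource": {"attributes": [{"key": "service.name", "value": {"stringValue": "my-llm-app"}}]},
+      "scopeSpans": [{
+        "spans": [{
+          "traceId": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
+          "spanId": "a1b2c3d4e5f6a7b8",
+          "name": "llm-call",
+          "startTimeUnixNano": "1700000000000000000",
+          "endTimeUnixNano": "1700000001000000000",
+          "attributes": [{"key": "gen_ai.request.model", "value": {"stringValue": "gpt-4"}}]
+        }]
+      }]
+    }]
+  }'
+```
+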
+## Next Steps
+
+- [OTLP Ingest Traces API Reference](/reference/otlp-ingest-traces) — full endpoint documentation
+- [Integrations](/languages/integrations) — framework-specific setups (Vercel AI SDK, OpenAI Agents, Claude Code)
+- [Traces](/running-requests/traces) — PromptLayer SDK native tracing with `@traceable` and `wrapWithSpan`
diff --git a/mint.json b/mint.json
index 873befe..f83c6f3 100644
--- a/mint.json
+++ b/mint.json
@@ -34,7 +34,8 @@
"languages/javascript",
"languages/rest-api",
"languages/mcp",
- "languages/integrations"
+ "languages/integrations",
+ "languages/opentelemetry"
]
},
{
@@ -154,6 +155,7 @@
"group": "Tracking",
"pages": [
"reference/get-request",
+ "reference/get-trace",
"reference/search-request-logs",
"reference/log-request",
"reference/track-prompt",
diff --git a/openapi.json b/openapi.json
index 2699d2c..c3f2d7b 100644
--- a/openapi.json
+++ b/openapi.json
@@ -2279,6 +2279,11 @@
"type": "number",
"nullable": true,
"description": "Request latency in milliseconds, derived from start and end times."
+ },
+ "trace_id": {
+ "type": "string",
+ "nullable": true,
+ "description": "The trace ID associated with this request, if the request was part of a trace."
}
}
}
@@ -2328,6 +2333,172 @@
}
}
},
+ "/api/public/v2/traces/{trace_id}": {
+ "get": {
+ "summary": "Get Trace",
+ "operationId": "getTrace",
+ "tags": [
+ "tracking"
+ ],
+ "parameters": [
+ {
+ "name": "X-API-KEY",
+ "in": "header",
+ "required": true,
+ "schema": {
+ "type": "string"
+ },
+ "description": "API key for authentication."
+ },
+ {
+ "name": "trace_id",
+ "in": "path",
+ "required": true,
+ "schema": {
+ "type": "string"
+ },
+ "description": "The trace ID to retrieve spans for."
+ }
+ ],
+ "responses": {
+ "200": {
+ "description": "Successfully retrieved trace spans.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "success": {
+ "type": "boolean",
+ "description": "Indicates the request was successful."
+ },
+ "spans": {
+ "type": "array",
+ "description": "List of spans belonging to this trace.",
+ "items": {
+ "type": "object",
+ "properties": {
+ "id": {
+ "type": "integer",
+ "description": "The internal span ID."
+ },
+ "name": {
+ "type": "string",
+ "description": "The name of the span."
+ },
+ "trace_id": {
+ "type": "string",
+ "description": "The trace ID this span belongs to."
+ },
+ "span_id": {
+ "type": "string",
+ "description": "The unique span identifier."
+ },
+ "parent_id": {
+ "type": "string",
+ "nullable": true,
+ "description": "The parent span ID, or null for root spans."
+ },
+ "start": {
+ "type": "string",
+ "description": "ISO 8601 timestamp of when the span started."
+ },
+ "end": {
+ "type": "string",
+ "nullable": true,
+ "description": "ISO 8601 timestamp of when the span ended."
+ },
+ "attributes": {
+ "type": "object",
+ "nullable": true,
+ "description": "Arbitrary key-value attributes attached to the span."
+ },
+ "resource": {
+ "type": "object",
+ "nullable": true,
+ "description": "Resource information for the span."
+ },
+ "context": {
+ "type": "object",
+ "nullable": true,
+ "description": "Span context information."
+ },
+ "kind": {
+ "type": "string",
+ "nullable": true,
+ "description": "The span kind (e.g. INTERNAL, CLIENT, SERVER)."
+ },
+ "status": {
+ "type": "object",
+ "nullable": true,
+ "description": "The span status including status code."
+ },
+ "events": {
+ "type": "array",
+ "nullable": true,
+ "description": "Events recorded on the span."
+ },
+ "links": {
+ "type": "array",
+ "nullable": true,
+ "description": "Links to other spans."
+ },
+ "request_log_id": {
+ "type": "integer",
+ "nullable": true,
+ "description": "The PromptLayer request log ID associated with this span, if any."
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "401": {
+ "description": "Authentication failed.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/ErrorResponse"
+ }
+ }
+ }
+ },
+ "403": {
+ "description": "Invalid workspace.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/ErrorResponse"
+ }
+ }
+ }
+ },
+ "404": {
+ "description": "Trace not found.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/ErrorResponse"
+ }
+ }
+ }
+ },
+ "500": {
+ "description": "Internal server error.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/ErrorResponse"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
"/api/public/v2/datasets/{dataset_id}/rows": {
"get": {
"summary": "Get Dataset Rows",
@@ -2701,6 +2872,16 @@
},
"description": "Filter evaluations by status: 'active' (default) returns only active evaluations, 'deleted' returns only deleted/archived evaluations, 'all' returns both"
},
+ {
+ "name": "include_runs",
+ "in": "query",
+ "required": false,
+ "schema": {
+ "type": "boolean",
+ "default": false
+ },
+ "description": "If true, include batch runs nested under each evaluation. Each run includes its full report data, status (RUNNING or COMPLETED), and cell status counts."
+ },
{
"name": "page",
"in": "query",
@@ -5040,6 +5221,141 @@
"type": "integer",
"nullable": true,
"description": "ID of the user who created this evaluation"
+ },
+ "dataset_id": {
+ "type": "integer",
+ "nullable": true,
+ "description": "ID of the dataset associated with this evaluation"
+ },
+ "is_blueprint": {
+ "type": "boolean",
+ "description": "Whether this is a blueprint (pipeline definition) or a batch run"
+ },
+ "tags": {
+ "type": "object",
+ "nullable": true,
+ "description": "Tags associated with this evaluation"
+ },
+ "deleted": {
+ "type": "boolean",
+ "description": "Whether this evaluation has been deleted"
+ },
+ "parent_report_id": {
+ "type": "integer",
+ "nullable": true,
+ "description": "ID of the parent blueprint (set for batch runs)"
+ },
+ "score_configuration": {
+ "type": "object",
+ "nullable": true,
+ "description": "Custom scoring configuration for this evaluation"
+ },
+ "runs": {
+ "type": "array",
+ "description": "Batch runs for this evaluation. Only present when include_runs=true.",
+ "items": {
+ "type": "object",
+ "properties": {
+ "id": {
+ "type": "integer",
+ "description": "Unique identifier for the run"
+ },
+ "name": {
+ "type": "string",
+ "description": "Name of the run"
+ },
+ "comment": {
+ "type": "string",
+ "nullable": true,
+ "description": "Optional comment or description"
+ },
+ "created_at": {
+ "type": "string",
+ "format": "date-time",
+ "description": "Timestamp when the run was created"
+ },
+ "updated_at": {
+ "type": "string",
+ "format": "date-time",
+ "description": "Timestamp when the run was last updated"
+ },
+ "workspace_id": {
+ "type": "integer",
+ "description": "ID of the workspace"
+ },
+ "folder_id": {
+ "type": "integer",
+ "nullable": true,
+ "description": "ID of the folder"
+ },
+ "user_id": {
+ "type": "integer",
+ "nullable": true,
+ "description": "ID of the user who created this run"
+ },
+ "dataset_id": {
+ "type": "integer",
+ "nullable": true,
+ "description": "ID of the dataset"
+ },
+ "is_blueprint": {
+ "type": "boolean",
+ "description": "Whether this is a blueprint"
+ },
+ "tags": {
+ "type": "object",
+ "nullable": true,
+ "description": "Tags associated with this run"
+ },
+ "deleted": {
+ "type": "boolean",
+ "description": "Whether this run has been deleted"
+ },
+ "parent_report_id": {
+ "type": "integer",
+ "nullable": true,
+ "description": "ID of the parent blueprint"
+ },
+ "score_configuration": {
+ "type": "object",
+ "nullable": true,
+ "description": "Custom scoring configuration"
+ },
+ "score": {
+ "type": "object",
+ "nullable": true,
+ "description": "Computed score for this run"
+ },
+ "score_matrix": {
+ "type": "array",
+ "nullable": true,
+ "description": "Matrix of scores across evaluation columns"
+ },
+ "score_calculation_error": {
+ "type": "string",
+ "nullable": true,
+ "description": "Error message if score calculation failed"
+ },
+ "status": {
+ "type": "string",
+ "enum": [
+ "RUNNING",
+ "COMPLETED"
+ ],
+ "description": "Current status of the batch run"
+ },
+ "stats": {
+ "type": "object",
+ "description": "Run statistics",
+ "properties": {
+ "status_counts": {
+ "type": "object",
+ "description": "Count of cells by status"
+ }
+ }
+ }
+ }
+ }
}
},
"required": [
diff --git a/reference/get-request.mdx b/reference/get-request.mdx
index ca29828..7d25285 100644
--- a/reference/get-request.mdx
+++ b/reference/get-request.mdx
@@ -11,7 +11,7 @@ Retrieve the full payload of a logged request by its ID, returned as a [prompt b
- **Dataset creation**: Extract request data for use in evaluations
- **Cost analysis**: Review token usage and pricing for individual requests
-The response includes the prompt blueprint (input messages, model configuration, and parameters) along with token counts and timing data.
+The response includes the prompt blueprint (input messages, model configuration, and parameters) along with token counts, timing data, and a `trace_id` field linking to the associated trace (if the request was logged via tracing or OpenTelemetry).
### Authentication
@@ -57,7 +57,8 @@ curl -H "X-API-KEY: your_api_key" \
"price": 0.00123,
"request_start_time": "2024-04-03T20:57:25",
"request_end_time": "2024-04-03T20:57:26",
- "latency_ms": 1000.0
+ "latency_ms": 1000.0,
+ "trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6"
}
```
diff --git a/reference/get-trace.mdx b/reference/get-trace.mdx
new file mode 100644
index 0000000..268fefc
--- /dev/null
+++ b/reference/get-trace.mdx
@@ -0,0 +1,29 @@
+---
+title: "Get Trace"
+openapi: "GET /api/public/v2/traces/{trace_id}"
+---
+
+Retrieve all spans for a given trace ID. Each span includes its metadata and, if it generated a request log, the associated `request_log_id`.
+
+This is useful for:
+
+- **Trace inspection**: View the full execution flow of a traced operation
+- **Linking spans to requests**: Find which spans generated PromptLayer request logs
+- **Debugging**: Follow the execution path across multiple LLM calls
+
+### Authentication
+
+This endpoint requires API key authentication via the `X-API-KEY` header.
+
+### Example
+
+```bash
+curl -H "X-API-KEY: your_api_key" \
+ https://api.promptlayer.com/api/public/v2/traces/a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6
+```
+
+### Related
+
+- [Get Request](/reference/get-request) - Retrieve a single request (includes `trace_id`)
+- [Traces](/running-requests/traces) - PromptLayer SDK tracing with `@traceable`
+- [OpenTelemetry](/languages/opentelemetry) - Send traces from any OTEL SDK
diff --git a/reference/list-evaluations.mdx b/reference/list-evaluations.mdx
index 6739cc4..e180faa 100644
--- a/reference/list-evaluations.mdx
+++ b/reference/list-evaluations.mdx
@@ -19,3 +19,9 @@ Use the `status` parameter to control which evaluations are returned based on th
- `active` (default): Returns only active evaluations
- `deleted`: Returns only deleted/archived evaluations
- `all`: Returns both active and deleted evaluations
+
+### Including Batch Runs
+
+Use the `include_runs` parameter to include batch run details nested under each evaluation. When set to `true`, each evaluation object in the response will include a `runs` array containing all of its batch runs with their status and cell status counts.
+
+This is useful for programmatically discovering batch run IDs without needing to inspect the UI.
diff --git a/reference/otlp-ingest-traces.mdx b/reference/otlp-ingest-traces.mdx
index e9a25a0..7e888a1 100644
--- a/reference/otlp-ingest-traces.mdx
+++ b/reference/otlp-ingest-traces.mdx
@@ -34,6 +34,26 @@ Spans with [GenAI semantic convention](https://opentelemetry.io/docs/specs/semco
| `gen_ai.request.top_p` | Top-p parameter |
| `gen_ai.response.finish_reasons` | Finish reasons |
+### Event-Based Message Format
+
+In addition to the attribute-based format above, PromptLayer supports the newer [event-based GenAI semantic conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-events/) where message content is sent as span events rather than span attributes. This format is used by frameworks like [LiveKit](https://docs.livekit.io/) and newer versions of OpenTelemetry GenAI instrumentation.
+
+The following event types are recognized:
+
+| Event Name | Description |
+|---|---|
+| `gen_ai.system.message` | System message |
+| `gen_ai.user.message` | User message |
+| `gen_ai.assistant.message` | Assistant message (including tool calls) |
+| `gen_ai.tool.message` | Tool/function result message |
+| `gen_ai.choice` | Model response/choice |
+
+Event attributes like `gen_ai.system.message.content`, `gen_ai.user.message.content`, and tool call data are automatically extracted and mapped to PromptLayer request logs.
+
+
+When both attribute-based messages (`gen_ai.input.messages`) and event-based messages are present on the same span, attribute-based messages take priority.
+
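+As a minimal sketch using the OpenTelemetry Python API (the attribute keys on `gen_ai.choice` are illustrative):
+
+```python Python
+from opentelemetry import trace
+
+tracer = trace.get_tracer("my-llm-app")
+
+with tracer.start_as_current_span("llm-call") as span:
+    span.set_attribute("gen_ai.request.model", "gpt-4")
+    # Message content is carried on span events, not span attributes
+    span.add_event(
+        "gen_ai.user.message",
+        {"gen_ai.user.message.content": "Hello"},
+    )
+    # ... make your LLM call ...
+    span.add_event("gen_ai.choice", {"content": "Hi there!"})
+```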
+
## Linking to Prompt Templates
You can link ingested spans to existing prompt templates in your workspace by adding custom span attributes:
diff --git a/running-requests/traces.mdx b/running-requests/traces.mdx
index 368fa60..7eec922 100644
--- a/running-requests/traces.mdx
+++ b/running-requests/traces.mdx
@@ -5,6 +5,10 @@ icon: "diagram-project"
Traces are a powerful feature in PromptLayer that allow you to monitor and analyze the execution flow of your applications, including LLM requests. Built on OpenTelemetry, Traces provide detailed insights into function calls, their durations, inputs, and outputs.
+
+This page covers tracing with the **PromptLayer SDK** (`@traceable`, `wrapWithSpan`). If you want to send traces from any OpenTelemetry SDK or Collector without using the PromptLayer SDK, see the [OpenTelemetry](/languages/opentelemetry) page. For framework-specific integrations (Vercel AI SDK, OpenAI Agents, Claude Code), see [Integrations](/languages/integrations).
+
+
## Overview
Traces in PromptLayer offer a comprehensive view of your application's performance and behavior. They allow you to: