Description
Describe your environment
- OS: Windows 11
- Python version: 3.12.4
- opentelemetry-instrumentation-fastapi: 0.61b0.dev0 (main)
- opentelemetry-instrumentation-httpx: 0.61b0.dev0 (main)
- opentelemetry-sdk: 1.40.0.dev0 (main)
- FastAPI: 0.115.x
What happened?
When using FastAPI's BackgroundTasks, any instrumented work that runs inside a background task (outbound HTTP calls, database queries, etc.) produces child spans whose parent_id points to the originating request span, which has already been closed by the time the background task executes.
This means child spans end later than their parent, producing a broken trace timeline in backends like Jaeger or Grafana Tempo. It also means there is no dedicated span wrapping the background task itself, so there is no way to distinguish work done during the request/response cycle from work done asynchronously after the response was sent.
Steps to Reproduce
- Install dependencies:
pip install opentelemetry-sdk fastapi uvicorn httpx
pip install opentelemetry-instrumentation-fastapi opentelemetry-instrumentation-httpx
- Create your FastAPI app, e.g. app.py with the following:
import httpx
from fastapi import BackgroundTasks, FastAPI
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

HTTPXClientInstrumentor().instrument()

app = FastAPI()
FastAPIInstrumentor.instrument_app(app)

async def background_notify(user_id: int):
    async with httpx.AsyncClient() as client:
        await client.get("https://httpbin.org/get", timeout=5)

@app.post("/checkout")
async def checkout(background_tasks: BackgroundTasks):
    background_tasks.add_task(background_notify, user_id=42)
    return {"status": "processing"}
- Run the server:
uvicorn app:app --port 8000
- Send a request:
curl -X POST http://localhost:8000/checkout
- Observe the span output in the server terminal.
Expected Result
A dedicated span wrapping the background task execution should be created as a child of the request span. Any spans produced inside the task (httpx calls, database queries, etc.) should be children of that wrapper span, not direct children of the already-closed server span.
This would make it possible to identify background work as a distinct unit in a trace and would prevent child spans from extending past their parent's end_time.
Actual Result
The POST /checkout server span closes as soon as the response is sent:
{
"name": "POST /checkout",
"context": {
"trace_id": "0xf48fb97cb6a9fe003ba52b729243ca43",
"span_id": "0xe66e1014d69a4e10"
},
"kind": "SpanKind.SERVER",
"start_time": "2026-02-23T15:02:25.666097Z",
"end_time": "2026-02-23T15:02:25.819538Z"
}
The background task then produces an httpx span roughly 2 seconds later. That span's parent_id points to the already-closed server span:
{
"name": "GET",
"context": {
"trace_id": "0xf48fb97cb6a9fe003ba52b729243ca43",
"span_id": "0x4762a09bd4a43e3e"
},
"kind": "SpanKind.CLIENT",
"parent_id": "0xe66e1014d69a4e10",
"start_time": "2026-02-23T15:02:27.077417Z",
"end_time": "2026-02-23T15:02:28.871654Z"
}
The child span ends at 15:02:28, roughly 2 seconds after its parent closed at 15:02:25. There is also no span representing the background task itself.
Additional context
The trace context is being carried into background tasks correctly via contextvars (the same trace_id is present in both the request handler and the background task). The problem is not context loss. The problem is that no wrapper span is created for the background task, so all work done inside it attaches directly to a parent that is already finished.
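The propagation claim above can be illustrated with the standard library alone: `asyncio.create_task` snapshots the current `contextvars` context, the same mechanism that carries the trace context into Starlette background tasks. A minimal, runnable sketch (no OpenTelemetry required; `current_trace_id` is a stand-in for the real trace context, not an OTel API):

```python
import asyncio
import contextvars

# Stand-in for the OTel trace context. Starlette runs background-task
# callables in a copy of the request's contextvars context, which is why
# the same trace_id appears inside the task after the response is sent.
current_trace_id = contextvars.ContextVar("current_trace_id", default=None)

async def background_task(seen: list):
    # Executes after the "response", yet still reads the request's value.
    seen.append(current_trace_id.get())

async def handle_request(seen: list):
    current_trace_id.set("trace-123")
    # create_task copies the current context at scheduling time, much like
    # BackgroundTask does for callables it runs once the response is sent.
    return asyncio.create_task(background_task(seen))

async def main():
    seen = []
    task = await handle_request(seen)
    await task
    return seen

if __name__ == "__main__":
    print(asyncio.run(main()))  # ['trace-123']
```

The context arrives intact; what is missing is a span that is still open when the task runs.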
(If I remember correctly) A similar pattern is handled in opentelemetry-instrumentation-celery, where task execution is wrapped in its own span. Patching starlette.background.BackgroundTask.__call__ inside the FastAPIInstrumentor instrument/uninstrument lifecycle would be one approach to solving this.
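One possible shape of that patch, sketched with a stand-in tracer and a minimal `BackgroundTask` class so it runs without Starlette or OpenTelemetry installed. In a real implementation the `SPAN_LOG` calls would be replaced by `tracer.start_as_current_span(...)` and the patch would target `starlette.background.BackgroundTask.__call__`; the helper names here are hypothetical:

```python
import asyncio
import functools

class BackgroundTask:
    """Minimal stand-in for starlette.background.BackgroundTask."""
    def __init__(self, func, *args, **kwargs):
        self.func, self.args, self.kwargs = func, args, kwargs

    async def __call__(self):
        await self.func(*self.args, **self.kwargs)

SPAN_LOG = []  # stand-in for span start/end; real code would emit spans

def instrument_background_tasks(cls):
    """Patch cls.__call__ so each task executes inside its own span."""
    original_call = cls.__call__

    @functools.wraps(original_call)
    async def traced_call(self):
        name = getattr(self.func, "__name__", "background task")
        SPAN_LOG.append(f"start {name}")    # real code: open a span here
        try:
            await original_call(self)
        finally:
            SPAN_LOG.append(f"end {name}")  # real code: span ends here
    cls.__call__ = traced_call
    cls._otel_original_call = original_call  # kept for uninstrument()

def uninstrument_background_tasks(cls):
    """Restore the original __call__, mirroring uninstrument()."""
    cls.__call__ = cls._otel_original_call
```

Hooking these helpers into FastAPIInstrumentor's instrument/uninstrument lifecycle would give every background task a wrapper span, so httpx and database spans attach to something that is still open.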
Would you like to implement a fix?
No