
fix(llmobs): flush span events buffer when it reaches certain size#4524

Merged
gh-worker-dd-mergequeue-cf854d[bot] merged 5 commits into main from rarguelloF/llmobs-batch-improvements on Apr 20, 2026

Conversation

Contributor

@rarguelloF rarguelloF commented Mar 11, 2026

What does this PR do?

This PR fixes how the llmobs SDK handles large payloads:

  • Fix span events flush logic: before, span events were flushed on a fixed 2-second interval regardless of payload size. Now the buffer tracks its cumulative size and flushes automatically when it reaches the 5MB limit enforced by the backend.
  • Fix Dataset.Push for large payloads: before this PR, the logic was to fall back to bulk CSV upload for large changes, but the backend rejects large multipart requests. This switches to chunking inserts across multiple batch_update calls instead.
  • Remove the global flush timeout: the previous code applied a single 2-second deadline shared across all retries when sending span events, causing later retries to fail immediately if the first attempt was slow. Each transport retry now gets its own independent per-request timeout.

Motivation

Reviewer's Checklist

  • Changed code has unit tests for its functionality at or near 100% coverage.
  • System-Tests covering this feature have been added and enabled with the va.b.c-dev version tag.
  • There is a benchmark for any new code, or changes to existing code.
  • If this interacts with the agent in a new way, a system test has been added.
  • New code is free of linting errors. You can check this by running make lint locally.
  • New code doesn't break existing tests. You can check this by running make test locally.
  • Add an appropriate team label so this PR gets put in the right place for the release notes.
  • All generated files are up to date. You can check this by running make generate locally.
  • Non-trivial go.mod changes, e.g. adding new modules, are reviewed by @DataDog/dd-trace-go-guild. Make sure all nested modules are up to date by running make fix-modules locally.

Unsure? Have a question? Request a review!

@codecov

codecov Bot commented Mar 11, 2026

Codecov Report

❌ Patch coverage is 84.00000% with 8 lines in your changes missing coverage. Please review.
✅ Project coverage is 60.77%. Comparing base (42ba2e8) to head (8174c0a).

Files with missing lines Patch % Lines
llmobs/dataset/dataset.go 82.85% 3 Missing and 3 partials ⚠️
internal/llmobs/llmobs.go 86.66% 1 Missing and 1 partial ⚠️
Additional details and impacted files
Files with missing lines Coverage Δ
internal/llmobs/transport/transport.go 0.00% <ø> (ø)
internal/llmobs/llmobs.go 78.30% <86.66%> (ø)
llmobs/dataset/dataset.go 73.19% <82.85%> (ø)

... and 418 files with indirect coverage changes


@datadog-prod-us1-3

datadog-prod-us1-3 Bot commented Mar 11, 2026

✅ Tests

🎉 All green!

❄️ No new flaky tests detected
🧪 All tests passed

🎯 Code Coverage (details)
Patch Coverage: 83.67%
Overall Coverage: 60.08%

🔗 Commit SHA: 8174c0a

@pr-commenter

pr-commenter Bot commented Mar 11, 2026

Benchmarks

Benchmark execution time: 2026-04-14 13:23:25

Comparing candidate commit 8174c0a in PR branch rarguelloF/llmobs-batch-improvements with baseline commit 42ba2e8 in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 215 metrics, 9 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because most changes are small:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'

@rarguelloF rarguelloF marked this pull request as ready for review April 14, 2026 13:05
@rarguelloF rarguelloF requested review from a team as code owners April 14, 2026 13:05
@rarguelloF
Contributor Author

/merge

@gh-worker-devflow-routing-ef8351

gh-worker-devflow-routing-ef8351 Bot commented Apr 20, 2026

View all feedback in the Devflow UI.

2026-04-20 08:14:18 UTC ℹ️ Start processing command /merge


2026-04-20 08:14:23 UTC ℹ️ MergeQueue: pull request added to the queue

The expected merge time in main is approximately 15m (p90).


2026-04-20 08:27:31 UTC ℹ️ MergeQueue: This merge request was merged

@gh-worker-dd-mergequeue-cf854d gh-worker-dd-mergequeue-cf854d Bot merged commit 48352bb into main Apr 20, 2026
190 checks passed
@gh-worker-dd-mergequeue-cf854d gh-worker-dd-mergequeue-cf854d Bot deleted the rarguelloF/llmobs-batch-improvements branch April 20, 2026 08:27
darccio pushed a commit that referenced this pull request Apr 24, 2026
