
refactor(openfeature): use shared ffe-system-test-data submodule for UFC fixtures #4604

Draft

leoromanovsky wants to merge 1 commit into main from leo.romanovsky/centralize-ffe-fixtures

Conversation

@leoromanovsky (Contributor)

Motivation

Part of the cross-repo effort to centralize UFC evaluation fixtures into a shared ffe-system-test-data repository. This eliminates fixture duplication across SDK repos and ensures all SDKs test against the same canonical fixture set.

Related: https://github.com/DataDog/ffe-system-test-data/pull/4

Changes

  • Add ffe-system-test-data as a git submodule at openfeature/ffe-system-test-data/
  • Update evaluator_test.go fixture paths from testdata/ to ffe-system-test-data/ (see the Go sketch after this list)
  • Remove local openfeature/testdata/ directory (25 fixture files, now sourced from shared repo)
  • Add .github/dependabot.yml to auto-PR when submodule gets new fixtures
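
For illustration, the path change in evaluator_test.go boils down to pointing the fixture loader at the submodule directory. A minimal sketch of such a loader, assuming a hypothetical loadFixture helper (the actual test file's layout may differ):

```go
package openfeature

import (
	"os"
	"path/filepath"
	"testing"
)

// fixtureDir now points at the shared submodule instead of the
// old local testdata/ directory.
const fixtureDir = "ffe-system-test-data"

// loadFixture reads one UFC fixture file and fails the test if the
// submodule has not been checked out.
func loadFixture(t *testing.T, name string) []byte {
	t.Helper()
	data, err := os.ReadFile(filepath.Join(fixtureDir, name))
	if err != nil {
		t.Fatalf("reading fixture %s: %v", name, err)
	}
	return data
}
```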

Decisions

  • The submodule currently points at the feature-branch commit of ffe-system-test-data ("various improvements", PR #4). Once that PR merges to main, the submodule ref will be moved to main.
  • Added a dependabot config for the gitsubmodule ecosystem to keep fixtures up to date automatically; a sketch of such a config follows this list.
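
For reference, a minimal dependabot config of the kind described might look as follows. gitsubmodule is Dependabot's real ecosystem identifier; the weekly interval is an assumption, not necessarily what this PR ships:

```yaml
# .github/dependabot.yml (sketch): open an automatic PR whenever the
# pinned submodule commit falls behind its upstream default branch.
version: 2
updates:
  - package-ecosystem: "gitsubmodule"
    directory: "/"
    schedule:
      interval: "weekly"
```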

Test plan

  • go test ./openfeature/... passes with submodule fixtures (see the commands after this list)
  • CI passes on this PR
  • Merge ffe-system-test-data PR #4 ("various improvements") first, then update the submodule ref to main
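
For anyone running the first item locally: the submodule must be initialized before the fixtures exist on disk. Roughly, using standard git and go commands (no project-specific tooling assumed):

```sh
# Fetch the pinned fixture commit, then run the affected package's tests.
git submodule update --init --recursive
go test ./openfeature/...
```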

- Add ffe-system-test-data as git submodule at openfeature/ffe-system-test-data
- Update evaluator_test.go fixture paths from testdata/ to ffe-system-test-data/
- Remove local openfeature/testdata/ fixtures (now sourced from shared repo)
- Add .github/dependabot.yml to auto-PR on submodule updates

codecov bot commented Mar 26, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 60.62%. Comparing base (944319d) to head (0130701).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files

see 437 files with indirect coverage changes



datadog-datadog-prod-us1-2 bot commented Mar 26, 2026

⚠️ Tests


⚠️ Warnings

🧪 1 Test failed

TestEvaluateFlag_JSONFixtures from github.com/DataDog/dd-trace-go/v2/openfeature
Failed

=== RUN   TestEvaluateFlag_JSONFixtures
    evaluator_test.go:641: open ffe-system-test-data/ufc-config.json: no such file or directory
--- FAIL: TestEvaluateFlag_JSONFixtures (0.00s)

ℹ️ Info

No other issues found

❄️ No new flaky tests detected

🎯 Code Coverage (details)
Patch Coverage: 100.00%
Overall Coverage: 60.01% (-0.04%)

This comment will be updated automatically if new data arrives.
🔗 Commit SHA: 0130701


pr-commenter bot commented Mar 26, 2026

Benchmarks

Benchmark execution time: 2026-03-26 04:17:29

Comparing candidate commit 0130701 in PR branch leo.romanovsky/centralize-ffe-fixtures with baseline commit 944319d in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 216 metrics, 8 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.
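
Read concretely, that significance rule is just an interval check. A minimal sketch, with bounds and threshold expressed as relative differences (e.g. 0.013 for 1.3%); the function name is illustrative, not the platform's actual code:

```go
// isSignificant reports whether the CI over the relative difference of
// means lies entirely outside the impact threshold: entirely above
// +threshold (candidate worse on a time metric) or entirely below
// -threshold (candidate better).
func isSignificant(ciLow, ciHigh, threshold float64) bool {
	return ciLow > threshold || ciHigh < -threshold
}
```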

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because most changes are small:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
