bench: lazily initialize script benchmarks#5190

Open
iammdzaidalam wants to merge 2 commits into boa-dev:main from iammdzaidalam:fix/5169-lazy-bench-init

Conversation

@iammdzaidalam
Contributor

Closes #5169

Summary

Defer script benchmark setup until the selected benchmark actually runs.

benches/benches/scripts.rs was eagerly reading, parsing, compiling, and evaluating every script during registration, so filtered runs could still fail on unrelated entries before reaching the requested benchmark.

This moves that setup behind the benchmark closure and caches the prepared state per benchmark, so unmatched scripts are no longer initialized.

Changes

  • add a small PreparedScriptBench helper for cached per-benchmark state
  • move script file reading into lazy setup
  • move Context creation, runtime registration, parse/compile/evaluate, and main lookup into lazy setup
  • keep benchmark discovery and existing v8-benches group config unchanged
  • cache the prepared script once per matched benchmark so setup is not repeated during measurement

Verification

Ran locally:

  • cargo fmt --check
  • cargo check -p boa_benches
  • cargo bench -p boa_benches -- --list
  • cargo bench -p boa_benches -- call-loop

Also temporarily added logging in the lazy init path to verify behavior:

  • call-loop only initialized basic/call-loop.js
  • a nonexistent filter initialized nothing
  • deltablue only initialized v8-benches/deltablue.js

So filtered runs no longer initialize unrelated scripts first.

@iammdzaidalam iammdzaidalam requested a review from a team as a code owner March 20, 2026 22:33
@github-actions github-actions bot added the Waiting On Review Waiting on reviews from the maintainers label Mar 20, 2026
@github-actions github-actions bot added this to the v1.0.0 milestone Mar 20, 2026
@github-actions github-actions bot added C-Benchmark Issues and PRs related to the benchmark subsystem. C-Builtins PRs and Issues related to builtins/intrinsics and removed Waiting On Review Waiting on reviews from the maintainers labels Mar 20, 2026
@github-actions

github-actions bot commented Mar 20, 2026

Test262 conformance changes

| Test result | main count | PR count | difference |
| --- | --- | --- | --- |
| Total | 53,125 | 53,125 | 0 |
| Passed | 51,049 | 51,049 | 0 |
| Ignored | 1,482 | 1,482 | 0 |
| Failed | 594 | 594 | 0 |
| Panics | 0 | 0 | 0 |
| Conformance | 96.09% | 96.09% | 0.00% |

Tested main commit: df1f0ff7f1f05506b56b66295eccc663efedc66d
Tested PR commit: 61ed8cd59199c6963babc2d1810e818d7672725c
Compare commits: df1f0ff...61ed8cd

@jedel1043 jedel1043 removed the C-Builtins PRs and Issues related to builtins/intrinsics label Mar 20, 2026
@codecov

codecov bot commented Mar 20, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 59.71%. Comparing base (6ddc2b4) to head (61ed8cd).
⚠️ Report is 950 commits behind head on main.

Additional details and impacted files
```
@@             Coverage Diff             @@
##             main    #5190       +/-   ##
===========================================
+ Coverage   47.24%   59.71%   +12.46%
===========================================
  Files         476      590      +114
  Lines       46892    63694    +16802
===========================================
+ Hits        22154    38033    +15879
- Misses      24738    25661      +923
```


```rust
.unwrap_or_else(|| panic!("'main' is not a function in script: {}", path.display()))
.clone();
group.bench_function("Execution", move |b| {
    let prepared = prepared.get_or_insert_with(|| prepare_script_bench(&path));
```
Member


I don't think we should put the initialization code inside the benchmark, it'll just pollute the results.

Contributor


Was about to say. The reason we use the main function is to benchmark specific bits of VM, not the initialization and parsing (and optimization, etc).

Contributor Author


Oh, I was mainly trying to avoid the eager init issue, but putting it inside the benchmark isn't the right tradeoff here... thinking of instead filtering before registration so only matching scripts get initialized, and keeping the setup outside bench_function like before

does that sound like the right direction?

Member


Yeah that's a better approach

@iammdzaidalam iammdzaidalam force-pushed the fix/5169-lazy-bench-init branch from eb961ba to 61ed8cd Compare April 10, 2026 18:03
@github-actions github-actions bot added C-Dependencies Pull requests that update a dependency file Waiting On Review Waiting on reviews from the maintainers labels Apr 10, 2026
@iammdzaidalam
Contributor Author

Hey @jedel1043, please take a look when you get a chance.


Labels

  • C-Benchmark: Issues and PRs related to the benchmark subsystem.
  • C-Dependencies: Pull requests that update a dependency file
  • Waiting On Review: Waiting on reviews from the maintainers

Projects

None yet

Development

Successfully merging this pull request may close these issues.

boa_benches: filtered script benchmark runs still eagerly initialize unrelated scripts

3 participants