feat: add benchmark page to example app #162
AlshehriAli0 wants to merge 2 commits into patrickkabwe:main
Conversation
Add a benchmark tab to the example app that runs 19 file system operations across NitroFS, Expo FileSystem, and @dr.pogodin/react-native-fs. Tests include: read/write at various sizes, exists, stat, copy, rename, readdir, base64, parallel writes/reads, mixed concurrent ops, rapid sequential writes, and synchronous path operations. Results displayed in a table with 50-iteration averages, speedup ratios, and green highlighting for the fastest library per test.
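The speedup ratios and fastest-library highlighting described above could be computed roughly as below. This is an illustrative sketch, not code from the PR; names such as `summarize` and `LibAvg` are hypothetical:

```typescript
// Hypothetical sketch of the per-test summary: given each library's
// average time over the 50 iterations, find the fastest library and
// express every library's time as a ratio relative to it.
type LibAvg = { lib: 'nitro' | 'expo' | 'rnfs'; avgMs: number };

function summarize(avgs: LibAvg[]): {
  fastest: LibAvg['lib'];
  speedups: Record<string, number>;
} {
  // Fastest library = smallest average time.
  const fastest = avgs.reduce((a, b) => (b.avgMs < a.avgMs ? b : a));
  const speedups: Record<string, number> = {};
  for (const { lib, avgMs } of avgs) {
    // Ratio vs. the fastest (1.0 for the fastest itself).
    speedups[lib] = avgMs / fastest.avgMs;
  }
  return { fastest: fastest.lib, speedups };
}
```

The table's green highlight would then go to `fastest`, and each cell could show its library's `speedups` ratio.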
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
📝 Walkthrough

Adds Expo support to the example app and a new benchmarking feature. The app UI gains a tab bar to switch between explorer and benchmark modes. New benchmark utilities and tests compare NitroFS, Expo FileSystem, and RNFS across many file operations. Build configs and package deps are updated for Expo.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant App as App.tsx
    participant BenchmarkPage
    participant Runner as benchmark-runner
    participant Tests as benchmark-tests
    participant FS as Nitro/Expo/RNFS
    User->>App: open Benchmark tab / tap Run / Run All
    App->>BenchmarkPage: render benchmark UI
    BenchmarkPage->>Runner: request runSingle(test) or runAll()
    Runner->>Tests: obtain test definition (setup, fn, teardown, iterations)
    loop per test
        Runner->>FS: execute setup hook (per implementation)
        loop per library (nitro, expo, rnfs)
            Runner->>Runner: call measure(fn, iterations)
            loop iterations
                Runner->>FS: perform library benchmark operation
                FS-->>Runner: resolve
            end
            Runner-->>BenchmarkPage: return MeasureResult
        end
        Runner->>FS: execute teardown hook
        Runner->>BenchmarkPage: update results, compute speedups, mark fastest
    end
    BenchmarkPage->>User: display results and highlights
```
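The flow in the diagram can be sketched as a plain async loop. This is a simplified illustration under the types the PR description implies (`setup`/`teardown` hooks, per-library functions, `measure(fn, iterations)`); the actual runner in the PR may be structured differently:

```typescript
// Simplified sketch of the runAll flow shown in the diagram.
type MeasureResult = { avg: number; min: number; max: number; times: number[] };

interface BenchmarkTest {
  id: string;
  iterations: number;
  setup?: () => Promise<void>;
  teardown?: () => Promise<void>;
  nitro: () => Promise<void>;
  expo: () => Promise<void>;
  rnfs: () => Promise<void>;
}

async function runAll(
  tests: BenchmarkTest[],
  measure: (fn: () => Promise<void>, n: number) => Promise<MeasureResult>,
): Promise<Map<string, Record<string, MeasureResult>>> {
  const results = new Map<string, Record<string, MeasureResult>>();
  for (const test of tests) {
    const perLib: Record<string, MeasureResult> = {};
    for (const lib of ['nitro', 'expo', 'rnfs'] as const) {
      // Fresh fixture per library; teardown runs even if the benchmark throws.
      if (test.setup) await test.setup();
      try {
        perLib[lib] = await measure(test[lib], test.iterations);
      } finally {
        if (test.teardown) await test.teardown();
      }
    }
    results.set(test.id, perLib);
  }
  return results;
}
```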
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@example/src/components/benchmark-page.tsx`:
- Around line 20-40: runSingle currently calls test.setup/measure/test.teardown
sequentially but if measure throws the teardown is skipped; refactor each
benchmark phase in runSingle (and similarly in runAll) to wrap the trio for each
backend (nitro, expo, rnfs) in try { if (test.setup) await test.setup();
result.X = await measure(test.X, test.iterations); } finally { if
(test.teardown) await test.teardown(); } so teardown always runs even on failure
(refer to function runSingle, runAll, benchmarkTests, measure, and the
test.setup/test.teardown symbols).
In `@example/src/utils/benchmark-runner.ts`:
- Around line 8-22: The measure function lacks validation for the iterations
parameter causing avg/min/max to be invalid when iterations <= 0; update the top
of measure (the measure function) to validate iterations (e.g., if
(!Number.isInteger(iterations) || iterations <= 0) throw new Error('iterations
must be a positive integer')) or return an empty MeasureResult as your API
prefers, so you never compute reduce/Math.min/Math.max on an empty times array
and avoid NaN/Infinity.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- example/.gitignore
- example/App.tsx
- example/babel.config.js
- example/metro.config.js
- example/package.json
- example/src/components/benchmark-page.tsx
- example/src/utils/benchmark-runner.ts
- example/src/utils/benchmark-tests.ts
```ts
const runSingle = useCallback(async (testId: string) => {
  const test = benchmarkTests.find(t => t.id === testId);
  if (!test) return;

  setRunningId(testId);

  const result: BenchmarkResult = { nitro: null, expo: null, rnfs: null };

  try {
    if (test.setup) await test.setup();
    result.nitro = await measure(test.nitro, test.iterations);
    if (test.teardown) await test.teardown();

    if (test.setup) await test.setup();
    result.expo = await measure(test.expo, test.iterations);
    if (test.teardown) await test.teardown();

    if (test.setup) await test.setup();
    result.rnfs = await measure(test.rnfs, test.iterations);
    if (test.teardown) await test.teardown();
  } catch (error) {
```
Ensure teardown always runs on failures.
If a benchmark throws, teardown is skipped and can pollute subsequent runs. Wrap each setup/measure/teardown sequence in try/finally to guarantee cleanup.
🛠️ Suggested pattern (apply to nitro/expo/rnfs in runSingle/runAll)

```diff
-    if (test.setup) await test.setup();
-    result.nitro = await measure(test.nitro, test.iterations);
-    if (test.teardown) await test.teardown();
+    if (test.setup) await test.setup();
+    try {
+      result.nitro = await measure(test.nitro, test.iterations);
+    } finally {
+      if (test.teardown) await test.teardown();
+    }
```

Also applies to: 56-68
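Since the same setup/measure/teardown trio repeats for all three backends, one way to avoid duplicating the try/finally is a small helper. This `withFixture` function is a hypothetical refactor, not part of the PR:

```typescript
// Hypothetical helper: runs optional setup, then the measured body,
// and guarantees teardown runs even when the body throws.
async function withFixture<T>(
  body: () => Promise<T>,
  setup?: () => Promise<void>,
  teardown?: () => Promise<void>,
): Promise<T> {
  if (setup) await setup();
  try {
    return await body();
  } finally {
    if (teardown) await teardown();
  }
}
```

Usage in runSingle might then read: `result.nitro = await withFixture(() => measure(test.nitro, test.iterations), test.setup, test.teardown);`, with the same one-liner for `expo` and `rnfs`.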
```ts
export async function measure(
  fn: () => Promise<void>,
  iterations: number,
): Promise<MeasureResult> {
  const times: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await fn();
    const elapsed = performance.now() - start;
    times.push(elapsed);
  }
  const avg = times.reduce((a, b) => a + b, 0) / times.length;
  const min = Math.min(...times);
  const max = Math.max(...times);
  return { avg, min, max, times };
}
```
Guard against non‑positive iterations.
If iterations is 0 (or negative), avg/min/max become invalid. Add a defensive check to avoid NaN/Infinity.
🛠️ Suggested fix

```diff
 export async function measure(
   fn: () => Promise<void>,
   iterations: number,
 ): Promise<MeasureResult> {
+  if (iterations <= 0) {
+    throw new Error('iterations must be > 0');
+  }
   const times: number[] = [];
   for (let i = 0; i < iterations; i++) {
     const start = performance.now();
     await fn();
     const elapsed = performance.now() - start;
     times.push(elapsed);
   }
```
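The failure mode the review flags is easy to demonstrate: with zero iterations the loop never runs, so the statistics are computed over an empty array.

```typescript
// What measure() would compute with iterations = 0 and no guard:
const times: number[] = [];
const avg = times.reduce((a, b) => a + b, 0) / times.length; // 0 / 0
const min = Math.min(...times); // Math.min() with no args
const max = Math.max(...times); // Math.max() with no args
console.log(Number.isNaN(avg), min, max); // true Infinity -Infinity
```

`Math.min()` with no arguments returns `Infinity` and `Math.max()` returns `-Infinity`, so a zero-iteration run would silently render a nonsensical row in the results table rather than failing loudly.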
Summary
Test plan
Summary by CodeRabbit
New Features
Chores / Configuration