
feat: add benchmark page to example app #162

Open
AlshehriAli0 wants to merge 2 commits into patrickkabwe:main from AlshehriAli0:feat/benchmark-page

Conversation

@AlshehriAli0 commented Feb 23, 2026

Summary

  • Adds a benchmark tab to the example app comparing NitroFS vs Expo FileSystem vs @dr.pogodin/react-native-fs
  • Helps identify performance bottlenecks in NitroFS by benchmarking against industry-standard file system libraries
  • 19 tests covering read/write (1KB–1MB), exists, stat, copy, rename, readdir, base64, parallel ops, and sync path ops
  • Each test runs 50 iterations and displays averages in a table with speedup ratios and green highlighting for the winner
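The speedup ratios and winner highlighting described above can be derived from the per-library averages; a minimal sketch (the helper name and shape are illustrative assumptions, not the PR's actual code):

```typescript
// Hypothetical helper: given average times (ms) per library, compute each
// library's slowdown ratio relative to the fastest and flag the winner.
type Averages = { nitro: number; expo: number; rnfs: number };

function speedups(
  avg: Averages,
): Record<keyof Averages, { ratio: number; fastest: boolean }> {
  const best = Math.min(avg.nitro, avg.expo, avg.rnfs);
  const entry = (ms: number) => ({ ratio: ms / best, fastest: ms === best });
  return { nitro: entry(avg.nitro), expo: entry(avg.expo), rnfs: entry(avg.rnfs) };
}

const s = speedups({ nitro: 2, expo: 6, rnfs: 4 });
console.log(s.nitro.fastest, s.expo.ratio); // nitro wins; expo is 3x slower
```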

Test plan

  • Build and run on a physical iOS device (simulator uses host hardware, skewing results)
  • Switch to Benchmark tab
  • Tap "Run All" and verify all 19 tests complete with timing results
  • Verify fastest library highlighted green per row

Summary by CodeRabbit

  • New Features

    • Tab-based navigation to switch between Explorer and Benchmark views.
    • Benchmark dashboard with a comprehensive suite comparing storage implementations across multiple operations (read, write, copy, delete, concurrent, and more).
  • Chores / Configuration

    • Added Expo-related project configuration and presets plus example dependencies to enable Expo-based example builds.
    • Appended Expo ignore patterns to example .gitignore.

Add a benchmark tab to the example app that runs 19 file system operations
across NitroFS, Expo FileSystem, and @dr.pogodin/react-native-fs.

Tests include: read/write at various sizes, exists, stat, copy, rename,
readdir, base64, parallel writes/reads, mixed concurrent ops, rapid
sequential writes, and synchronous path operations.

Results displayed in a table with 50-iteration averages, speedup ratios,
and green highlighting for the fastest library per test.

coderabbitai bot commented Feb 23, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d89dabe and 1ea4c5b.

📒 Files selected for processing (1)
  • example/package.json

📝 Walkthrough


Adds Expo support to the example app and a new benchmarking feature. App UI gains a tab bar to switch between explorer and benchmark modes. New benchmark utilities and tests compare NitroFS, Expo FileSystem, and RNFS across many file operations. Build configs and package deps updated for Expo.

Changes

Cohort / File(s) — Summary

  • Configuration & Dependencies (example/.gitignore, example/babel.config.js, example/metro.config.js, example/package.json): Append Expo ignore patterns to .gitignore; switch the Babel preset to babel-preset-expo; import getDefaultConfig from expo/metro-config; add expo, expo-file-system, @dr.pogodin/react-native-fs, and babel-preset-expo to the package manifest.
  • App UI (example/App.tsx, example/src/components/benchmark-page.tsx): App.tsx adds tab-based navigation (explorer vs benchmark). The new BenchmarkPage component renders benchmark controls, per-test rows, run/run-all actions, timing results, speedup calculations, and fastest-result highlighting.
  • Benchmark logic & tests (example/src/utils/benchmark-runner.ts, example/src/utils/benchmark-tests.ts): Add a measure() utility and types (MeasureResult, BenchmarkResult, BenchmarkTest) plus a comprehensive benchmarkTests suite covering write/read/metadata/dir/copy/rename/parallel/path operations with per-test setup/teardown across NitroFS, Expo, and RNFS.
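Based on the names in this walkthrough, the runner's types might look roughly like the following sketch (field names beyond those explicitly mentioned are assumptions, not the file's actual contents):

```typescript
// Sketch of the benchmark-runner types named in the walkthrough.
export interface MeasureResult {
  avg: number;     // mean duration in ms across iterations
  min: number;     // fastest single iteration
  max: number;     // slowest single iteration
  times: number[]; // raw per-iteration durations
}

export interface BenchmarkResult {
  nitro: MeasureResult | null;
  expo: MeasureResult | null;
  rnfs: MeasureResult | null;
}

export interface BenchmarkTest {
  id: string;
  name: string;
  iterations: number;
  setup?: () => Promise<void>;    // optional per-test preparation
  teardown?: () => Promise<void>; // optional per-test cleanup
  nitro: () => Promise<void>;     // one benchmark body per library
  expo: () => Promise<void>;
  rnfs: () => Promise<void>;
}
```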

Sequence Diagram

sequenceDiagram
    participant User
    participant App as App.tsx
    participant BenchmarkPage
    participant Runner as benchmark-runner
    participant Tests as benchmark-tests
    participant FS as Nitro/Expo/RNFS

    User->>App: open Benchmark tab / tap Run / Run All
    App->>BenchmarkPage: render benchmark UI
    BenchmarkPage->>Runner: request runSingle(test) or runAll()
    Runner->>Tests: obtain test definition (setup, fn, teardown, iterations)
    loop per test
        Runner->>FS: execute setup hook (per implementation)
        loop per library (nitro, expo, rnfs)
            Runner->>Runner: call measure(fn, iterations)
            loop iterations
                Runner->>FS: perform library benchmark operation
                FS-->>Runner: resolve
            end
            Runner-->>BenchmarkPage: return MeasureResult
        end
        Runner->>FS: execute teardown hook
        Runner->>BenchmarkPage: update results, compute speedups, mark fastest
    end
    BenchmarkPage->>User: display results and highlights

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I hopped into tabs with a curious cheer,
I timed every write, read, and steer.
Nitro, Expo, RNFS in a row—
I counted the hops and watched them go.
Little rabbit clap, the numbers glow.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title clearly and specifically summarizes the main change, adding a benchmark page to the example app, which aligns with the major modifications across the changeset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@example/src/components/benchmark-page.tsx`:
- Around line 20-40: runSingle currently calls test.setup/measure/test.teardown
sequentially but if measure throws the teardown is skipped; refactor each
benchmark phase in runSingle (and similarly in runAll) to wrap the trio for each
backend (nitro, expo, rnfs) in try { if (test.setup) await test.setup();
result.X = await measure(test.X, test.iterations); } finally { if
(test.teardown) await test.teardown(); } so teardown always runs even on failure
(refer to function runSingle, runAll, benchmarkTests, measure, and the
test.setup/test.teardown symbols).

In `@example/src/utils/benchmark-runner.ts`:
- Around line 8-22: The measure function lacks validation for the iterations
parameter causing avg/min/max to be invalid when iterations <= 0; update the top
of measure (the measure function) to validate iterations (e.g., if
(!Number.isInteger(iterations) || iterations <= 0) throw new Error('iterations
must be a positive integer')) or return an empty MeasureResult as your API
prefers, so you never compute reduce/Math.min/Math.max on an empty times array
and avoid NaN/Infinity.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c6caa36 and d89dabe.

📒 Files selected for processing (8)
  • example/.gitignore
  • example/App.tsx
  • example/babel.config.js
  • example/metro.config.js
  • example/package.json
  • example/src/components/benchmark-page.tsx
  • example/src/utils/benchmark-runner.ts
  • example/src/utils/benchmark-tests.ts

Comment on lines +20 to +40
const runSingle = useCallback(async (testId: string) => {
  const test = benchmarkTests.find(t => t.id === testId);
  if (!test) return;

  setRunningId(testId);

  const result: BenchmarkResult = { nitro: null, expo: null, rnfs: null };

  try {
    if (test.setup) await test.setup();
    result.nitro = await measure(test.nitro, test.iterations);
    if (test.teardown) await test.teardown();

    if (test.setup) await test.setup();
    result.expo = await measure(test.expo, test.iterations);
    if (test.teardown) await test.teardown();

    if (test.setup) await test.setup();
    result.rnfs = await measure(test.rnfs, test.iterations);
    if (test.teardown) await test.teardown();
  } catch (error) {

⚠️ Potential issue | 🟠 Major

Ensure teardown always runs on failures.

If a benchmark throws, teardown is skipped and can pollute subsequent runs. Wrap each setup/measure/teardown sequence in try/finally to guarantee cleanup.

🛠️ Suggested pattern (apply to nitro/expo/rnfs in runSingle/runAll)
-      if (test.setup) await test.setup();
-      result.nitro = await measure(test.nitro, test.iterations);
-      if (test.teardown) await test.teardown();
+      if (test.setup) await test.setup();
+      try {
+        result.nitro = await measure(test.nitro, test.iterations);
+      } finally {
+        if (test.teardown) await test.teardown();
+      }

Also applies to: 56-68
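Factored out, the cleanup guarantee the bot asks for could be expressed as a small helper applied per backend; a sketch under the assumption that tests expose optional setup/teardown hooks as in the snippet above:

```typescript
// Sketch: run one backend's benchmark phase with guaranteed cleanup.
// The Phase shape mirrors the optional setup/teardown hooks seen in this PR.
type Phase = { setup?: () => Promise<void>; teardown?: () => Promise<void> };

async function runPhase<T>(test: Phase, body: () => Promise<T>): Promise<T> {
  if (test.setup) await test.setup();
  try {
    return await body(); // may throw; teardown below still runs
  } finally {
    if (test.teardown) await test.teardown();
  }
}
```

With this helper, each backend in runSingle/runAll becomes a single call such as `result.nitro = await runPhase(test, () => measure(test.nitro, test.iterations))`, so a throwing measurement can no longer skip cleanup.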


Comment on lines +8 to +22
export async function measure(
  fn: () => Promise<void>,
  iterations: number,
): Promise<MeasureResult> {
  const times: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await fn();
    const elapsed = performance.now() - start;
    times.push(elapsed);
  }
  const avg = times.reduce((a, b) => a + b, 0) / times.length;
  const min = Math.min(...times);
  const max = Math.max(...times);
  return { avg, min, max, times };

⚠️ Potential issue | 🟡 Minor

Guard against non‑positive iterations.

If iterations is 0 (or negative), avg/min/max become invalid. Add a defensive check to avoid NaN/Infinity.

🛠️ Suggested fix
 export async function measure(
   fn: () => Promise<void>,
   iterations: number,
 ): Promise<MeasureResult> {
+  if (iterations <= 0) {
+    throw new Error('iterations must be > 0');
+  }
   const times: number[] = [];
   for (let i = 0; i < iterations; i++) {
     const start = performance.now();
     await fn();
     const elapsed = performance.now() - start;
     times.push(elapsed);
   }
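Combining the snippet above with the suggested guard, a complete runnable version might read as follows (a sketch; performance.now() is a global in both React Native and recent Node):

```typescript
interface MeasureResult {
  avg: number;
  min: number;
  max: number;
  times: number[];
}

// measure() with the defensive check applied: reject non-positive or
// non-integer iteration counts before timing anything, so the stats
// below never run on an empty array.
export async function measure(
  fn: () => Promise<void>,
  iterations: number,
): Promise<MeasureResult> {
  if (!Number.isInteger(iterations) || iterations <= 0) {
    throw new Error('iterations must be a positive integer');
  }
  const times: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await fn();
    times.push(performance.now() - start);
  }
  const avg = times.reduce((a, b) => a + b, 0) / times.length;
  return { avg, min: Math.min(...times), max: Math.max(...times), times };
}
```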

@AlshehriAli0 (Author) commented Feb 23, 2026

#119 @patrickkabwe
