The Most Transparent WebAssembly Runtime Benchmark Suite
This repository contains the official benchmark suite for wasmruntime.com. All tests are:
- Transparent: Open-source methodology and test cases
- Reproducible: Docker-based environment, anyone can run
- Automated: Weekly updates via GitHub Actions
- Authoritative: Industry-standard test workloads
See the full interactive dashboard at wasmruntime.com/benchmarks
| Runtime | Cold Start | Fibonacci(40) | Memory Ops | Peak Memory |
|---|---|---|---|---|
| Wasmtime | 5.2 ms | 1,823 ms | 45 ms | 12 MB |
| Wasmer | 8.1 ms | 1,756 ms | 52 ms | 18 MB |
| WasmEdge | 6.8 ms | 1,912 ms | 48 ms | 15 MB |
| wazero | 12.3 ms | 2,104 ms | 61 ms | 8 MB |
| Wasm3 | 0.8 ms | 8,450 ms | 210 ms | 4 MB |
⚠️ These are placeholder values. Run `./scripts/run_all.sh` to generate real data.
| Runtime | Type | Language | Why Included |
|---|---|---|---|
| Wasmtime | JIT | Rust | Industry standard, Bytecode Alliance |
| Wasmer | JIT/AOT | Rust | Universal runtime, WASIX support |
| WasmEdge | JIT/AOT | C++ | Cloud-native focus, CNCF project |
| wazero | Compiler (AOT)/Interpreter | Go | Pure Go, zero dependencies |
| Wasm3 | Interpreter | C | Ultra-fast cold start, IoT |
| Metric | Description | Unit | Why It Matters |
|---|---|---|---|
| Cold Start | Module load + instantiation | ms | Critical for Serverless/FaaS |
| Fibonacci(40) | CPU-intensive recursion | ms | Measures JIT/AOT peak performance |
| Memory Ops | 1MB read/write cycles | ms | Memory-bound workload simulation |
| Peak Memory | Maximum RSS during execution | MB | Resource footprint |
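To make the cold-start metric concrete, here is a minimal sketch of timing module load + instantiation by invoking a runtime CLI on the no-op workload. It assumes a `wasmtime`-style CLI that accepts a `.wasm` path and times the whole process launch (so it includes process-startup overhead); the actual harness in `scripts/run_benchmark.py` may instrument this more precisely.

```python
# Illustrative cold-start timing: launch a runtime CLI on noop.wasm and time it.
# Assumes the runtime CLI accepts a .wasm path; includes process-startup overhead.
import statistics
import subprocess
import time

def cold_start_ms(runtime_cmd, wasm_path="workloads/noop.wasm", iterations=10):
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        subprocess.run([*runtime_cmd, wasm_path], check=True, capture_output=True)
        samples.append((time.perf_counter() - start) * 1000)  # seconds -> ms
    return statistics.median(samples)

print(cold_start_ms(["wasmtime"]))
```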
| Workload | Type | Description |
|---|---|---|
| `noop.wasm` | Minimal | Empty function, pure cold start |
| `fibonacci.wasm` | CPU-bound | Recursive Fibonacci(40) |
| `memory_ops.wasm` | Memory-bound | 1MB buffer read/write |
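For reference, the CPU-bound workload's algorithm is the naive doubly recursive Fibonacci. The shipped workload is Rust compiled to Wasm; the Python version below only illustrates the semantics (assuming the usual fib(0)=0, fib(1)=1 convention).

```python
# Reference semantics of fibonacci.wasm's algorithm (illustration only;
# the actual workload is Rust compiled to wasm32-wasi).
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# fib(40) == 102334155; the naive recursion performs hundreds of millions of
# calls, which is what makes it a useful CPU/JIT stress test.
```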
- Docker (recommended), OR
- Rust toolchain (`rustup`, `cargo`, `wasm32-wasi` target)
- Python 3.8+ with `psutil` (`pip install -r requirements.txt`)
- At least one Wasm runtime installed
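If you skip Docker, a quick way to see which runtimes are already on your PATH is shown below. The binary names are assumptions based on each project's usual CLI name, not something defined by this repository.

```python
# Quick check (illustrative): which runtime CLIs are installed and on PATH?
# Binary names are assumptions based on each project's usual CLI name.
import shutil

for runtime in ["wasmtime", "wasmer", "wasmedge", "wazero", "wasm3"]:
    path = shutil.which(runtime)
    print(f"{runtime:10s} -> {path or 'not found'}")
```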
Run with Docker (recommended):

```bash
# Clone the repository
git clone https://github.com/aspect-build/wasm-benchmarks.git
cd wasm-benchmarks

# Initialize wasm-score submodule (optional, for MVP)
git submodule update --init --recursive

# Build and run all benchmarks
docker build -t wasm-benchmarks .
docker run --rm wasm-benchmarks

# Or run MVP script (custom-suite + wasm-score)
docker run --rm wasm-benchmarks bash scripts/run_all_mvp.sh
```

Option 1: Run the custom suite locally:

```bash
# 1. Install runtimes
./runtimes/install.sh
# 2. Build test workloads
cd workloads && ./build.sh && cd ..
# 3. Install Python dependencies
pip install -r requirements.txt
# 4. Run benchmarks with comparison (add --memory for peak memory metrics)
python3 scripts/run_benchmark.py --compare --memory
# 5. View results
cat results/latest.json
cat results/comparison.json
```

Option 2: Run the MVP suite (custom suite + wasm-score) locally:

```bash
# 1. Initialize wasm-score submodule
git submodule update --init --recursive
# 2. Install runtimes and build workloads (see Option 1)
# 3. Run MVP script (add --memory flag to enable memory measurement)
bash scripts/run_all_mvp.sh --memory
# 4. View aggregated results
cat results/aggregated.json
cat results/comparison.json
```

Individual components:

```bash
# Run custom-suite only (add --memory for peak memory metrics)
python3 scripts/run_benchmark.py --runtimes wasmtime,wasmer --compare --memory
# Run wasm-score only (requires submodule)
python3 scripts/run_wasmscore.py --suites core-wasmscore
# Aggregate results
python3 scripts/aggregate.py \
--custom results/latest.json \
--wasmscore results/wasmscore_results.json \
--output results/aggregated.json \
--comparison results/comparison.json
```
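For intuition, aggregation amounts to merging the custom-suite results and the wasm-score results into a single document. The sketch below is illustrative only; the real schema and behavior are defined in `scripts/aggregate.py`, so treat the keys as placeholders.

```python
# Illustrative only: merge two result files into one aggregated JSON document.
# The real aggregate.py defines its own schema; the keys below are placeholders.
import json
import pathlib

def aggregate(custom_path, wasmscore_path, output_path):
    custom = json.loads(pathlib.Path(custom_path).read_text())
    wasmscore = json.loads(pathlib.Path(wasmscore_path).read_text())
    aggregated = {"custom_suite": custom, "wasm_score": wasmscore}
    pathlib.Path(output_path).write_text(json.dumps(aggregated, indent=2))

aggregate("results/latest.json", "results/wasmscore_results.json",
          "results/aggregated.json")
```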
The benchmark suite supports measuring peak memory usage (RSS) for each runtime. This feature is enabled by default in CI/CD but optional for local runs (as it makes benchmarks slower).

- Enable memory measurement: add the `--memory` flag to `run_benchmark.py`
- Memory metrics included: `peak_memory_mb`, `peak_memory_min_mb`, `peak_memory_max_mb`
- Note: memory measurement uses `psutil` and adds ~10-20% overhead to benchmark execution time
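As a rough illustration of how peak RSS can be sampled for a child process with `psutil`, see the sketch below. It is not the code in `run_benchmark.py`, and the `wasmtime` invocation is only an example; the polling loop is also why memory measurement adds overhead.

```python
# Hedged sketch: poll a benchmark subprocess's RSS and keep the peak value.
# Not the actual run_benchmark.py implementation.
import subprocess
import time
import psutil

def peak_rss_mb(cmd, poll_interval=0.01):
    child = subprocess.Popen(cmd)
    proc = psutil.Process(child.pid)
    peak = 0
    while child.poll() is None:
        try:
            peak = max(peak, proc.memory_info().rss)
        except psutil.NoSuchProcess:
            break  # process exited between poll() and memory_info()
        time.sleep(poll_interval)
    return peak / (1024 * 1024)

print(peak_rss_mb(["wasmtime", "workloads/fibonacci.wasm"]))
```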
Quick Test:
```bash
# Run quick validation test
bash scripts/quick_test.sh
```

Full Test Steps: See TEST_STEPS.md for comprehensive testing instructions including:
- Environment setup
- Quick tests
- Full test workflows
- Result verification
- Troubleshooting guide
See docs/METHODOLOGY.md for:
- Detailed test environment specifications
- Warm-up and iteration parameters
- Statistical analysis approach
- Version pinning strategy
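For intuition, the kind of warm-up-and-iterate loop such a methodology typically implies is sketched below; the actual warm-up counts, iteration counts, and statistics are defined in docs/METHODOLOGY.md, not here.

```python
# Illustrative measurement loop: discard warm-up runs, then summarize samples.
# Actual warm-up/iteration counts and statistics come from the methodology doc.
import statistics

def summarize(run_once_ms, warmup=3, iterations=10):
    for _ in range(warmup):
        run_once_ms()  # warm caches/JIT; result discarded
    samples = [run_once_ms() for _ in range(iterations)]
    return {
        "median_ms": statistics.median(samples),
        "mean_ms": statistics.fmean(samples),
        "stdev_ms": statistics.stdev(samples),
        "min_ms": min(samples),
        "max_ms": max(samples),
    }
```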
```
wasm-benchmarks/
├── README.md                        # This file
├── docs/                            # Documentation
│   ├── METHODOLOGY.md               # Test methodology
│   ├── MVP_QUICK_START.md           # Quick start guide
│   ├── BENCHMARK_SCOPE_ANALYSIS.md  # Scope analysis
│   ├── STRATEGIC_ANALYSIS.md        # Strategic analysis
│   └── ...                          # Other documentation
├── LICENSE                          # MIT License
├── Dockerfile                       # Reproducible test environment
├── .github/
│   └── workflows/
│       └── benchmark.yml            # Weekly automated runs
├── runtimes/
│   ├── install.sh                   # Install all runtimes
│   └── versions.json                # Pinned versions
├── workloads/
│   ├── src/                         # Rust source files
│   │   ├── fibonacci.rs
│   │   ├── memory_ops.rs
│   │   └── noop.rs
│   ├── build.sh                     # Compile to .wasm
│   └── *.wasm                       # Compiled workloads
├── scripts/
│   ├── run_benchmark.py             # Custom-suite benchmark script
│   ├── run_wasmscore.py             # Wasm-score integration script
│   ├── aggregate.py                 # Data aggregation script
│   ├── run_all_mvp.sh               # MVP runner (custom + wasm-score)
│   ├── run_all.sh                   # Convenience wrapper
│   └── generate_report.py           # JSON → Markdown (future)
├── wasm-score/                      # Git submodule (wasm-score benchmarks)
│   └── benchmarks/                  # Official test suites
├── requirements.txt                 # Python dependencies
└── results/
    ├── latest.json                  # Most recent results
    └── history/                     # Historical data
```
We welcome contributions! Please:
- Fork this repository
- Add new workloads to `workloads/src/`
- Add new runtimes to `runtimes/install.sh`
- Submit a Pull Request
MIT License - see LICENSE
- wasmruntime.com - Interactive runtime comparison
- WebAssembly Specification
- WASI Specification
Maintained by WasmX | Data powers wasmruntime.com/benchmarks