# 🚀 Wasm Runtime Benchmarks

**The Most Transparent WebAssembly Runtime Benchmark Suite**


This repository contains the official benchmark suite for wasmruntime.com. All tests are:

- **Transparent**: Open-source methodology and test cases
- **Reproducible**: Docker-based environment that anyone can run
- **Automated**: Weekly updates via GitHub Actions
- **Authoritative**: Industry-standard test workloads

## 📊 Latest Results

See the full interactive dashboard at wasmruntime.com/benchmarks

| Runtime  | Cold Start | Fibonacci(40) | Memory Ops | Peak Memory |
|----------|-----------:|--------------:|-----------:|------------:|
| Wasmtime | 5.2 ms     | 1,823 ms      | 45 ms      | 12 MB       |
| Wasmer   | 8.1 ms     | 1,756 ms      | 52 ms      | 18 MB       |
| WasmEdge | 6.8 ms     | 1,912 ms      | 48 ms      | 15 MB       |
| wazero   | 12.3 ms    | 2,104 ms      | 61 ms      | 8 MB        |
| Wasm3    | 0.8 ms     | 8,450 ms      | 210 ms     | 4 MB        |

> ⚠️ These are placeholder values. Run `./scripts/run_all.sh` to generate real data.

## 🎯 What We Test

### Runtimes (5 Core)

| Runtime  | Type        | Language | Why Included                         |
|----------|-------------|----------|--------------------------------------|
| Wasmtime | JIT         | Rust     | Industry standard, Bytecode Alliance |
| Wasmer   | JIT/AOT     | Rust     | Universal runtime, WASIX support     |
| WasmEdge | JIT/AOT     | C++      | Cloud-native focus, CNCF project     |
| wazero   | JIT         | Go       | Pure Go, zero dependencies           |
| Wasm3    | Interpreter | C        | Ultra-fast cold start, IoT           |

### Metrics (4 Core)

| Metric        | Description                  | Unit | Why It Matters                    |
|---------------|------------------------------|------|-----------------------------------|
| Cold Start    | Module load + instantiation  | ms   | Critical for serverless/FaaS      |
| Fibonacci(40) | CPU-intensive recursion      | ms   | Measures JIT/AOT peak performance |
| Memory Ops    | 1 MB read/write cycles       | ms   | I/O-bound workload simulation     |
| Peak Memory   | Maximum RSS during execution | MB   | Resource footprint                |
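
For a sense of what the cold-start number captures, here is a minimal sketch. It is not the repository's `run_benchmark.py` implementation; it assumes the `wasmtime` CLI is installed and `workloads/noop.wasm` has been built, and it includes process-spawn overhead that the real harness may factor out.

```python
import subprocess
import time

def cold_start_ms(runtime_cmd, module_path):
    """Time a single load + instantiate + execute of a Wasm module (illustrative only)."""
    start = time.perf_counter()
    subprocess.run([*runtime_cmd, module_path], check=True, capture_output=True)
    return (time.perf_counter() - start) * 1000

# Example: wasmtime on the empty workload (assumes both exist on this machine).
print(f"wasmtime cold start: {cold_start_ms(['wasmtime', 'run'], 'workloads/noop.wasm'):.2f} ms")
```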

### Workloads (3 Core)

| Workload        | Type         | Description                     |
|-----------------|--------------|---------------------------------|
| noop.wasm       | Minimal      | Empty function, pure cold start |
| fibonacci.wasm  | CPU-bound    | Recursive Fibonacci(40)         |
| memory_ops.wasm | Memory-bound | 1 MB buffer read/write          |
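
For reference, `fibonacci.wasm` computes Fibonacci(40) recursively. Assuming the textbook naive recursion, a Python equivalent of the algorithm (the actual workload is compiled from `workloads/src/fibonacci.rs`) looks like:

```python
def fib(n: int) -> int:
    """Naive recursive Fibonacci; fib(40) makes on the order of 10^8 calls."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```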

## 🚀 Quick Start

### Prerequisites

- Docker (recommended), OR
- Rust toolchain (`rustup`, `cargo`, the `wasm32-wasi` target)
- Python 3.8+ with `psutil` (`pip install -r requirements.txt`)
- At least one Wasm runtime installed

### Run with Docker (Recommended)

```bash
# Clone the repository
git clone https://github.com/wasmruntime-io/wasm-runtime-benchmarks.git
cd wasm-runtime-benchmarks

# Initialize the wasm-score submodule (optional, needed for the MVP run)
git submodule update --init --recursive

# Build and run all benchmarks
docker build -t wasm-benchmarks .
docker run --rm wasm-benchmarks

# Or run the MVP script (custom suite + wasm-score)
docker run --rm wasm-benchmarks bash scripts/run_all_mvp.sh
```

### Run Locally

#### Option 1: Custom Suite Only (Fastest)

```bash
# 1. Install runtimes
./runtimes/install.sh

# 2. Build test workloads
cd workloads && ./build.sh && cd ..

# 3. Install Python dependencies
pip install -r requirements.txt

# 4. Run benchmarks with comparison (add --memory for peak memory metrics)
python3 scripts/run_benchmark.py --compare --memory

# 5. View results
cat results/latest.json
cat results/comparison.json
```
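
If you prefer inspecting the output programmatically rather than with `cat`, a minimal sketch is below; it assumes only that the files are valid JSON and makes no claims about their schema.

```python
import json
from pathlib import Path

# Pretty-print whatever the last benchmark run wrote.
for name in ("results/latest.json", "results/comparison.json"):
    data = json.loads(Path(name).read_text())
    print(f"== {name} ==")
    print(json.dumps(data, indent=2, sort_keys=True))
```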

#### Option 2: Full MVP (Custom Suite + Wasm-Score)

```bash
# 1. Initialize wasm-score submodule
git submodule update --init --recursive

# 2. Install runtimes and build workloads (see Option 1)

# 3. Run MVP script (add --memory flag to enable memory measurement)
bash scripts/run_all_mvp.sh --memory

# 4. View aggregated results
cat results/aggregated.json
cat results/comparison.json
```

#### Option 3: Individual Scripts

```bash
# Run custom-suite only (add --memory for peak memory metrics)
python3 scripts/run_benchmark.py --runtimes wasmtime,wasmer --compare --memory

# Run wasm-score only (requires submodule)
python3 scripts/run_wasmscore.py --suites core-wasmscore

# Aggregate results
python3 scripts/aggregate.py \
    --custom results/latest.json \
    --wasmscore results/wasmscore_results.json \
    --output results/aggregated.json \
    --comparison results/comparison.json
```

### Memory Measurement

The benchmark suite supports measuring peak memory usage (RSS) for each runtime. This feature is enabled by default in CI/CD but optional for local runs (as it makes benchmarks slower).

- **Enable memory measurement**: add the `--memory` flag to `run_benchmark.py`
- **Memory metrics included**: `peak_memory_mb`, `peak_memory_min_mb`, `peak_memory_max_mb`
- **Note**: memory measurement uses `psutil` and adds ~10-20% overhead to benchmark execution time; a minimal sketch of the approach follows
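
The sketch below illustrates the general `psutil` technique (polling the child process's resident set size until it exits). It is a simplified stand-in, not the code in `run_benchmark.py`; the polling interval and the runtime command are assumptions.

```python
import subprocess
import time

import psutil  # installed via requirements.txt

def peak_rss_mb(cmd, poll_interval_s=0.01):
    """Launch `cmd` and sample its RSS until it exits; return the peak in MB (illustrative only)."""
    child = subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    proc = psutil.Process(child.pid)
    peak_bytes = 0
    while child.poll() is None:
        try:
            peak_bytes = max(peak_bytes, proc.memory_info().rss)
        except psutil.NoSuchProcess:
            break  # the process exited between the poll() check and the sample
        time.sleep(poll_interval_s)
    return peak_bytes / (1024 * 1024)

# Example: assumes wasmtime is installed and the workloads have been built.
print(f"peak RSS: {peak_rss_mb(['wasmtime', 'run', 'workloads/memory_ops.wasm']):.1f} MB")
```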

## 🧪 Testing

**Quick Test:**

```bash
# Run quick validation test
bash scripts/quick_test.sh
```

**Full Test Steps:** See `TEST_STEPS.md` for comprehensive testing instructions, including:

- Environment setup
- Quick tests
- Full test workflows
- Result verification
- Troubleshooting guide

## 📖 Methodology

See `docs/METHODOLOGY.md` for:

- Detailed test environment specifications
- Warm-up and iteration parameters (sketched below)
- Statistical analysis approach
- Version pinning strategy
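
As a rough illustration of the warm-up-then-measure pattern referenced above (the actual iteration counts and statistics are defined in `docs/METHODOLOGY.md`; the numbers below are placeholders):

```python
import statistics
import time

def measure(fn, warmup=3, iterations=10):
    """Discard a few warm-up runs, then time `fn` repeatedly and summarize (illustrative only)."""
    for _ in range(warmup):
        fn()
    samples_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples_ms.append((time.perf_counter() - start) * 1000)
    return {
        "median_ms": statistics.median(samples_ms),
        "mean_ms": statistics.fmean(samples_ms),
        "stdev_ms": statistics.stdev(samples_ms),
    }
```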

πŸ“ Project Structure

```
wasm-runtime-benchmarks/
├── README.md                    # This file
├── docs/                        # Documentation
│   ├── METHODOLOGY.md           # Test methodology
│   ├── MVP_QUICK_START.md       # Quick start guide
│   ├── BENCHMARK_SCOPE_ANALYSIS.md # Scope analysis
│   ├── STRATEGIC_ANALYSIS.md    # Strategic analysis
│   └── ...                      # Other documentation
├── LICENSE                      # MIT License
├── Dockerfile                   # Reproducible test environment
├── .github/
│   └── workflows/
│       └── benchmark.yml        # Weekly automated runs
├── runtimes/
│   ├── install.sh               # Install all runtimes
│   └── versions.json            # Pinned versions
├── workloads/
│   ├── src/                     # Rust source files
│   │   ├── fibonacci.rs
│   │   ├── memory_ops.rs
│   │   └── noop.rs
│   ├── build.sh                 # Compile to .wasm
│   └── *.wasm                   # Compiled workloads
├── scripts/
│   ├── run_benchmark.py         # Custom-suite benchmark script
│   ├── run_wasmscore.py         # Wasm-score integration script
│   ├── aggregate.py             # Data aggregation script
│   ├── run_all_mvp.sh           # MVP runner (custom + wasm-score)
│   ├── run_all.sh               # Convenience wrapper
│   └── generate_report.py       # JSON → Markdown (future)
├── wasm-score/                  # Git submodule (wasm-score benchmarks)
│   └── benchmarks/              # Official test suites
├── requirements.txt             # Python dependencies
└── results/
    ├── latest.json              # Most recent results
    └── history/                 # Historical data
```

## 🤝 Contributing

We welcome contributions! Please:

1. Fork this repository
2. Add new workloads to `workloads/src/`
3. Add new runtimes to `runtimes/install.sh`
4. Submit a Pull Request

## 📜 License

MIT License - see LICENSE

## 🔗 Related


Maintained by WasmX | Data powers wasmruntime.com/benchmarks
