Project: Lecture Performance Benchmarking & Monitoring
Motivation
As our lecture repositories evolve — with code changes, software upgrades in environment.yml, and new releases (via publish-* tags) — we currently have no systematic way to track how these changes affect build and execution performance over time.
We need a monitoring system that can:
- Track notebook execution times across releases and environment changes
- Correlate performance changes with specific code commits, environment.yml updates, and publish-* releases
- Detect regressions early when software upgrades cause slowdowns
- Provide historical trends for each lecture across all lecture repositories
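To correlate timings with commits and environment changes, each benchmark data point needs enough context attached. A minimal sketch of such a record, assuming we fingerprint environment.yml by content hash (the helper names `env_fingerprint` and `make_record` are illustrative, not an existing API):

```python
import hashlib

def env_fingerprint(env_yml_text: str) -> str:
    """Hypothetical helper: short, stable hash of environment.yml contents,
    so runs can be grouped by the exact environment they executed under."""
    # Normalise line endings so the hash is identical across platforms.
    normalised = env_yml_text.replace("\r\n", "\n").strip()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()[:12]

def make_record(lecture: str, seconds: float, commit: str, env_yml_text: str) -> dict:
    """One benchmark data point: a timing plus the commit SHA and
    environment fingerprint needed to correlate it with changes."""
    return {
        "lecture": lecture,
        "seconds": seconds,
        "commit": commit,
        "env_hash": env_fingerprint(env_yml_text),
    }
```

With records shaped like this, "did the environment.yml bump slow us down?" becomes a simple group-by on `env_hash`.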
Scope
This project will serve all lecture repositories including:
- lecture-python-intro
- lecture-python-programming.myst
- lecture-python-advanced.myst
- lecture-jax
- lecture-stats
- lecture-tools-techniques
- lecture-dynamics
- and others
Proposed Approach
Create a new repository lecture-benchmark that houses:
- A benchmarking tool (possibly integrated with jupyter-book as a Sphinx extension, or a standalone tool) that captures per-notebook execution metrics during builds
- A storage system for benchmark results (git-friendly format like JSON/CSV, or a lightweight database)
- Reporting & visualization — either as part of the jupyter-book build (benchmark pages) or as a standalone dashboard
- GitHub Actions integration — automatically run benchmarks on publish-* releases and environment.yml changes
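One way to capture per-notebook metrics without instrumenting the build itself: when notebooks are executed with timing metadata enabled (e.g. nbclient's `record_timing`, which stamps each code cell's `metadata.execution` with ISO-8601 timestamps), cell durations can be recovered afterwards from the notebook JSON. A sketch, assuming that metadata is present:

```python
import json
from datetime import datetime

def cell_timings(nb_json: str) -> list:
    """Per-cell execution durations (seconds) from an executed notebook,
    read from the timing timestamps recorded in each cell's metadata."""
    nb = json.loads(nb_json)
    timings = []
    for cell in nb.get("cells", []):
        exec_meta = cell.get("metadata", {}).get("execution", {})
        start = exec_meta.get("iopub.execute_input")
        end = exec_meta.get("shell.execute_reply")
        if start and end:
            # Timestamps are ISO-8601; "Z" needs rewriting for fromisoformat.
            t0 = datetime.fromisoformat(start.replace("Z", "+00:00"))
            t1 = datetime.fromisoformat(end.replace("Z", "+00:00"))
            timings.append((t1 - t0).total_seconds())
    return timings
```

Summing the list gives a notebook-level figure; keeping the list gives cell-level granularity for free.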
Key Design Decisions
- Integration vs standalone: Should we build a Sphinx/jupyter-book extension (enabling benchmark result pages in the built lectures) or a separate CLI tool?
- Storage format: Git-friendly flat files (JSON/CSV) vs lightweight DB (SQLite)?
- Metrics to capture: Execution time, memory usage, cell-level timings, output sizes?
- Triggering: On every cache build? On publish only? On environment.yml changes?
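Whatever storage format and trigger we choose, regression detection reduces to comparing the latest run against history. A minimal sketch: flag a slowdown when the new timing exceeds the median of prior runs by a tolerance (the 25% threshold here is an assumed placeholder, to be tuned per lecture):

```python
import statistics

def detect_regression(history: list, latest: float, threshold: float = 1.25) -> bool:
    """Return True if `latest` (seconds) exceeds the median of prior
    runs by more than `threshold`. Median is used over mean to be
    robust to occasional CI outliers."""
    if not history:
        # No baseline yet; nothing to compare against.
        return False
    baseline = statistics.median(history)
    return latest > baseline * threshold
```

This works identically whether `history` comes from flat JSON/CSV files or an SQLite query, so it does not pre-empt the storage-format decision.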
Related Work
The existing benchmarks/ repository contains hardware profiling and JAX-specific benchmarks. This project would be complementary — focused on lecture-level performance tracking over time rather than micro-benchmarks.
Next Steps
- Create the tool-lecture-benchmark repository with a detailed PLAN.md