7 changes: 4 additions & 3 deletions .github/workflows/ci.yaml
@@ -37,7 +37,6 @@ jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
-   needs: lint
    strategy:
      fail-fast: false
      matrix:
@@ -58,7 +57,7 @@ jobs:
        cache: 'pip'

      - name: Install test dependencies
-       run: python -m pip install -e ".[dev]"
+       run: python -m pip install -e . -r requirements-dev.txt

      - name: Audit dependencies
        run: python -m pip_audit
@@ -78,7 +77,9 @@ jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
-   needs: test
+   needs:
+     - lint
+     - test
    steps:
      - name: Checkout code
        uses: actions/checkout@v6
4 changes: 2 additions & 2 deletions .github/workflows/pages.yaml
@@ -30,10 +30,10 @@ jobs:
        with:
          python-version: '3.11'

-     - name: Create venv and install docs dependencies
+     - name: Install docs dependencies
        run: |
          python -m venv .venv
-         .venv/bin/pip install -e ".[docs]"
+         .venv/bin/pip install zensical

      - name: Build site
        run: .venv/bin/zensical build
12 changes: 10 additions & 2 deletions .pre-commit-config.yaml
@@ -15,18 +15,26 @@ repos:
          - --fix=lf
      - id: check-yaml
      - id: check-toml
+     - id: detect-private-key
+     - id: requirements-txt-fixer
+       files: ^requirements-dev\.txt$

  - repo: https://github.com/adrienverge/yamllint
    rev: v1.38.0
    hooks:
      - id: yamllint

  - repo: https://github.com/astral-sh/ruff-pre-commit
-   rev: v0.15.5
+   rev: v0.15.8
    hooks:
      - id: ruff-format
      - id: ruff-check

+ - repo: https://github.com/codespell-project/codespell
+   rev: v2.4.2
+   hooks:
+     - id: codespell
+
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.19.1
    hooks:
@@ -49,7 +57,7 @@ repos:
        additional_dependencies: ["bandit[toml]"]

  - repo: https://github.com/semgrep/pre-commit
-   rev: v1.154.0
+   rev: v1.156.0
    hooks:
      - id: semgrep
        args: ["--config", "p/python", "--error"]
4 changes: 2 additions & 2 deletions Makefile
@@ -74,7 +74,7 @@ init: ## Set up dev env. Prompts for Python, or: make init PYTHON=python3.10
	fi; \
	if [ ! -d "$(VENV_DIR)" ]; then $$py -m venv "$(VENV_DIR)" >/dev/null; fi
	@$(VENV_BIN)/pip install --upgrade pip >/dev/null
-	@$(VENV_BIN)/pip install -e ".[dev,docs]" >/dev/null
+	@$(VENV_BIN)/pip install -e . -r requirements-dev.txt zensical >/dev/null
	@$(VENV_BIN)/pip install pre-commit >/dev/null
	@$(VENV_BIN)/pre-commit install >/dev/null

@@ -118,7 +118,7 @@ test-verbose: check-venv ## Run BDD tests with full scenario/step output (for de
	@$(VENV_BIN)/coverage report --show-missing

# ============================================================================
-# Docs Targets (Zensical; docs deps installed by make init)
+# Docs Targets
# ============================================================================

##@ Docs
26 changes: 21 additions & 5 deletions README.md
@@ -4,7 +4,7 @@
  </a>
</p>

-<p align="center"><strong>TimeRun</strong> — <em>Python package for time measurement.</em></p>
+<p align="center"><strong>TimeRun</strong> — <em>Structured timing for Python.</em></p>

<p align="center">
<a href="https://pypi.org/project/timerun/"><img alt="Version" src="https://img.shields.io/pypi/v/timerun.svg"></a>
@@ -14,9 +14,9 @@
<a href="https://pepy.tech/project/timerun"><img alt="Total Downloads" src="https://static.pepy.tech/badge/timerun"></a>
</p>

-TimeRun is a **single-file** Python package with **no dependencies** beyond the standard library. It records **wall-clock time** and **CPU time** for code blocks or function calls and supports optional **metadata** (e.g. run id, tags) per measurement.
+TimeRun is a **single-file** Python package with **no dependencies** beyond the standard library. It records **wall-clock time** and **CPU time** for **a block** or for **function calls** (one `Measurement` per block or per call), with optional **metadata** (e.g. run id, tags) and **callbacks** (`on_start` / `on_end`) per measurement.

-For the full value proposition and positioning, see [Why TimeRun](https://hh-mwb.github.io/timerun/about/) on the docs site.
+For positioning and the full value proposition, see [Overview](https://hh-mwb.github.io/timerun/overview/) on the docs site.

## Installation

@@ -72,9 +72,11 @@ datetime.timedelta(microseconds=8)

*Note: Argument `maxlen` caps how many measurements are kept (e.g. `@Timer(maxlen=10)`). By default the deque is unbounded.*
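
A quick sketch of the bound in action, assuming the deque behavior described above:

```python
>>> from timerun import Timer
>>> @Timer(maxlen=2)
... def func():
...     return
...
>>> for _ in range(5):
...     func()
...
>>> len(func.measurements)
2
```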

-### Callbacks on Start and End
+### Callbacks

-Optional `on_start` and `on_end` callbacks run once per measurement. Both receive the measurement instance (`on_start` before timings are set, `on_end` after). Typical uses are logging, forwarding to OpenTelemetry, or enqueueing to a metrics pipeline.
+Optional `on_start` and `on_end` callbacks run once per measurement. Both receive the `Measurement` instance — `on_start` before timings are set, `on_end` after.

Print elapsed time when a block finishes:

```python
>>> from timerun import Timer
@@ -84,6 +86,20 @@
0:00:00.000008
```

Attach a trace id before each call starts:

```python
>>> from uuid import uuid4
>>> from timerun import Timer
>>> @Timer(on_start=lambda m: m.metadata.update(trace_id=uuid4().hex))
... def func():
...     return
...
>>> func()
>>> func.measurements[-1].metadata
{'trace_id': '8aa2c000c98843738a2f0d5d3600d052'}
```

## Contributing

Contributions are welcome. See [CONTRIBUTING.md](https://github.com/HH-MWB/timerun/blob/main/CONTRIBUTING.md) for setup, testing, and pull request guidelines.
docs/cookbook/analyze-results.md
@@ -1,3 +1,7 @@
---
title: Analyze results
---

# Analyze results

**Problem:** You have many measurements (e.g. from repeated runs or a decorator's `measurements` deque) and want to summarize or compare — mean, variance, confidence intervals.
@@ -27,7 +31,7 @@ measurements = list(my_func.measurements)

## What to extract

-Each measurement has **wall time** and **CPU time**; use the one that matches your question (e.g. wall for latency, CPU for compute-bound work). Use `wall_time.duration` (nanoseconds, int) or `wall_time.timedelta` for float seconds. You can also use **metadata** to group or filter before computing stats (e.g. by `run_id`, `stage`) so you get per-group summaries.
+Each measurement has **wall time** and **CPU time**; use the one that matches your question (e.g. wall for latency, CPU for compute-bound work). Use `wall_time.duration` (nanoseconds, int) or `wall_time.timedelta.total_seconds()` for float seconds. You can also use **metadata** to group or filter before computing stats (e.g. by `run_id`, `stage`) so you get per-group summaries.

```python
durations_ns = [m.wall_time.duration for m in measurements]
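
# Grouping by a metadata key before summarizing, a sketch assuming each run
# tagged its metadata (e.g. m.metadata["stage"]; see the metadata recipe):
from collections import defaultdict

by_stage = defaultdict(list)
for m in measurements:
    by_stage[m.metadata.get("stage", "unknown")].append(m.wall_time.duration)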
@@ -107,6 +111,4 @@ plt.show()

This plots the mean as a point with an error bar spanning the confidence interval. For more on confidence intervals and benchmarking, see your preferred stats or benchmarking reference.

-**Back to:** [Recipes](index.md)
-
-**See also:** For the `measurements` deque and `maxlen`, see [Measure functions](../guide/measure-functions.md). For collecting in `on_end`, see [Callbacks](../guide/callbacks.md).
+**See also:** [Measure function calls](../guide/measure-functions.md) for the `measurements` deque and `maxlen`. [Callbacks](../guide/callbacks.md) for collecting measurements in `on_end`.
14 changes: 14 additions & 0 deletions docs/cookbook/index.md
@@ -0,0 +1,14 @@
---
title: Cookbook
---

# Cookbook

Real-world patterns for using TimeRun: use metadata effectively, share results with your stack, time web traffic, and analyze timing data.

You already know the API from the [Guide](../guide/index.md): timer overview, measure a block, measure function calls, metadata, and callbacks. Here we show how to apply it to concrete problems.

1. **[Use metadata effectively](metadata.md)** — Add context (e.g. request id, stage) to every measurement by mutating metadata in `on_start`.
2. **[Share results](share-results.md)** — Send measurements to logs, files, OpenTelemetry, or Prometheus using `on_end`.
3. **[Time web requests](web-framework.md)** — Wrap HTTP requests with `Timer` in FastAPI, Flask, or Django.
4. **[Analyze results](analyze-results.md)** — Collect measurements and compute summaries or confidence intervals with standard tools.
78 changes: 78 additions & 0 deletions docs/cookbook/metadata.md
@@ -0,0 +1,78 @@
---
title: Use metadata effectively
---

# Use metadata effectively

**Problem:** You want context on every measurement (e.g. request id, stage, experiment id) without repeating it in every `Timer()` call.

**Idea:** Metadata is attached to each measurement. You can **mutate `measurement.metadata` in `on_start`** (or inside the block) to add or change keys for that run.

## Why this works

Each measurement gets its own deep copy of the metadata dict, so mutations in `on_start` or the block affect only that run. See [Guide: Metadata](../guide/metadata.md) for copy and isolation rules.
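
A minimal sketch of that isolation, assuming the per-run copy described in the guide:

```python
from timerun import Timer

base = {"stage": "ingest"}

with Timer(metadata=base) as m1:
    m1.metadata["tag"] = "slow_path"  # mutates this run's copy only

with Timer(metadata=base) as m2:
    pass

print(base)         # {'stage': 'ingest'} (the dict you passed is unchanged)
print(m2.metadata)  # {'stage': 'ingest'} (no 'tag' leaked from the first run)
```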

## Example: add run context in `on_start`

Omit `metadata` or pass `None`; either way the measurement starts with an empty dict. You can also pass a dict of defaults. Fill it per run in `on_start` from context vars or thread-local storage:

```python
from contextvars import ContextVar
from timerun import Timer

request_id_ctx: ContextVar[str] = ContextVar("request_id", default="")

def add_request_id(m):
    m.metadata["request_id"] = request_id_ctx.get()

request_id_ctx.set("req-abc")  # in a web app, middleware would set this per request

with Timer(on_start=add_request_id) as m:
    pass  # your code

# m.metadata now includes "request_id" for this run
print(m.metadata)  # {"request_id": "req-abc"}
```

## Example: set tags inside the block

When context is fixed at the start of the run (request id, stage), **`on_start` is often clearer** than mutating inside the block. Mutating `m.metadata` in the block is still valid when values depend on **work you do inside the timed region** (outcome, branch taken, or a value known only after some steps):

```python
with Timer(metadata={"stage": "ingest"}) as m:
do_work()
if some_condition:
m.metadata["tag"] = "slow_path"
# m.metadata is {"stage": "ingest", "tag": "slow_path"} when relevant
```

## Example: invocation count with a closure

Use a factory that returns an `on_start` callback with its own counter so each measurement gets a monotonic call number (e.g. for a decorated hot path):

```python
from timerun import Timer


def make_invocation_callback():
    count = 0  # (1)!

    def set_invocation(m):
        nonlocal count  # (2)!
        count += 1
        m.metadata["invocation"] = count

    return set_invocation  # (3)!


on_start = make_invocation_callback()
for _ in range(3):
    with Timer(on_start=on_start) as m:
        pass  # your code

# After each block, invocation is 1, 2, 3, ...; the last m.metadata["invocation"] is 3
```

1. Counter lives in the closure; each factory call gets its own independent sequence.
2. `nonlocal` updates the enclosing `count` so every invocation of `set_invocation` sees the same running total.
3. Return the inner function so `Timer` receives a stable `on_start` callback with shared state.
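
The same factory works for a decorated function, giving each call the next invocation number (a sketch using the decorator API shown in the guide):

```python
@Timer(on_start=make_invocation_callback())
def hot_path():
    pass  # your code

for _ in range(3):
    hot_path()

# hot_path.measurements[-1].metadata["invocation"] == 3
```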

**See also:** [Guide: Metadata](../guide/metadata.md) for passing `metadata={...}` and copy rules.
docs/cookbook/share-results.md
@@ -1,12 +1,16 @@
---
title: Share results
---

# Share results

-**Problem:** You need to get measurements out of the process — to a log, a file, OpenTelemetry, or a metrics backend.
+**Problem:** You need to get measurements out of the process — to a log, a file, OpenTelemetry, Prometheus, or another metrics backend.

**Idea:** Use **`on_end`** (and optionally `on_start`) to push each measurement out when the run finishes. The callback receives the `Measurement` with `wall_time`, `cpu_time`, and `metadata` set.

## Log

-```python
+```python hl_lines="16"
import logging
from timerun import Timer

@@ -61,23 +65,50 @@ from timerun import Timer
# tracer = get_tracer(__name__)

def on_start(m):
-    m.metadata["span"] = tracer.start_span("timerun")
+    m.metadata["span"] = tracer.start_span("timerun")  # (1)!

def on_end(m):
-    span = m.metadata.get("span")
+    span = m.metadata.get("span")  # (2)!
    if span is None:
        return  # If on_start didn't set a span, skip.
    span.set_attribute("wall_time_ns", m.wall_time.duration)
    span.set_attribute("cpu_time_ns", m.cpu_time.duration)
    for k, v in m.metadata.items():
        if k != "span" and v is not None:
            span.set_attribute(k, str(v))
-    span.end()
+    span.end()  # (3)!

with Timer(on_start=on_start, on_end=on_end):
    do_work()
```

-**Next:** [Analyze results](analyze-results.md)
1. Start the span before the timed work runs so nested operations can attach to the same trace context if your tracer supports it.
2. Retrieve the span object you stashed on the `Measurement`; guard in case `on_start` failed or was skipped.
3. End the span after attributes are set so duration and metadata are recorded on the same span.

## Prometheus

Use the [Prometheus Python client](https://github.com/prometheus/client_python) (`pip install prometheus-client`). Register a histogram (or summary) and observe wall-clock seconds in `on_end`:

```python
from prometheus_client import Histogram
from timerun import Timer

OPERATION_SECONDS = Histogram(
    "timerun_operation_seconds",
    "Wall time for timed operations (seconds)",
    buckets=(0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, float("inf")),
)


def observe_wall_time(m):
    OPERATION_SECONDS.observe(m.wall_time.timedelta.total_seconds())


with Timer(on_end=observe_wall_time):
    do_work()
```

Expose metrics from your process with `start_http_server` or your framework's integration so Prometheus can scrape them.
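
For a standalone process, a minimal sketch (the port is an arbitrary choice):

```python
from prometheus_client import start_http_server

start_http_server(8000)  # serves /metrics on :8000 until the process exits
```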

-For callback basics, see [Reference: Callbacks](../guide/callbacks.md). For the OpenTelemetry API, see the [OpenTelemetry Python docs](https://opentelemetry.io/docs/languages/python/).
+**See also:** [Guide: Callbacks](../guide/callbacks.md) for when callbacks run. For the OpenTelemetry API, see the [OpenTelemetry Python docs](https://opentelemetry.io/docs/languages/python/).