
docs(test): add explicit parametrize IDs and README testing instructions#37

Merged
ghinks merged 1 commit into main from feat/integration-test-improvements on Mar 11, 2026

Conversation


@ghinks ghinks commented Mar 11, 2026

What

  • Adds explicit ids to the @pytest.mark.parametrize decorator in tests/test_integration.py so each fetch variant has a stable, human-readable node ID (fetch-default, fetch-with-dates, fetch-reset-db, fetch-config) instead of auto-generated ones based on lambda string representations.
  • Expands the README Running Tests section with commands to run unit tests only, integration tests only, and each integration test individually.

Why

Without explicit IDs, pytest generates test IDs like test_fetch_examples_integration[<lambda>-Successfully saved0]. These are fragile — adding or reordering a parametrize variant silently shifts the numeric suffixes and breaks any command that targets a specific test by ID. Named IDs are stable regardless of ordering.

Changes

tests/test_integration.py

  • Added ids=["fetch-default", "fetch-with-dates", "fetch-reset-db", "fetch-config"] to @pytest.mark.parametrize
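The decorator change can be sketched as follows. Only the `ids` list is taken from the PR; the parametrized argument values shown are illustrative placeholders, not the actual test module's contents:

```python
import pytest

# Sketch of the change: explicit ids replace pytest's auto-generated
# "[<lambda>-...0]"-style IDs. The extra_args values below are placeholders.
@pytest.mark.parametrize(
    "extra_args",
    [
        [],                                                # default invocation
        ["--start", "2025-01-01", "--end", "2025-06-30"],  # illustrative dates
        ["--reset-db"],                                    # assumed flag name
        ["--config", "custom.toml"],                       # assumed flag name
    ],
    ids=["fetch-default", "fetch-with-dates", "fetch-reset-db", "fetch-config"],
)
def test_fetch_examples_integration(extra_args):
    ...  # body elided
```

With explicit `ids`, inserting or reordering a variant changes nothing about how the other cases are addressed.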

README.md

  • Added uv run pytest -m "not integration" for unit tests only
  • Added uv run pytest -m integration for integration tests only
  • Added per-test commands for each of the 8 integration test cases using stable node IDs
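The commands above let any single variant be selected by its stable node ID. A sketch of the pattern (the quotes matter, since `[` and `]` are shell glob characters):

```shell
# Run one parametrized fetch variant by its explicit ID
uv run pytest "tests/test_integration.py::test_fetch_examples_integration[fetch-with-dates]"

# List all collected node IDs without running anything
uv run pytest --collect-only -q tests/test_integration.py
```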

Testing

# Unit tests only
uv run pytest -m "not integration"

# Verify integration test IDs
uv run pytest --collect-only -q tests/test_integration.py

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

Rewrite test_integration.py with a proper fixture-based structure:
- session-scoped github_env and date_windows fixtures for shared setup
- module-scoped fetched_workspace fixture that runs fetch once and
  reuses the database across all classify tests
- date windows computed dynamically relative to today rather than
  hardcoded values
- four parametrised fetch variants (default, with-dates, reset-db, config)
- dedicated classify tests: table output, stricter threshold, JSON output,
  and --exclude-primary-merged

Expand the README Development section with instructions for running unit
tests only, integration tests only, and each individual integration test
by its pytest node ID.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 55150e70af


Comment on lines +202 to +205
"--start",
date_windows.classify_start,
"--end",
date_windows.classify_end,


P2: Stabilize classify tests against low-activity windows

These classify invocations rely on the CLI default --min-samples 30 while the baseline window is fixed to roughly today-182d through today-30d; when expressjs/express has fewer than 30 merged PRs in that window, classify returns an insufficient-data error and _run_cli fails the test. This makes the new integration suite time-dependent and can break without code changes, so the tests should either pass a smaller explicit --min-samples or widen the baseline window.
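The reviewer's first suggested fix can be sketched as follows. `_run_cli` and `date_windows` are names from the test module under review; the stub dates here are illustrative, and the value 10 follows the maintainer's reply below.

```python
from types import SimpleNamespace

# Stand-in for the test module's date_windows fixture (illustrative dates).
date_windows = SimpleNamespace(
    classify_start="2025-09-10", classify_end="2026-02-09"
)

# Pass --min-samples explicitly instead of relying on the CLI default of 30,
# so a low-activity baseline window no longer fails the test.
classify_args = [
    "classify",
    "--start", date_windows.classify_start,
    "--end", date_windows.classify_end,
    "--min-samples", "10",  # value from the maintainer's follow-up reply
]
# _run_cli(classify_args)  # helper from the test module under review
```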


ghinks (Owner, Author) replied:

Yes, I think that is a valid point. Can we modify the number of samples to 10 for the integration tests?

ghinks (Owner, Author) replied:

looking...

@ghinks ghinks changed the title test: overhaul integration tests and expand README testing docs docs(test): add explicit parametrize IDs and README testing instructions Mar 11, 2026
@ghinks ghinks merged commit 55150e7 into main Mar 11, 2026
1 check passed
@ghinks ghinks deleted the feat/integration-test-improvements branch March 11, 2026 10:46