validator: fail when integration is missing either unit or integration tests #40

@TheRealAgentK

Description

Problem

scripts/validate_integration.py currently only checks that a tests/ folder exists with at least one test_*.py file:

# validate_integration.py:380
self.add_error("Missing test file: tests/test_*.py")

That doesn't enforce the two-tier convention the integrations repo has standardised on:

  • test_*_unit.py — mocked, CI-safe, run automatically
  • test_*_integration.py — live API, opt-in via -m integration

This is wired into pyproject.toml:

python_files = ["test_*_unit.py"]
markers = [
    "unit: Pure unit tests — mocked, no credentials needed, safe for CI",
    "integration: Integration tests — require real API credentials, never run in CI",
    ...
]

…and documented in both writing-unit-tests and writing-integration-tests skills.

In practice, integrations can ship with only one tier (or with files matching neither naming convention) and CI still passes. Depending on which side is missing, we end up with either no live verification against the real API, or no fast mocked feedback in CI.

Proposal

Extend _check_tests (or add a new check) so that for each integration:

  1. Error if there is no file matching test_*_unit.py in tests/.
  2. Error if there is no file matching test_*_integration.py in tests/.
  3. Bonus warning if a test_*_unit.py file contains zero pytest.mark.unit markers, or a test_*_integration.py file contains zero pytest.mark.integration markers — to catch correctly-named files that don't actually run under the right marker.

Concrete error messages:

  • "Missing tests/test_*_unit.py — every integration must have at least one mocked, CI-safe unit test file"
  • "Missing tests/test_*_integration.py — every integration must have at least one live integration test (it can be skipped when credentials are absent)"

Why both should be required

  • Unit tests: catch regressions on every push without burning API credits or hitting upstream services.
  • Integration tests: prove the integration actually works end-to-end against the real API. Mocked tests can pass while production fails (e.g., the #316 / supadata 401 case, or upstream API shape changes).

Treating either as optional means we ship integrations whose test signal is incomplete in one direction or the other.

Migration

Many existing integrations in autohive-integrations only have one tier today (often just a test_<name>.py file that matches neither pattern). Suggested rollout:

  1. Ship the new check behind a --strict-tests flag first.
  2. Inventory which integrations would fail and track the migration in a tracking issue (good `good first issue` material).
  3. Once the long tail is migrated, flip --strict-tests to default and remove the flag.
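The opt-in flag from step 1 could be wired up along these lines (everything here beyond `--strict-tests` itself is an assumption about the script's CLI):

```python
import argparse

parser = argparse.ArgumentParser(prog="validate_integration.py")
parser.add_argument("integration_path",
                    help="Path to the integration to validate")
# Opt-in during migration; flip the default (and drop the flag)
# once the long tail of existing integrations has been migrated.
parser.add_argument("--strict-tests", action="store_true",
                    help="Fail unless both test_*_unit.py and "
                         "test_*_integration.py exist")

args = parser.parse_args(["integrations/example", "--strict-tests"])
```

Flipping the default later is then a one-line change from `store_true` to `store_false` semantics (or simply deleting the flag and the conditional around the check).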

This avoids breaking every PR overnight while still giving the validator the eventual ability to enforce the convention.

Acceptance criteria

  • validate_integration.py <integration-with-no-unit-tests> exits non-zero with a clear error.
  • validate_integration.py <integration-with-no-integration-tests> exits non-zero with a clear error.
  • An integration with both tiers passes cleanly.
  • Unit tests in this repo cover all three branches (unit-only, integration-only, both).
  • Documentation in autohive-integrations CONTRIBUTING.md references the new requirement.
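The three branches in the acceptance criteria could be exercised with a small self-contained harness like this (the `run_check` helper is hypothetical and only mirrors the glob logic proposed above, not the real validator API):

```python
import tempfile
from pathlib import Path


def run_check(filenames):
    """Hypothetical harness: returns the errors for a tests/ dir holding `filenames`."""
    errors = []
    with tempfile.TemporaryDirectory() as tmp:
        tests = Path(tmp) / "tests"
        tests.mkdir()
        for name in filenames:
            (tests / name).write_text("import pytest\n")
        if not list(tests.glob("test_*_unit.py")):
            errors.append("Missing tests/test_*_unit.py")
        if not list(tests.glob("test_*_integration.py")):
            errors.append("Missing tests/test_*_integration.py")
    return errors


# unit-only → integration error; integration-only → unit error; both → clean
assert run_check(["test_foo_unit.py"]) == ["Missing tests/test_*_integration.py"]
assert run_check(["test_foo_integration.py"]) == ["Missing tests/test_*_unit.py"]
assert run_check(["test_foo_unit.py", "test_foo_integration.py"]) == []
```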

Context

Surfaced while working on autohive-ai/autohive-integrations#280 and #316. The Supadata integration originally shipped with a test_supadata_transcribe.py testbed that wasn't even pytest — it would have passed any "tests/ folder exists" check while providing zero real coverage.
