refactor(cafa-eval): T-CONTEXTS partial — CafaEvalRunContext#75
Merged
Introduces a `CafaEvalRunContext` frozen dataclass in `_run_cafa_eval_driver.py` to bundle the 16 per-call inputs that the per-setting NK/LK/PK loop consumes (artifact paths, reranker bundles, delta cohort, scoring snapshot). The `evaluate_all_settings` signature collapses 18 args → 3 (`session`, `ctx`, `emit`). The orchestrator builds the context inline in `RunCafaEvaluationOperation.execute`; otherwise the loop body reads `ctx.<field>` instead of locals.

Sizes:
- `_run_cafa_eval_driver.py`: +20 LOC (dataclass)
- `run_cafa_evaluation.py`: −2 LOC (one less call-site arg list)
- Smell baseline: 79 → 78 (`evaluate_all_settings` retired from the params>6 list; offset by methods +1 for the dataclass `__init__` count; net params>6 24 → 23)

Combined with PR #73 (in flight, `KnnEnrichmentContext`), the params>6 ratchet should land at ~22 once both merge.

Local-first: 5 green (ruff + flake8 + pytest 1162 + check_smells).
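The Parameter Object move described above can be sketched as follows. This is a minimal, hypothetical illustration: the field names below are stand-ins, not the real 16 fields of `CafaEvalRunContext`, and the loop body is reduced to an `emit` call.

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Callable

# Hypothetical sketch of the pattern; field names are illustrative,
# not the real 16 fields of CafaEvalRunContext.
@dataclass(frozen=True)
class CafaEvalRunContext:
    artifact_dir: Path             # illustrative: where artifacts live
    reranker_bundle: Path          # illustrative: serialized reranker bundle
    delta_cohort: tuple[str, ...]  # illustrative: IDs in the delta cohort
    scoring_snapshot: str          # illustrative: scoring snapshot tag

def evaluate_all_settings(session: object, ctx: CafaEvalRunContext,
                          emit: Callable[[str], None]) -> None:
    # The per-setting loop reads ctx.<field> instead of 16 locals.
    for setting in ("NK", "LK", "PK"):
        emit(f"{setting}: snapshot={ctx.scoring_snapshot}")
```

`frozen=True` means any accidental mutation inside the loop raises `dataclasses.FrozenInstanceError`, which keeps the bundled inputs read-only for the whole run.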
3077f41 to 9ccec02
frapercan added a commit that referenced this pull request on May 8, 2026:
…#77)

## Acceptance criteria (master plan §24 Fase 1 — T-CONTEXTS partial)

Third incremental Parameter Object slice (after #73 KnnEnrichmentContext, #75 CafaEvalRunContext).

## Changes

- New private `_RerankerRegistration` frozen dataclass bundling the nine non-session inputs both register endpoints (multipart + by-reference) feed into `_register_model`.
- `_register_model` signature collapses 10 args → 2 (`session`, `reg`).
- Both call sites build the registration inline.
- The helper is private — only the two router endpoints in this file use it. Tests touch the endpoints over HTTP, never the helper directly, so no test churn.

## Smell budget

78 → 77 offenders. `_register_model` retired from params>6 (23 → 22).

## Test plan

- [x] `poetry run ruff check protea scripts`
- [x] `poetry run flake8 protea/`
- [x] `poetry run pytest tests/ --ignore=tests/test_jobs_pg.py` (1163 passed, 11 skipped)
- [x] `poetry run python scripts/check_smells.py` (77 known, none new)
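The 10 → 2 collapse can be sketched like this. The sketch is hypothetical: the three fields stand in for the nine real non-session inputs, and the body is illustrative, not the project's registration logic.

```python
from dataclasses import dataclass

# Hypothetical sketch of the private registration bundle; these three
# fields stand in for the nine real non-session inputs.
@dataclass(frozen=True)
class _RerankerRegistration:
    model_name: str
    model_version: str
    checksum: str

def _register_model(session: dict, reg: _RerankerRegistration) -> str:
    # Two-arg signature: everything model-specific travels in `reg`.
    key = f"{reg.model_name}:{reg.model_version}"
    session[key] = reg.checksum
    return key

# Both endpoints build the registration inline at the call site:
session: dict = {}
key = _register_model(session, _RerankerRegistration("bert-rr", "v3", "abc123"))
```

Because both call sites construct the object inline, the endpoint bodies stay short and the shared helper keeps a stable two-argument surface.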
frapercan added a commit that referenced this pull request on May 8, 2026:
## Acceptance criteria (master plan §24 Fase 1 — T-CONTEXTS partial)

Fourth incremental Parameter Object slice. Tackles the last remaining 16-arg offender in core (after #73, #75, #77).

## Changes

- `protea/core/parquet_export.py`:
  - New `@dataclass(frozen=True) ParquetExportContext` bundling 15 per-call inputs grouped into 3 sections (source shards, dataset identity, publishing).
  - `export_reranker_parquets` signature collapses 16 args → 1 (`ctx`).
  - Body destructures the context once at the top so the rest of the implementation stays diff-minimal.
- `protea/core/training_dump_helpers.py`: production caller updated.
- `tests/test_parquet_export_boundary.py`: the only test that invokes it, updated.

## Smell budget

77 → 75 offenders. **params>6: 22 → 20** (`export_reranker_parquets` retired plus 1 knock-on improvement as the dataclass centralises type annotations).

## Test plan

- [x] `poetry run ruff check protea scripts`
- [x] `poetry run flake8 protea/`
- [x] `poetry run pytest tests/ --ignore=tests/test_jobs_pg.py` (1163 passed, 11 skipped)
- [x] `poetry run python scripts/check_smells.py` (75 known, none new)

## T-CONTEXTS progress (4 slices total)

- #73: KnnEnrichmentContext — enrich_v6_features 9 → 4 args
- #75: CafaEvalRunContext — evaluate_all_settings 18 → 3 args
- #77: _RerankerRegistration — _register_model 10 → 2 args
- this PR: ParquetExportContext — export_reranker_parquets 16 → 1 arg

Combined: 53 → 10 args across the four entry points; baseline params>6: 24 → 20 (cumulative drop −4).
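The "destructure once at the top" style mentioned above can be sketched as follows. Everything here is illustrative: the four fields stand in for the 15 real inputs, grouped with comments as the description says, and the return value is a placeholder for the real export.

```python
from dataclasses import dataclass

# Hypothetical sketch of the destructure-once style; fields are stand-ins
# for the 15 real inputs, grouped as in the description.
@dataclass(frozen=True)
class ParquetExportContext:
    # source shards
    shard_paths: tuple[str, ...]
    # dataset identity
    dataset_name: str
    dataset_version: str
    # publishing
    publish_prefix: str

def export_reranker_parquets(ctx: ParquetExportContext) -> str:
    # Destructure the context once at the top; the rest of the body keeps
    # the pre-refactor local names, so the diff below this line stays minimal.
    shard_paths = ctx.shard_paths
    dataset_name = ctx.dataset_name
    dataset_version = ctx.dataset_version
    publish_prefix = ctx.publish_prefix
    return (f"{publish_prefix}/{dataset_name}-{dataset_version}"
            f" ({len(shard_paths)} shards)")
```

The one-arg signature means new inputs are added to the dataclass, not threaded through every caller, which is what keeps later refactors diff-minimal.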
frapercan added a commit that referenced this pull request on May 8, 2026:
## Acceptance criteria (master plan §24 Fase 1 — T-CONTEXTS partial)

Fifth incremental Parameter Object slice. Tackles the remaining 16-arg offender in the core training pipeline.

## Changes

- `protea/core/training_dump_helpers.py`:
  - New `KnnTransferContext` frozen dataclass bundling the 12 per-call data inputs (queries, references, ontology maps, optional enrichment helpers).
  - `_knn_transfer_and_label` signature collapses 16 args → 5 (`session`, `p`, `ctx`, plus the existing `sequence_context` / `stream_output`).
  - Two production call sites updated (train branch + test-stream branch).
- `tests/test_knn_streaming_smoke.py`: shared test runner updated.

## Smell budget

77 → 75 offenders. **params>6: 22 → 20** (`_knn_transfer_and_label` retired plus 1 knock-on).

## Test plan

- [x] `poetry run ruff check protea scripts`
- [x] `poetry run flake8 protea/`
- [x] `poetry run pytest tests/ --ignore=tests/test_jobs_pg.py` (1163 passed, 11 skipped)
- [x] `poetry run python scripts/check_smells.py` (75 known, none new)

## T-CONTEXTS progress (5 slices total, 4 in flight pending CI)

- #73 KnnEnrichmentContext — enrich_v6_features 9 → 4 args
- #75 CafaEvalRunContext — evaluate_all_settings 18 → 3 args
- #77 _RerankerRegistration — _register_model 10 → 2 args
- #78 ParquetExportContext — export_reranker_parquets 16 → 1 arg
- this PR KnnTransferContext — _knn_transfer_and_label 16 → 5 args

Combined: 69 args → 15 across the 5 entry points.
## Acceptance criteria (master plan §24 Fase 1 — T-CONTEXTS partial)

Second incremental Parameter Object slice (after PR #73's `KnnEnrichmentContext`): tackles the top params>6 offender, `evaluate_all_settings`, which I introduced in PR #72 (round 2d).

## Changes

- `protea/core/operations/_run_cafa_eval_driver.py`:
  - `@dataclass(frozen=True) CafaEvalRunContext` bundling 16 per-call inputs (artifact paths, reranker bundles, delta cohort, scoring snapshot).
  - `evaluate_all_settings` signature collapses 18 args → 3 (`session`, `ctx`, `emit`).
  - The loop body reads `ctx.<field>` instead of locals.
- `protea/core/operations/run_cafa_evaluation.py`: the orchestrator builds the context inline.

## Smell budget

79 → 78 offenders. `evaluate_all_settings` retired from params>6 (24 → 23). Combined with PR #73 (`KnnEnrichmentContext`, in flight), the ratchet should land at ~22 once both merge.

## Test plan

- [x] `poetry run ruff check protea scripts`
- [x] `poetry run flake8 protea/`
- [x] `poetry run pytest tests/ --ignore=tests/test_jobs_pg.py` (1162 passed, 11 skipped)
- [x] `poetry run python scripts/check_smells.py` (78 known, none new)

## Coordination

Disjoint from PR #73 (different files). Both are local Parameter Object slices that don't yet require the cross-repo `protea-contracts` v0.2.0 bump.
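A params>6 ratchet of the kind tracked above can be sketched with the standard-library `ast` module. This is not the project's `scripts/check_smells.py`, which may count differently (methods, `*args`, baselines); it only illustrates the shape of such a check.

```python
import ast

# Hypothetical sketch of a params>6 ratchet; the real check_smells.py
# may differ in scope, counting rules, and baseline tracking.
def params_over_limit(source: str, limit: int = 6) -> list[str]:
    """Names of functions whose explicit parameter count exceeds `limit`."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            a = node.args
            n = len(a.posonlyargs) + len(a.args) + len(a.kwonlyargs)
            if n > limit:
                offenders.append(node.name)
    return offenders
```

A ratchet then compares `len(offenders)` against a stored baseline (here 24 → 23) and fails CI only when the count grows.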