10 changes: 4 additions & 6 deletions training/scripts/evaluate_baseline_metrics.py
@@ -261,9 +261,9 @@ def main():
     evaluator = BaselineMetricsEvaluator()

     if args.dry_run:
-        return _extracted_from_main_25(evaluator)
+        return _run_dry_run_evaluation(evaluator)
     if args.scan_all:
-        return _extracted_from_main_68(evaluator)
+        return _scan_all_datasets(evaluator)
     if args.scan_all_s3:
         return _scan_s3_datasets(evaluator, args.scan_all_s3)
     if args.input_file:
@@ -410,8 +410,7 @@ def _scan_s3_datasets(evaluator, s3_prefix: str) -> int:
     return 0


-# TODO Rename this here and in `main`
-def _extracted_from_main_68(evaluator):
+def _scan_all_datasets(evaluator):
πŸ› οΈ Refactor suggestion | 🟠 Major

Add type hints to function signature.

The function signature lacks explicit type annotations for its parameter and return type. Per the project's coding guidelines, prefer explicit types.

πŸ“ Proposed fix to add type hints
-def _scan_all_datasets(evaluator):
+def _scan_all_datasets(evaluator: BaselineMetricsEvaluator) -> int:
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def _scan_all_datasets(evaluator):
def _scan_all_datasets(evaluator: BaselineMetricsEvaluator) -> int:

     logger.info("Scanning all dataset directories...")
     all_results: dict[str, Any] = {}

@@ -441,8 +440,7 @@ def _extracted_from_main_68(evaluator):
     return 0


-# TODO Rename this here and in `main`
-def _extracted_from_main_25(evaluator):
+def _run_dry_run_evaluation(evaluator):
πŸ› οΈ Refactor suggestion | 🟠 Major

Add type hints to function signature.

The function signature lacks explicit type annotations for its parameter and return type. Per the project's coding guidelines, prefer explicit types.

πŸ“ Proposed fix to add type hints
-def _run_dry_run_evaluation(evaluator):
+def _run_dry_run_evaluation(evaluator: BaselineMetricsEvaluator) -> int:
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def _run_dry_run_evaluation(evaluator):
def _run_dry_run_evaluation(evaluator: BaselineMetricsEvaluator) -> int:

     logger.info("Running DRY RUN evaluation...")
     mock_data = [
         {