
🧪 QA: Add test for SubtitleProcessor edge case #122

Open
daggerstuff wants to merge 1 commit into staging from test/subtitle-processor-format-markdown-2962032702155572427

Conversation

Owner

@daggerstuff daggerstuff commented Apr 1, 2026

💡 What: Added test cases for SubtitleProcessor.format_as_markdown
🎯 Why: Covers the missing edge case where the metadata dictionary is empty, and ensures the normal case is also covered correctly.
✅ Verification: The tests run successfully via pytest.


PR created automatically by Jules for task 2962032702155572427 started by @daggerstuff

Summary by Sourcery

Add unit tests to cover SubtitleProcessor.format_as_markdown behavior for normal and missing metadata cases.

Tests:

  • Add tests verifying format_as_markdown includes provided metadata and text in the markdown output.
  • Add tests verifying format_as_markdown falls back to default values when metadata is missing.
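The two tests summarized above can be sketched roughly as follows. The real code lives in tests/utils/test_subtitle_processor.py and utils/subtitle_processor.py; the minimal SubtitleProcessor stub here is a hypothetical reconstruction inferred from the assertions quoted later in this review ("Unknown Title", "**Source:** \n", and so on), included only so the sketch is self-contained.

```python
# Hypothetical stand-in for utils/subtitle_processor.py, reconstructed from
# the assertions quoted in this review; the real class may differ.
class SubtitleProcessor:
    @staticmethod
    def format_as_markdown(text, metadata):
        return (
            f"# {metadata.get('title', 'Unknown Title')}\n\n"
            f"**Channel:** {metadata.get('channel', 'Unknown Channel')}\n"
            f"**Source:** {metadata.get('url', '')}\n"
            f"**Date:** {metadata.get('date', '')}\n\n"
            f"## Transcript\n\n{text}"
        )


def test_format_as_markdown_edge_cases():
    # Happy path: every metadata field provided (values are illustrative).
    metadata = {
        "title": "My Video",
        "channel": "My Channel",
        "url": "https://example.com/watch?v=abc",
        "date": "2026-04-01",
    }
    md = SubtitleProcessor.format_as_markdown("Hello world.", metadata)
    assert "My Video" in md
    assert "My Channel" in md
    assert "https://example.com/watch?v=abc" in md
    assert "2026-04-01" in md
    assert "Hello world." in md


def test_format_as_markdown_missing_metadata():
    # Edge case: empty metadata falls back to placeholder values.
    md = SubtitleProcessor.format_as_markdown("Just some text.", {})
    assert "Unknown Title" in md
    assert "Unknown Channel" in md
    assert "**Source:** \n" in md
    assert "**Date:** \n" in md
```

Run under pytest, both functions pass against this stub; the assertion style (loose substring checks) matches what the reviewers comment on below.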

Summary by CodeRabbit

  • Tests
    • Added test cases to verify subtitle processing and formatting functionality handles both complete and empty metadata scenarios correctly.

Co-authored-by: daggerstuff <261005129+daggerstuff@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.


vercel bot commented Apr 1, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: ai | Deployment: Error | Updated (UTC): Apr 1, 2026 1:09am

Copilot AI review requested due to automatic review settings April 1, 2026 01:09
@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

@sourcery-ai

sourcery-ai bot commented Apr 1, 2026

Reviewer's Guide

Adds unit tests for SubtitleProcessor.format_as_markdown to cover both normal metadata usage and the empty-metadata edge case, improving regression protection around subtitle-to-Markdown formatting behavior.

File-Level Changes

Change: Add tests covering SubtitleProcessor.format_as_markdown for both populated and empty metadata dictionaries to validate expected Markdown output content.
Details:
  • Introduce test_format_as_markdown_edge_cases to verify that title, channel, URL, date, and full text are present in the generated Markdown for a typical metadata payload.
  • Introduce test_format_as_markdown_missing_metadata to verify that default placeholder values are used for title and channel and that Source and Date lines render even when metadata is empty.
Files: tests/utils/test_subtitle_processor.py

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@coderabbitai

coderabbitai bot commented Apr 1, 2026

📝 Walkthrough

Walkthrough

Two new test cases were added to the SubtitleProcessor.format_as_markdown test suite. One verifies correct formatting when complete metadata is provided; the other validates default placeholder behavior when metadata is empty.

Changes

Cohort: Test Cases for Subtitle Processing
File(s): tests/utils/test_subtitle_processor.py
Summary: Added two new test cases for SubtitleProcessor.format_as_markdown: one validates markdown output with complete metadata (title, channel, url, date), and the other verifies handling of empty metadata with default placeholders and markdown field formatting.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 Two tests hopped into the fold,

Checking metadata, bold and cold,

Unknown titles, channels true—

Format markdown through and through! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title references adding tests for a SubtitleProcessor edge case, which directly aligns with the changeset that adds two new test cases for SubtitleProcessor.format_as_markdown.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 1 file


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
tests/utils/test_subtitle_processor.py (1)

39-42: Tighten empty-metadata assertions to catch formatting regressions

Current substring checks are valid but permissive. Since utils/subtitle_processor.py defines a structured markdown header, asserting a stricter prefix (or exact metadata block) would better catch accidental formatting drift.

Proposed test hardening
 def test_format_as_markdown_missing_metadata():
     text = "Just some text."
     metadata = {}

     md = SubtitleProcessor.format_as_markdown(text, metadata)
-    assert "Unknown Title" in md
-    assert "Unknown Channel" in md
-    assert "**Source:** \n" in md
-    assert "**Date:** \n" in md
+    assert md.startswith(
+        "# Unknown Title\n\n"
+        "**Channel:** Unknown Channel\n"
+        "**Source:** \n"
+        "**Date:** \n\n"
+    )
+    assert "## Transcript\n\nJust some text." in md
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/utils/test_subtitle_processor.py` around lines 39 - 42, The test is too
permissive—tighten the assertions in tests/utils/test_subtitle_processor.py that
currently check the md string for loose substrings (md variable) and instead
assert the exact metadata block or a strict prefix that matches the structured
markdown header produced by utils/subtitle_processor.py (e.g., assert the
metadata block equals or md.startswith the exact header lines for Title,
Channel, Source and Date when empty). Update the four assertions to compare the
full expected metadata block (or use startswith with the exact multi-line
header) so formatting regressions will fail the test.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7ea3b5f9-f1b3-4d22-a31f-e44b3e6c74f6

📥 Commits

Reviewing files that changed from the base of the PR and between 2e5eb05 and bc02484.

📒 Files selected for processing (1)
  • tests/utils/test_subtitle_processor.py


@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • The two new tests share a lot of setup and assertion patterns; consider using pytest.mark.parametrize or a helper function to reduce duplication and make it easier to add more format_as_markdown scenarios later.
  • The assertion on the exact substring "**Source:** \n" makes the test tightly coupled to formatting details; it may be more robust to assert on the presence of the label and the following content separately so minor formatting changes don’t cause spurious failures.
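One way to act on the de-duplication suggestion above is a small shared helper that formats once and checks a list of expected substrings. The SubtitleProcessor stub below is hypothetical, reconstructed from the behavior described in this PR so the sketch runs on its own; the real tests would import the actual class from utils.subtitle_processor instead.

```python
# Hypothetical stub standing in for utils.subtitle_processor.SubtitleProcessor;
# real tests would import the actual implementation instead.
class SubtitleProcessor:
    @staticmethod
    def format_as_markdown(text, metadata):
        return (
            f"# {metadata.get('title', 'Unknown Title')}\n\n"
            f"**Channel:** {metadata.get('channel', 'Unknown Channel')}\n"
            f"**Source:** {metadata.get('url', '')}\n"
            f"**Date:** {metadata.get('date', '')}\n\n"
            f"## Transcript\n\n{text}"
        )


def assert_markdown_contains(text, metadata, expected_substrings):
    """Shared helper: format once, then check every expected substring."""
    md = SubtitleProcessor.format_as_markdown(text, metadata)
    for fragment in expected_substrings:
        assert fragment in md, f"{fragment!r} not found in output"


def test_format_as_markdown_with_metadata():
    assert_markdown_contains(
        "Hello world.",
        {"title": "My Video", "channel": "My Channel"},
        ["My Video", "My Channel", "Hello world."],
    )


def test_format_as_markdown_missing_metadata():
    assert_markdown_contains(
        "Just some text.",
        {},
        ["Unknown Title", "Unknown Channel", "Just some text."],
    )
```

The same helper could instead feed a @pytest.mark.parametrize table, as Sourcery mentions; the plain-function form is shown here to keep the sketch dependency-free.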


Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.


Copilot AI left a comment


Pull request overview

Adds pytest coverage for SubtitleProcessor.format_as_markdown, including the “metadata present” path and the edge case where metadata is empty.

Changes:

  • Added a test validating markdown output includes provided title/channel/url/date and the transcript text.
  • Added a test validating default/fallback fields when metadata is an empty dict.


result = SubtitleProcessor.clean_vtt(vtt_content)
assert result == "Hello Hello world"

def test_format_as_markdown_edge_cases():

Copilot AI Apr 1, 2026


This test appears to be the normal/happy-path case (metadata fully populated) rather than an edge case. Renaming it to something like test_format_as_markdown_with_metadata would make the intent clearer and help future readers quickly find true edge-case coverage.

Suggested change
def test_format_as_markdown_edge_cases():
def test_format_as_markdown_with_metadata():

Copilot uses AI. Check for mistakes.
Comment on lines +41 to +42
assert "**Source:** \n" in md
assert "**Date:** \n" in md

Copilot AI Apr 1, 2026


These assertions are fairly brittle because they depend on an exact newline/whitespace sequence. If format_as_markdown changes formatting slightly (extra spaces, \\r\\n, additional text after the label), the test will fail despite correct behavior. Prefer asserting on the presence/structure more flexibly (e.g., checking that the labels exist, or normalizing line endings/whitespace before asserting, or using a regex that allows optional whitespace).

Suggested change
assert "**Source:** \n" in md
assert "**Date:** \n" in md
assert "**Source:**" in md
assert "**Date:**" in md

Copilot uses AI. Check for mistakes.
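A regex along the lines Copilot suggests would tolerate trailing whitespace and line-ending variations while still requiring the field to be empty. This is an illustrative sketch, not code from the PR; the sample md string is an assumed output shaped like the header described in this review.

```python
import re

# Assumed example output for empty metadata, shaped like the header
# described in this review; illustrative only.
md = "# Unknown Title\n\n**Channel:** Unknown Channel\n**Source:** \r\n**Date:** \n"

def has_empty_field(markdown, label):
    # re.escape guards the literal "**" asterisks, which are regex
    # metacharacters; [ \t]*\r?\n allows trailing spaces and \r\n endings.
    pattern = re.escape(f"**{label}:**") + r"[ \t]*\r?\n"
    return re.search(pattern, markdown) is not None

assert has_empty_field(md, "Source")
assert has_empty_field(md, "Date")
```

Unlike the exact-substring check, this passes whether the line ends in "\n" or "\r\n", yet still fails if a value appears after the label.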