
net: Mark localnet tests #4751

Open
yossisegev wants to merge 1 commit into RedHatQE:main from yossisegev:mark-additional-tests

Conversation

@yossisegev
Contributor

@yossisegev yossisegev commented May 5, 2026

One immediate benefit of marking the localnet tests: localnet is not supported on cloud-provider clusters, so the marker makes it easy to filter these tests out when running on such clusters (for example, heterogeneous multi-arch clusters are currently only applicable to such clusters).

Assisted by: Claude Code

Signed-off-by: Yossi Segev <ysegev@redhat.com>
@coderabbitai
Contributor

coderabbitai Bot commented May 5, 2026

📝 Walkthrough


This PR adds a new pytest marker called localnet to identify tests that use node localnet topology, then applies the marker at module scope to five test files in tests/network/localnet/ to enable test filtering and selection.

Changes

Localnet Pytest Marker Configuration

Marker Definition: pytest.ini
Registers the localnet marker in the pytest configuration with the description "Tests that use node localnet topology".
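A minimal sketch of what that registration presumably looks like in pytest.ini (the `[pytest]` section name follows standard pytest configuration; only the localnet line is confirmed by this PR):

```ini
[pytest]
markers =
    localnet: Tests that use node localnet topology
```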
Test Module Application: tests/network/localnet/ipam/test_connectivity.py, tests/network/localnet/migration_stuntime/test_migration_stuntime.py, tests/network/localnet/test_default_bridge.py, tests/network/localnet/test_ovs_bridge.py
Each module adds pytestmark = pytest.mark.localnet to apply the marker to all contained tests.
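As a sketch, the module-scope application in each of those files presumably reduces to the following (the test function here is a hypothetical placeholder, not code from the PR):

```python
import pytest

# A module-level pytestmark applies the marker to every test
# collected from this module.
pytestmark = pytest.mark.localnet


def test_placeholder():
    # Hypothetical stand-in for the real localnet tests.
    assert True
```

Collection-time filtering then works with `-m localnet` or `-m "not localnet"`.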
Existing Markers Preserved: tests/network/localnet/test_jumbo_frames.py
The localnet marker is appended to the existing pytestmark list containing the special_infra and jumbo_frame markers.
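For the jumbo-frames module, the appended list plausibly ends up as shown below (marker names are taken from the summary above; this is a sketch, not the exact diff):

```python
import pytest

# Existing module markers with the new localnet marker appended,
# so the module's tests match filtering on any of the three.
pytestmark = [
    pytest.mark.special_infra,
    pytest.mark.jumbo_frame,
    pytest.mark.localnet,
]
```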

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Description check: ⚠️ Warning
The pull request description is incomplete and lacks critical sections required by the template. The current description only explains the benefit without following the required template structure.
Resolution: Add the missing sections: short description, more details, which issue(s) this PR fixes, special notes for reviewer, and jira-ticket URL.
✅ Passed checks (4 passed)
Title check: ✅ Passed
The title "net: Mark localnet tests" accurately describes the main change: adding the localnet pytest marker to test modules.
Docstring Coverage: ✅ Passed
No functions found in the changed files to evaluate docstring coverage; the check was skipped.
Linked Issues check: ✅ Passed
Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check: ✅ Passed
Check skipped because no linked issues were found for this pull request.






@yossisegev
Contributor Author

/build-and-push-container

@openshift-virtualization-qe-bot-5

Report bugs in Issues

Welcome! 🎉

This pull request will be automatically processed with the following features:

🔄 Automatic Actions

  • Reviewer Assignment: Reviewers are automatically assigned based on the OWNERS file in the repository root
  • Size Labeling: PR size labels (XS, S, M, L, XL, XXL) are automatically applied based on changes
  • Issue Creation: A tracking issue is created for this PR and will be closed when the PR is merged or closed
  • Branch Labeling: Branch-specific labels are applied to track the target branch
  • Auto-verification: Auto-verified users have their PRs automatically marked as verified
  • Labels: Enabled categories: branch, can-be-merged, cherry-pick, has-conflicts, hold, needs-rebase, size, verified, wip

📋 Available Commands

PR Status Management

  • /wip - Mark PR as work in progress (adds WIP: prefix to title)
  • /wip cancel - Remove work in progress status
  • /hold - Block PR merging (approvers only)
  • /hold cancel - Unblock PR merging
  • /verified - Mark PR as verified
  • /verified cancel - Remove verification status
  • /reprocess - Trigger complete PR workflow reprocessing (useful if webhook failed or configuration changed)
  • /regenerate-welcome - Regenerate this welcome message

Review & Approval

  • /lgtm - Approve changes (looks good to me)
  • /approve - Approve PR (approvers only)
  • /assign-reviewers - Assign reviewers based on OWNERS file
  • /assign-reviewer @username - Assign specific reviewer
  • /check-can-merge - Check if PR meets merge requirements

Testing & Validation

  • /retest tox - Run Python test suite with tox
  • /retest build-container - Rebuild and test container image
  • /retest verify-bugs-are-open - Run the verify-bugs-are-open check
  • /retest all - Run all available tests

Container Operations

  • /build-and-push-container - Build and push container image (tagged with PR number)
    • Supports additional build arguments: /build-and-push-container --build-arg KEY=value

Cherry-pick Operations

  • /cherry-pick <branch> - Schedule cherry-pick to target branch when PR is merged
    • Multiple branches: /cherry-pick branch1 branch2 branch3

Label Management

  • /<label-name> - Add a label to the PR
  • /<label-name> cancel - Remove a label from the PR

✅ Merge Requirements

This PR will be automatically approved when the following conditions are met:

  1. Approval: /approve from at least one approver
  2. LGTM Count: Minimum 2 /lgtm from reviewers
  3. Status Checks: All required status checks must pass
  4. No Blockers: No wip, hold, has-conflicts labels and PR must be mergeable (no conflicts)
  5. Verified: PR must be marked as verified

📊 Review Process

Approvers and Reviewers

Approvers:

  • EdDev
  • dshchedr
  • myakove
  • rnetser
  • vsibirsk

Reviewers:

  • Anatw
  • EdDev
  • RoniKishner
  • azhivovk
  • dshchedr
  • frenzyfriday
  • nirdothan
  • orelmisan
  • rnetser
  • servolkov
  • vsibirsk
  • yossisegev
Available Labels
  • hold
  • verified
  • wip
  • lgtm
  • approve
AI Features
  • Cherry-Pick Conflict Resolution: Enabled (claude/claude-opus-4-6[1m])

💡 Tips

  • WIP Status: Use /wip when your PR is not ready for review
  • Verification: The verified label is removed on new commits unless the push is detected as a clean rebase
  • Cherry-picking: Cherry-pick labels are processed when the PR is merged
  • Container Builds: Container images are automatically tagged with the PR number
  • Permission Levels: Some commands require approver permissions
  • Auto-verified Users: Certain users have automatic verification and merge privileges

For more information, please refer to the project documentation or contact the maintainers.

Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pytest.ini (1)

1-118: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

HIGH: PR description is missing all required template sections.

The PR description provided contains no template sections. Per the repo's PR template (.github/pull_request_template.md), the following sections are required even if left empty:

  • ##### What this PR does / why we need it: — must have meaningful content
  • ##### Which issue(s) this PR fixes:
  • ##### Special notes for reviewer:
  • ##### jira-ticket:

Please restore them. As per coding guidelines: "Flag as HIGH if the section is missing, empty, whitespace-only..."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@pytest.ini` around lines 1 - 118, The PR description is missing required
template sections from .github/pull_request_template.md; update the PR
description to include the headings "##### What this PR does / why we need it:",
"##### Which issue(s) this PR fixes:", "##### Special notes for reviewer:", and
"##### jira-ticket:" and fill them with meaningful (non-empty, non-whitespace)
content or explicit “None” where applicable so the PR passes the repo
validation; ensure the entries describe purpose, linked issues, reviewer notes,
and jira ticket ID (or state none).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Outside diff comments:
In `@pytest.ini`:
- Around line 1-118: The PR description is missing required template sections
from .github/pull_request_template.md; update the PR description to include the
headings "##### What this PR does / why we need it:", "##### Which issue(s) this
PR fixes:", "##### Special notes for reviewer:", and "##### jira-ticket:" and
fill them with meaningful (non-empty, non-whitespace) content or explicit “None”
where applicable so the PR passes the repo validation; ensure the entries
describe purpose, linked issues, reviewer notes, and jira ticket ID (or state
none).

ℹ️ Review info
⚙️ Run configuration

Configuration used: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3e574678-2ff7-45ac-92d8-0ec24a6f2b6d

📥 Commits

Reviewing files that changed from the base of the PR and between d0120cb and 25580f7.

📒 Files selected for processing (6)
  • pytest.ini
  • tests/network/localnet/ipam/test_connectivity.py
  • tests/network/localnet/migration_stuntime/test_migration_stuntime.py
  • tests/network/localnet/test_default_bridge.py
  • tests/network/localnet/test_jumbo_frames.py
  • tests/network/localnet/test_ovs_bridge.py

@openshift-virtualization-qe-bot-4
Copy link
Copy Markdown

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4751 published

@yossisegev
Copy link
Copy Markdown
Contributor Author

/verified

Verified that localnet tests are not collected when filtering out the marker (-m "not localnet").

Without filtering-out - all localnet tests are collected:

$ uv run pytest --verbose --jira -rs -s -o log_cli=true --junitxml=xunit_results.xml --pytest-log-file=pytest-tests.log --leftovers-collector --tb=native --skip-deprecated-api-test --cluster-sanity-skip-storage-check tests/network/localnet --collect-only
...
======== 7/8 tests collected (1 deselected) in 0.04s ========

With filtering-out - no test collected:

$ uv run pytest --verbose --jira -rs -s -o log_cli=true --junitxml=xunit_results.xml --pytest-log-file=pytest-tests.log --leftovers-collector --tb=native --skip-deprecated-api-test --cluster-sanity-skip-storage-check tests/network/localnet -m "not localnet" --collect-only
...
======== no tests collected in 0.01s ========

@openshift-virtualization-qe-bot-3
Contributor

@coderabbitai

Test execution plan request details

CRITICAL: You MUST post an inline review comment on the first changed line of the first file.
The inline comment should contain the Test Execution Plan with the smoke tests recommendation.
Do NOT submit a formal review - just post the inline comment directly.

As an expert software testing engineer, analyze all modified files in this PR and create a targeted test execution plan.
You will post an inline review comment with the test execution plan on the first changed file.
If you fail to run or post a comment, retry.

Analysis Requirements:

  1. Examine code changes in each modified file
  2. Identify affected code paths, functions, and classes
  3. Analyze pytest-specific elements: fixtures (scope, dependencies), parametrization, markers, conftest changes
  4. Trace test dependencies through imports, shared utilities, and fixture inheritance
  5. Detect new tests introduced in the PR
  6. Smoke Test Impact Analysis: Determine if any changes could affect smoke tests by checking:
    • Changes to files/functions used by tests marked with @pytest.mark.smoke
    • Changes to fixtures or utilities imported by smoke tests
    • Changes to conftest.py files that may affect smoke test execution
    • Changes to core infrastructure code (utilities/, libs/) that smoke tests depend on

Your deliverable:
Your change request comment will be based on the following requirements:

Test Execution Plan

  • Run smoke tests: True / False (Based on your smoke test impact analysis - True if any changes may affect smoke tests)
  • path/to/test_file.py - When the entire test file needs verification
  • path/to/test_file.py::TestClass::test_method - When specific test(s) needed
  • path/to/test_file.py::test_function - When specific test(s) needed
  • -m marker - When specific marker(s) can be used to cover multiple cases.

Guidelines:

  • Include only tests directly affected by the changes
  • Use a full file path only if ALL tests in that file require verification
  • Use file path + test name if only specific tests are needed
  • If a test marker can cover multiple files/tests, provide the marker
  • Balance coverage vs over-testing - Keep descriptions minimal

Hardware-Related Checks (SR-IOV, GPU, DPDK):

When PR modifies fixtures for hardware-specific resources:

  • Collection Safety: Fixtures MUST have existence checks (return None when hardware unavailable)
  • Test Plan: MUST verify both WITH and WITHOUT hardware:
    • Run affected tests on cluster WITH hardware
    • Verify collection succeeds on cluster WITHOUT hardware

CRITICAL WORKFLOW COMPLETION RULES:

When responding to this test execution plan request, you MUST follow these rules EXACTLY:

  1. YOUR ONLY DELIVERABLE: Post an inline review comment containing the test execution plan on the first changed line
  2. THEN STOP IMMEDIATELY - Do NOT generate any additional response
  3. FALLBACK ONLY: If submitting the review fails after retrying, post as a regular PR comment
  4. SILENCE = SUCCESS: After successfully submitting the review, your task is complete. No confirmation needed.

ABSOLUTE PROHIBITIONS (violating these creates empty/meaningless reviews):

  • ❌ Do NOT post acknowledgment messages like "Test execution plan posted", "Review posted successfully", "I've successfully posted"
  • ❌ Do NOT mention review IDs, URLs, or confirmation of posting in the PR thread
  • ❌ Do NOT add any follow-up comments after submitting the review
  • ❌ Do NOT reply to confirm task completion
  • ❌ Do NOT explain what you did - just do it and stop

Remember: The pull request review is visible to users. Additional comments are redundant noise.

CRITICAL: Before setting "Run smoke tests: True", you MUST use shell scripts to:
- Trace the actual fixture dependency chain from smoke tests to changed fixtures
- Verify that smoke tests actually import/use changed utilities or functions
- Confirm the dependency path exists; do NOT assume based on scope or semantics
- Be conservative: Session-scoped fixtures or infrastructure-sounding names do NOT
automatically mean smoke test impact. Only flag smoke test impact when you can
demonstrate a concrete dependency path.
WRONG: "This session-scoped storage fixture might affect smoke tests"
RIGHT: "Smoke test X uses fixture Y, which depends on the changed fixture Z"

Comment thread: pytest.ini
rwx_default_storage: Tests that require RWX storage
descheduler: Tests that require kube-descheduler on nodes
remote_cluster: Tests that require a remote cluster
localnet: Tests that use node localnet topology
Contributor


Test Execution Plan

  • Run smoke tests: False (No dependency path found from smoke tests to changed files — changes are marker-only additions with no fixture, utility, or conftest modifications)

Verify marker filtering works correctly:

  • -m localnet — Confirm all 7/8 localnet tests are collected across all modified files
  • -m not localnet — Confirm no localnet tests are collected (as validated by PR author)

Individual files to spot-check collection:

  • tests/network/localnet/ipam/test_connectivity.py
  • tests/network/localnet/migration_stuntime/test_migration_stuntime.py
  • tests/network/localnet/test_default_bridge.py
  • tests/network/localnet/test_jumbo_frames.py
  • tests/network/localnet/test_ovs_bridge.py

@coderabbitai
Contributor

coderabbitai Bot commented May 6, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get PR diff to find first changed file and first changed line number
gh pr diff 4751 --repo RedHatQE/openshift-virtualization-tests | head -100

# Check if any smoke tests import or use these localnet test files
rg -l "pytest.mark.smoke" --type py | head -20

# Check if any smoke tests exist in network/localnet directory
rg "pytest.mark.smoke" tests/network/localnet/ --type py -n 2>/dev/null || echo "No smoke tests in localnet"

Length of output: 3216


🏁 Script executed:

#!/bin/bash
# Post inline review comment on pytest.ini at the first changed line (diff position 4)
gh api \
  --method POST \
  -H "Accept: application/vnd.github+json" \
  /repos/RedHatQE/openshift-virtualization-tests/pulls/4751/comments \
  -f body='**Test Execution Plan**

- **Run smoke tests: False** _(No dependency path found from smoke tests to changed files — changes are marker-only additions with no fixture, utility, or conftest modifications)_

Verify marker filtering works correctly:
- `-m localnet` — Confirm all 7/8 localnet tests are collected across all modified files
- `-m not localnet` — Confirm no localnet tests are collected (as validated by PR author)

Individual files to spot-check collection:
- `tests/network/localnet/ipam/test_connectivity.py`
- `tests/network/localnet/migration_stuntime/test_migration_stuntime.py`
- `tests/network/localnet/test_default_bridge.py`
- `tests/network/localnet/test_jumbo_frames.py`
- `tests/network/localnet/test_ovs_bridge.py`' \
  -f commit_id="$(gh pr view 4751 --repo RedHatQE/openshift-virtualization-tests --json headRefOid --jq '.headRefOid')" \
  -f path='pytest.ini' \
  -F position=4

Length of output: 3487


