
⚡ Bolt: Avoid thread pool overhead for small rule updates#182

Closed
abhimehro wants to merge 4 commits into main from
bolt/optimize-thread-pool-overhead-10412449787616114386

Conversation

@abhimehro
Owner

This PR implements a performance optimization to avoid the overhead of creating a ThreadPoolExecutor when processing a single batch of rules. It also includes a fix for a flaky test in tests/test_plan_details.py.

Changes

  • Modified main.py: push_rules now checks total_batches; a single batch is processed synchronously, while multiple batches still go through the thread pool.
  • Added tests/test_push_rules_perf.py: A new test file to verify the optimization and prevent regression.
  • Modified tests/test_plan_details.py: Fixed import-related flakiness by moving import main inside test functions.
  • Updated .jules/bolt.md: added a journal entry about thread pool overhead.
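
The dispatch described above can be sketched as follows. This is a minimal illustration, not the real implementation: `process_batch` here is a hypothetical stand-in for the batch-posting helper in `main.py`, and the progress-bar and logging plumbing is omitted.

```python
import concurrent.futures

# Hypothetical stand-in for the real batch-posting helper in main.py.
def process_batch(index, batch):
    return {f"rule-{index}-{i}" for i in range(len(batch))}

def push_batches(batches, max_workers=3):
    """Run a single batch inline; fan out to a thread pool only for several."""
    results = set()
    total_batches = len(batches)
    if total_batches == 1:
        # Single batch: skip executor creation entirely.
        results.update(process_batch(1, batches[0]))
    else:
        with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
            futures = [
                executor.submit(process_batch, i, batch)
                for i, batch in enumerate(batches, 1)
            ]
            for future in concurrent.futures.as_completed(futures):
                results.update(future.result())
    return results
```

The design choice is simply that pool construction and teardown is pure overhead when there is nothing to run in parallel, so the single-batch case takes the inline path.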

Performance

  • Eliminates thread creation overhead for folders with < 500 rules (default batch size).
  • Measuring this directly in CI is difficult, but it conceptually removes roughly 0.5-2 ms of thread-creation overhead per small folder on modern CPUs and reduces memory churn.

Verification

  • Run uv run pytest tests/test_push_rules_perf.py to verify the optimization logic.
  • Run uv run pytest to verify all tests pass.

PR created automatically by Jules for task 10412449787616114386 started by @abhimehro

This commit introduces a performance optimization to `push_rules` by bypassing `ThreadPoolExecutor` when there is only a single batch of rules to process.

- **What:** Checks if `total_batches == 1` and processes the batch synchronously.
- **Why:** Creating a thread pool incurs overhead (thread creation, context switching) which exceeds the benefit of parallelization for a single task. This is common for smaller folders.
- **Impact:** Reduces CPU usage and latency for small updates.
- **Verification:** Added `tests/test_push_rules_perf.py` which verifies that `ThreadPoolExecutor` is not created for single-batch updates but is used for multi-batch updates.

Also fixed a flaky test `tests/test_plan_details.py` by moving imports inside test functions to avoid module state pollution from other tests.
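
To illustrate why a module-level `import main` can be order-dependent, here is a toy reproduction. Both `toy_main` and the `FORCE_COLOR` variable are illustrative stand-ins, not the real module or its configuration: the point is only that a flag computed at import time reflects whatever state existed when the module was first imported.

```python
import os
import sys
import types

# Toy stand-in for main.py: a flag computed once, at import time,
# the way main.py computes its color setting.
MODULE_SRC = "import os\nUSE_COLORS = os.environ.get('FORCE_COLOR') == '1'\n"

def fresh_import(name, src, env):
    """Re-import a module so its import-time logic sees the current env."""
    os.environ.update(env)
    sys.modules.pop(name, None)  # discard any stale cached copy
    mod = types.ModuleType(name)
    exec(compile(src, f"{name}.py", "exec"), mod.__dict__)
    sys.modules[name] = mod
    return mod

m1 = fresh_import("toy_main", MODULE_SRC, {"FORCE_COLOR": "1"})
m2 = fresh_import("toy_main", MODULE_SRC, {"FORCE_COLOR": "0"})
```

With a plain module-level import, only the first import's environment would ever be observed; importing inside each test (or forcing a fresh import as above) keeps tests independent of execution order.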

Co-authored-by: abhimehro <84992105+abhimehro@users.noreply.github.com>
Copilot AI review requested due to automatic review settings February 9, 2026 14:57
@google-labs-jules

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@trunk-io

trunk-io bot commented Feb 9, 2026

Merging to main in this repository is managed by Trunk.

  • To merge this pull request, check the box to the left or comment /trunk merge below.

@github-actions

github-actions bot commented Feb 9, 2026

👋 Development Partner is reviewing this PR. Will provide feedback shortly.

@github-actions github-actions bot added the python label Feb 9, 2026
@gemini-code-assist

Summary of Changes

Hello @abhimehro, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a performance enhancement to the rule pushing mechanism by intelligently managing thread pool usage. For small batches of rules, the system now avoids the overhead of creating a ThreadPoolExecutor, leading to minor but consistent performance gains. Additionally, it resolves a flakiness issue in existing tests and adds new tests to ensure the performance optimization behaves as expected and prevents future regressions.

Highlights

  • Performance Optimization: The push_rules function in main.py now intelligently bypasses the ThreadPoolExecutor for single batches of rules, reducing overhead for small updates and improving efficiency.
  • Flaky Test Fix: Addressed flakiness in tests/test_plan_details.py by localizing the import main statement within individual test functions, ensuring more reliable test execution.
  • New Performance Tests: Introduced tests/test_push_rules_perf.py to specifically validate the thread pool optimization logic, ensuring it behaves as expected and preventing future regressions.
  • Documentation Update: A new entry was added to .jules/bolt.md detailing the learning and action related to thread pool overhead for small workloads, serving as a valuable knowledge capture.
Changelog
  • .jules/bolt.md
    • Added a new journal entry for "Thread Pool Overhead on Small Batches" dated 2026-01-28, explaining the learning and action taken regarding thread pool overhead for small tasks.
  • main.py
    • Modified the push_rules function to conditionally use concurrent.futures.ThreadPoolExecutor. If total_batches is 1, the batch is processed synchronously to avoid thread pool creation overhead. Otherwise, the existing thread pool logic is used.
  • tests/test_plan_details.py
    • Moved the import main statement from the module level into individual test functions (test_print_plan_details_no_colors, test_print_plan_details_empty_folders, test_print_plan_details_with_colors) to resolve import-related flakiness.
  • tests/test_push_rules_perf.py
    • Added a new test file containing TestPushRulesPerf class.
    • Added test_single_batch_avoids_thread_pool to verify that ThreadPoolExecutor is not called when processing a single batch of rules.
    • Added test_multi_batch_uses_thread_pool to confirm that ThreadPoolExecutor is utilized when processing multiple batches of rules.
Activity
  • The pull request was automatically created by Jules for task 10412449787616114386, initiated by @abhimehro.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

**Learning:** Re-validating resource properties (like DNS/IP) when using *cached content* is pure overhead. If the content is served from memory (proven safe at fetch time), checking the *current* state of the source is disconnected from the data being used.
**Action:** When using a multi-stage pipeline (Warmup -> Process), ensure validation state persists alongside the data cache. Avoid clearing validation caches between stages if the data cache is not also cleared.

## 2026-01-28 - [Thread Pool Overhead on Small Batches]

Check notice

Code scanning / Remark-lint (reported by Codacy)

Warn when references to undefined definitions are found. Note

[no-undefined-references] Found reference to undefined definition

Check notice

Code scanning / Remark-lint (reported by Codacy)

Warn when shortcut reference links are used. Note

[no-shortcut-reference-link] Use the trailing [] on reference links

def test_print_plan_details_no_colors(capsys):
    """Test print_plan_details output when colors are disabled."""
    import main

Check warning

Code scanning / Prospector (reported by Codacy)

Import outside toplevel (main) (import-outside-toplevel) Warning test

Import outside toplevel (main) (import-outside-toplevel)

def test_print_plan_details_empty_folders(capsys):
    """Test print_plan_details with no folders."""
    import main

Check warning

Code scanning / Prospector (reported by Codacy)

Import outside toplevel (main) (import-outside-toplevel) Warning test

Import outside toplevel (main) (import-outside-toplevel)

def test_print_plan_details_with_colors(capsys):
    """Test print_plan_details output when colors are enabled."""
    import main

Check warning

Code scanning / Prospector (reported by Codacy)

Import outside toplevel (main) (import-outside-toplevel) Warning test

Import outside toplevel (main) (import-outside-toplevel)
@@ -0,0 +1,89 @@
import unittest
from unittest.mock import MagicMock, patch
import concurrent.futures

Check warning

Code scanning / Prospector (reported by Codacy)

Unused import concurrent.futures (unused-import) Warning test

Unused import concurrent.futures (unused-import)
import unittest
from unittest.mock import MagicMock, patch
import concurrent.futures
import sys

Check warning

Code scanning / Prospector (reported by Codacy)

Unused import sys (unused-import) Warning test

Unused import sys (unused-import)
from unittest.mock import MagicMock, patch
import concurrent.futures
import sys
import os

Check warning

Code scanning / Prospector (reported by Codacy)

Unused import os (unused-import) Warning test

Unused import os (unused-import)

@@ -0,0 +1,89 @@
import unittest

Check warning

Code scanning / Pylintpython3 (reported by Codacy)

Missing module docstring Warning test

Missing module docstring
import sys
import os

# pytest adds root to sys.path usually, but to be safe we can use relative import or assume root is in path

Check warning

Code scanning / Pylintpython3 (reported by Codacy)

Line too long (107/100) Warning test

Line too long (107/100)

import main

class TestPushRulesPerf(unittest.TestCase):

Check warning

Code scanning / Pylintpython3 (reported by Codacy)

Missing class docstring Warning test

Missing class docstring
# Assert: ThreadPoolExecutor should NOT be called
mock_executor.assert_not_called()

# Verify functionality: client.post should be called once (via process_batch inside loop or direct)

Check warning

Code scanning / Pylintpython3 (reported by Codacy)

Line too long (107/100) Warning test

Line too long (107/100)

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a valuable performance optimization by avoiding thread pool overhead for single-batch rule updates. The addition of performance tests is excellent for preventing regressions, and the fix for the flaky test improves the reliability of the test suite. My main feedback is a suggestion to refactor a small amount of duplicated code that was introduced with the optimization, to improve long-term maintainability.

Comment on lines +1154 to +1182
if total_batches == 1:
    # Avoid thread pool overhead for single batch (very common for small folders)
    result = process_batch(1, batches[0])
    if result:
        successful_batches += 1
        existing_rules.update(result)
    render_progress_bar(
        successful_batches,
        total_batches,
        f"Folder {sanitize_for_log(folder_name)}",
    )
else:
    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
        futures = {
            executor.submit(process_batch, i, batch): i
            for i, batch in enumerate(batches, 1)
        }

        for future in concurrent.futures.as_completed(futures):
            result = future.result()
            if result:
                successful_batches += 1
                existing_rules.update(result)

            render_progress_bar(
                successful_batches,
                total_batches,
                f"Folder {sanitize_for_log(folder_name)}",
            )


medium

While this optimization is a great improvement, it has introduced some code duplication. The logic for processing a batch result (updating counters, updating existing_rules, and rendering the progress bar) is now repeated in both the single-batch if block and the multi-batch else block.

To improve maintainability and adhere to the DRY (Don't Repeat Yourself) principle, you can extract this common logic into a nested helper function. This makes the code cleaner and easier to modify in the future.

    def _handle_batch_result(result: Optional[List[str]]):
        nonlocal successful_batches
        if result:
            successful_batches += 1
            existing_rules.update(result)
        render_progress_bar(
            successful_batches,
            total_batches,
            f"Folder {sanitize_for_log(folder_name)}",
        )

    if total_batches == 1:
        # Avoid thread pool overhead for single batch (very common for small folders)
        result = process_batch(1, batches[0])
        _handle_batch_result(result)
    else:
        with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
            futures = {
                executor.submit(process_batch, i, batch): i
                for i, batch in enumerate(batches, 1)
            }

            for future in concurrent.futures.as_completed(futures):
                result = future.result()
                _handle_batch_result(result)


Copilot AI left a comment


Pull request overview

This PR optimizes push_rules() to avoid ThreadPoolExecutor overhead when only a single rules batch is being pushed, and updates tests/documentation accordingly.

Changes:

  • main.push_rules() now runs synchronously when total_batches == 1, otherwise uses a thread pool.
  • Adds a regression test file to validate the single-batch optimization behavior.
  • Fixes tests/test_plan_details.py flakiness by moving import main inside test functions; adds a Bolt journal entry.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.

File Description
main.py Avoids thread pool creation for single-batch rule pushes.
tests/test_push_rules_perf.py Adds tests intended to assert the single-batch fast path vs multi-batch thread pool path.
tests/test_plan_details.py Moves import main into tests to reduce import-related flakiness.
.jules/bolt.md Documents the thread pool overhead learning/decision.

Comment on lines +7 to +12
# pytest adds root to sys.path usually, but to be safe we can use relative import or assume root is in path
# if running via 'uv run pytest', root is in path.

import main

class TestPushRulesPerf(unittest.TestCase):

Copilot AI Feb 9, 2026


This test imports main at module import time. Since main.py executes import-time logic (e.g., load_dotenv() and computing USE_COLORS from isatty()), this can make the test order-dependent and reintroduce the kind of import-related flakiness this PR fixes elsewhere. Consider moving import main into each test (or setUp) and setting needed env/tty state before import/reload.
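
One hedged sketch of that suggestion is to re-import the module under test in `setUp`, so import-time state is established fresh for each test. Since the real `main` module isn't available here, the stdlib `json` module stands in for it; only the reload pattern itself is the point.

```python
import importlib
import unittest

class ReloadPerTest(unittest.TestCase):
    """Pattern sketch: re-import the module under test per test so
    import-time state is fresh. `json` stands in for the real `main`."""

    def setUp(self):
        import json  # stand-in for `import main`
        self.mod = importlib.reload(json)  # rerun import-time logic

    def test_module_is_usable(self):
        self.assertEqual(self.mod.loads("[1, 2]"), [1, 2])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ReloadPerTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Any environment or tty state the module reads at import time would be arranged before the `importlib.reload` call in `setUp`.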

Copilot uses AI. Check for mistakes.
Comment on lines +68 to +75
# Mock submit to return a future
mock_future = MagicMock()
mock_future.result.return_value = ["some_rules"]
mock_executor_instance.submit.return_value = mock_future

# Mock as_completed to return 2 futures (2 batches)
mock_as_completed.return_value = [mock_future, mock_future]


Copilot AI Feb 9, 2026


test_multi_batch_uses_thread_pool returns the same mock_future from every submit() call. Because push_rules stores futures in a dict keyed by the future object, reusing the same future collapses multiple batches into one entry and no longer models real execution. Use distinct futures (e.g., submit.side_effect = [future1, future2]) and optionally assert submit was called total_batches times.
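
The `side_effect` fix suggested above can be sketched as follows; the `"process_batch"` argument and the result values are placeholders for illustration, not the real call.

```python
from unittest.mock import MagicMock

# Give submit() a distinct future per batch via side_effect, so a
# dict keyed by future object keeps one entry per batch.
future_a, future_b = MagicMock(), MagicMock()
future_a.result.return_value = ["rules-a"]
future_b.result.return_value = ["rules-b"]

executor = MagicMock()
executor.submit.side_effect = [future_a, future_b]

# Mirrors how push_rules builds its futures dict: one distinct key per batch.
futures = {executor.submit("process_batch", i): i for i in range(2)}
```

With a single reused future, the dict would collapse to one entry and the multi-batch path would silently be under-exercised; distinct futures also let the test assert `submit` was called `total_batches` times.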

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings February 9, 2026 17:08
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@github-actions

github-actions bot commented Feb 9, 2026

👋 Development Partner is reviewing this PR. Will provide feedback shortly.

1 similar comment


Copilot AI left a comment


Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated no new comments.

Copilot AI added a commit that referenced this pull request Feb 9, 2026
… imports, and improve test coverage

Co-authored-by: abhimehro <84992105+abhimehro@users.noreply.github.com>
This commit adds a newline to README.md to trigger a new CI run, as the previous run failed due to GitHub Actions infrastructure issues (500/502 errors during git fetch).

Co-authored-by: abhimehro <84992105+abhimehro@users.noreply.github.com>
@github-actions

github-actions bot commented Feb 9, 2026

👋 Development Partner is reviewing this PR. Will provide feedback shortly.

@abhimehro
Owner Author

Closing: superseded by #192 (latest thread pool overhead optimization PR).

@abhimehro abhimehro closed this Feb 13, 2026
@abhimehro abhimehro deleted the bolt/optimize-thread-pool-overhead-10412449787616114386 branch February 13, 2026 02:09
@google-labs-jules

Closing: superseded by #192 (latest thread pool overhead optimization PR).

Understood. Acknowledging that this work is now obsolete and stopping work on this task.
