4 changes: 4 additions & 0 deletions .jules/bolt.md
@@ -39,3 +39,7 @@
## 2026-01-27 - Redundant Validation for Cached Data
**Learning:** Re-validating resource properties (like DNS/IP) when using *cached content* is pure overhead. If the content is served from memory (proven safe at fetch time), checking the *current* state of the source is disconnected from the data being used.
**Action:** When using a multi-stage pipeline (Warmup -> Process), ensure validation state persists alongside the data cache. Avoid clearing validation caches between stages if the data cache is not also cleared.

## 2026-01-28 - Thread Pool Overhead on Small Batches

**Learning:** Creating a `ThreadPoolExecutor` has measurable overhead (thread creation, context switching). For small tasks (e.g., a single batch of API requests), the overhead of the thread pool can exceed the benefit of parallelization, especially when the task itself is just a single synchronous I/O call.
**Action:** Always check if the workload justifies the overhead of a thread pool. For single-item or very small workloads, bypass the pool and execute synchronously.
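The bypass pattern this note describes can be sketched as follows (illustrative only; `run_batches` and `process` are hypothetical names, not functions from this project):

```python
import concurrent.futures

def run_batches(batches, process, max_workers=3):
    """Run process(i, batch) over batches, bypassing the pool for a single batch."""
    results = []
    if len(batches) == 1:
        # A pool adds thread startup/teardown cost with no parallelism to gain.
        results.append(process(1, batches[0]))
        return results
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(process, i, b) for i, b in enumerate(batches, 1)]
        for future in concurrent.futures.as_completed(futures):
            results.append(future.result())
    return results
```

The single-batch branch stays on the calling thread, so a synchronous I/O call runs with no executor setup at all.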
44 changes: 28 additions & 16 deletions main.py
@@ -1151,23 +1151,35 @@ def process_batch(batch_idx: int, batch_data: List[str]) -> Optional[List[str]]:
 
     # Optimization 3: Parallelize batch processing
     # Using 3 workers to speed up writes without hitting aggressive rate limits.
-    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
-        futures = {
-            executor.submit(process_batch, i, batch): i
-            for i, batch in enumerate(batches, 1)
-        }
+    if total_batches == 1:
+        # Avoid thread pool overhead for single batch (very common for small folders)
+        result = process_batch(1, batches[0])
+        if result:
+            successful_batches += 1
+            existing_rules.update(result)
+        render_progress_bar(
+            successful_batches,
+            total_batches,
+            f"Folder {sanitize_for_log(folder_name)}",
+        )
+    else:
+        with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
+            futures = {
+                executor.submit(process_batch, i, batch): i
+                for i, batch in enumerate(batches, 1)
+            }
 
-        for future in concurrent.futures.as_completed(futures):
-            result = future.result()
-            if result:
-                successful_batches += 1
-                existing_rules.update(result)
-
-            render_progress_bar(
-                successful_batches,
-                total_batches,
-                f"Folder {sanitize_for_log(folder_name)}",
-            )
+            for future in concurrent.futures.as_completed(futures):
+                result = future.result()
+                if result:
+                    successful_batches += 1
+                    existing_rules.update(result)
+
+                render_progress_bar(
+                    successful_batches,
+                    total_batches,
+                    f"Folder {sanitize_for_log(folder_name)}",
+                )
Comment on lines +1154 to +1182

medium

While this optimization is a great improvement, it has introduced some code duplication. The logic for processing a batch result (updating counters, updating existing_rules, and rendering the progress bar) is now repeated in both the single-batch if block and the multi-batch else block.

To improve maintainability and adhere to the DRY (Don't Repeat Yourself) principle, you can extract this common logic into a nested helper function. This makes the code cleaner and easier to modify in the future.

    def _handle_batch_result(result: Optional[List[str]]):
        nonlocal successful_batches
        if result:
            successful_batches += 1
            existing_rules.update(result)
        render_progress_bar(
            successful_batches,
            total_batches,
            f"Folder {sanitize_for_log(folder_name)}",
        )

    if total_batches == 1:
        # Avoid thread pool overhead for single batch (very common for small folders)
        result = process_batch(1, batches[0])
        _handle_batch_result(result)
    else:
        with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
            futures = {
                executor.submit(process_batch, i, batch): i
                for i, batch in enumerate(batches, 1)
            }

            for future in concurrent.futures.as_completed(futures):
                result = future.result()
                _handle_batch_result(result)


if successful_batches == total_batches:
if USE_COLORS:
5 changes: 3 additions & 2 deletions tests/test_plan_details.py
@@ -2,11 +2,10 @@
 
 from unittest.mock import patch
 
-import main
-
 
 def test_print_plan_details_no_colors(capsys):
     """Test print_plan_details output when colors are disabled."""
+    import main
     with patch("main.USE_COLORS", False):
         plan_entry = {
             "profile": "test_profile",
@@ -29,6 +28,7 @@
 
 def test_print_plan_details_empty_folders(capsys):
     """Test print_plan_details with no folders."""
+    import main
     with patch("main.USE_COLORS", False):
         plan_entry = {"profile": "test_profile", "folders": []}
         main.print_plan_details(plan_entry)
@@ -42,6 +42,7 @@
 
 def test_print_plan_details_with_colors(capsys):
     """Test print_plan_details output when colors are enabled."""
+    import main
     with patch("main.USE_COLORS", True):
         plan_entry = {
             "profile": "test_profile",
89 changes: 89 additions & 0 deletions tests/test_push_rules_perf.py
@@ -0,0 +1,89 @@
import unittest
from unittest.mock import MagicMock, patch
import concurrent.futures
import sys
import os

# pytest normally adds the project root to sys.path; when running via
# 'uv run pytest', root is already on it.

import main


class TestPushRulesPerf(unittest.TestCase):

Comment on lines +7 to +12

Copilot AI Feb 9, 2026

This test imports main at module import time. Since main.py executes import-time logic (e.g., load_dotenv() and computing USE_COLORS from isatty()), this can make the test order-dependent and reintroduce the kind of import-related flakiness this PR fixes elsewhere. Consider moving import main into each test (or setUp) and setting needed env/tty state before import/reload.
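The deferred, re-executed import the comment suggests can be wrapped in a small helper along these lines (a sketch, not part of this PR; the helper name `fresh_import` is hypothetical):

```python
import importlib
import sys

def fresh_import(module_name):
    """Import module_name with its top-level (import-time) code re-executed.

    Popping the cached entry from sys.modules forces Python to re-run the
    module's top-level statements (e.g. load_dotenv() or isatty() checks),
    so a test can set env/tty state first and get a module that reflects it.
    """
    sys.modules.pop(module_name, None)
    return importlib.import_module(module_name)
```

A test would then arrange its environment, call `fresh_import("main")`, and make assertions against the freshly initialized module instead of a stale cached copy.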
    def setUp(self):
        self.mock_client = MagicMock()
        self.mock_client.post.return_value.status_code = 200
        self.mock_client.post.return_value.raise_for_status = MagicMock()

    @patch('concurrent.futures.as_completed')
    @patch('concurrent.futures.ThreadPoolExecutor')
    def test_single_batch_avoids_thread_pool(self, mock_executor, mock_as_completed):
        """Test that single batch pushes avoid creating a ThreadPoolExecutor."""
        # Setup: < 500 rules (BATCH_SIZE is 500)
        rules = [f"rule{i}.com" for i in range(100)]
        existing_rules = set()

        # Mock the executor context manager
        mock_executor_instance = mock_executor.return_value
        mock_executor_instance.__enter__.return_value = mock_executor_instance

        # Mock submit to return a future
        mock_future = MagicMock()
        mock_future.result.return_value = ["rule0.com"]  # Dummy result
        mock_executor_instance.submit.return_value = mock_future

        # Mock as_completed to return the future once (so the loop runs once)
        mock_as_completed.return_value = [mock_future]

        # Execute
        main.push_rules(
            profile_id="test_profile",
            folder_name="test_folder",
            folder_id="test_id",
            do=0,
            status=1,
            hostnames=rules,
            existing_rules=existing_rules,
            client=self.mock_client
        )

        # Assert: ThreadPoolExecutor should NOT be called
        mock_executor.assert_not_called()

        # Verify functionality: client.post should be called exactly once
        # (the single batch is processed directly, without the pool)
        self.assertEqual(self.mock_client.post.call_count, 1)

    @patch('concurrent.futures.as_completed')
    @patch('concurrent.futures.ThreadPoolExecutor')
    def test_multi_batch_uses_thread_pool(self, mock_executor, mock_as_completed):
        """Test that multiple batch pushes DO create a ThreadPoolExecutor."""
        # Setup: > 500 rules
        rules = [f"rule{i}.com" for i in range(1000)]
        existing_rules = set()

        # Mock the executor context manager
        mock_executor_instance = mock_executor.return_value
        mock_executor_instance.__enter__.return_value = mock_executor_instance

        # Mock submit to return a future
        mock_future = MagicMock()
        mock_future.result.return_value = ["some_rules"]
        mock_executor_instance.submit.return_value = mock_future

        # Mock as_completed to return 2 futures (2 batches)
        mock_as_completed.return_value = [mock_future, mock_future]

Comment on lines +68 to +75

Copilot AI Feb 9, 2026

test_multi_batch_uses_thread_pool returns the same mock_future from every submit() call. Because push_rules stores futures in a dict keyed by the future object, reusing the same future collapses multiple batches into one entry and no longer models real execution. Use distinct futures (e.g., submit.side_effect = [future1, future2]) and optionally assert submit was called total_batches times.
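The fix the comment suggests can be sketched like this (a standalone illustration with MagicMock; the `make_futures` helper is hypothetical):

```python
from unittest.mock import MagicMock

def make_futures(results):
    """Build one distinct mock future per expected batch result."""
    futures = []
    for r in results:
        f = MagicMock()
        f.result.return_value = r
        futures.append(f)
    return futures

# Each submit() call now hands back a different future, so a dict keyed by
# future objects keeps one entry per batch, matching real executor behaviour.
executor = MagicMock()
executor.submit.side_effect = make_futures([["rule_a"], ["rule_b"]])
```

The test could then also assert `executor.submit.call_count == total_batches` to pin down the number of batches actually dispatched.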
        # Execute
        main.push_rules(
            profile_id="test_profile",
            folder_name="test_folder",
            folder_id="test_id",
            do=0,
            status=1,
            hostnames=rules,
            existing_rules=existing_rules,
            client=self.mock_client
        )

        # Assert: ThreadPoolExecutor SHOULD be called
        mock_executor.assert_called()