PR #10
Conversation
… mode handling

Changes:
- Add [tool.ruff] config with target-version py310 and a curated rule set (E/F/W, I, UP, B, SIM, RUF) so future drift is caught in CI lint.
- PEP 585 sweep across all node modules: drop legacy typing.Dict / List / Tuple / Optional in favor of dict / list / tuple / `X | None`. Annotate class-level mutable defaults as ClassVar to satisfy RUF012.
- PromptCombinerNode: replace the magic-string mode chain with a CombineMode StrEnum + match statement. The dropdown choices in INPUT_TYPES are now derived from the same Literal alias used in the function signature, so the UI and the type contract can't drift apart.
- Smoke tests for PromptCombinerNode (14 tests) covering enum mapping, all three modes, edge cases, and the unknown-mode error path. Brings combiner coverage from 28% to 96%.
- Tidy preexisting issues surfaced by the new lint rules: B904 except chaining in style_presets, RUF013 implicit Optional, RUF059 unused unpack, SIM117 nested-with consolidation in tests.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
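The Literal-driven dropdown described in this commit can be sketched roughly as below. The alias and class shape here are illustrative, not necessarily the names used in the PR; the point is that `get_args` makes the UI choices and the type contract share one source of truth.

```python
from typing import Literal, get_args

# Hypothetical alias name; the PR derives INPUT_TYPES choices from a Literal like this.
CombineModeLiteral = Literal["blend", "concat", "weighted_average"]


class PromptCombinerNode:
    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI dropdowns take a list of choices; deriving it from the Literal
        # means adding a mode in one place updates both the UI and the signature.
        return {"required": {"mode": (list(get_args(CombineModeLiteral)),)}}
```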
…ests)
Direct response to the v1.1.6 packaging gap, where the prompt-chain nodes
(combiner, refiner, negative) were authored under nodes/ but never wired
into NODE_CLASS_MAPPINGS, so they silently failed to ship.
- __init__.py now raises RuntimeError at import time if NODE_CLASS_MAPPINGS
and NODE_DISPLAY_NAME_MAPPINGS drift apart. ComfyUI startup logs surface
the mismatch instead of dropping the node from the menu silently.
- New tests/unit/test_node_registration.py contract suite (8 tests):
* Expected 5 keys are registered.
* Display names cover every class.
* Each registered class satisfies the ComfyUI node interface
(INPUT_TYPES classmethod, RETURN_TYPES, FUNCTION, CATEGORY,
and the FUNCTION attribute resolves to a callable method).
* Every nodes/*_node.py module on disk is reachable from the mapping —
this is the regression guard that would have caught v1.1.6.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
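A minimal sketch of the import-time drift check this commit describes; the exact message and structure in `__init__.py` may differ.

```python
def check_mapping_drift(class_mappings: dict, display_mappings: dict) -> None:
    # Fail loudly at import time instead of letting ComfyUI drop nodes
    # from the menu silently.
    missing = class_mappings.keys() - display_mappings.keys()
    orphaned = display_mappings.keys() - class_mappings.keys()
    if missing or orphaned:
        raise RuntimeError(
            "NODE_CLASS_MAPPINGS / NODE_DISPLAY_NAME_MAPPINGS drift: "
            f"missing display names {sorted(missing)}, "
            f"orphaned display names {sorted(orphaned)}"
        )
```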
Direct response to the v1.1.x → v1.3.0 silent-skip pattern, where pyproject.toml was bumped four times without ever pushing the matching git tag, so .github/workflows/publish.yml never fired and the registry stayed on v1.1.6.

- scripts/check_version_tag.py: stdlib-only (tomllib) script that fails if pyproject.toml's version has no matching git tag locally.
- .pre-commit-config.yaml: wires ruff (check + format) and the local version-tag-guard hook. The guard fires only on pyproject.toml edits.
- test.yml: new verify-tag-exists job that fails the main-branch build when pyproject.toml's version has no matching tag in the repo. Belt and suspenders for the pre-commit hook.
- test.yml validate job: same comfy-cli telemetry fix already applied to publish.yml; suppresses the interactive opt-in prompt in CI.
- pytest.ini + test.yml: lower the coverage gate from 70 (never enforced; it failed on every main push since the chain nodes were added) to 50 (the actual current floor). A comment marks it as a starting point for future test growth.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Hardening release. No public API changes — same 5 NODE_CLASS_MAPPINGS keys as v1.3.0. The bump exists so the registry can serve a build that includes the version-tag guard, the registration contract tests, the mapping drift self-check, the modernized typing, and the CI fixes. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
📝 Walkthrough

This PR modernizes type annotations across the codebase to use PEP 604 built-in generics (…)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Code Review
This pull request modernizes the codebase by updating type hints to PEP 585/604 standards, adding registration safety guards, and implementing CI version drift prevention. Key changes include refactoring the OllamaClient to use a shared module-level cache, improving prompt cleaning regex to support DeepSeek and Markdown formats, and fixing seed derivation in the PromptRefinerNode for multi-pass operations. Additionally, a new pre-commit hook and script were added to ensure git tags match version bumps in pyproject.toml. Feedback includes a suggestion to return a copy of the shared model cache to prevent accidental mutation and a recommendation to specify UTF-8 encoding when opening template files.
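The shared-cache-plus-copy idea from that feedback can be sketched as follows. The class shape, TTL, and model list are illustrative only; the real OllamaClient talks to the Ollama HTTP API.

```python
import time

# Module-level cache shared by every client instance, keyed by host.
_SHARED_MODEL_CACHE: dict[str, tuple[float, list[str]]] = {}
_CACHE_TTL_S = 30.0


class OllamaClient:
    def __init__(self, host: str = "http://localhost:11434"):
        self.host = host

    def _fetch_models(self) -> list[str]:
        # Stand-in for the real request to the Ollama /api/tags endpoint.
        return ["qwen3:8b", "my-lora-v1"]

    def list_models(self) -> list[str]:
        entry = _SHARED_MODEL_CACHE.get(self.host)
        now = time.monotonic()
        if entry is None or now - entry[0] > _CACHE_TTL_S:
            _SHARED_MODEL_CACHE[self.host] = (now, self._fetch_models())
        # Return a copy so callers can't mutate the shared cache in place.
        return list(_SHARED_MODEL_CACHE[self.host][1])
```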
```diff
 if YAML_AVAILABLE and template_path.exists():
     try:
-        with open(template_path, "r") as f:
+        with open(template_path) as f:
```
Code Review Roast 🔥

Verdict: No Issues Found | Recommendation: Merge

Oh wait, this PR is actually clean. I had my flamethrower warmed up and everything.

Incremental Review (since v1.3.1)

📊 Overall: Ready to merge.

Files Reviewed (19 files): all 19 files from the previous review.

Reviewed by minimax-m2.5 (incremental review) · 1,409,635 tokens
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/unit/test_ollama_client.py (1)
Lines 40-43: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Tighten the LoRA ordering assertion.
The current check only proves the first two items are from the LoRA set; it will still pass if their order regresses. Compare the prefix against the expected sorted order so the test actually guards the prioritization logic.
✅ Suggested assertion update
```diff
-assert models[0] in ("my-lora-v1", "limbicnation-style")
-assert models[1] in ("my-lora-v1", "limbicnation-style")
+assert models[:2] == sorted(["my-lora-v1", "limbicnation-style"])
 assert models[2] == "qwen3:8b"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_ollama_client.py` around lines 40 - 43, Tighten the LoRA ordering assertion in the test by asserting the first two elements of the models list equal the expected sorted LoRA prefix instead of just checking membership; update the checks that currently use models[0] and models[1] to a single assertion like assert models[:2] == ["limbicnation-style", "my-lora-v1"] (keep the existing assert models[2] == "qwen3:8b" intact) so the test enforces the exact prioritized ordering of the LoRA models.

nodes/prompt_combiner_node.py (1)
Lines 208-219: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Avoid leaving a blocked worker thread behind on timeout.
If `next(it)` stalls, this branch returns `None` while the daemon thread keeps running with no cancellation path. Repeated timeouts can accumulate stuck threads/sockets under load.
Verify each finding against the current code and only fix it if needed. In `@nodes/prompt_combiner_node.py` around lines 208 - 219, The current _weighted_average path can return early when total_weight == 0 while a worker iterator (it) or its daemon thread started earlier may still be running; wrap the code that consumes next(it) in a try/finally (or context) and ensure you always call getattr(it, "close", None)() (and signal/join the worker thread if you have a thread object) before returning so the worker/iterator is cancelled and joined on all branches; update _weighted_average to close the iterator and stop/join any associated thread when returning the ", ".join result to prevent leaking blocked threads/sockets.
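The underlying pattern in that prompt, closing the iterator on every exit path so its finally blocks run, can be shown in isolation. The names below are illustrative, not the node's actual code.

```python
cleanup_log: list[str] = []


def streaming():
    # Stands in for a generator holding a socket or worker thread.
    try:
        yield "chunk-1"
        yield "chunk-2"
    finally:
        cleanup_log.append("closed")


def consume_first(it):
    # Take one item, then always close: generator.close() raises GeneratorExit
    # inside the generator, which runs its finally block and frees resources.
    try:
        return next(it, None)
    finally:
        close = getattr(it, "close", None)
        if close is not None:
            close()
```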
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@nodes/prompt_combiner_node.py`:
- Around line 167-177: The code returns the single prompt before validating the
provided mode, so invalid CombineMode values are ignored when len(prompts) == 1;
move the CombineMode(mode) validation (the try/except that references
CombineMode and mode) to run before the single-prompt fast path (before checking
if len(prompts) == 1) so an invalid mode returns the same error string as in the
existing except branch instead of silently accepting it.
- Around line 6-15: The CombineMode enum uses StrEnum which is unavailable in
Python 3.10; replace its base with a Python‑3.10 compatible pattern by importing
Enum and making CombineMode inherit from str and Enum (e.g., class
CombineMode(str, Enum)) so members remain string values; update the import
(remove StrEnum, add Enum) and keep the existing member names BLEND, CONCAT,
WEIGHTED_AVERAGE to preserve behavior.
In `@scripts/check_version_tag.py`:
- Around line 14-17: The pre-commit hook runs with language: system so the
fallback import in scripts/check_version_tag.py (try import tomllib except
import tomli as tomllib) can fail on older Pythons unless tomli is provided;
update the version-tag-guard hook in .pre-commit-config.yaml to include
additional_dependencies: ["tomli>=1.1.0"] so the tomli package is installed for
the hook and the import in scripts/check_version_tag.py succeeds.
In `@tests/unit/test_node_registration.py`:
- Around line 19-27: The sys.path insertion should use the repository root so
Python can locate the local "nodes" package: replace the insertion of
str(_NODES_PKG) with str(_REPO_ROOT) (and update the membership check
accordingly) so the module-level setup that imports nodes (symbols: _REPO_ROOT,
_NODES_PKG, sys.path, import nodes) reliably loads the local package.
In `@tests/unit/test_prompt_refiner.py`:
- Around line 11-60: Run ruff format on this test file to fix formatting issues
flagged by CI: reformat tests/unit/test_prompt_refiner.py (which contains tests
test_per_pass_seed_increments and test_random_seed_none_for_all_passes that
exercise PromptRefinerNode.refine and patching of
OllamaClient.generate_streaming) by running `ruff format` (or `ruff format
tests/unit/test_prompt_refiner.py`) and commit the rewritten file so ruff format
--check passes in CI.
---
Outside diff comments:
In `@nodes/prompt_combiner_node.py`:
- Around line 208-219: The current _weighted_average path can return early when
total_weight == 0 while a worker iterator (it) or its daemon thread started
earlier may still be running; wrap the code that consumes next(it) in a
try/finally (or context) and ensure you always call getattr(it, "close", None)()
(and signal/join the worker thread if you have a thread object) before returning
so the worker/iterator is cancelled and joined on all branches; update
_weighted_average to close the iterator and stop/join any associated thread when
returning the ", ".join result to prevent leaking blocked threads/sockets.
In `@tests/unit/test_ollama_client.py`:
- Around line 40-43: Tighten the LoRA ordering assertion in the test by
asserting the first two elements of the models list equal the expected sorted
LoRA prefix instead of just checking membership; update the checks that
currently use models[0] and models[1] to a single assertion like assert
models[:2] == ["limbicnation-style", "my-lora-v1"] (keep the existing assert
models[2] == "qwen3:8b" intact) so the test enforces the exact prioritized
ordering of the LoRA models.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 12b52eaa-79d5-43a6-81ec-7db58c528a9d
📒 Files selected for processing (19)
- .github/workflows/test.yml
- .pre-commit-config.yaml
- __init__.py
- nodes/adapters/ollama_client.py
- nodes/negative_prompt_node.py
- nodes/prompt_combiner_node.py
- nodes/prompt_generator_node.py
- nodes/prompt_refiner_node.py
- nodes/style_applier_node.py
- pyproject.toml
- pytest.ini
- scripts/check_version_tag.py
- style_presets.py
- tests/unit/test_extract_final_prompt.py
- tests/unit/test_node_registration.py
- tests/unit/test_ollama_client.py
- tests/unit/test_prompt_combiner.py
- tests/unit/test_prompt_refiner.py
- tests/unit/test_style_presets.py
```python
from enum import StrEnum
from typing import Any, Literal, get_args


class CombineMode(StrEnum):
    """Supported strategies for combining multiple prompts."""

    BLEND = "blend"
    CONCAT = "concat"
    WEIGHTED_AVERAGE = "weighted_average"
```
🧩 Analysis chain
🏁 Script executed:
# First, locate and read the file to confirm its current state
find . -name "prompt_combiner_node.py" -type f
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 112
🏁 Script executed:
# Check Python version compatibility for StrEnum
python3 -c "import sys; print(f'Python {sys.version}')"
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 137
🏁 Script executed:
# Verify when StrEnum was introduced in Python
web_search
StrEnum Python version when introduced
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 177
🏁 Script executed:
cat -n nodes/prompt_combiner_node.py | head -50
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 1828
🌐 Web query:
StrEnum Python 3.10 compatibility when was it introduced
💡 Result:
StrEnum was introduced in Python 3.11 as part of the enum module. It is not available in Python 3.10 or earlier versions. In Python 3.10, developers often emulate it by subclassing both str and enum.Enum, but this approach has behavioral differences and breaking changes in 3.11 compared to the official StrEnum. Third-party backports like StrEnum on PyPI exist for pre-3.11 compatibility, but they are not drop-in replacements for the standard library version.
Citations:
- 1: https://docs.python.org/3.11/library/enum.html?highlight=strenum
- 2: https://docs.python.org/3/library/enum.html
- 3: https://strenum.readthedocs.io/en/stable
- 4: https://pypi.org/project/StrEnum/
- 5: https://docs.astral.sh/ruff/rules/replace-str-enum/
- 6: Inconsistent behaviour of Enums which are also strings in/before 3.10 python/cpython#135370
- 7: need StrEnum in enum.py python/cpython#85982
- 8: https://stackoverflow.com/questions/75040733/is-there-a-way-to-use-strenum-in-earlier-python-versions
🏁 Script executed:
# Find CI configuration files
find . -name "*.yml" -o -name "*.yaml" | head -20
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 212
🏁 Script executed:
# Check if there's a requirements file or pyproject.toml specifying Python version
cat pyproject.toml 2>/dev/null | grep -i python || echo "No pyproject.toml found"
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 235
🏁 Script executed:
# Check the full PromptCombinerNode class for structure compliance
sed -n '22,150p' nodes/prompt_combiner_node.py
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 4057
🏁 Script executed:
# Check if ruff is configured and what issues it finds
cat pyproject.toml | grep -A 10 "\[tool.ruff" || echo "No ruff config found"
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 408
🏁 Script executed:
# Check if there's a ruff.toml or similar
ls -la ruff.toml .ruff.toml 2>/dev/null || echo "No ruff.toml found"
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 99
🏁 Script executed:
# Verify the exact Python version requirement from the CI test workflow
cat .github/workflows/test.yml | grep -i "python-version" -A 2 -B 2
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 446
Replace StrEnum with a Python 3.10-compatible enum.
StrEnum is unavailable in Python 3.10, which is the project's minimum target version and the CI runtime. This blocks module import and test collection on the project's configured Python version.
🐛 Suggested compatibility fix
```diff
-from enum import StrEnum
+from enum import Enum
 @@
-class CombineMode(StrEnum):
+class CombineMode(str, Enum):
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
from enum import Enum
from typing import Any, Literal, get_args


class CombineMode(str, Enum):
    """Supported strategies for combining multiple prompts."""

    BLEND = "blend"
    CONCAT = "concat"
    WEIGHTED_AVERAGE = "weighted_average"
```
🧰 Tools
🪛 GitHub Actions: Test
[error] 6-6: Test collection failed with ImportError: cannot import name 'StrEnum' from 'enum' (/opt/hostedtoolcache/Python/3.10.20/x64/lib/python3.10/enum.py).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@nodes/prompt_combiner_node.py` around lines 6 - 15, The CombineMode enum uses
StrEnum which is unavailable in Python 3.10; replace its base with a Python‑3.10
compatible pattern by importing Enum and making CombineMode inherit from str and
Enum (e.g., class CombineMode(str, Enum)) so members remain string values;
update the import (remove StrEnum, add Enum) and keep the existing member names
BLEND, CONCAT, WEIGHTED_AVERAGE to preserve behavior.
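The suggested `str`/`Enum` mixin keeps the properties the node actually relies on, though it is not a perfect drop-in for StrEnum: on Python 3.11+ `str(member)` and f-string formatting differ between the two, so code that needs the raw string should prefer `.value`. A quick check of the behaviors that are stable across 3.10+:

```python
from enum import Enum


class CombineMode(str, Enum):
    # Members compare equal to their string values and support
    # value lookup: CombineMode("blend") is CombineMode.BLEND.
    BLEND = "blend"
    CONCAT = "concat"
    WEIGHTED_AVERAGE = "weighted_average"
```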
```python
try:
    import tomllib
except ModuleNotFoundError:  # pragma: no cover
    import tomli as tomllib  # type: ignore[no-redef]
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
rg -n --hidden --glob '!**/.git/**' --glob 'pyproject.toml' --glob 'poetry.lock' --glob 'uv.lock' --glob 'Pipfile.lock' --glob 'requirements*' --glob '*.in' '\btomli\b' .
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 181
🏁 Script executed:
#!/bin/bash
# Check for pre-commit configuration files
fd -e yaml -e yml -n '(\.pre-commit|pre-commit)' .
# Also check for the hooks configuration
if [ -f .pre-commit-hooks.yaml ]; then
echo "=== .pre-commit-hooks.yaml ==="
cat -n .pre-commit-hooks.yaml | head -50
fi
if [ -f .pre-commit-config.yaml ]; then
echo "=== .pre-commit-config.yaml ==="
cat -n .pre-commit-config.yaml | head -50
fi
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 898
Add additional_dependencies to the pre-commit hook to ensure tomli is available.
The hook is configured with language: system, which means it runs with the system Python interpreter and does NOT automatically include the dev dependencies from pyproject.toml. Even though tomli is declared as a dev dependency for Python <3.11, it will not be available unless explicitly added via additional_dependencies in the hook configuration. Add additional_dependencies: ["tomli>=1.1.0"] to the version-tag-guard hook in .pre-commit-config.yaml to ensure reproducibility on Python <3.11 systems.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/check_version_tag.py` around lines 14 - 17, The pre-commit hook runs
with language: system so the fallback import in scripts/check_version_tag.py
(try import tomllib except import tomli as tomllib) can fail on older Pythons
unless tomli is provided; update the version-tag-guard hook in
.pre-commit-config.yaml to include additional_dependencies: ["tomli>=1.1.0"] so
the tomli package is installed for the hook and the import in
scripts/check_version_tag.py succeeds.
```python
# Guarantee we import the local nodes/ package, not a system-wide ComfyUI nodes package.
_REPO_ROOT = Path(__file__).resolve().parents[2]
_NODES_PKG = _REPO_ROOT / "nodes"
if str(_NODES_PKG) not in sys.path:
    sys.path.insert(0, str(_NODES_PKG))


import nodes  # noqa: E402


assert nodes.__file__.startswith(str(_REPO_ROOT)), f"Imported wrong nodes package: {nodes.__file__}"
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
python - <<'PY'
from importlib.machinery import PathFinder
from pathlib import Path
repo_root = Path.cwd()
nodes_dir = repo_root / "nodes"
print("repo_root:", PathFinder.find_spec("nodes", [str(repo_root)]))
print("nodes_dir :", PathFinder.find_spec("nodes", [str(nodes_dir)]))
PY
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 316
🏁 Script executed:
head -30 tests/unit/test_node_registration.py
Repository: Limbicnation/ComfyUI-PromptGenerator
Length of output: 954
Use _REPO_ROOT instead of _NODES_PKG for sys.path insertion.
Inserting _NODES_PKG (the nodes/ directory itself) into sys.path does not reliably resolve the nodes package. Python's import machinery finds nodes by searching parent directories, so the local package must be imported from _REPO_ROOT instead.
Suggested fix
```diff
-if str(_NODES_PKG) not in sys.path:
-    sys.path.insert(0, str(_NODES_PKG))
+if str(_REPO_ROOT) not in sys.path:
+    sys.path.insert(0, str(_REPO_ROOT))
```
📝 Committable suggestion
+ sys.path.insert(0, str(_REPO_ROOT))📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
# Guarantee we import the local nodes/ package, not a system-wide ComfyUI nodes package.
_REPO_ROOT = Path(__file__).resolve().parents[2]
_NODES_PKG = _REPO_ROOT / "nodes"
if str(_REPO_ROOT) not in sys.path:
    sys.path.insert(0, str(_REPO_ROOT))

import nodes  # noqa: E402

assert nodes.__file__.startswith(str(_REPO_ROOT)), f"Imported wrong nodes package: {nodes.__file__}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_node_registration.py` around lines 19 - 27, The sys.path
insertion should use the repository root so Python can locate the local "nodes"
package: replace the insertion of str(_NODES_PKG) with str(_REPO_ROOT) (and
update the membership check accordingly) so the module-level setup that imports
nodes (symbols: _REPO_ROOT, _NODES_PKG, sys.path, import nodes) reliably loads
the local package.
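The import-machinery point behind that fix is easy to demonstrate: a package is found by searching its parent directory, not the package directory itself. The throwaway temp layout below is illustrative only.

```python
import tempfile
from importlib.machinery import PathFinder
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp) / "nodes"
    pkg.mkdir()
    (pkg / "__init__.py").write_text("")

    # Searching the parent directory resolves the package...
    spec_from_root = PathFinder.find_spec("nodes", [tmp])
    # ...searching inside nodes/ itself finds nothing named "nodes".
    spec_from_pkg = PathFinder.find_spec("nodes", [str(pkg)])
```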
- fix(scripts): tomllib fallback for Python 3.10 compat
- fix(ci): remove verify-tag-exists from test.yml (publish.yml already guards)
- fix(tests): ensure local nodes/ import in test_node_registration.py
- fix(adapter): shared module-level model cache across OllamaClient instances
- fix(adapter): exact model tag match in check_health() to avoid prefix conflation
- fix(refiner): per-pass seed increment so multi-pass refinement varies
- fix(combiner): validate mode before single-prompt fast path
- feat(generator): strip DeepSeek <think> and markdown <details> blocks
- test: add prompt_refiner seed tests, cache sharing tests, exact match tests
- chore(deps): add dev extras with tomli fallback
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Tests
Chores