Fix: handle precomputed crops and safe no-grad cleanup in auto3dseg#8803
bluehyena wants to merge 1 commit into Project-MONAI:dev from
Conversation
Signed-off-by: Jun Hyeok Lee <bluehyena123@naver.com>
📝 Walkthrough
The changes fix a bug where …
Estimated code review effort: …
Files: monai/auto3dseg/analyzer.py, tests/apps/test_auto3dseg.py
🚥 Pre-merge checks: ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
🧹 Nitpick comments (2)
monai/auto3dseg/analyzer.py (2)
236-252: Misplaced docstring. The docstring block is placed after the validation code. Move it to immediately follow the `def __call__(self, data):` line for proper function documentation.

Proposed fix: the docstring (lines 236-252) should be moved to right after line 219 (`def __call__(self, data):`), before the validation code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/auto3dseg/analyzer.py` around lines 236 - 252, The function-level docstring for __call__ is misplaced after validation; move the entire docstring block currently referencing ImageStatsKeys, SampleOperations, and report_format to immediately follow the def __call__(self, data): signature so it becomes the function's official docstring (before any validation or logic), preserving the existing text and formatting.
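The placement matters because Python only treats a string literal as the function's docstring when it is the first statement in the body. A minimal standalone illustration (not MONAI code):

```python
# A string literal that is the FIRST statement becomes __doc__;
# the same literal after other statements is just an unused expression.

def good(data):
    """Compute stats."""
    if data is None:
        raise ValueError("data required")
    return data

def bad(data):
    if data is None:
        raise ValueError("data required")
    """Compute stats."""  # not a docstring here, just a no-op expression
    return data

assert good.__doc__ == "Compute stats."
assert bad.__doc__ is None
```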
271-273: Consider `strict=True` for zip. Adding `strict=True` catches length mismatches early if SHAPE and SPACING ever diverge unexpectedly.

```diff
-        report[ImageStatsKeys.SIZEMM] = [
-            a * b for a, b in zip(report[ImageStatsKeys.SHAPE][0], report[ImageStatsKeys.SPACING])
-        ]
+        report[ImageStatsKeys.SIZEMM] = [
+            a * b for a, b in zip(report[ImageStatsKeys.SHAPE][0], report[ImageStatsKeys.SPACING], strict=True)
+        ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/auto3dseg/analyzer.py` around lines 271 - 273, The current computation of ImageStatsKeys.SIZEMM uses zip(report[ImageStatsKeys.SHAPE][0], report[ImageStatsKeys.SPACING]) which can silently truncate if lengths differ; update the comprehension in analyzer.py to use zip(..., strict=True) so mismatched lengths raise immediately (ensure environment uses Python 3.10+); reference the report dict keys ImageStatsKeys.SHAPE, ImageStatsKeys.SPACING and the target ImageStatsKeys.SIZEMM when making this change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fb6ef620-5777-4a34-a6c5-5372ddb90b86
📒 Files selected for processing (2)
monai/auto3dseg/analyzer.py
tests/apps/test_auto3dseg.py
I noticed a couple of technical differences worth flagging between this approach and the fixes in #8809 and #8810.

On `nda_croppeds` handling: the ternary here has no validation on the precomputed value; if a caller passes a wrong type or a list with the wrong number of entries, the error surfaces downstream as a confusing shape mismatch rather than a clear message. #8809 adds explicit type and length validation with a descriptive ValueError.

On grad state: `torch.no_grad()` always restores grad to enabled after the block, regardless of what state the caller was in before. If the caller had grad disabled before calling the analyzer, this silently re-enables it, which is a subtle semantic change. The try/finally pattern in #8810 uses `torch.is_grad_enabled()` to capture and restore the exact pre-call state, so both enabled→enabled and disabled→disabled are handled correctly.
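The capture-and-restore pattern attributed to #8810 can be sketched without PyTorch, using a module-level flag as a stand-in for the global grad state (with torch, the reads and writes would be `torch.is_grad_enabled()` and `torch.set_grad_enabled(...)`; names here are illustrative):

```python
# Stand-in for PyTorch's global grad-enabled flag.
_grad_enabled = True

def set_grad_enabled(mode: bool) -> None:
    global _grad_enabled
    _grad_enabled = mode

def is_grad_enabled() -> bool:
    return _grad_enabled

def analyze(data, fail: bool = False):
    # Capture the caller's state, disable grad, and restore the *exact*
    # prior state even if the body raises.
    prev = is_grad_enabled()
    set_grad_enabled(False)
    try:
        if fail:
            raise RuntimeError("analyzer failure")
        return sum(data)
    finally:
        set_grad_enabled(prev)

# disabled -> disabled survives a failure, unlike an unconditional re-enable:
set_grad_enabled(False)
try:
    analyze([1, 2, 3], fail=True)
except RuntimeError:
    pass
assert is_grad_enabled() is False
```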
ericspod
left a comment
Hi @bluehyena, thanks for looking into this. I have minor comments to make. We should also look at @williams145's PRs, though I commented there that using no_grad should work correctly, so I'm waiting to hear back from them.
```python
            f"Image data under '{self.image_key}' must have at least 3 dimensions, but got shape {image.shape}."
        )
        # --- End of validation ---
        """
```
This docstring comment appears spurious here; although it was in the original code, please place it at the top of the method and check its contents are correct.
```python
            if isinstance(data[self.image_key], MetaTensor)
            else [1.0] * min(3, data[self.image_key].ndim)
        )
        with torch.no_grad():
```
One other thing to note is that no_grad could be used as a method decorator as well. This might be cleaner for this code but you would have to check it is working as intended.
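The decorator usage works because `torch.no_grad()` is a context manager that can also wrap functions. The same mechanism is available in the standard library via `contextlib.ContextDecorator`; this sketch shows it with a stand-in context manager rather than torch itself:

```python
from contextlib import ContextDecorator

# no_tracking mimics the shape of torch.no_grad(): usable both as
# `with no_tracking():` and as a method decorator `@no_tracking()`.
class no_tracking(ContextDecorator):
    enabled = True

    def __enter__(self):
        self.prev = no_tracking.enabled
        no_tracking.enabled = False
        return self

    def __exit__(self, *exc):
        no_tracking.enabled = self.prev
        return False

class Analyzer:
    @no_tracking()  # with torch this would be @torch.no_grad()
    def __call__(self, data):
        assert not no_tracking.enabled  # tracking is off inside the method
        return sum(data)

a = Analyzer()
print(a([1, 2, 3]))           # runs with tracking disabled
assert no_tracking.enabled    # state restored after the call
```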
Fixes #8802
Also addresses #5889
Description
This pull request fixes two issues in `monai.auto3dseg`:

- `analyzer.ImageStats.__call__()` now correctly handles the optional precomputed `nda_croppeds` input path instead of raising `UnboundLocalError`.
- `ImageStats`, `FgImageStats`, and `LabelStats` now use an exception-safe no-grad context so analyzer failures do not leak the global PyTorch grad-enabled state, which is related to "data analyzer set_grad_enabled" #5889.

I also added tests covering both behaviors.
Types of changes

- Integration tests passed locally: `./runtests.sh -f -u --net --coverage`
- Quick tests passed locally: `./runtests.sh --quick --unittests --disttests`
- Documentation updated, tested with the `make html` command in the `docs/` folder.