Changed workflow for setting the masking value in the skull stripping #171
base: main
Conversation
… process to either a global custom value for all inputs or the minimum value within the input image
/format

🤖 I will now format your code with black. Check the status here.
Pull request overview
Aligns skull-stripping mask application with the defacing workflow by supporting either a global masking/background value or a per-image minimum when filling voxels outside the brain mask.
Changes:
- Added `masking_value` support to `BrainExtractor` and updated `apply_mask()` to fill outside-mask voxels using either the provided global value or the per-image minimum.
- Updated `HDBetExtractor` and `SynthStripExtractor` constructors to accept/forward `masking_value`.
- Minor whitespace cleanup in the registration package init.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| `brainles_preprocessing/registration/__init__.py` | Removes stray blank lines/whitespace. |
| `brainles_preprocessing/brain_extraction/brain_extractor.py` | Introduces `masking_value` state and updates mask application to use background fill logic consistent with defacing. |
| `brainles_preprocessing/brain_extraction/synthstrip.py` | Adds `masking_value` parameter to the extractor constructor (currently not applied during `extract()` output writing). |
```python
def __init__(
    self, border: int = 1, masking_value: Optional[Union[int, float]] = None
):
```
Copilot AI (Jan 28, 2026)
masking_value is added to the constructor and forwarded to BrainExtractor, but SynthStripExtractor.extract() never uses self.masking_value when writing the masked output (it still hard-codes bg = np.min([0, img_data.min()])). This makes the new parameter ineffective and the docstring misleading. Update extract() to fill background using the same masking_value/per-image-min logic as BrainExtractor.apply_mask (or drop the parameter if not supported).
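A minimal sketch of the fix this comment suggests, assuming the reviewer's description of the current behavior: replace the hard-coded `bg = np.min([0, img_data.min()])` with the same global-value/per-image-minimum logic as `BrainExtractor.apply_mask`. The helper name `write_masked_output` is hypothetical; `img_data` and `mask_data` follow the names used in the review.

```python
import numpy as np


def write_masked_output(img_data, mask_data, masking_value=None):
    """Hypothetical sketch: fill voxels outside the brain mask with a
    global masking_value, or fall back to the per-image minimum when
    no global value was configured (mirroring apply_mask)."""
    if masking_value is None:
        bg = img_data.min()  # per-image minimum fallback
    else:
        # cast the global value to the image dtype, as in apply_mask
        bg = np.array(masking_value).astype(img_data.dtype).item()
    return np.where(mask_data.astype(bool), img_data, bg)
```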
```python
# check whether a global masking value was passed, otherwise choose minimum
if self.masking_value is None:
    current_masking_value = np.min(input_data)
else:
    current_masking_value = (
        np.array(self.masking_value).astype(input_data.dtype).item()
    )
# Apply mask (element-wise either input or masking value)
masked_data = np.where(
    mask_data.astype(bool), input_data, current_masking_value
)
```
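One subtlety in the snippet above is the `.astype(input_data.dtype).item()` cast: converting the global value to the image's dtype before `np.where` keeps the masked output in the original dtype rather than letting the fill value drive a promotion. A standalone illustration (array values are illustrative, not from the PR):

```python
import numpy as np

# a small "image" in int16, as medical volumes often are
img = np.arange(6, dtype=np.int16).reshape(2, 3)
mask = np.array([[1, 1, 0], [0, 1, 1]], dtype=np.uint8)

# cast the global masking value to the image dtype first, as the PR does,
# so the masked result stays int16
fill = np.array(-1).astype(img.dtype).item()
masked = np.where(mask.astype(bool), img, fill)
```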
Copilot AI (Jan 28, 2026)
apply_mask() now supports a global masking_value and falls back to per-image minimum when unset, but the existing unit test only asserts that an output file is created. Please extend tests to assert that voxels outside the mask are set to the expected value for both cases (default min-per-image and a custom masking_value).
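A sketch of the tests this comment asks for. Since `apply_mask()` writes a file, the sketch below exercises the fill logic directly via a re-implementation (`masked_fill` is a hypothetical stand-in, not the project's API); a real test would load the written NIfTI and make the same assertions.

```python
import numpy as np


def masked_fill(input_data, mask_data, masking_value=None):
    # re-implements the PR's fill logic so it can be asserted on directly
    if masking_value is None:
        current = np.min(input_data)
    else:
        current = np.array(masking_value).astype(input_data.dtype).item()
    return np.where(mask_data.astype(bool), input_data, current)


def test_default_uses_per_image_minimum():
    img = np.array([[3.0, -2.0], [4.0, 5.0]])
    mask = np.array([[1, 1], [0, 0]])
    out = masked_fill(img, mask)
    # outside-mask voxels get the per-image minimum (-2.0)
    assert np.all(out[1] == -2.0)


def test_custom_masking_value():
    img = np.array([[3.0, -2.0], [4.0, 5.0]])
    mask = np.array([[1, 1], [0, 0]])
    out = masked_fill(img, mask, masking_value=0)
    # outside-mask voxels get the global custom value
    assert np.all(out[1] == 0.0)
```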
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
neuronflow left a comment
Do we need changes here?
```python
# write the masked output
```
From my understanding, updating the synthstrip init is still necessary: while the center modality might be handled without the apply_mask function in the preprocessor workflow (as seen here), the moving modalities are afterwards still handled by the apply_mask function.
This change implements the same process as in the defacing step: use either a global custom value for all inputs or the minimum value within the input image as the background value when masking an image.