
MCADS-Decoder

Rethinking Decoder Design:
Improving Biomarker Segmentation Using Depth-to-Space Restoration and Residual Linear Attention
- Accepted in CVPR 2025

Download Paper: https://openaccess.thecvf.com/content/CVPR2025/html/Wazir_Rethinking_Decoder_Design_Improving_Biomarker_Segmentation_Using_Depth-to-Space_Restoration_and_CVPR_2025_paper.html

Please cite it as follows:

@inproceedings{wazir2025rethinking,
  title={Rethinking decoder design: Improving biomarker segmentation using depth-to-space restoration and residual linear attention},
  author={Wazir, Saad and Kim, Daeyoung},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={30861--30871},
  year={2025},
  doi = {10.48550/arXiv.2506.18335},
  url = {https://doi.org/10.48550/arXiv.2506.18335}
}

Experimental Results* on the MedCAGD-Dataset-Collection

Download the dataset from Hugging Face: https://huggingface.co/datasets/saadwazir/MedCAGD-Dataset-Collection

Dataset viewer: https://huggingface.co/spaces/saadwazir/MedCAGD-Dataset-Viewer

TABLE 1: ACDC DATASET RESULTS (MULTI-CLASS SEMANTIC SEGMENTATION TASK)

| Method | Dice ↑ | IoU ↑ | HD95 ↓ | RV    | Myo   | LV    |
|--------|--------|-------|--------|-------|-------|-------|
| U-Net  | 81.56  | 73.41 | 6.9854 | 76.99 | 80.28 | 87.43 |
| MCADS  | 84.51  | 76.92 | 5.5595 | 81.16 | 83.27 | 89.09 |
TABLE 2: SYNAPSE DATASET RESULTS (MULTI-CLASS SEMANTIC SEGMENTATION TASK)

| Method          | Dice ↑ | IoU ↑ | HD95 ↓ | Aorta | GB    | KL    | KR    | Liver | PC    | SP    | SM    |
|-----------------|--------|-------|--------|-------|-------|-------|-------|-------|-------|-------|-------|
| U-Net           | 70.11  | 59.39 | 44.69  | 84.00 | 56.70 | 72.41 | 62.64 | 86.98 | 48.73 | 81.48 | 67.96 |
| MCADS           | 85.03  | 81.71 | 11.11  | 90.81 | 86.07 | 86.77 | 83.24 | 87.66 | 83.55 | 85.74 | 76.38 |
| Self-Prompt SAM | 86.74  | -     | -      | 91.99 | 69.95 | 85.65 | 85.40 | 97.39 | 79.18 | 94.38 | 89.94 |
TABLE 3: RESULTS ON MULTIPLE DATASETS (BINARY SEMANTIC SEGMENTATION TASK)

Dataset groups: Skin (ISIC17, ISIC18), Polyp (ETIS, ColonDB), Fundus (DRIVE, FIVES), Neoplasm (BUSI, ThyroidXL), Cell (CellSeg).

| Method       | Params ↓ | FLOPs ↓ | ISIC17 | ISIC18 | ETIS  | ColonDB | DRIVE | FIVES | BUSI  | ThyroidXL | CellSeg | Avg   |
|--------------|----------|---------|--------|--------|-------|---------|-------|-------|-------|-----------|---------|-------|
| U-Net        | 34.53 M  | 65.53 G | 83.07  | 86.67  | 76.85 | 83.95   | 71.20 | 75.77 | 74.04 | 71.16     | 71.52   | 77.14 |
| MCADS        | 50.90 M  | 61.89 G | 84.14  | 91.01  | 92.24 | 91.37   | 78.42 | 76.05 | 80.03 | 86.33     | 86.68   | 85.14 |
| AutoSam      | 41.56 M  | 25.11 G | -      | -      | 79.70 | 83.00   | -     | -     | -     | -         | -       | -     |
| Medical SAM3 | 840.0 M  | -       | -      | -      | 86.10 | -       | 55.80 | -     | -     | -         | -       | -     |

Research Note * This dataset collection provides early access to the datasets used for benchmarking segmentation models across multiple medical imaging datasets. The segmentation benchmarks associated with this dataset collection are part of ongoing research related to the MCADS decoder and the upcoming MedCAGD framework. The full benchmark results and evaluation protocols will appear in the MedCAGD paper, which is currently under review, and additional results will be released after the review process.


Setup Conda Environment

Use this command to create a conda environment (all required packages are listed in the mcadsDecoder_env.yml file):

conda env create -f mcadsDecoder_env.yml

Datasets

MoNuSeg - Multi-organ nuclei segmentation from H&E stained histopathological images.

link: https://monuseg.grand-challenge.org/Data/

TNBC - Triple-negative breast cancer.

link: https://zenodo.org/records/1175282#.YMisCTZKgow

DSB - 2018 Data Science Bowl.

link: https://www.kaggle.com/c/data-science-bowl-2018/data

EM - Electron Microscopy.

link: https://www.epfl.ch/labs/cvlab/data/data-em/

Data Preprocessing

After downloading a dataset, you must generate patches of the images and their corresponding masks (ground truth) and convert them into NumPy arrays, or you can use the dataloaders directly inside the code. Note: the last channel of the masks must contain binary (0, 1) values, not grayscale (0 to 255) values. You can generate patches using Image_Patchyfy. Link: https://github.com/saadwazir/Image_Patchyfy
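For illustration, the patching and mask-binarization steps can be sketched with NumPy alone. This is a minimal stand-in for Image_Patchyfy, not its actual implementation; the function names and the 256/128 patch and step sizes (taken from the evaluation parameters below) are assumptions for the example.

```python
import numpy as np

def extract_patches(image, patch_size=256, step=128):
    """Slide a window over an H x W (x C) array and stack the crops."""
    patches = []
    for y in range(0, image.shape[0] - patch_size + 1, step):
        for x in range(0, image.shape[1] - patch_size + 1, step):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def binarize_mask(mask, threshold=127):
    """Map a grayscale mask (0..255) to the binary (0, 1) values the code expects."""
    return (mask > threshold).astype(np.uint8)

# Example: a 512x512 grayscale mask yields 9 overlapping 256x256 binary patches.
mask = (np.random.rand(512, 512) * 255).astype(np.uint8)
patches = extract_patches(binarize_mask(mask))
print(patches.shape)  # (9, 256, 256)
```

The stacked patch array can then be saved with `np.save` to produce test arrays like the `.npy` files referenced in the configuration section.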

Offline Data Augmentation

(This requires the albumentations library. Link: https://albumentations.ai)

Use offline_augmentation.py to generate augmented samples.
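The repository's offline_augmentation.py builds on albumentations; the idea it implements can be sketched with a NumPy-only stand-in (an assumption for illustration, not the script itself): apply the same geometric transform to an image and its mask so the ground truth stays aligned, and keep each transformed copy as an extra training sample.

```python
import numpy as np

def augment_pair(image, mask):
    """Yield flipped/rotated copies of an (image, mask) pair; the identical
    transform is applied to both so the mask stays aligned with the image."""
    yield image, mask                              # original
    yield np.fliplr(image), np.fliplr(mask)        # horizontal flip
    yield np.flipud(image), np.flipud(mask)        # vertical flip
    for k in (1, 2, 3):                            # 90/180/270 degree rotations
        yield np.rot90(image, k), np.rot90(mask, k)

image = np.random.rand(256, 256, 3)
mask = np.random.randint(0, 2, (256, 256), dtype=np.uint8)
samples = list(augment_pair(image, mask))
print(len(samples))  # 6 pairs, including the original
```

albumentations additionally offers photometric transforms (brightness, contrast, elastic deformation) that only make sense for the image, not the mask; its `Compose` API handles that pairing automatically.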

Training and Testing

  1. Edit the config.txt file to set training and testing parameters and define folder paths.
  2. Run the mcadsDecoder.py file in a conda environment. It contains the model, training, and testing code.

Configurations

  • Paths for training

Define paths for folders that contain patches of images and masks for training.

train_images_patch_dir=/mnt/hdd_2A/datasets/monuseg_patches_augm/images/
train_masks_patch_dir=/mnt/hdd_2A/datasets/monuseg_patches_augm/masks/
  • Paths for testing

Define paths for numpy arrays that contain patches of images and masks for testing.

test_images_patch_dir=/mnt/hdd_2A/datasets/monuseg_test_patches_arrays/monuseg_org_X_test.npy
test_masks_patch_dir=/mnt/hdd_2A/datasets/monuseg_test_patches_arrays/monuseg_org_y_test.npy

Define paths for folders that contain full-size images and masks for testing.

image_full_test_directory=/mnt/hdd_2A/datasets/monuseg_org/test/image/
mask_full_test_directory=/mnt/hdd_2A/datasets/monuseg_org/test/mask/
  • Training Parameters
training=False
gpu_device=0
num_epochs=200
batch_size=8
imgz_size=256
  • Evaluation Parameters

Parameters for processing patches of images and masks:

patch_img_size=256
patch_step_size=128
resize_img=True # set resize_img=False if the full images have different widths and heights.
resize_height_width=1024

Parameters for processing full-size images and masks:

resize_full_images=True # if resize_full_images=False, full-size images are not scaled down, but evaluation takes longer.
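Since config.txt is a plain key=value file with inline `#` comments, it can be read with a few lines of Python. This is a minimal sketch of such a parser (the `parse_config` name is hypothetical; mcadsDecoder.py may read the file differently), useful mainly to show the expected file shape.

```python
def parse_config(text):
    """Parse key=value lines from a config.txt-style string, ignoring
    blank lines and inline '#' comments; all values stay strings."""
    config = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop inline comments
        if not line or "=" not in line:
            continue
        key, value = line.split("=", 1)
        config[key.strip()] = value.strip()
    return config

sample = """
training=False
gpu_device=0
batch_size=8
resize_img=True # set resize_img=False if full image sizes differ
"""
cfg = parse_config(sample)
print(cfg["batch_size"], cfg["resize_img"])  # 8 True
```

Note that values such as `True`/`False` and numbers arrive as strings, so the consuming code must convert them (e.g. `int(cfg["batch_size"])`).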


## Acknowledgement
We gratefully acknowledge the prior contributions of the research community, which have provided the foundation for our framework.
