This project provides an implementation of the Noise2Inverse (N2I) framework for denoising CT data without requiring ground-truth images. It implements a 2.5D approach that feeds adjacent slices to the deep learning model, and uses a simple U-Net with leaky ReLU activations and group normalization for denoising.
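As a rough illustration of the normalization/activation pair used in each conv block, here is a minimal NumPy sketch of group norm followed by leaky ReLU (the actual model is the PyTorch U-Net in `model.py`; these function names are illustrative, not the package API):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Normalize a (C, H, W) tensor per group of C // num_groups channels."""
    c, h, w = x.shape
    g = x.reshape(num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(1, 2, 3), keepdims=True)
    var = g.var(axis=(1, 2, 3), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(c, h, w)

def leaky_relu(x, slope=0.01):
    """Identity for positive inputs, small negative slope otherwise."""
    return np.where(x > 0, x, slope * x)

x = np.random.randn(8, 16, 16).astype(np.float32)  # (channels, H, W)
y = leaky_relu(group_norm(x, num_groups=4))
```

Group norm is a natural fit here because CT denoising often runs with small per-GPU batch sizes, where batch norm statistics become unreliable.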
Create the conda environment and install the package:

```shell
git clone https://github.com/AISDC/Noise2Inverse360 denoise
cd denoise
conda env create -f envs/denoise_environment.yml
conda activate denoise
pip install .
```

Dependencies include:
- albumentations (data augmentation)
- pytorch >= 2.0 (with CUDA support)
- tifffile
- tqdm
- matplotlib
- scikit-image
- scipy
- pyyaml
All output (training results, inference results, trained models) is saved inside the reconstruction directory. For example:

```
Sample 1 Directory/                  # user: John Smith
├── Full Reconstruction/             # provided by the user
├── Sub-Reconstruction 0/            # created by `tomocupy recon_steps` (tomocupy env)
├── Sub-Reconstruction 1/            # created by `tomocupy recon_steps` (tomocupy env)
├── config.yaml                      # created by `denoise prepare` (denoise env)
├── TrainOutput/                     # created by `denoise train`
├── <sample>_denoised_slices/        # created by `denoise` inference
└── <sample>_denoised_volume/        # created by `denoise` inference
```
- Data is saved as TIFF files (`.tif` or `.tiff`).
- Model type/size is consistent across datasets.
- A U-Net without skip connections + leaky ReLU + group norm has proven robust.
- Inference can run while training is still in progress.
- Automatic batch size optimization for A100/V100 GPUs, accounting for image size, GPU memory, and model size to reduce OOM errors.
- Support for 2.5D inference with PyTorch.
- Flexible plug-and-play workflow across different samples/users.
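The automatic batch size optimization mentioned above can be sketched as a simple memory budget: subtract the model's footprint from the GPU memory, then divide by the per-sample activation footprint. All numbers and names below are illustrative assumptions, not the package's actual heuristic:

```python
def estimate_batch_size(gpu_mem_gb, image_hw, n_channels=3,
                        model_mem_gb=2.0, bytes_per_px=4, overhead=8.0):
    """Rough upper bound on inference batch size.

    `overhead` inflates the raw input footprint to account for
    intermediate U-Net activations; all constants are illustrative.
    """
    h, w = image_hw
    per_sample_gb = h * w * n_channels * bytes_per_px * overhead / 1e9
    usable_gb = max(gpu_mem_gb - model_mem_gb, 0.0)
    return max(int(usable_gb // per_sample_gb), 1)

# e.g. an A100 (40 GB) running on 512x512 patches
bs = estimate_batch_size(40, (512, 512))
```

Clamping the result to at least 1 keeps inference running (if slowly) even when a single sample barely fits.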
denoise/
├── denoise/
│ ├── __init__.py
│ ├── __main__.py # CLI entry point (prepare / train / slice / volume / register / search)
│ ├── registry.py # local model registry (~/.denoise/registry/)
│ ├── log.py # colored logging module
│ ├── train.py # DDP training loop
│ ├── slice.py # single-slice inference
│ ├── volume.py # full-volume inference
│ ├── data.py # dataset classes
│ ├── data_utils.py # patch extraction / stitching
│ ├── model.py # U-Net architecture
│ ├── loss.py # LCL loss
│ ├── eval.py # evaluation metrics
│ ├── tiffs.py # TIFF I/O utilities
│ └── utils.py # image utilities
├── docs/ # Sphinx documentation
│ └── source/img/ # workflow and example figures
├── envs/
│ ├── denoise_environment.yml
│ └── requirements.txt
├── baseline_config.yaml
├── LICENSE
├── setup.py
└── VERSION
Step 1 — write the config YAML (run in the denoise environment):
```shell
(denoise) $ denoise prepare --file-name /data/sample.h5
```

This writes `sample_rec_config.yaml` (with instrument metadata read from the HDF5) and prints the two `tomocupy recon_steps` commands you need to run next.
Note: `denoise prepare` does not create the sub-reconstruction directories. Due to a NumPy compatibility issue between the `denoise` and `tomocupy` environments, the sub-reconstructions must be created manually by running the printed commands in the `tomocupy` environment.
Step 2 — create the sub-reconstructions (run in the tomocupy environment):

```shell
# even-indexed projections (0, 2, 4, ...)
(tomocupy) $ tomocupy recon_steps \
    --file-name /data/sample.h5 \
    --start-proj 0 --proj-step 2 \
    --out-path-name /data/sample_rec_0 \
    [... same options as the full reconstruction ...]

# odd-indexed projections (1, 3, 5, ...)
(tomocupy) $ tomocupy recon_steps \
    --file-name /data/sample.h5 \
    --start-proj 1 --proj-step 2 \
    --out-path-name /data/sample_rec_1 \
    [... same options as the full reconstruction ...]
```

`denoise prepare` prints the exact paths for `--out-path-name` derived from `--file-name`, so you can copy-paste them directly.
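The even/odd split above is the core of Noise2Inverse: the two sub-reconstructions come from disjoint projection subsets, so their noise is independent and one can serve as the training target for the other. Conceptually (a sketch, not code from the package):

```python
import numpy as np

def split_projections(projs):
    """Split a projection stack into the two disjoint halves that
    tomocupy reconstructs separately (--start-proj 0/1, --proj-step 2)."""
    return projs[0::2], projs[1::2]

projs = np.arange(10).reshape(10, 1, 1)  # 10 dummy projections
even, odd = split_projections(projs)
# even holds projections 0, 2, 4, 6, 8; odd holds 1, 3, 5, 7, 9
```

Each half is then reconstructed independently, giving two noisy views of the same object.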
Before launching a new training run, denoise train automatically
searches the local model registry (~/.denoise/registry/) for a model
trained under the same instrument conditions. If a match is found, it
is listed and you are asked whether to proceed:
```shell
(denoise) $ denoise train --config /data/sample_rec_config.yaml --gpus 0,1

Registry search found 1 matching model(s):
  [1] 2BM_pink_30keV_FLIROryx_20260219_143000 (9/9 criteria match — 100%)
      beamline: 2-BM | mode: pink | energy: 30.0 keV | ...
      registry path: /home/user/.denoise/registry/2BM_pink_30keV_FLIROryx_...
Train a new model anyway? [y/N]
```

Enter `N` to skip training and use the existing model, or `y` to train anyway. To bypass the search entirely, add `--no-search`.
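The match score shown above (e.g. "9/9 criteria") can be thought of as the fraction of instrument-metadata criteria shared between the new config and a registry entry. A hypothetical sketch of such a score (the real criteria and logic live in `registry.py`):

```python
def match_score(query, entry):
    """Fraction of query criteria matched by a registry entry
    (illustrative; not the package's actual scoring code)."""
    shared = query.keys() & entry.keys()
    hits = sum(query[k] == entry[k] for k in shared)
    return hits / len(query)  # missing keys count as mismatches

query = {"beamline": "2-BM", "mode": "pink", "energy_keV": 30.0}
entry = {"beamline": "2-BM", "mode": "pink", "energy_keV": 30.0}
score = match_score(query, entry)  # 1.0 -> "100%" match
```

Ranking candidates by this fraction lets `denoise search` surface near-matches (e.g. same beamline and detector but a different energy) rather than only exact ones.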
Resume interrupted training with `--resume`:

```shell
(denoise) $ denoise train --config /data/sample_rec_config.yaml --gpus 0,1 --resume
```

After training, register the model so it can be found automatically in future sessions:
```shell
(denoise) $ denoise register \
    --config /data/sample_rec_config.yaml \
    --model-dir /data/sample_rec/TrainOutput
```

Models are stored in `~/.denoise/registry/` (never committed to git).
On APS machines where tocai and tomo4 share a GPFS home directory, a
model registered on tocai is immediately visible on tomo4.
```shell
(denoise) $ denoise search --config /data/new_sample_rec_config.yaml
```

Prints all registry entries that match the noise fingerprint of the given config, ranked by score (fraction of criteria matched).
```shell
(denoise) $ denoise slice --config /data/sample_rec_config.yaml --slice-number 500
```

- Loads the pretrained model
- Fetches the slice ± neighboring slices (2.5D)
- Applies sliding-window patching
- Normalizes using training statistics
- Saves a `.tiff` to `<sample>_denoised_slices/`
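The 2.5D fetch can be sketched as stacking a slice with its neighbors along the channel axis, clamping indices at the volume boundaries so edge slices still get a full input stack (an illustrative sketch, not the package's `data.py` code):

```python
import numpy as np

def fetch_25d(volume, i, n_neighbors=1):
    """Return slices [i-n, ..., i+n] stacked along axis 0,
    clamping at the volume edges (illustrative)."""
    idx = np.clip(np.arange(i - n_neighbors, i + n_neighbors + 1),
                  0, volume.shape[0] - 1)
    return volume[idx]

vol = np.random.rand(100, 64, 64)
x = fetch_25d(vol, 0)  # shape (3, 64, 64); slice 0 is repeated at the edge
```

Clamping (rather than zero-padding) keeps edge inputs statistically similar to interior ones.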
```shell
(denoise) $ denoise volume --config /data/sample_rec_config.yaml
(denoise) $ denoise volume --config /data/sample_rec_config.yaml --start-slice 500 --end-slice 600
```

- Optionally denoises a slice subset
- The directory `<sample>_denoised_volume/` is recreated each run
- Automatic batch size calculation
- Sliding-window patching
- Mini-batch inference
- Saves output `.tiff`s
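The sliding-window patching used in both inference modes amounts to tiling each slice with fixed-size windows and adding a final window flush with each edge so the whole image is covered. A minimal sketch of the corner enumeration (illustrative; the package's version lives in `data_utils.py`):

```python
import numpy as np

def sliding_patches(img, patch, stride):
    """Return (y, x) top-left corners of patch windows covering img,
    including a final window flush with each edge (illustrative)."""
    h, w = img.shape
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    if ys[-1] != h - patch:
        ys.append(h - patch)  # flush with the bottom edge
    if xs[-1] != w - patch:
        xs.append(w - patch)  # flush with the right edge
    return [(y, x) for y in ys for x in xs]

img = np.zeros((100, 100))
corners = sliding_patches(img, patch=64, stride=32)
```

Overlapping windows are typically averaged when stitching the denoised patches back together, which suppresses seam artifacts at patch borders.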
Areas for improvement:
- Fine-tuning from previous models (would reduce training time from 8–12 hours to ~30–60 minutes)
- Exploring alternative architectures beyond U-Net
Relevant Citations:

```bibtex
@article{hendriksen2020noise2inverse,
  title     = {Noise2Inverse: Self-supervised deep convolutional denoising for tomography},
  author    = {Hendriksen, Allard Adriaan and Pelt, Dani{\"e}l Maria and Batenburg, K Joost},
  journal   = {IEEE Transactions on Computational Imaging},
  volume    = {6},
  pages     = {1320--1335},
  year      = {2020},
  publisher = {IEEE},
  doi       = {10.1109/TCI.2020.3019647}
}

@article{hendriksen2021deep,
  title     = {Deep denoising for multi-dimensional synchrotron X-ray tomography without high-quality reference data},
  author    = {Hendriksen, Allard A and B{\"u}hrer, Minna and Leone, Laura and Merlini, Marco and Vigano, Nicola and Pelt, Dani{\"e}l M and Marone, Federica and Di Michiel, Marco and Batenburg, K Joost},
  journal   = {Scientific Reports},
  volume    = {11},
  number    = {1},
  pages     = {11895},
  year      = {2021},
  publisher = {Nature Publishing Group},
  doi       = {10.1038/s41598-021-91084-8}
}

@article{yunker2025boosting,
  title     = {Boosting Noise2Inverse via enhanced model selection for denoising computed tomography data},
  author    = {Yunker, Austin and Kenesei, Peter and Sharma, Hemant and Park, Jun-Sang and Miceli, Antonino and Kettimuthu, Rajkumar},
  journal   = {Tomography of Materials and Structures},
  pages     = {100075},
  year      = {2025},
  publisher = {Elsevier},
  doi       = {10.1016/j.tmater.2025.100075}
}
```