Zhen Yao, Xiaowen Ying, Zhiyu Zhu, Mooi Choo Chuah.
arXiv | Paper | Project page
This repository contains the official PyTorch implementation of the training & evaluation code and the pretrained models for BRENet.
```shell
conda env create --file environment.yml
conda activate BRENet
pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
conda install -c conda-forge cudatoolkit-dev==11.1.1
pip install mmcv-full==1.3.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.1/index.html
pip install timm==0.4.12
pip install ipython
pip install einops
pip install attrs
pip install yapf==0.40.1
pip install opencv-python==4.5.1.48
cd BRENet && pip install -e . --user
```
Please follow DATASET.md to prepare the datasets.
We provide trained weights for the models reported in the paper. All models were evaluated on a single NVIDIA RTX A5000 GPU, and the results can be reproduced with the evaluation commands below.
| Dataset | Backbone | Resolution | mIoU | Accuracy | Download Link |
|---|---|---|---|---|---|
| DDD17 | MiT-B2 | 200×346 | 78.56 | 96.61 | [Huggingface] |
| DSEC | MiT-B2 | 440×640 | 74.94 | 95.85 | [Huggingface] |
```shell
# Single-gpu testing
python tools/test.py local_configs/BRENet/brenet.b2.640x440.dsec.80k.py /path/to/checkpoint_file
python tools/test.py local_configs/BRENet/brenet.b2.346x200.ddd17.160k.py /path/to/checkpoint_file
```
Download the MiT-B2 backbone weights pretrained on ImageNet-1K and put them in the `pretrained/` folder.
Download the flow-network weights: the E-RAFT checkpoint trained on DSEC, and put it in the `pretrained/` folder.
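Before launching training, it can save a failed run to verify that the expected checkpoints are actually in place. The sketch below is a minimal stdlib-only check; the file names in `EXPECTED` are assumptions, so adjust them to match the checkpoints you downloaded.

```python
# check_pretrained.py -- sanity-check the pretrained/ folder before training.
# NOTE: the file names below are assumptions; rename to match your downloads.
from pathlib import Path

EXPECTED = ["mit_b2.pth", "eraft_dsec.pth"]  # assumed checkpoint names

def missing_weights(pretrained_dir="pretrained"):
    """Return the expected checkpoint files that are absent from pretrained_dir."""
    root = Path(pretrained_dir)
    return [name for name in EXPECTED if not (root / name).is_file()]

if __name__ == "__main__":
    missing = missing_weights()
    if missing:
        print("Missing weights:", ", ".join(missing))
    else:
        print("All pretrained weights found.")
```

Run it from the repository root; an empty "missing" list means both checkpoints were found where the configs expect them.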
```shell
# Single-gpu training
python tools/train.py local_configs/BRENet/brenet.b2.640x440.dsec.80k.py
python tools/train.py local_configs/BRENet/brenet.b2.346x200.ddd17.160k.py

# Multi-gpu training
./tools/dist_train.sh local_configs/BRENet/brenet.b2.640x440.dsec.80k.py <GPU_NUM>
./tools/dist_train.sh local_configs/BRENet/brenet.b2.346x200.ddd17.160k.py <GPU_NUM>
```
This repository is released under the Apache-2.0 license. For commercial use, please contact the authors.
This codebase is built on MMSegmentation. We thank the MMSegmentation team for their great contributions.
Please cite our BRENet paper and the related works below if you find this work useful. :)
```bibtex
@article{yao2025learning,
  title={Learning Flow-Guided Registration for RGB-Event Semantic Segmentation},
  author={Yao, Zhen and Ying, Xiaowen and Zhu, Zhiyu and Chuah, Mooi Choo},
  journal={arXiv preprint arXiv:2505.01548},
  year={2025}
}
```
EVSNet: Event-guided low-light video semantic segmentation
```bibtex
@inproceedings{yao2025event,
  title={Event-guided low-light video semantic segmentation},
  author={Yao, Zhen and Chuah, Mooi Choo},
  booktitle={2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  pages={3330--3341},
  year={2025},
  organization={IEEE}
}
```
