
DARE: Diffusion Policy for Autonomous Robot Exploration

ICRA 2025 arXiv Linux platform License: MIT


Introduction

Autonomous robot exploration requires efficient path planning to map unknown environments. While conventional methods are often limited to optimizing based on current beliefs, DARE (Diffusion Policy for Autonomous Robot Exploration) leverages the power of generative AI to reason about unknown areas by drawing on learned experiences.

DARE is a novel approach that utilizes diffusion models trained on expert demonstrations to explicitly generate long-horizon exploration paths. By combining an attention-based encoder with a diffusion policy, DARE learns to recognize potential structures in unknown regions from partial beliefs, enabling it to plan paths that consider these unobserved areas.

Key Features:

  • Generative Path Planning: Uses diffusion models to explicitly generate efficient exploration paths.
  • Expert Demonstrations: Trained on ground truth optimal demonstrations to learn superior exploration patterns.
  • Structure Reasoning: Capable of reasoning about potential structures in unknown areas based on partial beliefs.
  • Robust Performance: Achieves state-of-the-art performance with strong generalizability in both simulation and real-world scenarios.

Usage

Requirements

Clone the repository and install the dependencies in a conda environment:

git clone https://github.com/marmotlab/DARE.git && cd DARE
conda create -n env_dare python=3.12.9 -y
conda activate env_dare
pip install -e .

Dataset Collection

Modify dataset_parameter.py to fit your dataset needs, then run the dataset collection script:

python dataset_driver.py

The dataset will be saved to the directory diffusion_exploration/dataset/name_of_test. It will include a data.zarr directory containing the dataset and a gifs directory.
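As a quick sanity check before training, you can verify that collection produced the expected subdirectories. The layout below is inferred from this README (name_of_test is a placeholder set via dataset_parameter.py), not a guaranteed contract of the collection script:

```python
from pathlib import Path

# Expected layout after running dataset_driver.py (inferred from the README):
#
#   diffusion_exploration/dataset/name_of_test/
#   ├── data.zarr/   # collected trajectories in zarr format
#   └── gifs/        # episode visualizations
def has_expected_layout(dataset_root):
    """Return True if the collected dataset has the expected subdirectories."""
    root = Path(dataset_root)
    return (root / "data.zarr").is_dir() and (root / "gifs").is_dir()
```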

Policy Training

Copy the desired training config file from diffusion_exploration/diffusion_policy/config, and modify the corresponding task config file under diffusion_exploration/diffusion_policy/config/task.

Note: you will likely need to update zarr_path to point to your dataset location.

Run the training script, which requires two arguments:

  1. --config-dir: the directory containing the config file
  2. --config-name: the name of the config file

python train.py --config-dir=. --config-name=train_exploration_transformer_node_discrete.yaml
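The exact config keys depend on the files shipped in the repo; as an illustrative sketch in the Hydra style used by the upstream diffusion_policy codebase, pointing the task config at your collected dataset might look like:

```yaml
# Task config fragment (key names are illustrative -- match them against
# the actual file under diffusion_exploration/diffusion_policy/config/task).
dataset:
  zarr_path: diffusion_exploration/dataset/name_of_test/data.zarr
```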

Training outputs will be written to a directory diffusion_exploration/data/date/time/name_of_run.

Evaluation

Modify test_parameter.py to fit your test needs, then run the evaluation script:

python test_driver.py

Test results will be printed to the terminal and saved as a CSV. An inference_gifs directory will be created in diffusion_exploration/data/date/time/name_of_run.
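Because results land in a CSV, they are easy to aggregate across runs with the standard library. A minimal sketch, assuming a per-episode metric column; the travel_distance column name here is hypothetical, so check the header of the CSV written by test_driver.py for the actual field names:

```python
import csv
from statistics import mean

def summarize_results(csv_path):
    """Aggregate per-episode metrics from the evaluation CSV.

    'travel_distance' is an assumed column name for illustration --
    replace it with a field that actually appears in the CSV header.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    return {
        "episodes": len(rows),
        "mean_travel_distance": mean(float(r["travel_distance"]) for r in rows),
    }
```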


Credit

If you find this work useful, please consider citing us and the following works:

  • DARE: Diffusion Policy for Autonomous Robot Exploration
@inproceedings{cao2025dare,
  author={Cao, Yuhong and Lew, Jeric and Liang, Jingsong and Cheng, Jin and Sartoretti, Guillaume},
  booktitle={2025 IEEE International Conference on Robotics and Automation (ICRA)},
  title={DARE: Diffusion Policy for Autonomous Robot Exploration},
  year={2025},
  pages={11987-11993},
  doi={10.1109/ICRA55743.2025.11128196}
}
  • ARiADNE: A Reinforcement learning approach using Attention-based Deep Networks for Exploration
@inproceedings{cao2023ariadne,
  author={Cao, Yuhong and Hou, Tianxiang and Wang, Yizhuo and Yi, Xian and Sartoretti, Guillaume},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  title={ARiADNE: A Reinforcement learning approach using Attention-based Deep Networks for Exploration},
  year={2023},
  pages={10219-10225},
  doi={10.1109/ICRA48891.2023.10160565}
}
  • Deep Reinforcement Learning-based Large-scale Robot Exploration
@article{cao2024deepreinforcementlearningbasedlargescale,
  author={Cao, Yuhong and Zhao, Rui and Wang, Yizhuo and Xiang, Bairan and Sartoretti, Guillaume},
  journal={IEEE Robotics and Automation Letters}, 
  title={Deep Reinforcement Learning-Based Large-Scale Robot Exploration}, 
  year={2024},
  volume={9},
  number={5},
  pages={4631-4638},
  keywords={Training;Planning;Predictive models;Simultaneous localization and mapping;Trajectory;Three-dimensional displays;Reinforcement learning;Path planning;Robot learning;View Planning for SLAM;reinforcement learning;motion and path planning},
  doi={10.1109/LRA.2024.3379804}
}
  • Diffusion Policy: Visuomotor Policy Learning via Action Diffusion
@inproceedings{chi2023diffusionpolicy,
	title={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
	author={Chi, Cheng and Feng, Siyuan and Du, Yilun and Xu, Zhenjia and Cousineau, Eric and Burchfiel, Benjamin and Song, Shuran},
	booktitle={Proceedings of Robotics: Science and Systems (RSS)},
	year={2023}
}

@article{chi2024diffusionpolicy,
	author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
	title ={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
	journal = {The International Journal of Robotics Research},
	year = {2024},
}

We build on the codebases of Deep Reinforcement Learning-Based Large-Scale Robot Exploration and Diffusion Policy.

