
Curious Replay for Model-based Adaptation

Implementations of Curious Replay, a method for prioritizing experience replay that is tailored to model-based reinforcement learning agents.

Experiences are prioritized based on how interesting they are, as measured by a curiosity signal. In combination with DreamerV3, this method achieves a new state-of-the-art on the Crafter benchmark.
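The prioritization idea above can be sketched as a small replay buffer that samples transitions in proportion to a curiosity score. This is a minimal illustration, not the repository's implementation: the class name, the `alpha` sharpening exponent, and the use of world-model prediction error as the curiosity signal are all assumptions for the sketch.

```python
import numpy as np


class CuriosityPrioritizedReplay:
    """Sketch of a replay buffer that samples experiences in proportion
    to a curiosity score (assumed here: world-model prediction error)."""

    def __init__(self, capacity, alpha=1.0, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha  # how strongly curiosity skews sampling (assumed knob)
        self.eps = eps      # keeps every priority strictly positive
        self.items = []
        self.priorities = []

    def add(self, experience, curiosity):
        # New experiences arrive with a curiosity score, e.g. the world
        # model's prediction error on that transition (an assumption here).
        if len(self.items) >= self.capacity:
            self.items.pop(0)
            self.priorities.pop(0)
        self.items.append(experience)
        self.priorities.append((curiosity + self.eps) ** self.alpha)

    def sample(self, batch_size, rng=None):
        # Draw indices with probability proportional to curiosity priority.
        rng = rng if rng is not None else np.random.default_rng()
        p = np.asarray(self.priorities)
        p = p / p.sum()
        idx = rng.choice(len(self.items), size=batch_size, p=p)
        return [self.items[i] for i in idx], idx

    def update(self, idx, new_curiosity):
        # After a training step, re-score the sampled experiences with the
        # model's fresh prediction error; curiosity falls as the model learns.
        for i, c in zip(idx, new_curiosity):
            self.priorities[i] = (c + self.eps) ** self.alpha
```

Re-scoring in `update` is what makes the scheme adaptive: once the world model has learned a transition, its curiosity drops and the buffer shifts sampling toward experiences that are still surprising.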

[Figure: Curious Replay method overview]

If you find this code useful, please cite it in your paper:

@inproceedings{kauvar2023curious,
  title={Curious Replay for Model-based Adaptation},
  author={Kauvar, Isaac and Doyle, Chris and Zhou, Linqi and Haber, Nick},
  booktitle={International Conference on Machine Learning},
  year={2023}
}

Below are links to implementations for different model-based agents.

DreamerV3

github.com/AutonomousAgentsLab/cr-dv3

DreamerV2

github.com/AutonomousAgentsLab/cr-dv2