Hi @nicklashansen and team,
First of all, thank you for the great work on "Learning Massively Multitask World Models for Continuous Control" and for releasing the code!
I am reading through the paper and am very interested in the MMBench benchmark introduced in the work. As I am relatively new to this area, I am trying to get a better sense of how the benchmark is constructed programmatically.
Could you provide some high-level guidance on the step-by-step process used to collect the demonstrations for the 200 tasks?
I would love to understand how to work with the benchmark directly or potentially extend it in the future, so any instructions would be extremely helpful.
Thanks for your time!