
How to Handle Limited Observability in TDMPC? #21

@lrchit

Description


Hi,

Thank you for the incredible work on TDMPC!

I'm implementing it on my own task with the Unitree Go1 robot, and I have some questions regarding the "observation".
[Image: training results for the stand task with and without privileged observations]

In the image above:

  • The left represents the "stand task" without privileged information (i.e., the velocity of the base link and the robot's height).
  • The right shows the task with these additional privileged observations.

From my experiments, it’s significantly harder for the agent to learn without the privileged information, as shown in the image. After 14 hours of training, the agent on the left (no privileged information) still struggles. By contrast, the agent with privileged information (right) stands reliably after just 40 minutes.

This leads to my confusion:
Is it inherently too challenging to train TDMPC when the critic (value network) only sees a latent state inferred from proprioceptive data alone? A motion capture system could supply these quantities for reward computation during training, but in my case the deployed policy on the real robot would only have access to proprioceptive data.

I’m considering a teacher-student framework:

  1. The teacher loop is trained first with full access to privileged information to refine the latent states.
  2. The student loop then learns to "imitate" the latent states using only proprioceptive data.

Do you think such an approach would help?
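To make the proposal concrete, here is a minimal sketch of the latent-distillation step I have in mind. All names and dimensions (`Encoder`, `PROPRIO_DIM`, `PRIV_DIM`, the MSE objective) are my own assumptions for illustration, not the TD-MPC codebase API: the teacher encoder is trained first on proprioception plus privileged inputs, then frozen, and the student encoder learns to reproduce the teacher's latent from proprioception alone.

```python
# Hypothetical teacher-student latent distillation sketch (not TD-MPC's API).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Simple MLP encoder mapping observations to a latent state."""
    def __init__(self, obs_dim, latent_dim=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)

PROPRIO_DIM = 48  # assumed: joint positions/velocities, gravity vector, ...
PRIV_DIM = 4      # assumed: base linear velocity (3) + base height (1)

teacher = Encoder(PROPRIO_DIM + PRIV_DIM)  # phase 1: full observation
student = Encoder(PROPRIO_DIM)             # phase 2: proprioception only

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(proprio, privileged):
    """One student update: regress the (frozen) teacher's latent state."""
    with torch.no_grad():
        target_z = teacher(torch.cat([proprio, privileged], dim=-1))
    z = student(proprio)
    loss = nn.functional.mse_loss(z, target_z)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At deployment, only `student` would be used, so the policy never needs the privileged inputs; whether the teacher's latent is actually recoverable from proprioception alone is exactly the open question.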

Looking forward to your insights!

Best regards,
Ruochen Li
