
Chen-Wendi/ImplicitRDP


ImplicitRDP:

An End-to-End Visual-Force Diffusion Policy with Structural Slow-Fast Learning

Wendi Chen¹², Han Xue¹, Yi Wang¹², Fangyuan Zhou¹², Jun Lv¹³, Yang Jin¹, Shirun Tang³,
Chuan Wen¹†, Cewu Lu¹²³†
¹Shanghai Jiao Tong University  ²Shanghai Innovation Institute  ³Noematrix Ltd.
*Equal contribution  †Equal advising

arXiv | project website | data | checkpoints | powered by PyTorch

(teaser figure)

⚙️ Environment Setup

📝 Use Customized Force Sensors, Robots, and Tasks

Please refer to docs/customized_deployment_guide.md.

Hardware

  • Workstation with Ubuntu 22.04 for compatibility with ROS2 Humble.

    A workstation with a GPU (e.g., NVIDIA RTX 3090) is required.

  • 1 robot arm with a 6-axis force/torque sensor at the end effector.

    We use Flexiv Rizon 4s.

  • 1 USB camera or RealSense camera.

    We use an off-the-shelf USB camera as the external camera. If you use a RealSense camera, follow the official documentation to install librealsense2.

  • (Optional) A single-button keyboard for controlling kinematic teaching.

    We assign this button as the “pause” key. When the button is pressed, the robot can be dragged for teaching. You can change the assigned key in ImplicitRDP/real_world/kinematic_teaching/kineteach_controller.py.

(hardware setup figure)

Software

  1. Follow the official document to install ROS2 Humble.
  2. Since ROS2 has some compatibility issues with Conda, we recommend using a virtual environment with venv.
    python3 -m venv implicitrdp_venv
    source implicitrdp_venv/bin/activate
    pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 --index-url https://download.pytorch.org/whl/cu121
    pip install -r requirements.txt
  3. We perform manual CPU core binding to reduce the latency caused by the OS scheduler.
    1. Add the following line to /etc/security/limits.conf so the user has permission to set real-time priority.
      username - rtprio 99
      
    2. Edit /etc/default/grub and add isolcpus=xxx to the GRUB_CMDLINE_LINUX_DEFAULT line for isolating certain CPU cores.
    3. Modify the task config file (e.g., ImplicitRDP/config/task/real_flip_one_usb_camera_kineteach_10fps.yaml) and the first few lines of all entry-point Python files (e.g., control.py) to adjust the corresponding core binding.
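After a re-login and reboot, the real-time and core-binding setup above can be sanity-checked with a short Linux-only Python snippet. This is a sketch, not part of the repository; the target core below is a placeholder for the cores you isolated via isolcpus.

```python
import os
import resource

# 1. Verify the rtprio limit from /etc/security/limits.conf is in effect
#    (expect 99/99 after re-login if the limits.conf entry was applied).
soft, hard = resource.getrlimit(resource.RLIMIT_RTPRIO)
print(f"rtprio soft/hard limit: {soft}/{hard}")

# 2. Pin the current process to a specific core, as the entry-point scripts
#    do at startup. Placeholder: we pick one core from the allowed set;
#    substitute your isolated core IDs.
target_cores = {min(os.sched_getaffinity(0))}
os.sched_setaffinity(0, target_cores)
print(os.sched_getaffinity(0) == target_cores)  # True when pinning succeeded
```

If the rtprio limit still prints 0/0, the limits.conf entry has not taken effect for the current session.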

📦 Data Collection

Kinematic Teaching

First configure the environment and the task, then start several services to perform kinematic teaching, publish sensor data, and record the data.

  1. Environment and Task Configuration.
  2. Start services. Run each command in a separate terminal. You can use tmux to split the terminal.
    # start controlling service
    python control.py task=[task_config_file_name]
    # start camera node launcher
    python camera_node_launcher.py task=[task_config_file_name]
    # start data recorder
    python record_data.py --save_to_disk --vis_wrench --save_file_dir [task_data_dir] --save_file_name [record_seq_file_name]
  3. Press the "pause" button to enable kinematic teaching.
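The three services above can be gathered into one helper script, printed here as a dry run so you can review the commands before wiring them into tmux panes. The save directory and file name are placeholders; the task config name is the example from this repository.

```shell
#!/usr/bin/env bash
# Dry-run sketch: print the three service commands, one per terminal/tmux pane.
TASK=real_flip_one_usb_camera_kineteach_10fps  # example task config from this repo

commands=(
  "python control.py task=${TASK}"
  "python camera_node_launcher.py task=${TASK}"
  "python record_data.py --save_to_disk --vis_wrench --save_file_dir data/demo --save_file_name seq_000"  # placeholder paths
)

for cmd in "${commands[@]}"; do
  echo "run in its own terminal: ${cmd}"
done
```

Each command must keep running for the whole teaching session, which is why they belong in separate terminals or tmux panes rather than being backgrounded blindly.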

Data Collection Tips

Please refer to docs/data_collection_tips.md.

Example Data

We provide the data we collected at the data link above.

📚 Training

  1. Task Configuration. In addition to the task config file used in data collection, another file is needed to configure dataset, runner, and model-related parameters such as obs and action. You can take ImplicitRDP/config/task/real_flip_image_wrench_implicitrdp_10fps.yaml as an example. Refer to docs/customized_deployment_guide.md for more details.

    The tags dp, at, ldp, dpt, vrr, fp, and no aux in config file names denote Diffusion Policy, Asymmetric Tokenizer, Latent Diffusion Policy, Diffusion Policy (Transformer), Virtual-target-based Representation Regularization, Force Prediction, and No Auxiliary Task, respectively.

  2. Run the Training Script. We provide training scripts for Diffusion Policy, Reactive Diffusion Policy, Diffusion Policy (Transformer) and ImplicitRDP. The scripts will first post-process the data and then train the model. You can modify the training script to train the desired task and model.
    # config multi-gpu training
    accelerate config
    # Diffusion Policy
    ./train_dp.sh
    # Reactive Diffusion Policy
    ./train_rdp.sh
    # Diffusion Policy (Transformer)
    ./train_dpt.sh
    # ImplicitRDP
    ./train_implicitrdp.sh

    Make sure the action_type used when post-processing the data is consistent with the task config file.
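That consistency requirement can be enforced with a small guard before training. This is a hypothetical sketch: the key name action_type mirrors the task configs, but the exact schema and values here are assumptions, not the repository's real config structure.

```python
def check_action_type(task_cfg: dict, postprocess_cfg: dict) -> bool:
    """True when post-processing and the task config agree on action_type."""
    return (
        task_cfg.get("action_type") is not None
        and task_cfg.get("action_type") == postprocess_cfg.get("action_type")
    )

# Placeholder dicts standing in for the parsed task YAML and the
# post-processing arguments; the value "absolute" is illustrative only.
task_cfg = {"action_type": "absolute"}
postprocess_cfg = {"action_type": "absolute"}

print(check_action_type(task_cfg, postprocess_cfg))   # True: consistent
print(check_action_type(task_cfg, {"action_type": "relative"}))  # False: mismatch
```

Failing fast on a mismatch here is cheaper than discovering it after a full training run.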

🚀 Inference

  1. (Optional) Set vcamera_server_ip and vcamera_server_port in the task config file and start the corresponding vcamera server. To record experiment videos with MindVision cameras, follow third_party/mvsdk/README.md to install the MindVision SDK. Recording videos with RealSense or USB cameras is also supported.
    # run vcamera server
    python vcamera_server.py --host_ip [host_ip] --port [port] --camera_type [camera_type] --camera_id [camera_id] --fps [fps]
  2. Modify eval.sh to set the task and model you want to evaluate, then run the following commands in separate terminals.
    # start controlling service
    python control.py task=[task_config_file_name]
    # start camera node launcher
    python camera_node_launcher.py task=[task_config_file_name]
    # start inference
    ./eval.sh

Checkpoints

We provide the checkpoints from our experiments at the checkpoints link above.

🙏 Acknowledgement

Our work is built upon Reactive Diffusion Policy, Diffusion Policy, VQ-BeT, Stable Diffusion, UMI and Data Scaling Laws. Thanks for their great work!

🔗 Citation

If you find our work useful, please consider citing:

@article{chen2025implicitrdp,
  title     = {ImplicitRDP: An End-to-End Visual-Force Diffusion Policy with Structural Slow-Fast Learning},
  author    = {Chen, Wendi and Xue, Han and Wang, Yi and Zhou, Fangyuan and Lv, Jun and Jin, Yang and Tang, Shirun and Wen, Chuan and Lu, Cewu},
  journal   = {arXiv preprint arXiv:2512.10946},
  year      = {2025}
}

@inproceedings{xue2025reactive,
  title     = {Reactive Diffusion Policy: Slow-Fast Visual-Tactile Policy Learning for Contact-Rich Manipulation},
  author    = {Xue, Han and Ren, Jieji and Chen, Wendi and Zhang, Gu and Fang, Yuan and Gu, Guoying and Xu, Huazhe and Lu, Cewu},
  booktitle = {Proceedings of Robotics: Science and Systems (RSS)},
  year      = {2025}
}

About

Official Code of ImplicitRDP: An End-to-End Visual-Force Diffusion Policy with Structural Slow-Fast Learning
