empriselab/feeding-deployment

Requirements

  • Python 3.10+
  • Tested on Ubuntu 20.04

Pre-Installation

  1. Install ROS and rospy.
  2. Install pyaudio.

Installation

  1. Recommended: create and source a virtualenv or a conda environment.
  2. Run pip install -e ".[robot, develop]" for the full install, or pip install -e . for the preference-learning setup only.

Run Feeding Demo on Real Robot

  1. Run the arm controller server on the NUC:
    • ssh to the NUC (alias sshnuc; uses the lab password)
    • [only for inside-mouth bite transfer] zero the arm torque offsets:
      • Alias set_zeros on NUC
      • Otherwise, run the following commands:
        • conda activate controller
        • cd ~/feeding-deployment/src/feeding_deployment/robot_controller
        • python kinova.py
    • run the controller server:
      • Alias run_server on NUC
      • Otherwise, run the following commands:
        • conda activate controller
        • cd feeding-deployment/src/feeding_deployment/robot_controller
        • python arm_server.py
  2. Run bulldog on the NUC:
    • ssh to the NUC (alias sshnuc; uses the lab password)
    • run bulldog with alias run_bulldog
  3. Run a roscore on the compute system: roscore
  4. Launch all the sensors on the compute system using launch_sensors
  5. Launch the roslaunch on compute system for visualization / publish tfs:
    • Alias launch_robot on compute system
    • Otherwise, run the following commands from the root of your ROS workspace:
      • conda activate feed
      • source devel/setup.bash
      • cd src/feeding-deployment/launch
      • roslaunch robot.launch
  6. Start feeding utensil:
    • Alias launch_utensil on compute system
    • Otherwise, run the following commands from the root of your ROS workspace:
      • conda activate feed
      • source devel/setup.bash
      • rosrun wrist_driver_ros wrist_driver
    • Important Note: To shut down this node, press Ctrl + / (signal handling is set up to shut down cleanly)
  7. Start the web application:
    • Make sure that the feeding laptop's WiFi is off (so that the webapp only launches on the router IP)
    • Alias launch_app on compute system
    • Otherwise, run the following commands from the root of your ROS workspace:
      • conda activate feed
      • source devel/setup.bash
      • cd ~/deployment_ws/src/feedingpage/vue-ros-demo
      • npm run serve
    • On a browser connected to FeedingDeployment-5G (on the laptop or the iPad), open the following webpage: http://192.168.1.2:8080/#/task_selection
  8. Run the feeding demo:
    • Make sure that the feeding laptop's WiFi is on and connected to the internet so that ChatGPT API works (use KortexWiFi if available)
    • Alias run_demo on compute system
    • Otherwise, run the following commands from the root of your ROS workspace:
      • conda activate feed
      • source devel/setup.bash
      • cd src/feeding-deployment/src/feeding_deployment/integration
      • python run.py --user tests --run_on_robot --use_interface --no_waits
    • Important Note 1: If you want to resume from some state (state names: after_utensil_pickup, after_bite_pickup, last_state), use: python run.py --user tests --run_on_robot --use_interface --no_waits --resume_from_state after_utensil_pickup (replace after_utensil_pickup with the appropriate state name).
    • Important Note 2: The preset food item for tests user is bananas. If you want to try some other food item, just change the user name to a new one. For example, python run.py --user tests_new --run_on_robot --use_interface --no_waits
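
The run.py flags used above can be summarized with a small argparse sketch. The flag names and state names come from the commands in this README; the parser structure, defaults, and help strings are illustrative assumptions, not the actual implementation.

```python
import argparse

# Hypothetical sketch of the run.py command-line interface described above.
# Flag and state names come from the README; everything else is illustrative.
RESUME_STATES = ["after_utensil_pickup", "after_bite_pickup", "last_state"]

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Feeding demo runner (sketch)")
    parser.add_argument("--user", required=True,
                        help="user name; also selects the preset food item")
    parser.add_argument("--run_on_robot", action="store_true",
                        help="execute on the real robot instead of simulation")
    parser.add_argument("--use_interface", action="store_true",
                        help="drive the demo from the web application")
    parser.add_argument("--no_waits", action="store_true",
                        help="skip interactive pauses between states")
    parser.add_argument("--resume_from_state", choices=RESUME_STATES,
                        help="resume the demo from a previously saved state")
    return parser

args = build_parser().parse_args(
    ["--user", "tests", "--run_on_robot", "--use_interface",
     "--no_waits", "--resume_from_state", "after_utensil_pickup"])
print(args.user, args.resume_from_state)
```

Passing an unknown state name to --resume_from_state would fail fast at parse time thanks to choices, which matches the fixed set of state names listed in Important Note 1.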

Moving the robot to preset configurations

You can move the robot to preset configurations by running:

  • Alias cd_actions on compute system
  • python retract.py (similarly, run transfer.py or acquisition.py to move to those configurations)

Calibrate tool offset for inside-mouth transfer

  1. Grasp the tool and move to before bite transfer position.
  2. Calibrate tool:
    • Alias cd_demo on compute system
    • Otherwise, run the following commands from the root of your ROS workspace:
      • conda activate feed
      • source devel/setup.bash
      • cd src/feeding-deployment/src/feeding_deployment/integration
    • python transfer_calibration.py --tool <tool_name>, where <tool_name> is one of fork, drink, or wipe
  3. Manually (using buttons on the robot) move the robot to the intended inside-mouth transfer config, and press [ENTER] in the script above to record it.
  4. To test the tool calibration:
    • Alias cd_demo on compute system
    • Otherwise, run the following commands from the root of your ROS workspace:
      • conda activate feed
      • source devel/setup.bash
      • cd src/feeding-deployment/src/feeding_deployment/integration
    • python transfer_calibration.py --tool <tool_name> --test, where <tool_name> is one of fork, drink, or wipe
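
The README does not show how transfer_calibration.py represents the recorded offset. A common convention for a tool offset is the relative homogeneous transform between the end-effector pose before transfer and the manually recorded inside-mouth pose; the sketch below, with made-up poses, shows only that idea, not the script's actual math.

```python
import numpy as np

# Illustrative only: a tool offset expressed as the relative homogeneous
# transform T_offset = inv(T_before) @ T_recorded, i.e. the recorded
# inside-mouth pose expressed in the before-transfer end-effector frame.

def pose(R: np.ndarray, t) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

T_before = pose(np.eye(3), [0.40, 0.0, 0.30])    # pose before bite transfer (made up)
T_recorded = pose(np.eye(3), [0.45, 0.0, 0.25])  # manually recorded pose (made up)

T_offset = np.linalg.inv(T_before) @ T_recorded
print(T_offset[:3, 3])  # translation offset in the before-transfer frame
```

Replaying the transfer then amounts to composing the current before-transfer pose with the saved offset, which is what the --test mode would verify.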

Run Feeding Demo in Simulation

  1. Launch the roslaunch for visualization / publish tfs:
    • Navigate to the launch files: cd launch
    • Launch: roslaunch sim.launch
  2. Run the feeding demo:
    • Navigate to integration scripts: cd src/feeding_deployment/integration
    • Run demo: python demo.py

Random

  • To check FT readings: rostopic echo /forque/forqueSensor
  • IP for robot: 192.168..10
  • IP for webapp: http://192.168.1.2:8080/#/task_selection

Build navigation map + save named base locations (ros_vention)

roslaunch ros_vention vention_navigation.launch does not load map files by itself. It starts move_base, which consumes whatever /map topic and map -> odom transform are currently being published.

The workflow below uses Cartographer-native saved state (.pbstream).

Part 1: First-time mapping + save map state + save named locations

From /home/isacc/deployment_ws, source your workspace in each terminal: source devel/setup.bash

  1. Start core and robot sources:
    • roscore
    • roslaunch ros_vention vention_description.launch
    • roslaunch ros_vention vention_rplidar_a1.launch
    • roslaunch ros_vention vention_odm_d435.launch
    • roslaunch ros_vention vention_cartographer_lidar.launch
  2. Build map and save Cartographer state (.pbstream):
    • cd src/feeding-deployment
    • python src/feeding_deployment/integration/build_map_interactive.py --pbstream-file /home/isacc/deployment_ws/src/feeding-deployment/config/maps/vention_map.pbstream
    • Optional: also export YAML/PGM snapshot: add --save-occupancy-snapshot
    • By default this script does not call /finish_trajectory, so Cartographer can keep publishing map -> odom for follow-up steps like named location capture.
    • Optional: if you explicitly want to finish the trajectory during save, add --finish-trajectory-before-save
  3. Save named navigation locations:
    • python src/feeding_deployment/integration/capture_named_locations.py --locations-file /home/isacc/deployment_ws/src/feeding-deployment/config/nav_named_locations.yaml
    • This captures in order: fridge, microwave, table, sink.
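
The README does not show the schema of the file that capture_named_locations.py writes. A plausible sketch, assuming each named location is stored as a 2D pose (x, y, yaw) in the map frame, might look like this; the keys come from the capture order above, the pose values and field names are hypothetical:

```yaml
# Hypothetical layout of config/nav_named_locations.yaml; the actual
# schema written by capture_named_locations.py may differ.
fridge:    {x: 1.20, y: 0.40, yaw: 0.00}
microwave: {x: 2.10, y: 0.40, yaw: 1.57}
table:     {x: 3.00, y: -0.80, yaw: 3.14}
sink:      {x: 0.50, y: -1.50, yaw: -1.57}
```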

Part 2: Actual deployment (reuse saved map)

From /home/isacc/deployment_ws, source your workspace in each terminal: source devel/setup.bash

  1. Start core and robot sources:
    • roscore
    • roslaunch ros_vention vention_description.launch
    • roslaunch ros_vention vention_rplidar_a1.launch
    • roslaunch ros_vention vention_odm_d435.launch
  2. Start Cartographer localization from saved state:
    • roslaunch ros_vention vention_cartographer_localization.launch load_state_filename:=/home/isacc/deployment_ws/src/feeding-deployment/config/maps/vention_map.pbstream
  3. Start navigation:
    • roslaunch ros_vention vention_navigation.launch

In deployment mode, Cartographer publishes /map and map -> odom from the saved .pbstream, and move_base consumes that.

By default, named locations are written to config/nav_named_locations.yaml. NavigateHLA reads this file automatically. To use a different file, set: export FEEDING_NAV_LOCATIONS_FILE=/absolute/path/to/your_locations.yaml
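
NavigateHLA's actual lookup logic is not shown in this README; a minimal sketch of how the override could be resolved, assuming the environment variable from above and a default path mirroring the documented location, is:

```python
import os

# Sketch only: resolve the named-locations file, preferring the
# FEEDING_NAV_LOCATIONS_FILE override documented in the README and
# falling back to the default config path.
DEFAULT_LOCATIONS_FILE = "config/nav_named_locations.yaml"

def resolve_locations_file() -> str:
    """Return the override from FEEDING_NAV_LOCATIONS_FILE, else the default."""
    return os.environ.get("FEEDING_NAV_LOCATIONS_FILE", DEFAULT_LOCATIONS_FILE)

os.environ["FEEDING_NAV_LOCATIONS_FILE"] = "/tmp/my_locations.yaml"
print(resolve_locations_file())
os.environ.pop("FEEDING_NAV_LOCATIONS_FILE")
print(resolve_locations_file())
```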

Check Installation

Run ./run_ci_checks.sh. It should complete with all checks green in 5-10 seconds.

About

Code for the robot-assisted feeding project at EmPRISE Lab
