Stack of two packages in this repository:

| Package | Role |
|---|---|
| `lidar_obstacle_detection` | Node `lidar_cloud_ingress`: subscribes to `PointCloud2`, applies a rigid transform into the robot base frame, spatial filter + voxel downsampling, optional temporal merge, optional surface segmentation + DBSCAN → `ObstacleList` (with a YZ-smoothed `closest_surface_point`) and debug topics (optional depth-shaded segmented cloud, optional closest-point RViz spheres). |
| `lidar_obstacle_detection_msgs` | Message definitions (`Obstacle`, `ObstacleList`). |
Defaults in `lidar_obstacle_detection/config/lidar_cloud_ingress.yaml` target a Unitree Go2 setup: input cloud `/utlidar/cloud`, output in `base_link`, and mount extrinsics matching that robot. The published cloud is always expressed in `output_cloud_frame_id` (usually `base_link`). A static TF `base_link → lidar_link` is published only for visualization alignment in RViz; it does not change the point math.
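For orientation, a minimal sketch of what such a parameters file looks like. This is illustrative only: the real defaults live in `config/lidar_cloud_ingress.yaml`, and the extrinsic numbers below are hypothetical placeholders, not the Go2 values.

```yaml
# Illustrative sketch — see config/lidar_cloud_ingress.yaml for the real defaults.
lidar_cloud_ingress:
  ros__parameters:
    input_topic: /utlidar/cloud            # driver's PointCloud2 topic
    output_cloud_frame_id: base_link       # frame of the published cloud
    lidar_mount_tf_parent_frame: base_link # must match output_cloud_frame_id
    lidar_link_frame: lidar_link
    lidar_mount_tf_xyz: [0.17, 0.0, 0.11]    # hypothetical mount offset (m)
    lidar_mount_tf_rpy_rad: [0.0, 0.0, 0.0]  # hypothetical mount rotation (rad)
```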
RViz2 — point cloud in base_link, segmentation display, obstacle markers (bounding box + normal), and TF frames (base_link, lidar_link):
Pipeline — raw scan → intermediate processing → clustered obstacles (cosine similarity surface split + DBSCAN):
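The surface-split + DBSCAN stage above can be sketched as follows. This is an illustration, not the node's actual implementation: it assumes per-point normals are already estimated, and the parameter names (`cosine_threshold`, `eps`, `min_samples`) merely mirror the YAML parameters discussed later.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_and_cluster(points, normals, up=np.array([0.0, 0.0, 1.0]),
                      cosine_threshold=0.9, eps=0.3, min_samples=10):
    """Drop ground-like points whose normals are nearly parallel to the up
    axis (|cosine similarity| above the threshold), then cluster the
    remaining obstacle points with DBSCAN."""
    cos_sim = np.abs(normals @ up)          # |cos| of angle between normal and +Z
    obstacle_pts = points[cos_sim < cosine_threshold]
    if len(obstacle_pts) == 0:
        return obstacle_pts, np.array([], dtype=int)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(obstacle_pts)
    return obstacle_pts, labels             # label -1 marks DBSCAN noise
```

Points kept by the cosine split are grouped by spatial density; each non-negative label corresponds to one obstacle candidate.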
From your ROS 2 workspace (parent of this `src` tree):

```shell
cd ~/ros2_ws
rosdep install --from-paths src/object_detection --ignore-src -r -y  # optional; first-time deps
python3 -m pip install -r src/object_detection/lidar_obstacle_detection/requirements.txt
colcon build --packages-up-to lidar_obstacle_detection --symlink-install
source install/setup.bash
ros2 launch lidar_obstacle_detection lidar_cloud_ingress.launch.py
```

- With RViz (default): omit `headless` or set `headless:=false`.
- Headless (no RViz, e.g. on-robot or SSH): `headless:=true`.
- Override parameters file: `ingress_params_file:=/path/to/your.yaml`. Verbose logs: `verbose:=true`.
Edit `lidar_obstacle_detection/config/lidar_cloud_ingress.yaml` (or a copy passed as `ingress_params_file`). Fields that almost always change for a new platform:
| Parameter | What to set |
|---|---|
| `input_topic` | Your driver's `sensor_msgs/PointCloud2` topic. |
| `output_cloud_frame_id` | Frame of the processed cloud (typically the robot base / `base_link`). |
| `lidar_mount_tf_parent_frame` | Must match `output_cloud_frame_id` so TF and cloud headers agree. |
| `lidar_link_frame` | Child name for the visualization static TF (often matches the URDF LiDAR link). |
| `lidar_mount_tf_xyz`, `lidar_mount_tf_rpy_rad` | Parent → LiDAR extrinsic (meters, radians); the same geometry the node uses to transform points. |
| `driver_cloud_to_lidar_link_xyz`, `driver_cloud_to_lidar_link_rpy_rad` | Extra rigid step if the driver's points are not already in the frame your URDF expects before the mount (often all zeros). |
| `publish_lidar_mount_static_tf` | Set `false` if another node (e.g. `robot_state_publisher`) already publishes the same transform. |
Then tune `fov_deg`, `max_depth`, `voxel_size`, `temporal_*`, and the perception parameters (`dbscan_*`, `cosine_threshold`, etc.) for your sensor density and environment. For forward depth in `base_link` (+X ahead), the `yz_depth_*` parameters control YZ-bin smoothing of the minimum-X surface estimate; `perception_segmented_cloud_depth_shading` modulates segmented obstacle colors by that smoothed depth in RViz; `publish_closest_surface_markers` adds sphere markers at the chosen closest points (see the package README).
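As a starting point, a hedged tuning fragment. The values are illustrative, not recommendations, and the exact key names under `dbscan_*` are assumptions; check the shipped YAML for the authoritative names and defaults.

```yaml
# Illustrative tuning values — adjust for your sensor density and environment.
lidar_cloud_ingress:
  ros__parameters:
    fov_deg: 120.0           # keep only points within a forward wedge
    max_depth: 6.0           # meters; drop returns beyond this range
    voxel_size: 0.05         # meters; coarser = faster, finer = denser
    cosine_threshold: 0.90   # surface-split similarity cutoff
    dbscan_eps: 0.25         # hypothetical key: cluster neighborhood radius (m)
    dbscan_min_samples: 8    # hypothetical key: min neighbors for a core point
```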
Saved config: `lidar_obstacle_detection/rviz/lidar_ingress.rviz`.
- Set Global Options → Fixed Frame to your base frame (default `base_link`).
- Point each display's Topic at the names you set in YAML (`output_topic`, `colored_segmented_cloud_topic`, `obstacle_markers_topic`, etc., if you renamed them). With `perception_segmented_cloud_depth_shading`, the segmented cloud encodes smoothed forward depth in brightness; enable `publish_closest_surface_markers` to add closest-surface spheres on the obstacle `MarkerArray` topic (marker namespace `lidar_obstacle_closest_surface`).
See the package README: `lidar_obstacle_detection/README.md` (topics, QoS, algorithm layout, message README link).
Related work was poster-presented at the ICRA 2025 Workshop on Field Robotics. If you use this software or ideas from that line of work, please cite:
T. Girgin, E. Girgin, and C. Kilic, “Learning Rock Pushability on Rough Planetary Terrain,” arXiv preprint arXiv:2505.09833, 2025. https://arxiv.org/abs/2505.09833
BibTeX:

```bibtex
@article{girgin2025learning,
  title={Learning Rock Pushability on Rough Planetary Terrain},
  author={Girgin, Tuba and Girgin, Emre and Kilic, Cagri},
  journal={arXiv preprint arXiv:2505.09833},
  year={2025}
}
```
