
Lightweight Terrain Mapping

This node provides a lightweight implementation of terrain mapping suitable for resource- and/or sensor-constrained platforms.

It expects RGB pointclouds from a stereo or depth camera as input. The pointcloud is filtered to the area in front of the robot. After quantization and projection to a 2.5D grid, missing measurements are predicted using the approach detailed here:

http://proceedings.mlr.press/v87/shan18a/shan18a.pdf
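
As a rough sketch of the filtering and quantization steps (illustrative only: the function name, cell hash, and keep-highest-point rule are assumptions, and the kernel-based prediction of missing cells from the paper above is omitted):

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <cmath>
    #include <cstdint>
    #include <unordered_map>

    // Illustrative sketch: crop the cloud to a box in front of the robot,
    // then keep one 2.5D sample (height plus color) per grid cell.
    // Compare the sensor_*_limit and map_resolution parameters below.
    struct CellSample { float z; uint8_t r, g, b; };

    std::unordered_map<int64_t, CellSample> quantize(
        const pcl::PointCloud<pcl::PointXYZRGB>& cloud,
        float resolution,
        float x_min, float x_max, float y_min, float y_max)
    {
      std::unordered_map<int64_t, CellSample> grid;
      for (const auto& p : cloud) {
        if (p.x < x_min || p.x > x_max || p.y < y_min || p.y > y_max)
          continue;  // filter to the area in front of the robot
        const int64_t ix = static_cast<int64_t>(std::floor(p.x / resolution));
        const int64_t iy = static_cast<int64_t>(std::floor(p.y / resolution));
        const int64_t key = ix * 1000003LL + iy;  // simple cell-index hash
        auto it = grid.find(key);
        if (it == grid.end() || p.z > it->second.z)
          grid[key] = {p.z, p.r, p.g, p.b};  // keep the highest point per cell
      }
      return grid;
    }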

The measurements are then fused into a long term map (held in a configurable LRU cache).
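
A minimal sketch of such an LRU cache, capped at a fixed number of entries (cf. the cache_max_cells parameter); the key and cell types are placeholders, not the node's actual data structures:

    #include <list>
    #include <unordered_map>
    #include <utility>

    // Sketch of an LRU cache of map cells: touching a cell moves it to the
    // front of a recency list; inserting past capacity evicts the back.
    template <typename Key, typename Cell>
    class LruCellCache {
     public:
      explicit LruCellCache(size_t capacity) : capacity_(capacity) {}

      Cell& touch(const Key& key) {
        auto it = index_.find(key);
        if (it != index_.end()) {
          // Move the accessed cell to the front (most recently used).
          order_.splice(order_.begin(), order_, it->second);
          return it->second->second;
        }
        // Insert a fresh cell; evict the least recently used if over capacity.
        order_.emplace_front(key, Cell{});
        index_[key] = order_.begin();
        if (index_.size() > capacity_) {
          index_.erase(order_.back().first);
          order_.pop_back();
        }
        return order_.front().second;
      }

     private:
      size_t capacity_;
      std::list<std::pair<Key, Cell>> order_;
      std::unordered_map<Key, typename std::list<std::pair<Key, Cell>>::iterator> index_;
    };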

For each cell in the map, we keep a 16x16 pixel RGB patch. These patches are later extracted in a square region around the robot, optionally pushed through an embedding model and then a cost model, and the resulting products are output through a grid_map with a number of layers.
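
A hedged sketch of that inference path, assuming the .pt files referenced later (embed_model_file, cost_model_file) are TorchScript modules and that patches are batched as [N, 3, 16, 16] tensors (both assumptions):

    #include <torch/script.h>

    // Illustrative only: push a batch of 16x16 RGB patches through an
    // embedding model and then a cost model. Shapes are assumptions.
    torch::Tensor patch_costs(torch::jit::Module& embed_model,
                              torch::jit::Module& cost_model,
                              const torch::Tensor& patches)  // [N, 3, 16, 16]
    {
      torch::NoGradGuard no_grad;
      auto embeddings = embed_model.forward({patches}).toTensor();  // [N, D]
      return cost_model.forward({embeddings}).toTensor();           // per-patch costs
    }

    // The modules would be loaded once at startup, e.g.:
    //   torch::jit::Module embed_model = torch::jit::load(embed_model_file);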

Build Requirements

ltmap requires libtorch to build and run. Ideally this is provided by a preinstalled PyTorch package at the location referenced in CMakeLists.txt. The build will fail if version 2.7.0 or greater is not found.

Running with Unity Sim

You can run ltmap as an alternative perception stack in sim using the following:

# start the simulator
./sim.x86_64 --load-scene Meadows --start-location road1

# in another window, remove the RZR and add a warthog
simctl call set_robot_count 0 crl_rzr
simctl call set_robot_count 1 warthog

# now launch the stack
ros2 launch phoenix_launch stack_launch.xml ltmap:=true sim:=true name:=warthog1

# in a third window, launch rviz
ros2 launch phoenix_launch rviz_launch.py name:=warthog1

This should result in something that looks roughly like the following (the map is published on /warthog1/map/planning/local):

[example image]

Operation

ltmap can be run in both a data collection mode and an inference mode. To enable data collection, collect_train_data should be set to true and an output directory provided. Notebooks are provided which show how to import this data and how to train both an embedding model and a cost model.

Once a model is created, it can be provided to ltmap which will run inference to create embeddings and terrain costs on the fly.

It can also be instructed to export high-resolution PNG maps with export_map_data. Avoid collecting training data and exporting map data at the same time, as doing so increases latency in map updates.

ltmap does not currently perform SLAM. It relies on transforms provided by an external estimation/localization solution. It 'averages' the latest observation into the current map, with high trust placed on the most recent observations.

Features Tracked

The map tracks elevation, hue/grayscale value (from RGB), and delta_height over time with Kalman updates.
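
A minimal sketch of a per-cell scalar Kalman update (the noise values and the variance-inflation step are assumptions, not the node's documented model):

    // Illustrative scalar Kalman filter for one tracked feature of one cell
    // (e.g. elevation). Inflating the variance before each update keeps the
    // filter responsive, i.e. it places high trust in recent observations.
    struct ScalarKalman {
      float mean = 0.0f;
      float var  = 1e6f;  // large prior variance: first measurement dominates

      void update(float z, float meas_var, float process_var) {
        var += process_var;                      // predict: inflate uncertainty
        const float k = var / (var + meas_var);  // Kalman gain
        mean += k * (z - mean);                  // correct toward measurement
        var  *= (1.0f - k);
      }
    };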

It also keeps track of a 16x16 pixel RGB patch for each map cell aligned to the global grid. During data collection, it collects a stack of such patches, one for each observation in that cell.
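
A sketch of that collection step, capped at collect_max_samples_per_patch (the row-major RGB patch layout is an assumption):

    #include <array>
    #include <cstdint>
    #include <vector>

    // One 16x16 RGB patch per observation of a cell (layout assumed row-major).
    using Patch = std::array<uint8_t, 16 * 16 * 3>;

    struct CellPatchStack {
      std::vector<Patch> samples;

      void add(const Patch& p, size_t max_samples) {
        if (samples.size() < max_samples)
          samples.push_back(p);  // ignore further observations once full
      }
    };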

Runtime Products

The node exports a gridmap (published on ltmap/gridmap) which contains a number of layers:

layer      description
hue        hue/grayscale value for the cell
elevation  elevation of the cell
delta_h    difference between the min and max height in the cell
speed      [NOT CURRENTLY USED]
cost       costmap value for the cell
min_z      thresholded version of delta_h, providing a simplified map if no cost models are loaded
sim        embedding cosine similarity of the full map to the patch currently under the robot (see the sketch below)
p_lethal   [UNDER DEVELOPMENT]
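
For the sim layer, a minimal sketch of the cosine-similarity computation between each cell's embedding and the embedding of the patch under the robot (storing embeddings as flat float vectors is an assumption; the node's internal layout may differ):

    #include <cmath>
    #include <vector>

    // Cosine similarity between two embeddings; 0 is returned for a
    // degenerate (zero-norm) input rather than dividing by zero.
    float cosine_similarity(const std::vector<float>& a, const std::vector<float>& b)
    {
      float dot = 0.f, na = 0.f, nb = 0.f;
      for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
      }
      const float denom = std::sqrt(na) * std::sqrt(nb);
      return denom > 0.f ? dot / denom : 0.f;
    }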

It also exports a number of debugging pointclouds that show the progressive stages in the pipeline, as well as the full map: the entirety of the map currently stored in the LRU cache.

Example Usage

The node can be started with a launch file such as the following:

  <arg name="ltmap_robot_config" default="$(find-pkg-share ltmap)/config/ltmap_$(var name).yaml" />

  <push_ros_namespace namespace="$(var name)"/>

  <node pkg="depth_image_proc" exec="point_cloud_xyzrgb_node" name="point_cloud_xyzrgb_node">
        <remap from="rgb/camera_info" to="sensors/camera/realsense/color/camera_info" />
        <remap from="rgb/image_rect_color" to="sensors/camera/realsense/color/image_raw" />
        <remap from="depth_registered/image_rect" to="sensors/camera/realsense/aligned_depth_to_color/image_raw" />
        <remap from="points" to="sensors/camera/realsense/depth/points" />
  </node>

  <node pkg="ltmap" exec="ltmap_node" name="ltmap" output="screen">
        <remap from="points" to="sensors/camera/realsense/depth/points" />
        <remap from="imu/data" to="/$(var name)/sensors/microstrain/ekf/imu/data" />
        <!-- remap from="ltmap/gridmap" to="map/planning/local" / -->

        <!-- define robot specific params here -->
        <param from="$(var ltmap_robot_config)"/>
        
        <!-- define non-robot specific params here -->
        <param name="collect_train_data" value="False" />    
        <param name="collect_output_dir" value="/phoenix/repmap" />

        <param name="export_map_data" value="False" />    
        <param name="export_output_dir" value="/phoenix/repmap_maps" />
        
        <param name="embed_model_file" value="/phoenix/install/ltmap/share/ltmap/models/repmap_embed_model_ViTMAE_sim.pt" />
        
        <param name="cost_model_interactive" value="True" />
        <param name="cost_model_file" value="/phoenix/install/ltmap/share/ltmap/models/repmap_krr_model_ViTMAE_sim.pt" />
  </node>

Parameters

The node takes the following parameters. The defaults should be sufficient for most use cases:

    // sensor filtering
    sensor_neg_z_limit_ = this->declare_parameter("sensor_neg_z_limit", 0.0);
    sensor_pos_z_limit_ = this->declare_parameter("sensor_pos_z_limit", 40.0);
    sensor_neg_x_limit_ = this->declare_parameter("sensor_neg_x_limit", 30.0);
    sensor_pos_x_limit_ = this->declare_parameter("sensor_pos_x_limit", 30.0);
    sensor_neg_y_limit_ = this->declare_parameter("sensor_neg_y_limit", 5.0);
    sensor_pos_y_limit_ = this->declare_parameter("sensor_pos_y_limit", 2.0);

    // map params
    map_resolution_ = this->declare_parameter("map_resolution", 0.2);
    quant_map_size_ = this->declare_parameter("quant_map_size", 420);
    predict_kernel_size_ = this->declare_parameter("predict_kernel_size", 0.8);
    delta_height_scale_ = this->declare_parameter("height_delta_scale", 5.0);
    cache_max_cells_ = this->declare_parameter("cache_max_cells", 50000);

    costmap_size_ = this->declare_parameter("costmap_size", 210);
    min_z_thres_ = this->declare_parameter("min_z_thres", 0.5);
    
    // other params
    grayscale_ = this->declare_parameter("grayscale", false);
    default_max_speed_ = this->declare_parameter("default_max_speed", 8.0);
    
    // training
    collect_train_data_ = this->declare_parameter("collect_train_data", false);
    collect_output_dir_ = this->declare_parameter("collect_output_dir", "/tmp/repmap");
    collect_max_samples_per_patch_ = this->declare_parameter("collect_max_samples_per_patch", 64);
    collect_min_pixels_per_patch_ = this->declare_parameter("collect_min_pixels_per_patch", 5);
    
    // exports
    export_map_data_ = this->declare_parameter("export_map_data", false);
    export_output_dir_ = this->declare_parameter("export_output_dir", "/tmp/repmap_maps");
    
    // models
    embed_model_file_ = this->declare_parameter("embed_model_file", "");
    cost_model_file_ = this->declare_parameter("cost_model_file", "");
    cost_model_interactive_ = this->declare_parameter("cost_model_interactive", false);

Input Topics

The node accepts PclXYZRGB on points. These can easily be remapped if desired. For training data collection, IMU data can be captured as well; it is not required for inference.

    cloud_sub_ = this->create_subscription<sensor_msgs::msg::PointCloud2>("points", 
                        rclcpp::QoS(rclcpp::KeepLast(5)).best_effort().durability_volatile(), 
                        std::bind(&TerrainMap::pcCallback, this, _1), 
                        cloud_options);

    imu_sub_ = this->create_subscription<sensor_msgs::msg::Imu>("imu/data", 
                        5, 
                        std::bind(&TerrainMap::imuCallback, this, _1), 
                        imu_options);

Visualization Output Topics

The node exports pointclouds at each stage of processing for visualization as well as the long term map.

    filtered_pub_ = this->create_publisher<sensor_msgs::msg::PointCloud2> ("ltmap/debug/filtered", 1);
    quantized_pub_ = this->create_publisher<sensor_msgs::msg::PointCloud2> ("ltmap/debug/quantized", 1);
    predicted_pub_ = this->create_publisher<sensor_msgs::msg::PointCloud2> ("ltmap/debug/predicted", 1);

These topics provide debug viz along the processing path. All of these are best viewed with the 'Box' rendering with a cell size of map_resolution_ (default 0.2).

Main Output Topics

    gridmap_pub_ = this->create_publisher<grid_map_msgs::msg::GridMap> ("ltmap/gridmap", 1);
    fullmap_pub_ = this->create_publisher<sensor_msgs::msg::PointCloud2> ("ltmap/debug/full", 1);    

These topics are the main outputs.

  • gridmap is the gridmap described above
  • full is the elevation heightmap exported as a PclXYZRGB.
