smrithis/hw10_AutonomousBot
Self Driving Turtlebot

This project implements a vision-based steering angle prediction model for autonomous navigation using the TurtleBot platform.
It involves data collection via teleoperation, preprocessing of camera images, and training a PyTorch-based regression model to predict steering commands from RGB images. The workflow has several steps, so follow this README carefully and watch the workflow videos in the resource folder.

The model learns to predict the angular velocity (ω) command directly from the RGB image captured by the front camera of the TurtleBot. It serves as a foundation for end-to-end autonomous steering control in a ROS2 environment.

Software package

The workspace contains a folder named model_generation and a ROS package called self_driving.

Model_generation folder structure:

├── ckpt_best.pt              # trained weights (initially might not be present)
├── data
│   ├── processed
│   │   └── merged_dataset    # dataset name
│   │       ├── images            # extracted images - gets populated
│   │       ├── index_smooth.json # smoothed cmd_vel - gets populated
│   │       └── index_split.json  # raw cmd_vel - gets populated
│   └── raw                   # contains the ROS 2 bag file(s)
├── dataset.py                # class that loads the dataset
├── eval.py                   # script to test the model
├── model.py                  # neural net model
├── runs                      # stores logs for TensorBoard
├── train.ipynb               # notebook to train the model
└── util
    ├── data_split.py         # split the data randomly into train, test, val
    ├── extract_ros_bag.py    # extract a ROS bag and dump images (.png) and cmd_vel (.json)
    ├── merge_dataset.py      # merge datasets from different bags
    ├── plot_data.py          # plot cmd_vel for debugging
    └── smooth_omega.py       # smooth cmd_vel for better training and generalization

Self_driving package structure:

├── Launch                          # launch file(s) for the inference node
├── resource                        # package resource marker
├── self_driving
│   ├── __init__.py
│   ├── model.py                    # neural net model (matches the trained checkpoint)
│   └── steering_inference_image.py # inference node: subscribes to images, publishes cmd_vel
├── package.xml                     # package manifest
├── setup.config
└── setup.py

Software dependencies and setup (install in sequence):

$ sudo apt install python3-pip
$ pip install torch==2.3.1 torchvision==0.18.1 --index-url https://download.pytorch.org/whl/cu121
$ pip install tensorboard
$ pip install tqdm
$ sudo apt install ros-humble-cv-bridge

You will train the model using Google Colab with GPU acceleration enabled. The file for this is train.ipynb. Upload it to your Google Drive and mount the Drive in the Colab notebook so the script can access your dataset. The instructions for this are provided in the train.ipynb file; please follow them accordingly.

Files to modify

In the robot computer
We need to increase the framerate of the topic /cmd_vel.
Open the following file:

$ nano turtlebot3_ws/src/turtlebot3/turtlebot3_teleop/turtlebot3_teleop/script/teleop_keyboard.py

Modify line #89:
rlist, _, _ = select.select([sys.stdin], [], [], 0.03)   # 0.03 ≈ 33 Hz; originally 0.1 for 10 Hz
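To see why the 0.03 s timeout caps the loop at roughly 33 Hz, here is a minimal sketch (not the teleop code itself) that times an idle polling loop:

```python
import select
import time

TIMEOUT = 0.03  # seconds; the teleop loop blocks at most this long per pass

start = time.monotonic()
for _ in range(20):
    # With no file descriptors ready, select simply waits for TIMEOUT,
    # so the polling loop runs at roughly 1 / TIMEOUT ≈ 33 Hz.
    select.select([], [], [], TIMEOUT)
elapsed = time.monotonic() - start

print(f"approx loop rate: {20 / elapsed:.0f} Hz")
```

With the original 0.1 s timeout the same loop tops out near 10 Hz, which is why the value must be lowered before recording.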

Rebuild and source the workspace afterwards.

Important tip

We assume that the images (/image_raw/compressed) arrive at 5 Hz and cmd_vel at 33 Hz. Without these rates, regression training and testing will not work. You may also have to adjust the camera position to get a good frame.
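The rates matter because the extractor must attach a cmd_vel label to every camera frame. A hedged sketch of nearest-timestamp pairing follows; the actual logic lives in util/extract_ros_bag.py and may differ:

```python
import bisect

def pair_nearest(image_stamps, cmd_stamps, cmd_values):
    """For each image timestamp, pick the cmd_vel sample closest in time.

    With /cmd_vel at ~33 Hz versus ~5 Hz images, every frame has a
    command within ~15 ms; at only 10 Hz the gap can reach 50 ms,
    which blurs the image-to-steering correspondence.
    """
    pairs = []
    for t in image_stamps:
        i = bisect.bisect_left(cmd_stamps, t)
        # candidates: the command just before and just after the frame
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(cmd_stamps)),
            key=lambda j: abs(cmd_stamps[j] - t),
        )
        pairs.append((t, cmd_values[best]))
    return pairs

# toy example: images at 5 Hz, commands at ~33 Hz
imgs = [0.0, 0.2, 0.4]
cmds = [round(k * 0.03, 3) for k in range(15)]
omega = [0.1 * k for k in range(15)]
print(pair_nearest(imgs, cmds, omega))
```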

Data collection:

The first step is to collect data for training; data collection is a critical step when training neural networks. Data is collected via ROS 2 bag recording while manually teleoperating the TurtleBot along the track. Place the TurtleBot centered on the track (blue lines). Launch the bringup and teleoperation nodes. Drive the TurtleBot along the track, completing at least 2 counter-clockwise (CCW) laps and 2 clockwise (CW) laps. Record the ROS 2 bags inside the data/raw folder:

  • bag file 1 should contain two counter-clockwise laps
  • bag file 2 should contain two clockwise laps

Quality Guidelines:

  • The blue track must be clearly visible in every frame.
  • Minor occlusions at corners are acceptable, but ensure most of the track is always within the camera’s field of view.
  • Refer to the provided “good data” reference video for visual examples of good recording quality.

Navigate to the data/raw folder and run the command below while teleoperating.

$ ros2 bag record /image_raw/compressed /cmd_vel

Dataset generation

Use the scripts provided in the util folder for dataset generation. Assuming you have two ROS bags, rosbag2_xxx and rosbag2_yyy, be sure to replace those placeholders below with the actual names of your bags.

  1. Extract Images and Commands. Each ROS bag is processed to extract camera frames from /image_raw/compressed and velocity commands from /cmd_vel:
$ python3 util/extract_ros_bag.py --bag data/raw/rosbag2_xxx --out data/processed/bag_1 --image-topic /image_raw/compressed --cmd-topic /cmd_vel
$ python3 util/extract_ros_bag.py --bag data/raw/rosbag2_yyy --out data/processed/bag_2 --image-topic /image_raw/compressed --cmd-topic /cmd_vel
  2. Split Each Dataset. Split each bag into train, validation, and test sets, using a fixed seed for reproducibility:
$ python3 util/data_split.py --index data/processed/bag_1/index.json  --seed 123
$ python3 util/data_split.py --index data/processed/bag_2/index.json  --seed 123
  3. Merge Datasets. Combine both directions (CW + CCW) into one unified dataset:
$ python3 util/merge_dataset.py --runs data/processed/bag_1 data/processed/bag_2 --out data/processed/merged_dataset --copy
  4. Smooth Steering Values. Apply temporal smoothing to reduce noise in the angular velocity (ω) commands:
$ python3 util/smooth_omega.py --index data/processed/merged_dataset/index_split.json
  5. Visualize the Dataset. Inspect the command distribution and verify data integrity:
$ python3 util/plot_data.py data/processed/merged_dataset/index_split.json
$ python3 util/plot_data.py data/processed/merged_dataset/index_smooth.json

You’ll see:

  • histograms of ω command values
  • sample image–command pairs
  • smoothing-effect comparisons
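Conceptually, the smoothing step replaces each ω sample with a local average so the discrete jumps produced by keyboard teleop become a continuous target signal. The snippet below is an illustrative centered moving average, not the actual filter in util/smooth_omega.py, which may use a different window or method:

```python
def smooth_omega(omega, window=5):
    """Centered moving average over the ω sequence (hypothetical window size).

    Near the ends of the sequence the window is shrunk so every output
    averages only real samples (no zero padding).
    """
    half = window // 2
    out = []
    for i in range(len(omega)):
        lo, hi = max(0, i - half), min(len(omega), i + half + 1)
        out.append(sum(omega[lo:hi]) / (hi - lo))
    return out

# keyboard teleop produces step changes; smoothing softens them
raw = [0.0, 0.0, 0.5, 0.5, 0.5, 0.0, 0.0]
print(smooth_omega(raw, window=3))
```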

Training

The model is trained to learn a direct mapping from front camera images to steering commands (ω) using the processed and smoothed dataset. It uses a CNN-based regression network optimized with MAE loss to minimize steering prediction error.
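As an illustration of the training objective only (the real architecture lives in model.py), a minimal CNN regressor optimized with L1/MAE loss might look like this sketch:

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Hypothetical CNN regressor: RGB image -> scalar ω.

    Conv layers extract track features; a linear head regresses
    the angular velocity. Not the project's actual network.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = SteeringNet()
images = torch.randn(4, 3, 224, 224)       # batch of RGB frames
target = torch.randn(4)                    # smoothed ω labels
loss = nn.L1Loss()(model(images), target)  # MAE: mean |ω_pred - ω_true|
loss.backward()                            # gradients for the optimizer step
```

MAE is a natural choice here because it is robust to the occasional large teleop correction, which would dominate an MSE objective.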

Model training is done using the provided Google Colab notebook, which includes all setup, dataset-loading, and logging steps. Open the train.ipynb notebook (model_generation > train.ipynb) and follow the instructions inside it.

Once training completes, the best model checkpoint (ckpt_best.pt) will be saved automatically in your Drive.

To monitor training, launch TensorBoard:

$ python3 -m tensorboard.main --logdir runs

Execute this from the root directory of the project, then Ctrl+click the http://localhost:port_no link to open the TensorBoard dashboard.

Evaluate

After training, evaluate the model’s performance on the test split using the saved checkpoint. Understand what the graphs represent and answer the corresponding questions.

You can also run it from the terminal as below:

python3 eval.py --index data/processed/merged_dataset/index_smooth.json --root  data/processed/merged_dataset --ckpt  ckpt_best.pt --split test --outdir eval_out --save-overlays --flip-sign --short-side 224 --top-crop 0.2

This will dump important metrics, plots, and test images inside the eval_out folder.
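The --short-side 224 and --top-crop 0.2 flags suggest a preprocessing step like the sketch below (assumed semantics; the exact implementation is in eval.py, and --flip-sign presumably just negates the predicted ω):

```python
import numpy as np

def preprocess(img, short_side=224, top_crop=0.2):
    """Assumed eval.py-style preprocessing: drop the top fraction of rows
    (mostly background above the track), then scale so the shorter side
    equals `short_side`, using nearest-neighbor resizing for simplicity."""
    h = img.shape[0]
    img = img[int(h * top_crop):]             # crop the top 20% of rows
    h, w = img.shape[:2]
    scale = short_side / min(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    return img[rows][:, cols]                 # nearest-neighbor resize

frame = np.zeros((240, 320, 3), dtype=np.uint8)  # camera resolution used at bringup
print(preprocess(frame).shape)
```

Whatever transform is used, it must match between training and inference, otherwise the checkpoint sees inputs it was never trained on.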

Testing

Before starting, place the TurtleBot centered on the track and ensure it faces the forward driving direction. Once positioned, follow the steps below to deploy the trained model for autonomous steering.

ROS2 Nodes

Robot:

  • Robot bring up
$ ros2 launch turtlebot3_bringup robot.launch.py
  • Launch the camera node with QoS
$ ros2 run v4l2_camera v4l2_camera_node  --ros-args -p image_size:="[320, 240]" -p qos_overrides.image_raw.publisher.reliability:=best_effort -p qos_overrides.image_raw.publisher.history:=keep_last -p qos_overrides.image_raw.publisher.depth:=10 -p qos_overrides.image_raw.publisher.durability:=volatile

Host computer:

If any subscriber latches and reduces the frame rate, the inference loop will slow down, leading to unstable control.

$ ros2 topic hz /image_raw/compressed
$ ros2 run rqt_image_view rqt_image_view image:=/image_raw/compressed   # make sure the /image_raw/compressed frame rate does not drop
$ ros2 topic  hz /cmd_vel
$ ros2 topic echo /cmd_vel
$ v4l2-ctl -d /dev/video0 -p 5       # set the camera Hz to 5 Hz
$ ros2 launch self_driving steering_inference.launch.xml

✅ Expected Behavior at the end of the task -- The robot autonomously follows the blue track.

About

Starter code for an end-to-end self-driving TurtleBot.
