This repository is the official PyTorch implementation of Lid-Lab-NeRF.
Lid-Lab-NeRF is a PyTorch-based NeRF framework for generating novel LiDAR scans (i.e., scans from previously unexplored sensor locations). It takes a holistic approach to synthetic LiDAR data generation, producing depth, intensity, ray-drop, and semantic labels together. A post-processing pipeline generates ray-drop patterns that match the real data. The framework also tracks the movement of dynamic objects across scans to produce position-accurate, non-distorted outputs. To our knowledge, this is the first work to combine multi-property prediction, dynamic-object handling, and realistic scan generation in a single setup.
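As a rough architectural sketch of the multi-property prediction described above (the module and layer names here are hypothetical, not the repo's actual model), one can picture a shared feature trunk with one head per predicted property:

```python
# Illustrative sketch only (hypothetical names, not the actual Lid-Lab-NeRF
# model): a shared MLP trunk with one head per predicted ray property.
import torch
import torch.nn as nn

class MultiPropertyHead(nn.Module):
    def __init__(self, feat_dim=64, num_classes=19):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.depth = nn.Linear(64, 1)          # per-ray range
        self.intensity = nn.Linear(64, 1)      # reflectance
        self.raydrop = nn.Linear(64, 1)        # drop probability (pre-sigmoid)
        self.semantics = nn.Linear(64, num_classes)  # per-ray class logits

    def forward(self, feats):
        h = self.trunk(feats)
        return {
            "depth": self.depth(h),
            "intensity": torch.sigmoid(self.intensity(h)),
            "raydrop": torch.sigmoid(self.raydrop(h)),
            "semantics": self.semantics(h),
        }

# One forward pass over a batch of 8 per-ray feature vectors.
out = MultiPropertyHead()(torch.randn(8, 64))
```

The point of the sketch is only that all four properties are predicted from one shared representation, which is what makes the setup "combined" rather than four separate models.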
```bash
git clone https://github.com/Kafka2122/Lid-Lab-NeRF.git
cd Lid-Lab-NeRF

conda create -n labelnerf python=3.9
conda activate labelnerf

# PyTorch
# CUDA 12.1
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
# CUDA 11.8
# pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
# CUDA <= 11.7
# pip install torch==2.0.0 torchvision torchaudio
```
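The three install variants above differ only in the wheel index matching your local CUDA toolkit. As a small illustration of that choice (a hypothetical helper, not part of this repo), the mapping can be written as:

```python
# Hypothetical helper (not part of the repo): pick the PyTorch wheel index
# matching a local CUDA toolkit version, mirroring the README's three cases.
def torch_index_url(cuda_version: str) -> str:
    major, minor = (int(x) for x in cuda_version.split(".")[:2])
    if (major, minor) >= (12, 1):
        return "https://download.pytorch.org/whl/cu121"
    if (major, minor) >= (11, 8):
        return "https://download.pytorch.org/whl/cu118"
    # CUDA <= 11.7: fall back to the default PyPI wheels
    return "https://pypi.org/simple"

print(torch_index_url("12.1"))  # -> https://download.pytorch.org/whl/cu121
```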
```bash
# Dependencies
pip install -r requirements.txt

# Local compile for tiny-cuda-nn
git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
cd tiny-cuda-nn/bindings/torch
python setup.py install

# Compile packages in utils
cd utils/chamfer3D
python setup.py install
```

## KITTI-360 dataset (Download)
We use sequence 00 (`2013_05_28_drive_0000_sync`) for the experiments in our paper.
Download the KITTI-360 dataset (2D images are not needed) and put it into `data/kitti360`. Also download the data 3D semantics and find the semantic files that match your sequence; for example, for `2013_05_28_drive_0000_sync`, the files `0000004916_0000005264_dynamic.ply` and `0000004916_0000005264.ply` are the 3D semantic files. Semantic files are named `{start_frame}_{end_frame}.ply`, so pick them such that all scans in your sequence folder fall between `start_frame` and `end_frame` and the corresponding labels are available.
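The file-selection rule above can be sketched as a small helper (hypothetical, not part of the repo): given the semantic file names and the first and last scan indices of your sequence, pick the file whose frame window covers them.

```python
# Hypothetical helper (not in the repo): pick the 3D-semantics file whose
# {start_frame}_{end_frame} window covers every scan in the sequence.
def pick_semantic_file(filenames, first_scan, last_scan):
    for name in filenames:
        stem = name.replace("_dynamic", "").removesuffix(".ply")
        start, end = (int(x) for x in stem.split("_"))
        if start <= first_scan and last_scan <= end:
            return name
    return None

files = ["0000004916_0000005264.ply"]
print(pick_semantic_file(files, 4950, 5100))  # -> 0000004916_0000005264.ply
```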
The folder tree is as follows:

```
data
└── kitti360
    └── KITTI-360
        ├── calibration
        ├── data_3d_raw
        └── data_poses
```

Next, run the KITTI-360 dataset preprocessing (set `DATASET`, `SEQ_ID`, `STATIC_SEMANTIC_PATH`, and `DYNAMIC_SEMANTIC_PATH` — the 3D semantic files you downloaded):

```bash
bash preprocess_data.sh
```

After preprocessing, your folder structure should look like this:
```
configs
├── kitti360_{sequence_id}.txt
data
└── kitti360
    ├── KITTI-360
    │   ├── calibration
    │   ├── data_3d_raw
    │   └── data_poses
    ├── train
    ├── transforms_{sequence_id}test.json
    ├── transforms_{sequence_id}train.json
    └── transforms_{sequence_id}val.json
```

Set the corresponding sequence config path in `--config`; you can modify the logging file path in `--workspace`. Remember to set an available GPU ID in `CUDA_VISIBLE_DEVICES`.
Run the following command:

```bash
# KITTI-360
bash run_kitti_labelnerf.sh
```

After training is completed, you can use the simulator to render and manipulate LiDAR point clouds across the whole scenario. It supports dynamic scene re-play, novel LiDAR configurations (`--fov_lidar`, `--H_lidar`, `--W_lidar`), and novel trajectories (`--shift_x`, `--shift_y`, `--shift_z`).
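The novel-configuration flags control the simulated sensor's vertical field of view and the range-image resolution. As a rough sketch of what they mean (an assumed range-image convention; the repo's actual code may differ, and the FOV values below are just example numbers), each pixel of an `H × W` range image maps to one ray direction:

```python
import math

# Assumed convention (illustrative only): rows index elevation within the
# vertical FOV, columns index azimuth over 360 degrees.
def ray_direction(row, col, fov_up_deg, fov_down_deg, H, W):
    elev = math.radians(fov_up_deg - (fov_up_deg - fov_down_deg) * (row + 0.5) / H)
    azim = 2.0 * math.pi * (col + 0.5) / W - math.pi
    x = math.cos(elev) * math.cos(azim)
    y = math.cos(elev) * math.sin(azim)
    z = math.sin(elev)
    return (x, y, z)

# Top-left pixel of a 64 x 1024 range image with an example +2 / -24.8 deg FOV.
d = ray_direction(0, 0, 2.0, -24.8, 64, 1024)
```

Under this picture, raising `--H_lidar` and `--W_lidar` simply samples the same FOV with more rays, which is why it yields denser scans.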
We also provide a simple demo setting that transforms LiDAR configurations from KITTI-360 to nuScenes, via `--kitti2nus` in the bash script.
In `main_labelnerf_sim.py`, you can also check line 267 to simulate a sine trajectory for the simulated sensor.
To generate denser scans, increase `--H_lidar` and `--W_lidar` (e.g., to 128 and 2048, respectively).
Check the sequence config and the corresponding workspace and model path (`--ckpt`).
Run the following command:
```bash
bash run_kitti_labelnerf_sim.sh
```

The results will be saved in the workspace folder.
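The sine trajectory mentioned above (line 267 of `main_labelnerf_sim.py`) can be sketched as a lateral offset that varies sinusoidally with the distance travelled; the function and its `amplitude`/`wavelength` parameters here are hypothetical illustrations, not the repo's actual code:

```python
import math

# Illustrative only: lateral sensor offset as a sine of distance travelled.
# `amplitude` (meters) and `wavelength` (meters) are hypothetical parameters.
def sine_shift_y(x, amplitude=2.0, wavelength=20.0):
    return amplitude * math.sin(2.0 * math.pi * x / wavelength)

# Offsets at 0, 5 (quarter wavelength), and 10 (half wavelength) meters.
offsets = [sine_shift_y(x) for x in (0.0, 5.0, 10.0)]
```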
We sincerely appreciate the great contributions of the following works:
All code within this repository is under the Apache License 2.0.