
[Guide] Guidance on building BundleSDF with conda #205

@michaelz9436

Description

Hi everyone,

I successfully managed to build and run BundleSDF on a server without sudo rights, using Conda. My setup uses a modern hardware and software stack (NVIDIA L20, CUDA 12.4, PyTorch 2.5.1), which differs significantly from the original Docker image.

Here is a step-by-step guide to reproducing the environment.

My Environment Details

  • GPU: NVIDIA L20 (Arch sm_89)
  • OS: Linux (CentOS/Ubuntu)
  • CUDA Toolkit: 12.4 (Local) or 12.6
  • Python: 3.10
  • PyTorch: 2.5.1

Step 1: Conda Environment & C++ Dependencies

Create a clean environment and install essential C++ libraries from conda-forge.

conda create -n bundlesdf python=3.10 -y
conda activate bundlesdf

# 1. Install PyTorch (Official wheels for CUDA 12.4, or you can choose 12.6)
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124

# 2. Install C++ build dependencies via Conda

conda install -c conda-forge \
    cmake ninja make gxx_linux-64 sysroot_linux-64 \
    boost eigen pcl=1.12 \
    yaml-cpp pybind11 zeromq cppzmq jsoncpp \
    freeglut glew mesa-libgl-devel-cos7-x86_64 \
    libblas libcblas liblapacke cudnn -y
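
If you prefer a single reproducible spec, the commands above can be captured in an environment.yml. This is a sketch assembled from the package lists in this guide; the pip index line relies on conda's pip passthrough accepting pip options, so verify it against your conda version:

```yaml
# environment.yml -- sketch equivalent of the conda/pip commands above
name: bundlesdf
channels:
  - conda-forge
dependencies:
  - python=3.10
  - cmake
  - ninja
  - make
  - gxx_linux-64
  - sysroot_linux-64
  - boost
  - eigen
  - pcl=1.12
  - yaml-cpp
  - pybind11
  - zeromq
  - cppzmq
  - jsoncpp
  - freeglut
  - glew
  - mesa-libgl-devel-cos7-x86_64
  - libblas
  - libcblas
  - liblapacke
  - cudnn
  - pip
  - pip:
      - --index-url https://download.pytorch.org/whl/cu124
      - torch==2.5.1
      - torchvision==0.20.1
      - torchaudio==2.5.1
```

Create it with `conda env create -f environment.yml` instead of the two commands above.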

Step 2: Install Python Dependencies (Source Build)

Since I'm using PyTorch 2.5+, pre-built wheels for kaolin and pytorch3d may not exist or may be incompatible, so building from source is recommended.

Important: use --no-build-isolation so that they link against the PyTorch installed in the current environment.

# Set CUDA envs (Modify path to your local CUDA)
export CUDA_HOME=/usr/local/cuda-12.4
export FORCE_CUDA=1

# 1. Install Kaolin (Build from source for Torch 2.5 compat)
pip install "git+https://github.com/NVIDIAGameWorks/kaolin.git" --no-build-isolation
# If this fails, download the v0.17 source code and build it locally

# 2. Install PyTorch3D
pip install fvcore iopath
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable" --no-build-isolation

# 3. Install other deps
# IMPORTANT: comment out 'opencv_python' in BundleTrack/LoFTR/requirements.txt first;
# we will manage OpenCV separately:
sed -i 's/^opencv_python/# opencv_python/g' BundleTrack/LoFTR/requirements.txt

pip install -r BundleTrack/LoFTR/requirements.txt
pip install trimesh wandb matplotlib imageio tqdm open3d ruamel.yaml sacred kornia pymongo pyrender jupyterlab scipy scikit-learn yacs einops transformations xatlas pymeshlab
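
Optionally, the long pip line above can be kept under version control as a requirements file (same packages, no pins beyond what this guide states):

```
# requirements-extra.txt -- extra Python deps from this guide
trimesh
wandb
matplotlib
imageio
tqdm
open3d
ruamel.yaml
sacred
kornia
pymongo
pyrender
jupyterlab
scipy
scikit-learn
yacs
einops
transformations
xatlas
pymeshlab
```

Then install with `pip install -r requirements-extra.txt`.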

Step 3: Build OpenCV C++ SDK manually

BundleTrack is a C++ project that requires opencv_cudaimgproc. The default opencv from conda (and opencv-python from pip) is CPU-only, so you must compile OpenCV from source with CUDA support.

  1. Download the OpenCV 4.10.0 source code (and opencv_contrib-4.10.0; place them side by side so that ../../opencv_contrib-4.10.0/modules resolves from opencv/build).
  2. Inside the opencv folder, run the following script to compile and install it into your Conda environment:
#!/bin/bash
export CMAKE_PREFIX_PATH=$CONDA_PREFIX
export CUDA_HOME=/usr/local/cuda-12.4
export CC=$(which x86_64-conda-linux-gnu-gcc)
export CXX=$(which x86_64-conda-linux-gnu-g++)

mkdir -p build && cd build

# Here I chose not to compile with cuDNN and protobuf for a simpler build, but the choice is yours

cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=$CONDA_PREFIX \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.10.0/modules \
      -D WITH_CUDA=ON \
      -D WITH_CUDNN=OFF \
      -D OPENCV_DNN_CUDA=OFF \
      -D CUDA_ARCH_BIN=8.9 \
      -D WITH_CUBLAS=ON \
      -D ENABLE_FAST_MATH=ON \
      -D CUDA_FAST_MATH=ON \
      -D WITH_QT=OFF \
      -D WITH_OPENGL=ON \
      -D BUILD_opencv_python3=ON \
      -D BUILD_opencv_python_bindings_generator=ON \
      -D BUILD_TESTS=OFF \
      -D BUILD_PERF_TESTS=OFF \
      -D BUILD_EXAMPLES=OFF \
      -D BUILD_PROTOBUF=OFF \
      -D PROTOBUF_UPDATE_FILES=OFF \
      ..

make -j$(nproc)
make install
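
Before moving on, it's worth confirming that the installed bindings really carry CUDA support. A minimal check, run inside the activated env (it assumes the Python bindings from the build above are importable):

```shell
# A CUDA-enabled build reports a line like "NVIDIA CUDA: YES (ver 12.4, ...)"
# in OpenCV's build-information dump; grep exits non-zero if it's missing.
python -c "import cv2; print(cv2.getBuildInformation())" | grep "NVIDIA CUDA"
```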

Step 4: Build BundleSDF

Modify BundleTrack/CMakeLists.txt to:

  1. Add your GPU arch (e.g., 89 for L20/4090) to CMAKE_CUDA_ARCHITECTURES.
  2. Add find_package(PkgConfig REQUIRED) and pkg_check_modules(JSONCPP REQUIRED jsoncpp) to fix VTK linking issues, if they occur.
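
A hypothetical sed sketch of those two edits, demonstrated on a stand-in file rather than the real one (the lines in your checkout's CMakeLists.txt may differ, so inspect it before pointing the same expressions at BundleTrack/CMakeLists.txt):

```shell
# Stand-in for BundleTrack/CMakeLists.txt (contents assumed for illustration).
printf 'project(BundleTrack LANGUAGES CXX CUDA)\nset(CMAKE_CUDA_ARCHITECTURES 75 86)\n' > CMakeLists.demo

# 1. Append arch 89 to the existing CMAKE_CUDA_ARCHITECTURES list.
sed -i -E 's/set\(CMAKE_CUDA_ARCHITECTURES ([^)]*)\)/set(CMAKE_CUDA_ARCHITECTURES \1 89)/' CMakeLists.demo

# 2. Pull in jsoncpp via pkg-config, right after the project() line.
sed -i '/^project(/a find_package(PkgConfig REQUIRED)' CMakeLists.demo
sed -i '/^find_package(PkgConfig/a pkg_check_modules(JSONCPP REQUIRED jsoncpp)' CMakeLists.demo

cat CMakeLists.demo
```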

Then run the following build script (save it as build_conda.sh, or run the commands one by one):

#!/bin/bash
ROOT=$(pwd)

# Setup Paths
PYTHON_LIB_PATH=$(python -c "import site; print(site.getsitepackages()[0])")
TORCH_LIB_PATH=$(python -c "import torch; import os; print(os.path.dirname(torch.__file__))")

export LD_LIBRARY_PATH="$TORCH_LIB_PATH/lib:$CONDA_PREFIX/lib:$LD_LIBRARY_PATH"
export TORCH_LIBRARIES="$TORCH_LIB_PATH/lib"
export TORCH_CUDA_ARCH_LIST="8.9" # Match your GPU
export FORCE_CUDA=1
export Python_ROOT_DIR=$CONDA_PREFIX
export Python3_ROOT_DIR=$CONDA_PREFIX

# 1. Build mycuda
cd ${ROOT}/mycuda
rm -rf build *.egg-info
# Ensure pyproject.toml allows torch >= 2.0
pip install -e . --no-build-isolation --no-deps

# 2. Build BundleTrack
cd ${ROOT}/BundleTrack
rm -rf build && mkdir build && cd build

cmake .. \
    -DCMAKE_PREFIX_PATH="$CONDA_PREFIX;$TORCH_LIB_PATH" \
    -DCMAKE_BUILD_TYPE=Release \
    -DPYTHON_EXECUTABLE=$(which python) \
    -DPYTHON_INCLUDE_DIR=$(python -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-12.4

make -j$(nproc)

Step 5: Fix C++ Source Code (PCL 1.12+ Compatibility)

If the BundleSDF build fails with errors mentioning boost and geometry, the PCL version is the cause: PCL 1.12+ (installed via Conda) defaults to std::shared_ptr, while the original code uses boost::shared_ptr, which produces "cannot convert" errors during compilation. I haven't figured out how to build an older PCL against modern CUDA and Torch versions, so instead I modified a few source files under ./BundleTrack. Here is how to fix it without changing the PCL version:

Files to modify:

  • BundleTrack/src/Utils.h
  • BundleTrack/src/Utils.cpp
  • BundleTrack/src/Frame.cpp
  • BundleTrack/src/Bundler.cpp

The Fix:

  1. Replace all occurrences of boost::shared_ptr<pcl::PointCloud<...>> with typename pcl::PointCloud<...>::Ptr.
  2. In Frame.cpp, replace boost::make_shared<...>() with std::make_shared<...>().
  3. In Utils.cpp, ensure explicit template instantiations match the header signatures exactly (use typename ... ::Ptr).
  4. In BundleTrack/src/FeatureManager.cpp, replace pcl::geometry::distance with pcl::euclideanDistance.
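
The mechanical part of the fix can be scripted with sed. A sketch of the expressions, demonstrated here on a stand-in line rather than the real sources; apply them to the files listed above and review the diff afterwards, since the explicit template instantiations in Utils.cpp may still need hand edits:

```shell
# Stand-in line combining the patterns the fix targets.
printf 'boost::shared_ptr<pcl::PointCloud<pcl::PointXYZRGBNormal>> cloud = boost::make_shared<pcl::PointCloud<pcl::PointXYZRGBNormal>>();\n' > demo.cpp

# boost::shared_ptr<pcl::PointCloud<T>> -> typename pcl::PointCloud<T>::Ptr
sed -i -E 's/boost::shared_ptr<pcl::PointCloud<([^>]+)> *>/typename pcl::PointCloud<\1>::Ptr/g' demo.cpp
# boost::make_shared -> std::make_shared (Frame.cpp)
sed -i 's/boost::make_shared</std::make_shared</g' demo.cpp
# pcl::geometry::distance -> pcl::euclideanDistance (FeatureManager.cpp)
sed -i 's/pcl::geometry::distance/pcl::euclideanDistance/g' demo.cpp

cat demo.cpp
```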

With these changes, the project compiles and runs successfully on current CUDA environments. Hope this helps!
