This tutorial provides a step-by-step guide to compiling and installing cumm and spconv from source, specifically for CUDA 12.8.
It has been tested with CUDA 12.8, Python 3.12, and CUDA arch 10.0.
Steps:
cumm install
# if cumm is installed, uninstall it
pip list | grep cumm # pip uninstall cumm
export CUDA_PATH="path/to/your/cuda/dir/"
git clone https://github.com/RayYoh/cumm.git
cd cumm
export CUMM_CUDA_VERSION="12.8" # cuda version
export CUMM_DISABLE_JIT="1" # build whl then install
# GPU arch, https://developer.nvidia.com/cuda-gpus
export CUMM_CUDA_ARCH_LIST="10.0"
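If you are unsure which value to use, the arch string is simply your GPU's compute capability in major.minor form. On a machine with PyTorch installed you could read it via torch.cuda.get_device_capability(); the helper below is a purely illustrative formatter (arch_list_entry is not part of cumm):

```python
def arch_list_entry(capability):
    """Format a (major, minor) compute capability tuple as a
    CUMM_CUDA_ARCH_LIST entry, e.g. (10, 0) -> "10.0".
    Illustrative helper only, not part of cumm."""
    major, minor = capability
    return f"{major}.{minor}"

# torch.cuda.get_device_capability() would return e.g. (10, 0)
print(arch_list_entry((10, 0)))  # prints 10.0
```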
python3 setup.py bdist_wheel
pip install dist/cumm_cu128-0.8.2-cp312-cp312-linux_x86_64.whl
# or you can download our prebuilt whl and install
wget https://github.com/RayYoh/cumm/releases/download/cu128_py312/cumm_cu128-0.8.2-cp312-cp312-linux_x86_64.whl
pip install cumm_cu128-0.8.2-cp312-cp312-linux_x86_64.whl

spconv install
pip list | grep spconv # pip uninstall spconv*
git clone https://github.com/kenomo/spconv.git
cd spconv
export SPCONV_DISABLE_JIT="1"
python3 setup.py bdist_wheel
pip install dist/spconv_cu128-2.3.8-cp312-cp312-linux_x86_64.whl
# or you can download our prebuilt whl and install
wget https://github.com/RayYoh/spconv-12.8/releases/download/spconv128/spconv_cu128-2.3.8-cp312-cp312-linux_x86_64.whl
pip install spconv_cu128-2.3.8-cp312-cp312-linux_x86_64.whl
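Before running the test scripts, it can be worth checking that both wheels are importable in the current environment. A minimal sketch using only the standard library (the installed() helper is illustrative):

```python
import importlib.util

def installed(pkg: str) -> bool:
    """Return True if the package can be found by the import system."""
    return importlib.util.find_spec(pkg) is not None

# Check both freshly installed wheels
for pkg in ("cumm", "spconv"):
    print(pkg, "OK" if installed(pkg) else "MISSING")
```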
# test
python test/benchmark.py && \
python test/fake_train.py

Note: Compiling these libraries is resource-intensive. Ensure your system has sufficient RAM and that your MAX_JOBS environment variable is set appropriately to avoid out-of-memory errors during the build process.
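As a rough sketch of how you might pick MAX_JOBS: each parallel nvcc job can peak at several GB of RAM, so dividing available memory by a per-job budget gives a conservative starting point. The 4 GB/job figure below is an assumption, not a measured number:

```python
import os

def recommended_max_jobs(ram_gb: float, gb_per_job: float = 4.0) -> int:
    """Estimate a safe MAX_JOBS value: limited by both CPU count and
    available RAM, assuming each build job peaks around gb_per_job GB
    (a rough rule of thumb, not a measured figure)."""
    by_ram = int(ram_gb // gb_per_job)
    return max(1, min(os.cpu_count() or 1, by_ram))

# e.g. a 32 GB machine: at most 8 jobs by RAM, capped by core count
print(recommended_max_jobs(32))
```

You would then export the result (e.g. export MAX_JOBS=8) before running python3 setup.py bdist_wheel.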