Tested on Rocky Linux 8.10, NVIDIA Ampere A40, CUDA 12.1, Python 3.10.
git clone https://github.com/UVA-Computer-Vision-Lab/LabelAny3D.git
cd LabelAny3D
export EXT_DIR=$(pwd)/external
conda create -n la3d python=3.10
conda activate la3d
pip install torch==2.2.2 torchvision==0.17.2 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
pip install git+https://github.com/facebookresearch/pytorch3d.git@055ab3a --no-build-isolation
pip install git+https://github.com/yaojin17/detectron2.git --no-build-isolation
pip install pycocotools==2.0 --no-build-isolation
cd $EXT_DIR/MoGe && pip install -r requirements.txt
cd $EXT_DIR/ml-depth-pro && pip install -e .
Requires GCC 11+.
export CC=$(which gcc)
export CXX=$(which g++)
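Before running the TRELLIS setup, it can save a failed build to confirm the compiler on PATH actually meets the GCC 11+ requirement. A minimal sketch (assumes `gcc -dumpversion` prints the version with the major number first, which holds for modern GCC):

```shell
# Warn if the gcc on PATH is older than 11 (needed for the TRELLIS extensions).
if ! command -v gcc >/dev/null 2>&1; then
    echo "gcc not found on PATH" >&2
else
    major=$(gcc -dumpversion | cut -d. -f1)
    if [ "$major" -ge 11 ]; then
        echo "gcc $major is new enough"
    else
        echo "gcc $major is too old; activate a newer toolchain first" >&2
    fi
fi
```

On Rocky Linux, a newer toolchain is typically provided by a `gcc-toolset` software collection rather than the base `gcc` package.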
cd $EXT_DIR/TRELLIS && . ./setup.sh --basic --xformers --diffoctreerast --spconv --mipgaussian --nvdiffrast
Download a pre-built wheel for your Python/PyTorch/CUDA version from the flash-attention releases page.
Example for Python 3.10, PyTorch 2.2, CUDA 12:
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.2cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
pip install flash_attn-2.7.4.post1+cu12torch2.2cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
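The wheel filename encodes the full build matrix. If you need a different version combination, a small sketch of assembling the expected name (the pattern is inferred from the example above; verify the exact name against the actual release assets before downloading):

```python
# Sketch: assemble a flash-attention wheel filename from version components.
# The naming pattern is inferred from the v2.7.4.post1 release assets, so
# double-check it against the GitHub releases page for other versions.
def flash_attn_wheel(version, cuda_major, torch_mm, py_tag, cxx11abi=False):
    """Build the expected wheel filename, e.g. for cp310 / torch 2.2 / cu12."""
    abi = "TRUE" if cxx11abi else "FALSE"
    return (f"flash_attn-{version}+cu{cuda_major}torch{torch_mm}"
            f"cxx11abi{abi}-{py_tag}-{py_tag}-linux_x86_64.whl")

# Reproduces the filename used in the commands above:
print(flash_attn_wheel("2.7.4.post1", 12, "2.2", "cp310"))
# -> flash_attn-2.7.4.post1+cu12torch2.2cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
```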
rm flash_attn-2.7.4.post1+cu12torch2.2cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
Build from source (required for older glibc systems):
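If you are unsure whether your system counts as an "older glibc" system, you can print the C library version your Python was built against and compare it with the manylinux tag of the pre-built wheels:

```python
import platform

# On Linux this typically prints something like: glibc 2.28
# (Rocky Linux 8 ships glibc 2.28.) Compare against the wheel's manylinux tag.
libc, version = platform.libc_ver()
print(libc, version)
```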
git clone --recursive https://github.com/NVIDIAGameWorks/kaolin.git /tmp/kaolin
cd /tmp/kaolin && git checkout v0.17.0
pip install . --no-build-isolation
cd $EXT_DIR/InvSR && pip install -e . --no-deps
cd $EXT_DIR/checkpoints
./download.sh
This downloads:
- DepthPro - metric depth estimation (~1.8GB)
- InvSR - image super-resolution (~130MB)
- Amodal Completion - complete occluded regions (~3.3GB)
Other models (TRELLIS, MoGe) are auto-downloaded from HuggingFace at runtime.
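Before launching the pipeline, it can help to confirm the downloaded checkpoints actually landed on disk. A minimal sketch; the helper name and the example filenames are placeholders, not the real layout, so check download.sh for the actual paths:

```python
import os

# Hypothetical helper: report which expected checkpoint paths are missing
# under a base directory. The example names below are placeholders; the
# real filenames are defined by download.sh.
def missing_checkpoints(base_dir, expected):
    return [p for p in expected if not os.path.exists(os.path.join(base_dir, p))]

# Example usage (placeholder filenames):
# missing_checkpoints(os.path.expandvars("$EXT_DIR/checkpoints"),
#                     ["depth_pro.pt", "invsr.pth"])
```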
Install Blender 3.6+ and add trimesh:
blender --background --python-expr "import subprocess, sys; subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'trimesh'])"
cd src
python -c "import torch; print(f'PyTorch: {torch.__version__}, CUDA: {torch.cuda.is_available()}')"