Code repository for the paper "Reconstructing 3D Human Pose from RGB-D Data with Occlusions".
fznet: Free zone network (independent of other code).
hsiviewer: Human interaction dataset viewer. Refer to hsiviewer/README.md for usage instructions.
cfg_files: Parameters for optimization.
misc: Miscellaneous parameters, functions, or classes.
|--constants.py # Common parameters such as paths.
|--data_io.py # Class for conveniently loading the PROX dataset.
|--utils.py # Utility functions.
|--fps.py # PyTorch implementation of farthest point sampling (FPS).
|--model.py # Same as the model.py in fznet.
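For context on fps.py: farthest point sampling greedily selects, at each step, the point farthest from the set chosen so far, which gives a well-spread subsample of a point cloud. Below is a minimal NumPy sketch of the algorithm for illustration only; the repository's fps.py is a PyTorch implementation and may differ in details.

```python
import numpy as np

def farthest_point_sample(points, k):
    """Greedily sample k well-spread points from an (N, 3) array.

    Returns the indices of the sampled points. Starts from index 0;
    some implementations start from a random point instead.
    """
    n = points.shape[0]
    idx = np.zeros(k, dtype=np.int64)
    # dist[i] = squared distance from point i to the nearest selected point
    dist = np.full(n, np.inf)
    current = 0
    for i in range(k):
        idx[i] = current
        d = np.sum((points - points[current]) ** 2, axis=1)
        dist = np.minimum(dist, d)
        current = int(np.argmax(dist))  # farthest from all selected points
    return idx

# Usage: subsample 8 points from a random cloud of 100.
rng = np.random.default_rng(0)
cloud = rng.random((100, 3))
sampled = farthest_point_sample(cloud, 8)
print(sampled.shape)  # -> (8,)
```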
prox: Optimization.
make_data.py: Script for generating the data used to train the free zone network.
run.py: Script for running the optimization.
We train the free zone network on a headless server. To visualize the optimization process, we run the optimization on a separate machine with a display.
You can use envs/fznet.txt to create the virtual environment.
The free zone network part requires the Chamfer3D package; please refer to that package's repository for installation instructions.
You can use envs/opt.txt to create the virtual environment.
The optimization part is based on PROX. Please refer to its code repository for the installation of the environment.
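The two environment files above can be used roughly as follows. This is a sketch that assumes envs/fznet.txt and envs/opt.txt are conda explicit spec files; if they are pip requirement lists instead, create a virtualenv and use `pip install -r`.

```shell
# Create the two environments from the provided spec files
# (assumption: conda explicit spec format).
conda create --name fznet --file envs/fznet.txt
conda create --name opt --file envs/opt.txt

# Activate the one you need:
conda activate fznet   # for training the free zone network
conda activate opt     # for running the optimization
```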
Before running the code, you need to download the following data:
There are also other optional data:
We did not use these data in the final paper, and you can run the code without them.
You need to train the free zone network before running the optimization.
Make your own data before training:
python make_data.py
The generated data will appear as follows:
FZNet
|--prox
|--recording1
|--frame1.npz
|--...
|--...
|--prox_info.json # Per-frame information.
|--prox_split_info.json # Train/test split information.
There will also be a vertice_pairs.npy file used for volume matching.
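A quick way to inspect the generated files is to open one frame with NumPy. The sketch below writes and reads a stand-in frame file; the key names ("points", "pose") are purely illustrative assumptions, so check a real frame .npz for the actual schema.

```python
import numpy as np

# Write a stand-in frame file in the layout above.
# Key names are hypothetical, not the repository's actual schema.
np.savez("frame1.npz",
         points=np.zeros((1024, 3), dtype=np.float32),
         pose=np.zeros(72, dtype=np.float32))

# Read it back and list the stored arrays.
with np.load("frame1.npz") as frame:
    print(sorted(frame.files))
    points = frame["points"]
print(points.shape)
```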
Then you can train the free zone network:
python train.py --name exp_name
After training the free zone network, you will get a trained model at fznet/logs/exp_name/checkpoints/best_model.pt. You can use it during the optimization:
python run.py
By default, this will process all test frames in offscreen mode. You can modify the parameters for debugging or visualizing the optimization process.
Some of the model code is adapted from Grasping Field, and the optimization code is based on PROX. We thank these authors for their great work.
