HodgeFormer: Transformers for Learnable Operators on Triangular Meshes through Data-Driven Hodge Matrices
This repository holds code for the HodgeFormer deep learning architecture operating on mesh data. Links:
- Project page: https://hodgeformer.github.io/
- Paper: https://arxiv.org/abs/2509.01839
Problem statement: Prominent Transformer architectures applied to graphs and meshes for shape analysis tasks currently rely on attention layers built around spectral features. To encode the mesh structure, these methods derive positional embeddings that depend on costly eigenvalue decomposition-based operations, e.g. of the Laplacian matrix, or on heat-kernel signatures, which are then concatenated to the input features.
Important

Core contribution: This paper proposes a novel approach inspired by the explicit construction of the Hodge Laplacian operator in Discrete Exterior Calculus as a product of discrete Hodge star operators and exterior derivatives, i.e. (for 0-forms) $L_0 = \star_0^{-1} d_0^\top \star_1 d_0$. We adapt the Transformer architecture with a novel deep learning layer that uses the multi-head attention mechanism to approximate the Hodge matrices. Our approach results in a computationally efficient architecture that achieves comparable performance on mesh segmentation and classification tasks through a direct learning framework, while eliminating the need for costly eigenvalue decompositions and complex preprocessing operations.
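As a toy illustration (not code from this repository), the DEC construction above can be assembled with NumPy for a single triangle. Identity Hodge stars stand in for the usual dual/primal volume ratios, so the result reduces to the familiar graph Laplacian:

```python
import numpy as np

# Sketch of L0 = star0^{-1} d0^T star1 d0 on one triangle with
# vertices 0, 1, 2 and oriented edges (0,1), (1,2), (0,2).

# d0: signed edge-vertex incidence matrix (3 edges x 3 vertices)
d0 = np.array([[-1,  1,  0],
               [ 0, -1,  1],
               [-1,  0,  1]], dtype=float)

# Diagonal Hodge stars; identity weights stand in for the usual
# circumcentric dual/primal volume ratios.
star0_inv = np.eye(3)
star1 = np.eye(3)

L0 = star0_inv @ d0.T @ star1 @ d0
print(L0)  # each row sums to zero, as expected for a Laplacian
```

With non-trivial (learned or geometric) diagonal stars, the same product yields a weighted Laplacian; HodgeFormer's attention layers play the role of producing those Hodge matrices from data.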
The code in this repository consists of two Python packages, mesh_sim and mesh_opformer:
- The mesh_sim package for reading meshes, along with functionality for extracting useful geometric features. For the experiments, mesh_o3d is used instead of mesh_sim in several places.
- The mesh_opformer package for the layer definitions of the HodgeFormer architecture, along with dataset definitions and utility modules for training and evaluation.
The packages follow the src layout and need to be installed as Python packages in a Python environment. Preferably, create a new Python environment to hold the package installations using venv or conda, then install the packages either in development mode or as wheel files.
For each experiment, training and evaluation scripts are provided in dedicated folders in the experiments folder along with configuration files and documentation.
Note: All experiments were conducted with Python 3.10.
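For example, an isolated environment can be created with venv (a sketch; the environment name .venv is a placeholder, and the experiments used Python 3.10):

```shell
# Create an isolated environment for the HodgeFormer packages
python3 -m venv .venv          # the experiments were run with Python 3.10
. .venv/bin/activate           # activate it (POSIX shells)
pip install --upgrade pip      # make sure pip is recent before installing
```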
For each package, navigate to the package top-level directory, where the setup.py file is located, and install the package in development mode:

```shell
cd ./packages/mesh_sim
pip install -e .
```

```shell
cd ./packages/mesh_opformer
pip install -e .
```

Alternatively, for each package build the corresponding .whl file:

```shell
python setup.py bdist_wheel
```

Then install the packages via their .whl files using pip:

```shell
pip install <package>.whl
```

The paper experiments are organized in a dedicated experiments folder. Each dataset used in the paper has its own subfolder with code and an accompanying configuration file for training and evaluating HodgeFormer models. In total, there are experiments for four different datasets on the tasks of mesh classification and mesh segmentation:
- SHREC-11 - mesh classification: link
- Cube Engraving - mesh classification: link
- COSEG Chairs, Aliens, Vases - mesh segmentation: link
- Human - mesh segmentation: link
The configuration files are written in TOML format and control dataset paths, data preprocessing, model architecture, training, and evaluation parameters. Documentation about the configuration sections can be found in the readme file in the experiments folder. Results are stored and visualized using the wandb library. If you have a wandb account, you can enable it by configuring the following sections in your config file:
```toml
[wandb]
WANDB_MODE = "online"  # Options: 'online', 'offline', 'disabled'
name = "hodgeformer-shrec11"

[wandb.init]
project = "project-name"
entity = "user-wandb-account"
```

Extensive execution examples are available in the accompanying documentation files of each dataset in the experiments folder.
Below is an example of training and running inference with a classification model on the SHREC11 dataset. For inference, one can use the provided entry point infer-hodgeformer, which is installed together with the mesh_opformer package. Alternatively, one can use the script provided here with the same inputs. The commands below are executed from the /experiments/classification_shrec folder.
- Training:

```shell
python classification_shrec.py --cfg_path ./classification_shrec11_cfg.toml --out ./runs
```

- Inference:

```shell
infer-hodgeformer \
    --model ./path/to/model.pth \
    --cfg_path ./classification_shrec11_cfg.toml \
    --dataset_path ./data/shrec16/dinosaur/test \
    --out ./out.json
```

If you find this work useful for your research, please cite:
- A. Nousias and S. Nousias, “HodgeFormer: Transformers for Learnable Operators on Triangular Meshes through Data-Driven Hodge Matrices,” 2025, arXiv. doi: 10.48550/ARXIV.2509.01839.
With the following bibtex entry:
```bibtex
@article{nousias2025hodgeformer,
  title={HodgeFormer: Transformers for Learnable Operators on Triangular Meshes through Data-Driven Hodge Matrices},
  author={Akis Nousias and Stavros Nousias},
  year={2025},
  eprint={2509.01839},
  archivePrefix={arXiv},
  primaryClass={cs.GR},
  url={https://arxiv.org/abs/2509.01839},
}
```
