Accepted at WSDM 2026
Authors: Mona Zamiri, Alexander Kotov (Wayne State University)
DRAGON is a neural ranking architecture for conversational entity retrieval from a knowledge graph. It aggregates fine-grained relevance signals using graph convolutions and multi-head self-attention.
Model Configuration & Training Details
The following hyperparameters and architectural settings were used in all experiments reported in the paper:
Sub-graph Pruning
The threshold λ for candidate entity sub-graph pruning was set to 0.5.
The coefficient λ′ in the feature weighting function was set to 3 × 10⁻⁴.
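As an illustration of the thresholding step only (the function name and example data below are hypothetical, not the repository's code; the λ′ feature weighting is omitted because its functional form is not specified here):

```python
# Illustrative sketch (not the repository's code): pruning a candidate
# entity sub-graph by dropping triples whose pruning-model score falls
# below the threshold.

LAMBDA = 0.5  # pruning threshold λ reported above

def prune_subgraph(scored_triples, threshold=LAMBDA):
    """Keep only (head, relation, tail) triples scored at or above `threshold`.

    `scored_triples` is a hypothetical list of ((head, relation, tail), score)
    pairs as might be produced by a pruning model.
    """
    return [triple for triple, score in scored_triples if score >= threshold]

# Example: only the confidently relevant triple survives.
scored = [(("Paris", "capital_of", "France"), 0.91),
          (("Paris", "named_after", "Paris_(mythology)"), 0.12)]
print(prune_subgraph(scored))  # [('Paris', 'capital_of', 'France')]
```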
Optimization
All neural architectures were optimized using the AdamW optimizer.
Learning rate: 2.5 × 10⁻⁵
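As a minimal sketch in PyTorch (one of the listed dependencies), with a placeholder module standing in for the paper's actual networks:

```python
import torch

# Placeholder module; in the repository this would be the pruning model
# or the DRAGON ranker itself.
model = torch.nn.Linear(32, 1)

# AdamW with the learning rate reported above.
optimizer = torch.optim.AdamW(model.parameters(), lr=2.5e-5)
```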
DRAGON Architecture
Hidden dimension of all convolutional layers: 32
Number of attention layers: 8
Number of GCN layers: 2
Number of fully connected layers: 6
The aggregated relevance vector for each candidate response entity is passed through six fully connected layers to produce the final ranking score.
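The sizes above can be pictured with a deliberately simplified PyTorch module. This is a schematic sketch, not the released implementation: the repository uses DGL-based graph convolutions, and the input dimension, attention head count, and mean pooling below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ToyGCNLayer(nn.Module):
    """Toy graph convolution: mixes node features through a normalized
    adjacency matrix, then projects. Stands in for the DGL-based layers."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features, adj: (N, N) normalized adjacency
        return torch.relu(self.proj(adj @ x))

class DragonSketch(nn.Module):
    """Schematic stack with the reported sizes: 2 GCN layers of hidden
    dimension 32, 8 self-attention layers, and 6 fully connected layers
    producing a scalar ranking score."""
    def __init__(self, feat_dim, hidden=32, n_gcn=2, n_attn=8, n_fc=6):
        super().__init__()
        dims = [feat_dim] + [hidden] * n_gcn
        self.gcn = nn.ModuleList(ToyGCNLayer(i, o)
                                 for i, o in zip(dims, dims[1:]))
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
            for _ in range(n_attn))
        fc = []
        for _ in range(n_fc - 1):
            fc += [nn.Linear(hidden, hidden), nn.ReLU()]
        fc.append(nn.Linear(hidden, 1))
        self.fc = nn.Sequential(*fc)

    def forward(self, x, adj):
        for gcn in self.gcn:
            x = gcn(x, adj)
        x = x.unsqueeze(0)              # (1, N, hidden) batch for attention
        for attn in self.attn:
            x, _ = attn(x, x, x)        # self-attention over graph nodes
        return self.fc(x.mean(dim=1))   # aggregate nodes -> (1, 1) score
```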
Installation

Dependencies: PyTorch 2.4.1, DGL 2.4.0, Transformers 4.46.2

```shell
pip install -r requirements.txt
```
Execute the following scripts in order:

1. Convert the raw triples to the dictionary format:

```shell
python convert_triple_to_dict.py \
  --input_tsv /path/to/triples.tsv \
  --output_json ./data/dict_train.json
```

2. Train the sub-graph pruning model:

```shell
python Pruning_train.py \
  --gpu 0 \
  --out_path ./models/pruning_model
```

3. Compute the fine-grained relevance features:

```shell
python Calculate_features.py \
  --gpu 0 \
  --output_file ./data/features_train.json
```

4. Train DRAGON:

```shell
python GCN_train.py \
  --gpu 0 \
  --score_file_dir /path/to/scores/ \
  --triple_file_dir /path/to/triples/ \
  --model_path ./models/dragon_model \
  --epochs 100
```

5. Evaluate the trained model:

```shell
python GCN_test.py \
  --gpu 0 \
  --model_path ./models/dragon_model.pt \
  --score_file_dir /path/to/scores/ \
  --triple_file_dir /path/to/triples/
```

Data Formats

Feature file (e.g. ./data/features_train.json): for each question, each candidate entity maps to its list of fine-grained relevance signals (s1, ..., s12) followed by the corresponding signals of its neighbors (n1_s1, ...):

```
[{"q_id": {"candidate": [s1, s2, ..., s12, n1_s1, ...], ...}}]
```

Dictionary file (e.g. ./data/dict_train.json): each question maps to its positive entity and a list of negative entities:

```
{"q_id": {"p": "positive_entity", "n": ["neg1", "neg2", ...]}}
```

Score file (written by GCN_test.py), in CSV format:

```
question_id,candidate,score
q_001,entity_name,0.876
```

Citation

```bibtex
@inproceedings{zamiri2026dragon,
  author    = {Zamiri, Mona and Kotov, Alexander},
  title     = {Conversational Entity Retrieval from a Knowledge Graph using Aggregation of Fine-grained Relevance Signals with Graph Convolutions and Self-Attention},
  booktitle = {WSDM 2026},
  year      = {2026}
}
```
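For convenience, a score file in the CSV layout produced by GCN_test.py can be turned into per-question rankings with standard-library Python. The helper below is written for this sketch and is not part of the repository:

```python
import csv
from collections import defaultdict

def rank_candidates(csv_path):
    """Group scores by question and sort candidates best-first.

    `csv_path` points at a score file with a
    question_id,candidate,score header line.
    """
    per_question = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            per_question[row["question_id"]].append(
                (row["candidate"], float(row["score"])))
    return {q: sorted(cands, key=lambda c: c[1], reverse=True)
            for q, cands in per_question.items()}
```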