Reproducing Test AUCs #14

@geoffreyangus

Description

Hi,

Thank you all for providing this repository for public use. I am trying to reproduce the results from the paper, namely the test AUC given for the 50-bag, 10-instance experiment.

I've run the implementation in this repository with the following command:

python main.py --num_bags_train 50 --num_bags_test 1000

Doing so actually overshoots the result reported in the paper by a substantial margin (0.768 in the paper vs. 0.898 from the repo). I understand that this repository differs from the implementation used in the paper (e.g. no validation set, no early stopping). However, given how small the sample count in the training bags is, I am not convinced that such a large difference is due to the data split alone. I am also printing the AUC at each step for both the train and test sets, which should let me reason about the early-stopping difference. Is there something else I am missing? Let me know, thanks!
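For reference, a minimal, dependency-free sketch of the kind of per-step AUC check described above (the `auc` helper here is illustrative, not code from this repository; it computes AUC as the Mann-Whitney rank statistic, i.e. the fraction of (positive, negative) bag pairs where the positive bag receives the higher score, with ties counting half):

```python
def auc(labels, scores):
    """Pairwise-rank AUC over bag-level labels (0/1) and predicted scores.

    Equivalent to the Mann-Whitney U statistic normalized by the number
    of (positive, negative) pairs; ties contribute 0.5.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        return float("nan")  # AUC undefined with only one class present
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of positives from negatives gives AUC = 1.0
print(auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # -> 1.0
```

Logging this value for both the train and test bags after every epoch makes the early-stopping gap directly visible.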
