
Reproducing MolBERT results on QSAR tasks #7

@TWRogers

Description

Hi MolBERT team,

First of all, thank you for releasing this repository and providing the scripts to reproduce your paper;
it is deeply appreciated!

I have an issue reproducing the QSAR results from Table 3 in the paper for MolBERT and MolBERT (finetune),
as detailed below:

  1. I can exactly reproduce the Table 3 entries for RDKit and ECFC4 using scripts/run_qsar_test_molbert.py, which is reassuring.
  2. The MolBERT featurizer, however, yields lower AUROCs: for BACE I get 0.835 vs 0.849 from the paper, and for BBBP I get 0.744 vs 0.750.
  3. Similarly, for MolBERT (finetune) using scripts/run_finetuning.py, on BBBP I get 0.751 vs the 0.762 reported in the paper.
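As a side note, when comparing numbers at this resolution (differences of roughly 0.01 AUROC) it can help to rule out metric-implementation differences. Below is a minimal pure-Python AUROC using the Mann-Whitney formulation; this is only an illustrative sketch for sanity-checking scores, not part of the MolBERT scripts:

```python
def auroc(y_true, y_score):
    # Mann-Whitney formulation: the probability that a randomly chosen
    # positive is scored above a randomly chosen negative (ties count 0.5).
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give 1.0
print(auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # → 1.0
```

If this agrees with the scores your evaluation pipeline reports, the gap is more likely due to weights, hyperparameters, or data splits than to the metric itself.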

The pre-trained model I am using is the one provided in the README i.e. https://ndownloader.figshare.com/files/25611290

Could it somehow be that I am using the wrong weights, or that the wrong weights were uploaded to figshare? This would affect the results in both 2. and 3. above, so it would make sense.

Finally, the parameters I have been using for the fine-tuning are the following:

  • freeze_level = 0, taken from the answer in Dataset size and creation #3
  • learning_rate = 3e-5, taken from the paper (though I could only find the value for pre-training, not fine-tuning)
  • batch_size = 16

All other arguments are left at the defaults provided in the code. Should the above arguments reproduce results similar to the paper?
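For reference, the invocation I have in mind looks roughly like the sketch below. The flag names are assumptions based on the hyperparameter names above and are not verified against the script's actual argument parser:

```python
# Hypothetical command line for scripts/run_finetuning.py.
# Flag names below are assumptions, not confirmed against MolBERT's CLI.
args = {
    "--freeze_level": "0",      # from the answer in issue #3
    "--learning_rate": "3e-5",  # pre-training value from the paper
    "--batch_size": "16",
}

cmd = ["python", "scripts/run_finetuning.py"]
for flag, value in args.items():
    cmd += [flag, value]

print(" ".join(cmd))
```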

Thanks in advance!

Tom
