Dear Mikel,
Thank you for sharing your great work with us.
I'm running your code and trying to reproduce the results reported in your ACL 2018 paper, but I could not get comparable results.
I obtained all the required datasets and embedding files via ./get_data.sh and used them to train the model with
python3 map_embeddings.py --acl2018 --cuda SRC.EMB TRG.EMB SRC_MAPPED.EMB TRG_MAPPED.EMB
The results reported in the paper are 48.13 for EN-IT, 48.19 for EN-DE, 32.63 for EN-FI and 37.33 for EN-ES. However, my results for the four language pairs are 21.04 for EN-IT, 38.60 for EN-DE, 18.64 for EN-FI and 12.68 for EN-ES. My evaluation command is:
python3 eval_translation.py SRC_MAPPED.EMB TRG_MAPPED.EMB -d TEST.DICT --retrieval csls
My results are only about half of what you reported, and I cannot figure out why. Could you help me? Thank you very much!
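For reference, here is my understanding of the CSLS retrieval criterion selected by `--retrieval csls` above, sketched on toy NumPy matrices. This is only an illustrative sketch of the general CSLS formula, not the repository's actual implementation; `csls_scores` and `k` are names I made up.

```python
import numpy as np

def csls_scores(src, trg, k=10):
    """Toy CSLS similarity matrix between source and target embeddings."""
    # Normalize rows to unit length so dot products are cosine similarities.
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    trg = trg / np.linalg.norm(trg, axis=1, keepdims=True)
    sims = src @ trg.T  # cosine similarity matrix
    # Mean similarity of each source word to its k nearest target neighbors.
    knn_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)
    # Mean similarity of each target word to its k nearest source neighbors.
    knn_trg = np.sort(sims, axis=0)[-k:, :].mean(axis=0)
    # CSLS penalizes hub words that are close to everything.
    return 2 * sims - knn_src[:, None] - knn_trg[None, :]

# Translation of source word i would then be np.argmax(csls_scores(...)[i]).
```

If my sketch above matches what eval_translation.py computes, I would not expect the retrieval criterion alone to halve the scores, which is why I suspect something earlier in my pipeline.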