Cannot reproduce the numbers on the paper with this Notebook #1

Open
changranelk opened this issue Mar 29, 2022 · 0 comments

Hey! Thanks for open-sourcing this amazing project!
Just a quick question: I followed this notebook, FinBERT_QA.ipynb, strictly and cannot reproduce the numbers reported in the paper.
More specifically, using bert-qa as the starting point, after running

config = {'bert_model_name': 'bert-qa',
          'max_seq_len': 512,
          'batch_size': 16,
          'learning_rate': 3e-6,
          'weight_decay': 0.01,
          'n_epochs': 3,
          'num_warmup_steps': 10000}
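(As an aside, gaps of a few thousandths in nDCG can come from run-to-run nondeterminism: data shuffling, weight init, and GPU kernels. A minimal sketch of pinning the usual seeds, assuming the notebook's PyTorch setup; the `set_seed` helper below is illustrative, not part of the FinBERT-QA code:)

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Pin every RNG the training loop typically touches, so runs are repeatable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines
    # Trade some speed for deterministic cuDNN kernel selection.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)
```

Even with seeds pinned, some CUDA ops are nondeterministic, so exact reproduction across hardware is not guaranteed.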

The results I got were:
Epoch 2:

Train Loss: 0.069 | Train Accuracy: 98.39%
Validation Loss: 0.089 | Validation Accuracy: 98.09%
Average nDCG@10 for 333 queries: 0.476
MRR@10 for 333 queries: 0.442
Average Precision@1 for 333 queries: 0.381

Epoch 3:

Train Loss: 0.055 | Train Accuracy: 98.75%
Validation Loss: 0.097 | Validation Accuracy: 98.1%
Average nDCG@10 for 333 queries: 0.471
MRR@10 for 333 queries: 0.427
Average Precision@1 for 333 queries: 0.357
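(For anyone comparing runs, it may also help to sanity-check the evaluation itself. A hedged sketch of how nDCG@k and MRR@k are conventionally computed per query with binary relevance; these helper names are illustrative, not taken from the FinBERT-QA code:)

```python
import math


def dcg_at_k(relevances, k=10):
    # DCG with the standard log2 position discount over the top-k ranked items.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))


def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (relevance-sorted) ranking.
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0


def mrr_at_k(relevances, k=10):
    # Reciprocal rank of the first relevant item within the top k, else 0.
    for i, rel in enumerate(relevances[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0
```

Averaging these per-query values over the 333 test queries would give figures like the ones above; a mismatch in, say, how ties or queries with no relevant answers are handled could also shift the averages slightly.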

but the nDCG@10 reported in the paper is 0.481.

Any ideas/suggestions? Thank you!
