In this paper, we tackle the task of building models that classify semantic equivalence of question pairs. We use Quora's dataset of roughly 400,000 question pairs, released as part of a Kaggle competition. We apply the Manhattan LSTM (MaLSTM) model, which has achieved state-of-the-art performance on this task, and compare it against the Universal Sentence Encoder recently released by Google. We find that Google's sentence encoder is outperformed by a Siamese LSTM with Word2Vec embeddings.
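Below is a minimal Keras sketch of the Siamese Manhattan LSTM setup described above. It is an illustration rather than the code in this repository; the vocabulary size, sequence length, embedding dimension, and LSTM width are placeholder assumptions.

```python
# Minimal sketch (not this repository's exact code) of a Siamese "Manhattan LSTM":
# both questions pass through a shared embedding + LSTM encoder, and similarity
# is exp(-||h_left - h_right||_1), which lies in (0, 1].
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 20000   # assumed vocabulary size
EMBED_DIM = 300      # Word2Vec-style embedding dimension
MAX_LEN = 30         # assumed maximum tokens per question
LSTM_UNITS = 50      # assumed hidden size

def build_malstm():
    left_in = layers.Input(shape=(MAX_LEN,), dtype="int32", name="question_left")
    right_in = layers.Input(shape=(MAX_LEN,), dtype="int32", name="question_right")

    # Shared layers so both questions are encoded identically (Siamese setup).
    embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)
    encoder = layers.LSTM(LSTM_UNITS)

    h_left = encoder(embed(left_in))
    h_right = encoder(embed(right_in))

    # Manhattan-distance similarity: exp(-L1 distance) between the encodings.
    similarity = layers.Lambda(
        lambda t: tf.exp(-tf.reduce_sum(tf.abs(t[0] - t[1]), axis=1, keepdims=True))
    )([h_left, h_right])

    model = Model(inputs=[left_in, right_in], outputs=similarity)
    model.compile(loss="mean_squared_error", optimizer="adam")
    return model

if __name__ == "__main__":
    model = build_malstm()
    # Dummy padded token-id inputs, just to confirm the graph runs end to end.
    q1 = np.random.randint(1, VOCAB_SIZE, size=(4, MAX_LEN))
    q2 = np.random.randint(1, VOCAB_SIZE, size=(4, MAX_LEN))
    print(model.predict([q1, q2]))
```

Because the exponentiated negative L1 distance maps the pair of encodings to a score in (0, 1], it can be trained directly against the binary duplicate/not-duplicate label.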
tputti2/ML_in_NLP
About: Machine Learning in NLP