-
Hi @amitarjun On what data does the model make "junk predictions"? One way you could check that the model was loaded and works correctly is to apply it to the CUAD dataset it was trained (and tested) on, and compare the performance on that data to the numbers reported in the paper: https://arxiv.org/pdf/2103.06268.pdf. If you could share some code as a Google Colab notebook, I'd be happy to look deeper into it. 🙂
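To compare against the paper's numbers you need the usual extractive-QA metrics. A minimal self-contained sketch of SQuAD-style exact match and token-level F1 (the normalization steps below follow the standard SQuAD evaluation script; CUAD's own evaluation adds extra logic on top of this, so treat it as an approximation):

```python
import collections
import re
import string


def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, gold: str) -> float:
    """1.0 if normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Token-overlap F1 between normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = collections.Counter(pred_tokens) & collections.Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

If the scores on CUAD's own test set are far below the paper's, the model or tokenizer is probably not loaded correctly; if they match, the problem is more likely in how the pipeline is applied to your own documents.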
-
Hi @julian-risch, any comments?
-
I'm trying to use the Haystack pipeline to extract answers using deberta-v2-xlarge fine-tuned on the CUAD dataset (https://github.com/TheAtticusProject/cuad). However, most of the time the model makes junk predictions. Is there a way to implement it correctly?
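One thing worth noting: in CUAD most clause categories have no answer in a given contract, so a reader that always returns its best span will produce many low-confidence "junk" answers unless they are filtered. A hedged sketch of such post-filtering — the dicts mirror the `answer`/`score` shape of Haystack-style reader output, and the 0.8 threshold is purely an assumed starting point to tune on validation data, not an official value:

```python
def filter_predictions(answers, min_score=0.8):
    """Drop empty answers and spans below a confidence threshold.

    `answers` is a list of dicts with "answer" and "score" keys, the
    shape typically found in a reader's prediction output (assumption:
    adapt the keys to whatever your pipeline actually returns).
    `min_score=0.8` is a hypothetical default, not a recommended value.
    """
    return [
        a for a in answers
        if a.get("answer") and a.get("score", 0.0) >= min_score
    ]


# Example with mock predictions:
preds = [
    {"answer": "State of New York", "score": 0.91},
    {"answer": "", "score": 0.60},            # no-answer span
    {"answer": "ipsum dolor", "score": 0.12}, # low-confidence junk
]
kept = filter_predictions(preds)
```

If filtering alone doesn't help, it would also be worth checking that the documents are split into windows the model can handle, since contracts are far longer than the reader's maximum sequence length.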