Replication of "Evaluating Parameter-Efficient Finetuning Approaches for Pre-trained Models on the Financial Domain"
Recent findings show that LoRA and Adapter fine-tuning achieve performance similar to full fine-tuning while saving time, memory, and computational resources.
We repeated the experiments, obtained nearly identical performance, and confirm that the paper's results are reliable. The replication covers four tasks:
- Question Answering
- Named Entity Recognition
- News Headline Classification
- Sentiment Analysis
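The parameter savings behind these results come from LoRA's low-rank decomposition: instead of updating a frozen weight matrix W of shape (d, k), only two small factors B (d, r) and A (r, k) are trained, with r much smaller than d and k. The sketch below illustrates the trainable-parameter count for one such matrix; the dimensions are illustrative (BERT-base-sized), not taken from the paper.

```python
# Sketch of LoRA's parameter savings for a single weight matrix.
# The effective weight is W + B @ A, where W stays frozen and only
# B (d x r) and A (r x k) receive gradients.

def full_finetune_params(d: int, k: int) -> int:
    """Trainable parameters when updating W directly."""
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters for the low-rank factors B and A."""
    return d * r + r * k

# Illustrative dimensions: one attention projection in a BERT-base-sized model.
d, k, r = 768, 768, 8
full = full_finetune_params(d, k)
lora = lora_params(d, k, r)
print(f"full: {full}, LoRA: {lora}, ratio: {lora / full:.2%}")
```

With these dimensions LoRA trains roughly 2% of the parameters that full fine-tuning would update for this matrix, which is the source of the time and memory savings the paper reports.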