nlp_portfolio

Replication of the paper "Evaluating Parameter-Efficient Finetuning Approaches for Pre-trained Models on the Financial Domain"

PEFT methods

Recent findings show that LoRA and Adapter fine-tuning methods achieve performance similar to full fine-tuning while saving time, computational resources, and memory.
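
As a concrete illustration, below is a minimal sketch of attaching LoRA to a BERT-like classifier with the Hugging Face peft library. The checkpoint name and LoRA hyperparameters are illustrative assumptions, not the exact configuration used in the paper or in this replication.

```python
# Minimal LoRA sketch with Hugging Face `peft`; the checkpoint and
# hyperparameters are illustrative assumptions, not the replication's settings.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",  # any BERT-like encoder
    num_labels=3,         # e.g., negative / neutral / positive sentiment
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,         # sequence-classification head
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,                      # scaling factor for the LoRA update
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections to adapt
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights remain trainable
```

The wrapped model can then be trained as usual (for example with the transformers Trainer); only the LoRA matrices are updated, which is what keeps the compute and memory cost low.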

Steps done

We repeated the experiment, obtained nearly the same performance, and confirm that the paper's results are reliable.

The financial tasks tested here (see the sketch after the list) are:

  • Question Answering
  • Named Entity Recognition
  • News Headline Classification
  • Sentiment Analysis
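
Each of these tasks puts a different head on top of the same BERT-like encoder. A brief sketch of that mapping is shown below; the checkpoint name and label counts are assumptions for illustration only.

```python
# Task-to-head mapping sketch; the checkpoint and label counts are illustrative.
from transformers import (
    AutoModelForQuestionAnswering,
    AutoModelForSequenceClassification,
    AutoModelForTokenClassification,
)

checkpoint = "bert-base-uncased"  # any BERT-like encoder

# Question Answering: predicts answer start/end positions within a passage.
qa_model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

# Named Entity Recognition: one label per token (e.g., B-ORG, I-ORG, O).
ner_model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=9)

# News Headline Classification and Sentiment Analysis: one label per sequence.
cls_model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)
```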

Link to the notebook: Notebook

About

Fine-tuning large language models is costly; LoRA and Adapter techniques save compute and training time. In this project we replicate the experiment from the original paper. The main idea is to apply PEFT methods to financial tasks using BERT-like models.
