
Evaluation-of-LLMs-on-STS-Data

This repo contains a study on the performance of LLMs on STS (Semantic Textual Similarity) data.

Abstract

Large Language Models (LLMs) have driven major advancements in Natural Language Processing (NLP) applications, from embedding techniques to generative text creation, marking an exciting era for the NLP community. Opportunities have grown for exploring the abilities and impact of different LLMs on different NLP tasks. Semantic Textual Similarity (STS) is one of the areas that can be explored and can gain new benchmarks from this progress. Because LLMs are pre-trained on large datasets with vast numbers of parameters, they cover a wide variety of topics and domains. This breadth can also become a drawback, as LLMs may lack the domain-specific knowledge necessary for certain applications. Through this project, we aim to study how LLMs perform on domain-specific STS. Additionally, we want to find ways to improve LLMs and to understand their performance on domain-centered data.
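As a rough illustration of how STS performance is typically measured, the sketch below scores sentence pairs with a sentence-embedding model and compares the predicted similarities to gold human scores via Spearman correlation. It is a minimal example, not the exact pipeline of this study: the model name, the example pairs, and the 0-5 gold scores are placeholders chosen for illustration.

```python
# Minimal sketch of an STS evaluation loop, assuming the sentence-transformers
# library. The model name and sentence pairs are illustrative placeholders,
# not necessarily those used in this repository.
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

# Hypothetical sentence pairs with human-annotated similarity scores (0-5 scale).
pairs = [
    ("A man is playing a guitar.", "A person plays an instrument.", 3.8),
    ("The drug reduced blood pressure.", "The stock market fell sharply.", 0.2),
]

sents1, sents2, gold = zip(*pairs)
emb1 = model.encode(list(sents1), convert_to_tensor=True)
emb2 = model.encode(list(sents2), convert_to_tensor=True)

# Cosine similarity between each aligned pair of embeddings.
predicted = util.cos_sim(emb1, emb2).diagonal().tolist()

# Spearman correlation between predicted and gold scores is the standard STS metric.
rho, _ = spearmanr(predicted, gold)
print(f"Spearman correlation: {rho:.3f}")
```

The same loop can be repeated on domain-specific STS pairs (e.g. biomedical or legal sentences) to compare general-purpose models against domain-adapted ones.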
