lathashree01/NLP_implementations

About This Repository

This repository contains from-scratch implementations of core NLP components, inspired by many excellent resources.

Implementations

Transformer Components

Attention Mechanisms
Attention mechanisms let a model weight different parts of its input when producing each output, and they underpin modern architectures in natural language processing and computer vision. The variants implemented here are listed below; a minimal self-attention sketch follows the list.

  • Self-Attention

  • Multi-Head Attention

  • Masked Multi-Head Attention

    [More to come...]
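
As a reference point, here is a minimal NumPy sketch of scaled dot-product self-attention, with an optional causal mask covering the masked variant. The names, shapes, and masking constant are illustrative assumptions and do not necessarily mirror this repository's code.

```python
# A minimal scaled dot-product self-attention sketch in NumPy.
# Shapes and names are illustrative; they may differ from this repo's code.
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v, mask=None):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Attention scores, scaled by sqrt(d_k) to keep the softmax well-behaved.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    if mask is not None:
        # Masked (causal) attention: block attention to future positions.
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ v

# Usage: 4 tokens, model dim 8, causal mask for masked self-attention.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w = [rng.normal(size=(8, 8)) for _ in range(3)]
causal = np.tril(np.ones((4, 4), dtype=bool))
out = self_attention(x, *w, mask=causal)  # shape (4, 8)
```

Multi-head attention repeats this computation with several independent projection sets and concatenates the results, letting each head attend to different relationships.
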

Classical NLP models

  • TF-IDF (see the sketch after this list)
  • Unigram
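
For reference, here is a minimal TF-IDF sketch in plain Python. The toy corpus, whitespace tokenizer, and unsmoothed IDF are illustrative choices, not this repository's exact formulation.

```python
# A minimal TF-IDF sketch; corpus, tokenizer, and smoothing are illustrative.
import math
from collections import Counter

docs = ["the cat sat", "the dog sat", "the cat ran"]
tokenized = [d.split() for d in docs]
n_docs = len(tokenized)

# Document frequency: number of documents containing each term.
df = Counter(term for doc in tokenized for term in set(doc))

def tf_idf(doc):
    # Term frequency (normalized count) times inverse document frequency.
    counts = Counter(doc)
    return {
        term: (count / len(doc)) * math.log(n_docs / df[term])
        for term, count in counts.items()
    }

for doc in tokenized:
    print(tf_idf(doc))
# "the" appears in every document, so its IDF (hence TF-IDF) is 0;
# rarer terms like "dog" or "ran" score higher.
```
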
