This repository contains from-scratch implementations of core NLP components (inspired by many amazing resources).
Attention Mechanisms
Attention mechanisms are a core technique in deep learning that lets a model focus on the most relevant parts of its input when making predictions: each output is computed as a weighted combination of input representations, with the weights learned from the data. They are widely used in natural language processing and computer vision. The variants implemented here are listed below, with a minimal sketch after the list.
- Self-Attention
- Multi-Head Attention
- Masked Multi-Head Attention
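As a reference point, here is a minimal NumPy sketch of scaled dot-product self-attention with an optional causal mask (the masked variant above). The shapes, function names, and single-head setup are simplifying assumptions for illustration, not this repository's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v, causal=False):
    """Scaled dot-product self-attention over one sequence.

    x:             (seq_len, d_model) input embeddings
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    # Scores: how strongly each position attends to every other position.
    scores = q @ k.T / np.sqrt(d_k)
    if causal:
        # Masked (causal) attention: block attention to future positions.
        seq_len = scores.shape[0]
        future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
        scores = np.where(future, -np.inf, scores)
    weights = softmax(scores, axis=-1)
    return weights @ v

# Usage with toy sizes: 4 tokens, d_model = 8, d_k = 4.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v, causal=True)
print(out.shape)  # (4, 4)
```

Multi-head attention runs several such attention operations in parallel, each with its own projections, and concatenates the per-head outputs, letting different heads specialize in different relationships.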
[More incoming...]
- TF-IDF
- Unigram
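For the two classical components just listed, here is a compact sketch assuming whitespace tokenization, a toy corpus, and the unsmoothed idf variant `log(N / df)` (libraries often add smoothing). "Unigram" is read here as a count-based unigram language model; if the repository instead means the Unigram subword tokenizer, the algorithm differs.

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs",
]
tokenized = [d.split() for d in docs]

# --- TF-IDF: term frequency times inverse document frequency ---
df = Counter()                      # document frequency per term
for doc in tokenized:
    df.update(set(doc))
n_docs = len(tokenized)

def tf_idf(doc):
    tf = Counter(doc)
    # Normalized term frequency * idf; terms in every doc get idf = 0.
    return {t: (c / len(doc)) * math.log(n_docs / df[t]) for t, c in tf.items()}

print(tf_idf(tokenized[0]))

# --- Unigram language model: word probabilities from corpus counts ---
unigram = Counter(t for doc in tokenized for t in doc)
total = sum(unigram.values())
p = {t: c / total for t, c in unigram.items()}
print(p["the"])  # relative frequency of "the" in the corpus
```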