This project implements an LSTM-based model for recognizing sign language gestures, targeting actions such as 'hello', 'thanks', 'nothing', and 'I love you'. Built with PyTorch, the pipeline processes sequences of hand gestures, trains the model, and evaluates its performance using confusion matrices and predicted class probabilities.
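A minimal sketch of what such a sequence classifier can look like in PyTorch. The feature size (126), sequence length (30), and hidden dimensions are assumptions for illustration, not the project's actual hyperparameters; only the four class labels come from the description above.

```python
import torch
import torch.nn as nn

class SignLSTM(nn.Module):
    """Illustrative LSTM classifier over per-frame hand-gesture features (sizes are assumptions)."""
    def __init__(self, input_size=126, hidden_size=64, num_layers=2, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, input_size)
        out, _ = self.lstm(x)
        # Classify from the hidden state at the final time step
        return self.fc(out[:, -1, :])

if __name__ == "__main__":
    model = SignLSTM()
    dummy = torch.randn(8, 30, 126)          # 8 sequences of 30 frames each (assumed shape)
    probs = torch.softmax(model(dummy), dim=1)
    print(probs.shape)                        # (8, 4): one probability per class
```

The class probabilities produced this way are what feed the confusion-matrix and probability evaluation mentioned above.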
The project also uses YOLOv5 for real-time hand gesture detection in video, converting detected gestures into the corresponding sign language text. By translating sign language into a digital format with computer vision techniques, it improves accessibility and communication for the hearing-impaired.
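A rough sketch of real-time detection with a YOLOv5 model loaded through `torch.hub` and OpenCV for the video stream. The weights path `weights/gestures.pt` is a hypothetical placeholder for custom-trained gesture weights; gesture class names depend on how the detector was trained.

```python
import cv2
import torch

# Hypothetical custom-trained YOLOv5 weights for hand gestures (path is an assumption)
model = torch.hub.load('ultralytics/yolov5', 'custom', path='weights/gestures.pt')

cap = cv2.VideoCapture(0)  # webcam stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)                     # run detection on the current frame
    detections = results.pandas().xyxy[0]      # boxes with class names and confidences
    for _, det in detections.iterrows():
        x1, y1 = int(det['xmin']), int(det['ymin'])
        x2, y2 = int(det['xmax']), int(det['ymax'])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, det['name'], (x1, y1 - 10),   # detected gesture label as text
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow('sign language detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```

Overlaying the predicted label on each frame is how the detected gesture becomes readable sign language text in real time.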