Sign Language Detector is a real-time ISL 🇮🇳 recognition system. It uses OpenCV and MediaPipe for hand tracking, a machine learning classifier for gesture recognition, and a Flask web interface for live inference.

Life-Experimentalist/SignLanguageDetector

Sign Language Detector

A real-time Flask application for detecting and recognizing Indian Sign Language gestures using machine learning and computer vision.

Project Overview

This project uses MediaPipe for hand-landmark detection and a machine learning classifier to recognize the Indian Sign Language (ISL) alphabet. It includes:

  1. Data Collection (collect_imgs.py)
  2. Dataset Creation (create_dataset.py)
  3. Model Training (train_classifier.py)
  4. Real-time Inference (inference_classifier.py)
  5. An Interactive CLI tool for training operations (interactive_cli.py)
  6. A Flask web application (app.py)

The training pipeline is generic: it can be reused for any hand-gesture recognition task, not just sign language.
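The core idea behind the dataset and inference scripts can be sketched as follows: MediaPipe reports 21 (x, y) landmarks per detected hand, and a common preprocessing step (assumed here; the actual scripts may differ) flattens them into a translation-normalized 42-value feature vector for the classifier:

```python
def landmarks_to_features(landmarks):
    """Flatten 21 (x, y) hand landmarks into a 42-value feature vector,
    shifted by the minimum x and y so the result is translation-invariant.
    This mirrors a common MediaPipe preprocessing convention; it is an
    illustrative assumption, not taken verbatim from the project scripts."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    min_x, min_y = min(xs), min(ys)
    features = []
    for x, y in landmarks:
        features.append(x - min_x)
        features.append(y - min_y)
    return features

# Dummy landmarks standing in for a real MediaPipe detection.
points = [(0.1 * i, 0.05 * i) for i in range(21)]
vec = landmarks_to_features(points)
print(len(vec))  # 42
```

The same vector layout would then be used consistently at dataset-creation time and at inference time.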

Directory Structure

SignLanguageDetector/
├── app.py                       # Flask web application
├── training/                    # Training scripts
│   ├── collect_imgs.py          # Data collection script
│   ├── create_dataset.py        # Dataset creation script
│   ├── train_classifier.py      # Model training script
│   └── inference_classifier.py  # Inference script
├── interactive_cli.py           # CLI tool for training pipeline
├── data/                        # Training data directory
├── models/                      # Saved model files directory
├── logs/                        # Application logs directory
├── templates/                   # Web application templates
│   └── index.html               # Main web interface
├── test_cv.py                   # Test OpenCV installation
└── README.md                    # Project overview and usage

Installation

  1. Clone the repository:
    git clone <repository-url>
    cd SignLanguageDetector
    
  2. Create a virtual environment:
    python -m venv venv
    source venv/bin/activate  # Windows: venv\Scripts\activate
    
  3. Install dependencies:
    pip install numpy opencv-python mediapipe flask scikit-learn colorama paho-mqtt
    
  4. Install Mosquitto:
    • Follow the instructions on the Mosquitto website to install the Mosquitto MQTT broker on your system (the paho-mqtt package installed above is the Python client used to talk to it).

Usage

Data Collection

python training/collect_imgs.py

Create Dataset

python training/create_dataset.py
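A hedged sketch of what the dataset step likely produces: a pickled dict of feature vectors and labels. The key names "data" and "labels" and the file name are assumptions for illustration, not confirmed from create_dataset.py:

```python
import os
import pickle
import tempfile

# Hypothetical dataset format: one 42-value feature vector per sample
# plus a parallel list of class labels, serialized with pickle.
dataset = {
    "data": [[0.1, 0.2] * 21, [0.3, 0.1] * 21],  # two 42-dim samples
    "labels": ["A", "B"],
}

path = os.path.join(tempfile.gettempdir(), "data.pickle")
with open(path, "wb") as f:
    pickle.dump(dataset, f)

# Round-trip to confirm the file loads back intact.
with open(path, "rb") as f:
    loaded = pickle.load(f)
print(loaded["labels"])  # ['A', 'B']
```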

Train Classifier

python training/train_classifier.py
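To illustrate the training step, here is a minimal scikit-learn sketch on synthetic 42-dimensional "landmark" features. A RandomForestClassifier is used only as a common baseline; the project's actual model choice and hyperparameters may differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for landmark features: two well-separated classes,
# 42 values per sample (21 landmarks x 2 coordinates).
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.2, 0.05, size=(50, 42)),
    rng.normal(0.8, 0.05, size=(50, 42)),
])
y = np.array(["A"] * 50 + ["B"] * 50)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)
score = accuracy_score(y_test, model.predict(X_test))
print(round(score, 2))  # 1.0 on these cleanly separated classes
```

In the real pipeline, the fitted model would then be pickled into models/ so that inference_classifier.py and app.py can load it.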

Test Inference

python training/inference_classifier.py
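Real-time inference tends to flicker between labels from frame to frame. One common mitigation, shown here as an illustrative add-on (not necessarily what inference_classifier.py does), is a majority vote over the most recent frames:

```python
from collections import Counter, deque


class PredictionSmoother:
    """Majority-vote over the last N per-frame predictions to reduce
    label flicker in a live video loop (hypothetical helper for
    illustration only)."""

    def __init__(self, window=5):
        self.window = deque(maxlen=window)

    def update(self, label):
        self.window.append(label)
        # Counter.most_common(1) returns the single most frequent label.
        return Counter(self.window).most_common(1)[0][0]


smoother = PredictionSmoother(window=5)
frames = ["A", "A", "B", "A", "B", "B", "B"]
out = [smoother.update(label) for label in frames]
print(out[-1])  # B
```

Each raw classifier prediction would be passed through `update()` before being shown on screen.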

Interactive CLI

python interactive_cli.py

Run Web Application

python app.py

Then open your browser at http://127.0.0.1:5000.
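The actual app.py serves templates/index.html and streams live predictions. As a minimal, hypothetical sketch of how a Flask endpoint in an app like this can be structured and tested without a camera (the `/health` route is invented for illustration):

```python
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/health")
def health():
    # Placeholder endpoint; the real app.py also serves the main page
    # and a video stream for live inference.
    return jsonify(status="ok")


# Flask's test client exercises routes without starting a server.
client = app.test_client()
resp = client.get("/health")
print(resp.get_json())  # {'status': 'ok'}
```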

Logging

Logs are stored in logs/, organized by session timestamps. Files include:

  • performance.log (timing data)
  • debug.log (debug messages)
  • error.log (errors)
  • access.log (HTTP access logs)
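A speculative sketch of session-timestamped logging of the kind described above; the directory layout, logger name, and format string are assumptions rather than the project's actual configuration:

```python
import logging
import os
import tempfile
import time

# One subdirectory per session, named by timestamp (assumed layout).
session = time.strftime("%Y%m%d-%H%M%S")
log_dir = os.path.join(tempfile.gettempdir(), "logs", session)
os.makedirs(log_dir, exist_ok=True)

logger = logging.getLogger("signlang")
handler = logging.FileHandler(os.path.join(log_dir, "debug.log"))
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s")
)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug("hand detected")
handler.flush()

with open(os.path.join(log_dir, "debug.log")) as f:
    print("hand detected" in f.read())  # True
```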

Notes

  • Ensure proper lighting for improved detection.
  • A physical camera is required.
  • Some gestures need two hands for accurate recognition.
