Facial expressions play a vital role in understanding human emotions and enabling natural interactions between humans and machines. This project leverages **YOLOv7 (You Only Look Once v7)**, a state-of-the-art object detection model, fine-tuned for **emotion recognition**.


Annajappa/Emotion_Recognisation_YOLOv7


Emotion Detection with YOLOv7-TensorFlow

This project uses YOLOv7 implemented in TensorFlow to detect and classify emotions in facial images, providing a Flask web interface for easy interaction.

Features

  • YOLOv7 model implemented in TensorFlow for emotion detection
  • Trains on a custom emotion dataset
  • Detects 7 emotions: angry, disgusted, fearful, happy, neutral, sad, surprised
  • Web interface for uploading images and getting predictions
  • Visual results with bounding boxes and emotion labels

Project Structure

yolov7 emotion/
├── Data/                   # Original emotion dataset
│   ├── train/              # Training data
│   └── test/               # Testing data
├── emotion_model/          # Trained YOLOv7-TF model (after training)
├── static/                 # Static files for Flask app
│   └── uploads/            # Uploaded and result images
├── templates/              # HTML templates
│   └── index.html          # Web interface
├── app.py                  # Flask application using YOLOv7-TF for detection
├── train_emotion.py        # Script to train YOLOv7-TF on emotion dataset
└── requirements.txt        # Python dependencies

Setup and Usage

1. Install Dependencies

pip install -r requirements.txt

2. Train the YOLOv7-TF Model

Train the YOLOv7 model implemented in TensorFlow on your emotion dataset:

python train_emotion.py

You can adjust training parameters:

python train_emotion.py --batch-size 8 --epochs 30 --img-size 416
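The flags above suggest a command-line interface along these lines. This is a hedged sketch of how `train_emotion.py` might parse them; the flag names match the command shown, but the defaults and function name here are assumptions, not the script's actual values:

```python
# Hypothetical sketch of the argument parsing train_emotion.py might use.
# Flag names match the command above; the defaults are assumptions.
import argparse

def parse_train_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Train YOLOv7-TF on an emotion dataset")
    parser.add_argument("--batch-size", type=int, default=16,
                        help="images per training batch")
    parser.add_argument("--epochs", type=int, default=50,
                        help="number of passes over the training data")
    parser.add_argument("--img-size", type=int, default=640,
                        help="square input resolution in pixels")
    return parser.parse_args(argv)

# e.g. parse_train_args(["--batch-size", "8", "--epochs", "30", "--img-size", "416"])
```

With argparse, hyphenated flags like `--img-size` become underscore attributes (`args.img_size`) on the parsed namespace.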

3. Run the Web Application

After training (or even without training, using the fallback detection):

python app.py

Then open your browser and go to http://localhost:5000

4. Using the Web Interface

  1. Upload an image containing faces
  2. Click "Detect Emotions"
  3. View the detected emotions with confidence scores

How It Works

  1. Training: The system trains a YOLOv7 model using TensorFlow on the emotion dataset
  2. Detection: The trained model detects faces and classifies emotions in uploaded images
  3. Result Visualization: The detected emotions are displayed with bounding boxes and labels
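The detection and visualization steps above could be post-processed roughly as follows. This is a minimal sketch that assumes the model emits `(x1, y1, x2, y2, class_id, confidence)` tuples; the real YOLOv7-TF output format and threshold may differ:

```python
# Hypothetical post-processing sketch: turning raw detections into labeled
# results as described above. The detection tuple format is an assumption.
EMOTIONS = ["angry", "disgusted", "fearful", "happy", "neutral", "sad", "surprised"]

def label_detections(detections, conf_threshold=0.25):
    """detections: list of (x1, y1, x2, y2, class_id, confidence) tuples."""
    results = []
    for x1, y1, x2, y2, cls, conf in detections:
        if conf < conf_threshold:
            continue  # drop low-confidence boxes before drawing
        results.append({
            "box": (x1, y1, x2, y2),
            "emotion": EMOTIONS[cls],
            "confidence": round(conf, 2),
        })
    return results

print(label_detections([(10, 20, 90, 110, 3, 0.91), (5, 5, 30, 30, 0, 0.10)]))
# one detection survives: happy at 0.91
```

Each surviving entry carries everything the web interface needs to draw a box and its emotion label with a confidence score.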

Notes

  • The model is trained using YOLOv7 architecture implemented in TensorFlow
  • If the trained model is not available, the system falls back to a simpler detection method
  • For best results, use clear images with visible faces
  • The system can detect multiple faces and emotions in a single image
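The fallback behavior described in the notes might be selected with logic like the following. The `emotion_model` directory matches the project structure above, but the function name and return values are illustrative assumptions, not the repository's actual API:

```python
# Sketch of the fallback selection the notes describe; not the repo's actual API.
import os

def choose_detector(model_dir="emotion_model"):
    """Pick the YOLOv7-TF model when trained weights exist, else the fallback."""
    if os.path.isdir(model_dir) and os.listdir(model_dir):
        return "yolov7-tf"   # trained model directory is present and non-empty
    return "fallback"        # simpler detection method used when untrained
```

Checking for a non-empty model directory at startup lets the web app run immediately after cloning, before any training has happened.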
