A machine learning-based academic project that identifies emotional states using both voice and facial data. Combines CNN-LSTM and audio/visual analysis to detect emotions like happy, sad, angry, and more.

hari5556/ML-based-Self-Identification-of-mental-health


🧠 ML-Based Self-Identification of Mental Health

This academic mini project aims to detect and classify a person’s emotional state using voice and facial expressions. By combining speech and image data, the system attempts to provide an early indication of mental health status using machine learning techniques.


📌 Project Overview

The system is divided into three core modules:

  1. Voice Emotion Recognition – Classifies emotions based on audio signals
  2. Facial Emotion Recognition – Identifies emotions from facial images
  3. Integrated Model – Combines predictions from both modules for better accuracy

This project was developed as a team effort during our final year of B.Tech in Information Technology.


🧩 Modules

πŸŽ™οΈ Voice Emotion Recognition

  • Dataset: RAVDESS (not included in repo due to size)
  • Features Used: MFCC, Chroma, Mel Spectrogram
  • Model: CNN-LSTM
  • Output: Emotion label (e.g., happy, sad, angry)

😀 Facial Emotion Recognition

  • Dataset: FER2013 (publicly available on Kaggle)
  • Model: CNN
  • Output: Facial emotion classification into predefined categories
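FER2013 images are 48×48 grayscale with seven emotion classes, which suits a small Keras CNN. The architecture below is a minimal sketch under those assumptions, not necessarily the exact network used in this project.

```python
from tensorflow.keras import layers, models

def build_fer_cnn(num_classes=7):
    """Illustrative CNN for 48x48 grayscale FER2013 images."""
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                              # regularization
        layers.Dense(num_classes, activation="softmax"),  # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The softmax output gives a probability per emotion class, which is also what the integrated model needs for fusing predictions.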

πŸ” Integrated Model

  • Merges predictions from both modules
  • Gives a more reliable assessment of emotional state
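One simple way to merge the two modules is late fusion: take each model's class-probability vector and combine them with a weighted average. The label set and weighting below are illustrative assumptions.

```python
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # illustrative label set

def fuse_predictions(voice_probs, face_probs, voice_weight=0.5):
    """Late fusion sketch: weighted average of two probability vectors,
    renormalized, returning the top label and the fused distribution."""
    voice = np.asarray(voice_probs, dtype=float)
    face = np.asarray(face_probs, dtype=float)
    fused = voice_weight * voice + (1.0 - voice_weight) * face
    fused /= fused.sum()  # keep it a valid probability distribution
    return EMOTIONS[int(np.argmax(fused))], fused
```

Weighting lets the more reliable modality dominate; with `voice_weight=0.5` both modules contribute equally.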

πŸ› οΈ Technologies Used

  • Python
  • TensorFlow / Keras
  • NumPy, Pandas
  • Librosa (audio feature extraction)
  • OpenCV (image processing)
  • Scikit-learn
  • Matplotlib (visualization)
