Recognizing and understanding emotions is crucial in human-computer interaction, as it can greatly enhance decision-making and judgment. This project proposes a comprehensive emotion recognition system that focuses on analyzing candidates' expressions during behavioral interviews.
The Multimodal Emotion Recognition System supports the hiring process by analyzing candidates' emotions during behavioral interviews. It combines textual data and facial images captured during the interview to identify underlying sentiments, applying deep learning techniques: a Bi-directional Long Short-Term Memory (Bi-LSTM) network for textual content and Convolutional Neural Networks (CNNs) for facial expressions.
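As a rough illustration, the two model branches might look like the Keras sketch below. Every hyperparameter here (layer widths, vocabulary size, 48x48 grayscale input, seven emotion classes) is an illustrative assumption, not the project's actual configuration.

```python
# A sketch of the two model branches. All settings here (layer widths,
# vocabulary size, 48x48 input, 7 emotion classes) are assumptions.
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7  # assumed label set, e.g. FER-style emotion classes

def build_text_branch(vocab_size=10000):
    """Bi-LSTM over tokenized interview transcripts."""
    return models.Sequential([
        layers.Embedding(vocab_size, 128),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])

def build_face_branch(img_size=48):
    """CNN over grayscale face crops."""
    return models.Sequential([
        layers.Input(shape=(img_size, img_size, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])
```

In a multimodal setup like this, the two branch outputs (or their penultimate feature vectors) would typically be fused, for example by averaging class scores or concatenating features before a final classifier.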
- Analyzes facial expressions and textual data transcribed from speech for emotion recognition.
- Utilizes Bi-LSTM and CNNs for accurate emotion analysis.
- Provides real-time analysis to interviewers through a user-friendly web interface.
- Open the web interface at multimodal-emotion-detection.azurewebsites.net.
- Capture candidate facial images and speech cues.
- The system analyzes the candidate's emotions and provides real-time insights to the interviewer (see the interface sketch after this list).
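As a rough sketch of what the Streamlit front end could look like (Streamlit appears in the tech stack below), here is a minimal app. `predict_emotions` is a hypothetical helper standing in for the real Bi-LSTM/CNN inference code, and the scores it returns are placeholders.

```python
# A sketch of the Streamlit front end; `predict_emotions` is a hypothetical
# stand-in for the project's actual inference pipeline, and the scores it
# returns here are dummy placeholders.
import streamlit as st

def predict_emotions(image_bytes: bytes, text: str) -> dict:
    # The real system would run the CNN on the face image and the Bi-LSTM
    # on the transcript, then combine the two predictions.
    return {"happy": 0.6, "neutral": 0.3, "sad": 0.1}  # placeholder output

st.title("Multimodal Emotion Recognition")

frame = st.camera_input("Capture the candidate's face")
transcript = st.text_area("Candidate's response (transcribed speech)")

if st.button("Analyze") and frame is not None and transcript:
    scores = predict_emotions(frame.getvalue(), transcript)
    st.bar_chart(scores)  # real-time emotion breakdown for the interviewer
```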
Technologies used:

- Streamlit
- Docker
- CI/CD Pipeline
- Microsoft Azure