MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
A collection of datasets for the purpose of emotion recognition/detection in speech.
Human emotion understanding using a multimodal dataset.
This repo contains an audio emotion detection model, a facial emotion detection model, and a combined model that predicts emotions from video.
Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
A survey of deep multimodal emotion recognition.
Official implementation of the paper "MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild", a multimodal (audiovisual) emotion recognition method.
A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition (ACM MM 2024 oral)
SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings (Scientific Reports, open access, published 14 February 2025)
This API uses a pre-trained model for emotion recognition from audio files. It accepts an audio file as input, processes it with the pre-trained model, and returns the predicted emotion along with a confidence score. The API is built on the FastAPI framework for easy development and deployment.
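To illustrate the request/response contract such an API exposes, here is a minimal sketch with a stubbed model. The emotion labels, score values, and function names are hypothetical (the description does not specify the model's classes); a real service would run inference on the audio bytes instead of returning fixed scores.

```python
import json

# Hypothetical label set; the actual pre-trained model's classes are unknown.
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def predict_emotion(audio_bytes: bytes) -> dict:
    """Stub standing in for the pre-trained model.

    Returns the predicted emotion and its confidence score, mirroring the
    response shape the API description implies.
    """
    # A real model would extract features from audio_bytes and run inference;
    # these placeholder scores only illustrate the output format.
    scores = {"angry": 0.05, "happy": 0.82, "neutral": 0.10, "sad": 0.03}
    label = max(scores, key=scores.get)
    return {"emotion": label, "confidence": scores[label]}

# The JSON body an endpoint (e.g. a FastAPI route) would return for an upload:
response_body = json.dumps(predict_emotion(b"<wav bytes>"))
```

In a FastAPI app, `predict_emotion` would be called from an upload endpoint and its dict returned directly, since FastAPI serializes dicts to JSON responses.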
[MM 2025] The official implementation code for "VAEmo: Efficient Representation Learning for Visual-Audio Emotion with Knowledge Injection"
Audio-text multimodal emotion recognition model that is robust to missing data
Emotion recognition from Speech & Text using different heterogeneous ensemble learning methods
This emotion recognition app analyzes text, facial expressions, and speech to detect emotions. Designed for self-awareness and mental well-being, it provides personalized insights and recommendations.