VERA is a speech emotion classification model: it takes recorded speech audio, primarily in .wav format, and predicts the emotion conveyed by the voice.
You can check the Jupyter Notebook for model creation here.
- Librosa: for audio processing and feature extraction (a minimal sketch follows this list)
- TensorFlow/Keras: for model creation and training
- Scikit-Learn: also for model creation and training
- Pandas: for data cleaning and manipulation
- NumPy: for data manipulation
- Seaborn: for data visualization
- Plotly: also for data visualization
- Kaggle: where we found the data
- Flask: for backend services
- Deployment platform: work in progress...
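To make the pipeline concrete, here is a minimal, illustrative sketch of how Librosa and Keras typically fit together in a setup like this: MFCC features are extracted from a .wav file and fed to a small dense classifier with 7 outputs, one per emotion. The feature choice (40 MFCCs averaged over time) and the layer sizes are assumptions for illustration only; the actual features and architecture live in the notebook linked above.

```python
import librosa
import numpy as np
from tensorflow import keras

def extract_features(path, n_mfcc=40):
    """Load a .wav file and return a fixed-size MFCC feature vector."""
    # Resample to a fixed rate so features are comparable across datasets
    signal, sr = librosa.load(path, sr=22050)
    # n_mfcc coefficients per frame, averaged over time into one vector
    mfccs = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.mean(mfccs.T, axis=0)

# Small dense classifier over the MFCC vector; 7 softmax outputs, one per emotion
model = keras.Sequential([
    keras.layers.Input(shape=(40,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```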
We combined the following four datasets, giving a total of 12,162 audio files voiced by 121 actors, covering 229 spoken phrases and 7 emotions. Data augmentation then grew the 12,162 files to 48,648: each original file was time-stretched, noise-injected, and pitch-shifted, so the three augmented copies plus the original effectively quadrupled the dataset (a sketch of these augmentations follows the dataset list below).
- RAVDESS Emotional Speech Dataset on Kaggle

  This portion of RAVDESS contains 1,440 files: 60 trials per actor × 24 actors. RAVDESS features 24 professional actors (12 female, 12 male) vocalizing two lexically matched statements in a neutral North American accent. Speech emotions include calm, happy, sad, angry, fearful, surprise, and disgust expressions. Each expression is produced at two levels of emotional intensity (normal, strong), with an additional neutral expression.
- CREMA-D: Crowd Sourced Emotional Multimodal Actors Dataset

  CREMA-D is a dataset of 7,442 original clips from 91 actors (48 male, 43 female) between the ages of 20 and 74, from a variety of races and ethnicities (African American, Asian, Caucasian, Hispanic, and Unspecified). Actors spoke from a selection of 12 sentences, each presented using one of six emotions (Anger, Disgust, Fear, Happy, Neutral, and Sad) and four emotion levels (Low, Medium, High, and Unspecified).
- SAVEE: Surrey Audio-Visual Expressed Emotion

  The SAVEE database was recorded from four native English male speakers (identified as DC, JE, JK, KL), postgraduate students and researchers at the University of Surrey aged 27 to 31. Emotion is described in discrete psychological categories: anger, disgust, fear, happiness, sadness, and surprise, with a neutral category added for a total of 7 emotion categories. The text material consisted of 15 TIMIT sentences per emotion: 3 common, 2 emotion-specific, and 10 generic sentences that were different for each emotion and phonetically balanced. The 3 common and 2 × 6 = 12 emotion-specific sentences were also recorded as neutral to give 30 neutral sentences, for a total of 120 utterances per speaker.
- TESS: Toronto Emotional Speech Set

  A set of 200 target words was spoken in the carrier phrase "Say the word __" by two actresses (aged 26 and 64), and recordings were made of the set portraying each of seven emotions (anger, disgust, fear, happiness, pleasant surprise, sadness, and neutral). There are 2,800 data points (audio files) in total.
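For reference, the three augmentations mentioned above (time stretch, noise injection, pitch shift) can be sketched with Librosa as below. The rate, noise factor, and semitone step are assumed values and `example.wav` is a placeholder path; the exact parameters used for VERA are in the notebook.

```python
import librosa
import numpy as np

def stretch(signal, rate=0.8):
    # Slow the audio down slightly without changing its pitch
    return librosa.effects.time_stretch(y=signal, rate=rate)

def inject_noise(signal, noise_factor=0.005):
    # Add low-amplitude white noise scaled to the signal's peak
    noise = np.random.randn(len(signal))
    return signal + noise_factor * np.max(np.abs(signal)) * noise

def shift_pitch(signal, sr, n_steps=2):
    # Raise the pitch by n_steps semitones, keeping duration constant
    return librosa.effects.pitch_shift(y=signal, sr=sr, n_steps=n_steps)

# Each original clip yields three augmented copies, quadrupling the dataset
signal, sr = librosa.load("example.wav", sr=22050)  # placeholder path
augmented = [stretch(signal), inject_noise(signal), shift_pitch(signal, sr)]
```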
Thanks to Vijay Anandan, who helped with ideation and provided guidance throughout the project.
If you would like to contribute or have any feedback on this project, please feel free to contact any of the contributors.