Speech emotion recognition based on syllable-level feature extraction
Authors: Rehman, A., Liu, Z.T., Wu, M., Cao, W.H. and Jiang, C.S.
Journal: Applied Acoustics
Volume: 211
eISSN: 1872-910X
ISSN: 0003-682X
DOI: 10.1016/j.apacoust.2023.109444
Abstract: Speech emotion recognition systems have high computational requirements for deep learning models and low generalizability, mainly because of the poor reliability of emotional measurements across multiple corpora. To address these problems, we present a speech emotion recognition system based on a reductionist approach of decomposing and analyzing syllable-level features. The mel-spectrogram of an audio stream is decomposed into syllable-level components, which are then analyzed to extract statistical features. The proposed method uses formant attention, noise-gate filtering, and rolling normalization contexts to reduce contextual differences and focus attention on formant structure. A set of syllable-level formant features is extracted and fed into a single-hidden-layer neural network that makes predictions for each syllable, as opposed to the conventional approach of using a sophisticated deep learner to make sentence-wide predictions. The syllable-level predictions help to lower the aggregated error in utterance-level cross-corpus predictions. Experiments on the IEMOCAP (IE), MSP-Improv (MI), and RAVDESS (RA) databases show that the method achieves better than state-of-the-art cross-corpus unweighted accuracy: 47.6% for IE to MI and 56.2% for MI to IE.
Source: Scopus
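The abstract above outlines a concrete pipeline: decompose the mel-spectrogram into syllable-level segments, extract statistical features per syllable, classify each syllable with a single-hidden-layer network, and aggregate the per-syllable predictions to an utterance-level label. The sketch below illustrates that flow only loosely; the energy-gate segmentation, the feature set, the network size, and the majority-vote aggregation are assumptions standing in for the paper's formant attention, noise-gate filtering, and rolling normalization, and librosa/scikit-learn are used purely for convenience.

```python
# Minimal sketch of the syllable-level pipeline described in the abstract.
# All thresholds, sizes, and the segmentation heuristic are illustrative
# assumptions, not the authors' implementation.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def syllable_segments(mel_db, energy_thresh_db=-40.0, min_frames=5):
    """Split a mel-spectrogram into syllable-like segments by energy gating
    (a crude stand-in for the paper's syllable decomposition)."""
    energy = mel_db.mean(axis=0)            # per-frame mean energy in dB
    voiced = energy > energy_thresh_db      # simple noise gate
    segments, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i
        elif not v and start is not None:
            if i - start >= min_frames:
                segments.append(mel_db[:, start:i])
            start = None
    if start is not None and mel_db.shape[1] - start >= min_frames:
        segments.append(mel_db[:, start:])
    return segments

def statistical_features(segment):
    """Per-syllable statistical descriptors over the mel bands."""
    return np.concatenate([segment.mean(axis=1),
                           segment.std(axis=1),
                           segment.max(axis=1)])

def utterance_features(y, sr, n_mels=40):
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)   # max frame -> 0 dB
    return [statistical_features(s) for s in syllable_segments(mel_db)]

# Single-hidden-layer classifier predicting an emotion per syllable;
# clf.fit(train_X, train_y) must be run on labeled syllable features first.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)

def predict_utterance(y, sr):
    """Utterance-level label = majority vote over syllable predictions."""
    feats = utterance_features(y, sr)
    if not feats:
        return None
    preds = clf.predict(np.stack(feats))
    vals, counts = np.unique(preds, return_counts=True)
    return vals[counts.argmax()]
```

Voting over many short syllable predictions, rather than one sentence-wide prediction, is what the abstract credits with lowering aggregated error in cross-corpus tests.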
Speech emotion recognition based on syllable-level feature extraction
Authors: Rehman, A., Liu, Z.-T., Wu, M., Cao, W.-H. and Jiang, C.-S.
Journal: Applied Acoustics
Volume: 211
eISSN: 1872-910X
ISSN: 0003-682X
DOI: 10.1016/j.apacoust.2023.109444
Source: Web of Science (Lite)
Speech emotion recognition based on syllable-level feature extraction
Authors: Rehman, A., Liu, Z.-T., Wu, M., Cao, W.-H. and Jiang, C.-S.
Journal: Applied Acoustics
Volume: 211
Publisher: Elsevier
ISSN: 0003-682X
DOI: 10.1016/j.apacoust.2023.109444
Source: Manual