Retta, Ephrem Afele and Sutcliffe, Richard and Mahmood, Jabar and Berwo, Michael Abebe and Almekhlafi, Eiad and Khan, Sajjad Ahmad and Chaudhry, Shehzad Ashraf and Mhamed, Mustafa and Feng, Jun (2023) Cross-Corpus Multilingual Speech Emotion Recognition: Amharic vs. Other Languages. Applied Sciences, 13 (23). p. 12587. DOI https://doi.org/10.3390/app132312587
Abstract
In a conventional speech emotion recognition (SER) task, a classifier for a given language is trained on a pre-existing dataset for that same language. However, where training data for a language do not exist, data from other languages can be used instead. We experiment with cross-lingual and multilingual SER, working with Amharic, English, German, and Urdu. For Amharic, we use our own publicly available Amharic Speech Emotion Dataset (ASED). For English, German, and Urdu, we use the existing RAVDESS, EMO-DB, and URDU datasets. Following previous research, we map the labels of all datasets to just two classes, positive and negative, so that performance on different languages can be compared directly and languages can be combined for training and testing. In Experiment 1, monolingual SER trials were carried out with three classifiers: AlexNet, VGGE (a proposed variant of VGG), and ResNet50. The results, averaged over the three models, were very similar for ASED and RAVDESS, suggesting that Amharic and English SER are equally difficult; by the same measure, German SER is more difficult and Urdu SER is easier. In Experiment 2, we trained on one language and tested on another, in both directions for each of the pairs Amharic↔German, Amharic↔English, and Amharic↔Urdu. With Amharic as the target, the results suggested that English or German is the best source language. In Experiment 3, we trained on several non-Amharic languages and then tested on Amharic. The best accuracy obtained was several percentage points higher than the best accuracy in Experiment 2, suggesting that training on two or three non-Amharic languages gives a better result than training on just one. Overall, the results suggest that cross-lingual and multilingual training can be an effective strategy for training an SER classifier when resources for a language are scarce.
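The cross-corpus setup described above — mapping each corpus's emotion labels onto two classes, training on one language, and testing on another — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the label-to-class mapping, the feature dimensionality, and the logistic-regression classifier are all stand-in assumptions (the paper uses AlexNet, VGGE, and ResNet50 on real ASED/RAVDESS/EMO-DB/URDU data).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed binary mapping: each corpus-specific emotion label is collapsed
# to positive (1) or negative (0), as the abstract describes.
LABEL_MAP = {"happy": 1, "calm": 1, "neutral": 1,
             "angry": 0, "sad": 0, "fearful": 0}

def to_binary(labels):
    """Map corpus-specific emotion labels to positive(1)/negative(0)."""
    return np.array([LABEL_MAP[l] for l in labels])

rng = np.random.default_rng(0)

# Toy "source" corpus (e.g. English) and "target" corpus (e.g. Amharic):
# random vectors standing in for acoustic features of utterances.
X_src = rng.normal(size=(200, 16))
y_src = to_binary(rng.choice(list(LABEL_MAP), size=200))
X_tgt = rng.normal(size=(100, 16))
y_tgt = to_binary(rng.choice(list(LABEL_MAP), size=100))

# Cross-lingual SER: fit on the source language only, score on the target.
clf = LogisticRegression().fit(X_src, y_src)
acc = clf.score(X_tgt, y_tgt)
print(f"cross-corpus accuracy: {acc:.2f}")
```

The multilingual variant of Experiment 3 would simply stack several source corpora (e.g. `np.vstack` of the English, German, and Urdu feature matrices) before fitting, keeping the target corpus for testing only.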
Item Type: | Article |
---|---|
Uncontrolled Keywords: | speech emotion recognition; multilingual; cross-lingual; feature extraction |
Subjects: | Z Bibliography. Library Science. Information Resources > ZZ OA Fund (articles) |
Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
Date Deposited: | 04 Dec 2023 21:30 |
Last Modified: | 30 Oct 2024 21:04 |
URI: | http://repository.essex.ac.uk/id/eprint/37031 |
Available files
Filename: applsci-13-12587-v2.pdf
Licence: Creative Commons: Attribution 4.0