Neural Networks and MLP Classification Based Speech Emotion Recognition
Abstract
Research in signal processing and human-machine interaction increasingly focuses on recognising emotional states from speech. Since humans naturally feel and express emotions, an automated method for recognising them can strengthen the interaction between people and computers. This work discusses the use of a Neural Network (NN) together with Mel Frequency Cepstral Coefficient (MFCC) features to identify emotions in spoken utterances. The Berlin Emotional Speech dataset (EmoDB) was used for classification. In our experiments, the average accuracy was 83.40 percent and the highest accuracy was 92.35 percent.
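The abstract describes an MFCC-plus-MLP pipeline without giving implementation details, so the following is a minimal sketch of how such a system is commonly built, assuming the librosa and scikit-learn libraries. File paths, the sampling rate, the number of MFCC coefficients, and the hidden-layer sizes are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: MFCC feature extraction + MLP classification for speech emotion
# recognition (e.g. on EmoDB). Parameter choices below are illustrative only.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score


def extract_mfcc(path, n_mfcc=13):
    """Load one utterance and return its mean MFCC vector over all frames."""
    signal, sr = librosa.load(path, sr=16000)          # assumed 16 kHz resampling
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)                            # fixed-length feature vector


def run_experiment(wav_paths, labels):
    """Train an MLP on mean-MFCC features and report held-out accuracy."""
    X = np.stack([extract_mfcc(p) for p in wav_paths])
    y = np.asarray(labels)                               # emotion labels, e.g. EmoDB's categories

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )

    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))
```

Averaging MFCCs over time keeps the example short; published systems often use richer statistics or frame-level models, and the paper's exact feature set and network architecture are not specified in this abstract.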
Published
2022-03-19