
Automatic Recognition of Emotions from Speech

Journal
International Journal of Computational Linguistics Research
Date Issued
2019-12-01
Author(s)
Gjoreski, Martin
DOI
10.6025/jcl/2019/10/4/101-107
Abstract
This paper presents an approach to the recognition of human emotions from speech. Seven emotions are recognized: anger, fear, sadness, happiness, boredom, disgust, and neutral. The approach is applied to a speech database consisting of simulated and annotated utterances. First, numerical features are extracted from the sound database using an audio feature extractor. Next, the extracted features are standardized. Then, feature-selection methods are used to select the most relevant features. Finally, a classification model is trained to recognize the emotions. Three classification algorithms are tested, with SVM yielding the highest accuracies of 89% and 82% using 10-fold cross-validation and Leave-One-Speaker-Out evaluation, respectively. "Sadness" is the emotion recognized with the highest accuracy.
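The pipeline described in the abstract (feature standardization, feature selection, SVM classification, evaluated with 10-fold cross-validation and Leave-One-Speaker-Out) can be sketched with scikit-learn. This is only an illustrative reconstruction: the synthetic feature matrix, the number of features kept, and the speaker labels are all placeholders, not the paper's actual data or settings.

```python
# Illustrative sketch of the abstract's pipeline, NOT the authors' code.
# Synthetic data stands in for the real extracted audio features.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))            # placeholder for extracted audio features
y = rng.integers(0, 7, size=200)          # 7 emotion classes, as in the paper
speakers = rng.integers(0, 10, size=200)  # hypothetical speaker id per utterance

model = Pipeline([
    ("scale", StandardScaler()),               # standardize the features
    ("select", SelectKBest(f_classif, k=20)),  # keep the most relevant features
    ("svm", SVC(kernel="rbf")),                # SVM classifier
])

# 10-fold cross-validation over all utterances
cv_scores = cross_val_score(model, X, y, cv=10)

# Leave-One-Speaker-Out: each fold holds out every utterance of one speaker
loso_scores = cross_val_score(model, X, y, groups=speakers,
                              cv=LeaveOneGroupOut())

print(f"10-fold mean accuracy: {cv_scores.mean():.2f}")
print(f"LOSO mean accuracy:    {loso_scores.mean():.2f}")
```

On random data the accuracies are near chance; the point of the sketch is the evaluation structure, in particular that Leave-One-Speaker-Out groups folds by speaker so the model is always tested on an unseen voice.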
Subjects

Support Vector Machine

SVM

Classification Algorithms

Emotions

Automatic Recognition...


Built with DSpace-CRIS software - Extension maintained and optimized by 4Science
