An integrated framework for emotion recognition using speech and static images with deep classifier fusion approach

K. Jayanthi, S. Mohan, Lakshmipriya B

Research output: Contribution to journal › Article › peer-review

20 Scopus citations

Abstract

Research on emotion recognition has seen an increasing trend in recent years, driven by escalating stress levels in humans. Human emotions are in most cases identified through facial expressions and speech modulation. Most works in the literature recognize emotion from either facial expressions or speech modulation alone. This work, in contrast, proposes an integrated framework that considers both static facial images and speech modulation to recognize an individual's mental state. By virtue of deep classifier fusion, the proposed integrated framework demonstrates exemplary performance with 94.26% accuracy, compared with 89% and 91.49% for the voice signal and facial expression respectively when considered individually. Furthermore, auto-suggestions are provided to depressed persons to enhance their mental wellness, thereby assisting them in coming out of depression.
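The abstract does not spell out the fusion rule, so the following is only a minimal sketch of one common deep classifier fusion scheme: decision-level (late) fusion, where the softmax probability vectors of the speech model (1D CNN) and the facial-image model (2D CNN) are combined by a weighted average before the final decision. The emotion label set, the weights, and the function names here are hypothetical illustrations, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical emotion label set; the paper's classes may differ.
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def fuse_predictions(p_speech, p_face, w_speech=0.45, w_face=0.55):
    """Late fusion: weighted average of the two classifiers' softmax
    probability vectors, then argmax over the fused distribution.
    Weights are illustrative; they could be tuned on validation data."""
    p_speech = np.asarray(p_speech, dtype=float)
    p_face = np.asarray(p_face, dtype=float)
    fused = w_speech * p_speech + w_face * p_face
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: the speech model leans toward "sad", the face model
# toward "neutral"; the fused decision resolves the disagreement.
label, fused = fuse_predictions([0.10, 0.10, 0.30, 0.50],
                                [0.05, 0.15, 0.60, 0.20])
```

With weights that sum to 1 and valid probability inputs, the fused vector remains a probability distribution, which is one reason weighted averaging is a popular fusion baseline over harder schemes such as product rules or a learned meta-classifier.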

Original language: English
Pages (from-to): 3401-3411
Number of pages: 11
Journal: International Journal of Information Technology (Singapore)
Volume: 14
Issue number: 7
DOIs
State: Published - Dec 2022
Externally published: Yes

Funding

The authors would like to acknowledge the following student members: Ms. T. Soumiya, Mr. S. Tamizemani and Mr. B. Saravanan for their moral support and cooperation.

Keywords

  • 1D CNN
  • 2D CNN
  • Classifier fusion
  • Convolutional neural network
  • Emotion recognition
