Abstract
Research on emotion recognition has grown rapidly in recent years, driven by rising stress levels in humans. Human emotions are most commonly identified through facial expressions and speech modulation. Most works in the literature recognize emotion from either facial expressions or speech modulation alone. This work, in contrast, proposes an integrated framework that considers both static facial images and speech modulation to recognize an individual's mental state. By virtue of deep classifier fusion, the proposed integrated framework delivers strong performance with 94.26% accuracy, compared with 89% for the voice signal and 91.49% for facial expression when each modality is considered individually. Furthermore, auto-suggestions are provided to depressed persons to enhance their mental wellness and assist them in overcoming depression.
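The framework described above combines a 1D CNN over the speech signal with a 2D CNN over static facial images before a joint classification stage. A minimal PyTorch sketch of one way such a deep classifier fusion could be wired is given below; the layer sizes, input shapes (40 speech feature channels, 48×48 grayscale faces), and the seven-class output are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch only: layer sizes, input shapes, and class names are
# assumptions, not the exact architecture reported in the paper.
import torch
import torch.nn as nn

class SpeechBranch(nn.Module):
    """1D CNN over a speech feature sequence (e.g. MFCC frames)."""
    def __init__(self, in_channels=40, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # -> (batch, 128, 1)
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, x):                      # x: (batch, in_channels, time)
        return self.proj(self.net(x).squeeze(-1))

class FaceBranch(nn.Module):
    """2D CNN over a static grayscale face image."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # -> (batch, 64, 1, 1)
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):                      # x: (batch, 1, H, W)
        return self.proj(self.net(x).flatten(1))

class FusionClassifier(nn.Module):
    """Concatenates the two modality embeddings and classifies jointly."""
    def __init__(self, num_emotions=7, embed_dim=128):
        super().__init__()
        self.speech = SpeechBranch(embed_dim=embed_dim)
        self.face = FaceBranch(embed_dim=embed_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, num_emotions),
        )

    def forward(self, speech_x, face_x):
        fused = torch.cat([self.speech(speech_x), self.face(face_x)], dim=1)
        return self.head(fused)                # raw logits per emotion class

# Example forward pass with dummy tensors.
model = FusionClassifier()
logits = model(torch.randn(4, 40, 100), torch.randn(4, 1, 48, 48))
print(logits.shape)                            # torch.Size([4, 7])
```

The fusion step here is feature-level concatenation followed by a shared head, which is one common reading of "classifier fusion"; the paper's own fusion scheme may differ in detail.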
| Original language | English |
|---|---|
| Pages (from-to) | 3401-3411 |
| Number of pages | 11 |
| Journal | International Journal of Information Technology (Singapore) |
| Volume | 14 |
| Issue number | 7 |
| DOIs | |
| State | Published - Dec 2022 |
| Externally published | Yes |
Funding
The authors would like to acknowledge the following student members, Ms. T. Soumiya, Mr. S. Tamizemani, and Mr. B. Saravanan, for their moral support and cooperation.
Keywords
- 1D CNN
- 2D CNN
- Classifier fusion
- Convolutional neural network
- Emotion recognition