Proceedings of the 2025 2nd International Conference on Electrical Engineering and Intelligent Control (EEIC 2025)

Different Modalities of Emotion Recognition in Human-Computer Interaction

Authors
Jiarun Shi1, *
1School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol, UK
*Corresponding author. Email: kh24952@bristol.ac.uk
Available Online 23 October 2025.
DOI
10.2991/978-94-6463-864-6_8
Keywords
Emotion Recognition; Human-Computer Interaction (HCI); Multimodal; Deep Learning
Abstract

Recognizing emotions plays a vital role in Human-Computer Interaction (HCI), with applications ranging from improving student engagement in education to patient monitoring in healthcare. While traditional systems rely on single-modal inputs, human emotions are inherently multimodal, combining facial expressions (e.g., smiles), speech intonation, and physiological signals such as heart rate. As a result, research has increasingly focused on multimodal recognition systems to enhance accuracy and adaptability. This systematic review analyzes eight studies published between 2010 and 2023 that explore different methodologies for emotion recognition in HCI. The studies are categorized into single-modal and multimodal approaches, with a further distinction between traditional and deep learning-based methods. The review highlights key trends in feature engineering, model efficiency, and ethical considerations. Findings show that multimodal systems improve adaptability across scenarios and achieve higher accuracy than single-modality approaches, and that hybrid models effectively balance accuracy and computational efficiency. However, challenges remain, including noise reduction and real-time processing, and non-intrusive methods still face a trade-off between accuracy and privacy protection. Future research should focus on developing real-time, computationally efficient architectures, shifting from discrete to dynamic continuous emotion models, and establishing ethical evaluation frameworks.
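To make the multimodal fusion idea summarized above concrete, the sketch below shows a simple feature-level fusion classifier in PyTorch that concatenates pre-extracted facial, speech, and physiological feature vectors before a shared classification head. It is a minimal illustration, not a method from the reviewed studies; the class name, feature dimensions, and the seven-emotion output are illustrative assumptions.

# Minimal sketch (illustrative, not from the paper): feature-level fusion of three
# modalities by concatenation, followed by a shared linear classification head.
import torch
import torch.nn as nn

class FusionEmotionClassifier(nn.Module):
    def __init__(self, face_dim=512, speech_dim=128, physio_dim=32, num_emotions=7):
        super().__init__()
        # One small encoder per modality; all dimensions are assumed placeholders.
        self.face_enc = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU())
        self.speech_enc = nn.Sequential(nn.Linear(speech_dim, 64), nn.ReLU())
        self.physio_enc = nn.Sequential(nn.Linear(physio_dim, 64), nn.ReLU())
        # Concatenate the three 64-d encodings and map to emotion logits.
        self.head = nn.Linear(64 * 3, num_emotions)

    def forward(self, face, speech, physio):
        fused = torch.cat([self.face_enc(face),
                           self.speech_enc(speech),
                           self.physio_enc(physio)], dim=-1)
        return self.head(fused)  # unnormalized scores over emotion classes

# Usage with random placeholder features for a batch of 4 samples.
model = FusionEmotionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 7])

A decision-level (late-fusion) variant would instead train one classifier per modality and combine their predicted probabilities, which trades some joint-feature modeling for robustness when a modality is missing or noisy.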

Copyright
© 2025 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the 2025 2nd International Conference on Electrical Engineering and Intelligent Control (EEIC 2025)
Series
Advances in Engineering Research
Publication Date
23 October 2025
ISBN
978-94-6463-864-6
ISSN
2352-5401
DOI
10.2991/978-94-6463-864-6_8
Cite this article

TY  - CONF
AU  - Jiarun Shi
PY  - 2025
DA  - 2025/10/23
TI  - Different Modalities of Emotion Recognition in Human-Computer Interaction
BT  - Proceedings of the 2025 2nd International Conference on Electrical Engineering and Intelligent Control (EEIC 2025)
PB  - Atlantis Press
SP  - 61
EP  - 71
SN  - 2352-5401
UR  - https://doi.org/10.2991/978-94-6463-864-6_8
DO  - 10.2991/978-94-6463-864-6_8
ID  - Shi2025
ER  -