Proceedings of the 2025 International Conference on Electronics, Electrical and Grid Technology (ICEEGT 2025)

Emotion Recognition in Smart Cockpits Using the Approach of Multimodal Deep Learning

Authors
Zikai Liu1, Chengrui Yu2, *
1Guanghua Cambridge International School, Shanghai, 201319, China
2Shenzhen (Nanshan) Concord College of Sino-Canada, Shenzhen, 518052, China
*Corresponding author. Email: chengruiyu76@outlook.com
Available Online 18 February 2026.
DOI
10.2991/978-94-6463-986-5_63
Keywords
Smart Cockpit; Emotion Recognition; Deep Learning
Abstract

This paper systematically reviews the status and challenges of in-vehicle emotion recognition technology and analyses the application of deep learning in intelligent cockpits. Current technology faces two core issues. First, data scarcity: emotional data are difficult to obtain in driving scenarios, which limits models' generalisation capability. Second, a conflict between real-time performance and accuracy: high-precision models (such as 3D-CNNs) consume substantial computing resources and cannot meet the real-time requirements of the in-vehicle environment. To address these issues, the study proposes a multimodal fusion framework that implements a closed-loop system through perception, processing, and feedback layers, and compares it with single-modal approaches. In practice, single-modal models can significantly reduce computational overhead through lightweight designs (such as MobileNet and EfficientNet), while multimodal fusion (e.g., a visual CNN combined with a speech LSTM) further improves system robustness, reducing the measured false-positive rate by 37%. Future research must overcome bottlenecks such as the high cost of data collection and weak cross-scenario model adaptability, while maintaining a clear understanding of the trade-off between model compression and system efficiency. The paper suggests optimising models through strategies such as knowledge distillation, transfer learning, and adversarial training, to move in-vehicle emotion recognition from the laboratory to practical application.
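The visual-CNN + speech-LSTM fusion mentioned above can be illustrated with a minimal decision-level (late) fusion sketch. This is a generic illustration, not the authors' actual model: the emotion labels, logit values, and modality weights below are hypothetical, and each modality's logits stand in for the output of its respective network.

```python
import numpy as np

def softmax(z):
    """Convert raw logits to a probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_decisions(vision_logits, speech_logits, w_vision=0.6, w_speech=0.4):
    """Late fusion: weighted average of per-modality class probabilities.

    vision_logits / speech_logits stand in for the outputs of a visual
    CNN and a speech LSTM; the weights are illustrative, not tuned.
    """
    p_vision = softmax(np.asarray(vision_logits, dtype=float))
    p_speech = softmax(np.asarray(speech_logits, dtype=float))
    fused = w_vision * p_vision + w_speech * p_speech
    return fused, int(np.argmax(fused))

# Hypothetical 4-class emotion set and per-modality logits.
EMOTIONS = ["neutral", "happy", "angry", "fatigued"]
fused, idx = fuse_decisions([2.0, 0.1, 0.3, 0.2], [0.5, 0.2, 1.8, 0.1])
print(EMOTIONS[idx])  # here the vision branch dominates -> "neutral"
```

Decision-level fusion keeps each modality's network independent, so one branch can be swapped for a lighter model (e.g., MobileNet for vision) without retraining the other; feature-level fusion, by contrast, would concatenate intermediate representations before classification.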

Copyright
© 2026 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the 2025 International Conference on Electronics, Electrical and Grid Technology (ICEEGT 2025)
Series
Advances in Engineering Research
Publication Date
18 February 2026
ISBN
978-94-6463-986-5
ISSN
2352-5401
DOI
10.2991/978-94-6463-986-5_63

Cite this article

TY  - CONF
AU  - Zikai Liu
AU  - Chengrui Yu
PY  - 2026
DA  - 2026/02/18
TI  - Emotion Recognition in Smart Cockpits Using the Approach of Multimodal Deep Learning
BT  - Proceedings of the 2025 International Conference on Electronics, Electrical and Grid Technology (ICEEGT 2025)
PB  - Atlantis Press
SP  - 615
EP  - 622
SN  - 2352-5401
UR  - https://doi.org/10.2991/978-94-6463-986-5_63
DO  - 10.2991/978-94-6463-986-5_63
ID  - Liu2026
ER  -