Emotional Security for Sustainable Future: Multimodal Affective Computing
- DOI
- 10.2991/978-94-6463-716-8_47
- Keywords
- Affective computing; emotion recognition; multimodality; Deep Learning; Machine Learning
- Abstract
Emotional well-being is a significant human resource that boosts productivity, creativity, and sustainability. Just as natural resources must be conserved and preserved, emotional resilience must be protected against burnout and disaffection from society. Affective computing contributes to this goal through the early detection of stress, personalized interventions, and emotion-aware systems that foster emotional and social harmony. This aligns with sustainability goals, since supporting well-being is a building block of a healthy, sustainable future. Reading and comprehending emotions are complicated tasks, but technology aids us in this endeavor. Sophisticated algorithms can now extract and exploit aspects of body language to identify emotions from a wide range of data sources, such as images, videos, and biosignals. Researchers have been developing and analyzing techniques for automated emotion detection and recognition for decades, and an extensive literature on emotion recognition proposes, evaluates, and compares methodologies from signal processing, machine learning, deep learning, computer vision, and speech recognition. Since the inception of affective computing, a large number of articles have been published on the topic. In this paper, we thoroughly examine cutting-edge fusion approaches as part of the review and critically evaluate the performance gains that multimodal analysis can offer over unimodal analysis. To help readers better grasp this challenging and interesting research topic, a thorough narrative of these two complementary fields is presented.
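To make the contrast between unimodal and multimodal analysis concrete, the sketch below illustrates one common fusion strategy, decision-level (late) fusion, in which each modality-specific model predicts emotion probabilities independently and the outputs are combined by weighted averaging. The emotion labels, modality weights, and probability values are purely illustrative assumptions, not results or methods from the paper.

```python
import numpy as np

# Minimal sketch of decision-level (late) fusion of unimodal emotion classifiers.
# Each modality (e.g., face image, speech, biosignal) produces a probability
# distribution over emotion classes; the fused prediction is a weighted average.
# All labels, weights, and probabilities below are hypothetical.

EMOTIONS = ["anger", "happiness", "neutral", "sadness"]

def late_fusion(modality_probs, weights=None):
    """Fuse per-modality probability vectors by (weighted) averaging."""
    probs = np.asarray(modality_probs, dtype=float)    # shape: (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)     # equal trust in every modality
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()                         # renormalize to a distribution

# Hypothetical unimodal outputs for a single sample.
face   = [0.10, 0.70, 0.15, 0.05]
speech = [0.05, 0.55, 0.30, 0.10]
bio    = [0.20, 0.40, 0.25, 0.15]

fused = late_fusion([face, speech, bio], weights=[0.4, 0.4, 0.2])
print(EMOTIONS[int(np.argmax(fused))], fused.round(3))
```

Other fusion strategies discussed in the literature (feature-level and hybrid fusion) combine modalities earlier in the pipeline; this example only shows the simplest decision-level variant.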
- Copyright
- © 2025 The Author(s)
- Open Access
- Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
- Cite this article
TY - CONF
AU - Nisha Rathi
AU - Parul Saran
AU - Satyam Shrivastava
PY - 2025
DA - 2025/05/26
TI - Emotional Security for Sustainable Future: Multimodal Affective Computing
BT - Proceedings of the International Conference on Recent Advancements and Modernisations in Sustainable Intelligent Technologies and Applications (RAMSITA 2025)
PB - Atlantis Press
SP - 611
EP - 638
SN - 1951-6851
UR - https://doi.org/10.2991/978-94-6463-716-8_47
DO - 10.2991/978-94-6463-716-8_47
ID - Rathi2025
ER -