Proceedings of International Conference on Computer Science and Communication Engineering (ICCSCE 2025)

Sign Language Gesture-Based Sentence Generator – A Review

Authors
Raj Vaidya1, *, Parthraj Ghatge1, Khushi Vaishnav1, Manas Bagul1, Sukhada Bhingarkar1
1Dr. Vishwanath Karad MIT World Peace University, Pune, India
*Corresponding author. Email: 1032211277@mitwpu.edu.in
Available Online 4 November 2025.
DOI
10.2991/978-94-6463-858-5_76
Keywords
Artificial Neural Networks (ANN); Computer Vision; Deep Learning; Gesture Recognition; Indian Sign Language (ISL); MediaPipe; Natural Language Processing (NLP); Sentence Generation; Sign Language Recognition (SLR)
Abstract

Sign Language Recognition (SLR) is essential for facilitating communication in the hearing-impaired community. However, existing SLR systems primarily focus on recognizing isolated words and lack the ability to generate grammatically structured sentences, which is crucial for natural communication. Additionally, most research is centered on American Sign Language (ASL), leaving Indian Sign Language (ISL) underexplored, in part because of its distinct grammatical structure and gesture variations. Current deep learning-based approaches, including CNNs and LSTMs, have improved recognition accuracy but face challenges related to dataset limitations, real-time processing efficiency, and adaptability to diverse environments.

To address these limitations, we propose a hybrid framework that integrates MediaPipe for real-time gesture detection, Artificial Neural Networks (ANN) for gesture classification, and Large Language Models (LLMs) for sentence formation. This approach enhances recognition accuracy, ensures smooth sentence generation, and improves computational efficiency for real-time applications. By bridging the gap between isolated word recognition and full sentence translation in ISL, our method significantly enhances accessibility for the hearing-impaired community. Additionally, we review existing SLR techniques, highlight research gaps, and outline future directions for improving real-time sign language communication through deep learning and NLP advancements.
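The gesture-classification stage of the proposed pipeline can be illustrated with a minimal sketch. This is a hypothetical example, not the authors' implementation: it assumes each video frame has already been reduced by MediaPipe Hands to 21 landmarks with (x, y, z) coordinates (63 features), and feeds that vector through a small feedforward ANN with an illustrative four-gesture vocabulary. The weights here are random stand-ins; a real system would train them on labeled ISL gesture data before passing recognized words to an LLM for sentence formation.

```python
import numpy as np

# Illustrative gesture vocabulary (hypothetical; a real ISL system
# would use a much larger, trained label set).
GESTURES = ["hello", "thank_you", "yes", "no"]

# Randomly initialized weights stand in for a trained classifier.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (63, 32))            # input (63 landmark coords) -> hidden
b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, len(GESTURES)))  # hidden -> gesture scores
b2 = np.zeros(len(GESTURES))

def softmax(z):
    """Numerically stable softmax over gesture scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(landmarks):
    """One ANN forward pass: landmark vector -> (gesture label, probabilities)."""
    h = np.maximum(0.0, landmarks @ W1 + b1)    # ReLU hidden layer
    probs = softmax(h @ W2 + b2)
    return GESTURES[int(np.argmax(probs))], probs

# Stand-in for one frame's MediaPipe hand-landmark vector.
frame = rng.random(63)
label, probs = classify(frame)
print(label, probs)
```

In the full pipeline, the per-frame labels emitted by such a classifier would be buffered into a word sequence and handed to the LLM stage, which reorders and inflects them into a grammatical sentence.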

Copyright
© 2025 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of International Conference on Computer Science and Communication Engineering (ICCSCE 2025)
Series
Advances in Computer Science Research
Publication Date
4 November 2025
ISBN
978-94-6463-858-5
ISSN
2352-538X
DOI
10.2991/978-94-6463-858-5_76

Cite this article

TY  - CONF
AU  - Raj Vaidya
AU  - Parthraj Ghatge
AU  - Khushi Vaishnav
AU  - Manas Bagul
AU  - Sukhada Bhingarkar
PY  - 2025
DA  - 2025/11/04
TI  - Sign Language Gesture-Based Sentence Generator – A Review
BT  - Proceedings of International Conference on Computer Science and Communication Engineering (ICCSCE 2025)
PB  - Atlantis Press
SP  - 902
EP  - 912
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6463-858-5_76
DO  - 10.2991/978-94-6463-858-5_76
ID  - Vaidya2025
ER  -