Convoconnect: Enabling Bidirectional Communication with Natural Language Processing And Deep Learning Sign Alphabet Conversion Communication
- DOI
- 10.2991/978-94-6463-858-5_75
- Keywords
- Indian Sign Language (ISL); American Sign Language (ASL); Sign Language Recognition; Computer Vision; OpenCV; Random Forest Classifier (RFC); State-of-the-Art (SOTA); Long Short-Term Memory (LSTM); Deep Learning; Amazon Web Services S3 (AWS S3)
- Abstract
Hand gesture-based sign language recognition serves as a major interaction link for people with hearing difficulties. In the absence of a universal sign language, distinct linguistic variations exist across regions, including Indian Sign Language (ISL) and American Sign Language (ASL). Conventional systems often falter at cross-language recognition, limiting their effectiveness in multilingual scenarios. This research presents a state-of-the-art solution called ConvoConnect, which enables seamless communication by accurately converting ISL and ASL gestures into text. The proposed approach applies computer vision techniques for precise hand landmark localization using OpenCV and MediaPipe, followed by gesture classification with a Random Forest Classifier trained on the respective datasets. Additionally, a deep learning-aided auto-complete feature, powered by an LSTM model, enhances communication efficiency by generating contextually relevant word predictions in real time. Hosting the models on AWS S3 provides centralized storage and real-time inference for easy scalability. Performance is evaluated with precision and F1-score metrics, comparing real-time gesture predictions against stored data. Extensive experiments on ISL and ASL datasets demonstrate the model's advantage over classical methods in accuracy and adaptability. This research bridges the gap between users of different sign languages, making interactions more inclusive and efficient.
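To illustrate the recognition pipeline outlined in the abstract, the following is a minimal sketch, assuming MediaPipe Hands for landmark extraction and scikit-learn's RandomForestClassifier for gesture classification. The placeholder training data, label set, and hyperparameters are assumptions for demonstration only and do not reflect the authors' datasets or configuration.

```python
# Illustrative sketch only: extract hand landmarks with MediaPipe and
# classify them with a scikit-learn RandomForestClassifier.
# Training data, labels, and hyperparameters below are placeholders.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

mp_hands = mp.solutions.hands

def extract_landmarks(image_bgr, hands):
    """Return a flat (x, y) landmark vector for the first detected hand, or None."""
    results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y] for p in lm]).flatten()  # 21 landmarks -> 42 features

def train_classifier(X, y):
    """Fit a Random Forest on pre-extracted landmark vectors X with gesture labels y."""
    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X, y)
    return clf

def predict_from_frame(frame_bgr, clf, hands):
    """Classify a single webcam frame; returns a letter label or None."""
    features = extract_landmarks(frame_bgr, hands)
    if features is None:
        return None
    return clf.predict(features.reshape(1, -1))[0]

if __name__ == "__main__":
    # Placeholder training data stands in for a real ISL/ASL landmark dataset.
    X_demo = np.random.rand(20, 42)
    y_demo = np.random.choice(list("AB"), size=20)
    clf = train_classifier(X_demo, y_demo)

    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(static_image_mode=False, max_num_hands=1) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            letter = predict_from_frame(frame, clf, hands)
            if letter is not None:
                cv2.putText(frame, str(letter), (30, 60),
                            cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
            cv2.imshow("ConvoConnect sketch", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()
```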
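Similarly, the LSTM-driven auto-complete idea can be sketched as a word-level next-word predictor. This is a minimal sketch assuming TensorFlow/Keras; the toy corpus, vocabulary handling, and layer sizes are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch only: a tiny LSTM next-word predictor for auto-complete.
# The corpus, vocabulary, and network sizes are placeholders.
import numpy as np
import tensorflow as tf

corpus = ["how are you", "how are they", "where are you going"]  # toy corpus

# Word-level vocabulary (index 0 reserved for padding).
words = sorted({w for line in corpus for w in line.split()})
word_to_id = {w: i + 1 for i, w in enumerate(words)}
id_to_word = {i: w for w, i in word_to_id.items()}
vocab_size = len(word_to_id) + 1

# Build (prefix -> next word) training pairs from every sentence.
pairs = []
for line in corpus:
    ids = [word_to_id[w] for w in line.split()]
    for i in range(1, len(ids)):
        pairs.append(ids[: i + 1])
max_len = max(len(p) for p in pairs)

def pad_left(seq, length):
    return [0] * (length - len(seq)) + seq

X = np.array([pad_left(p[:-1], max_len - 1) for p in pairs])
y = np.array([p[-1] for p in pairs])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=200, verbose=0)

def suggest_next_word(prefix: str) -> str:
    """Return the most likely next word for a recognized text prefix."""
    ids = [word_to_id[w] for w in prefix.split() if w in word_to_id]
    x = np.array([pad_left(ids[-(max_len - 1):], max_len - 1)])
    return id_to_word.get(int(np.argmax(model.predict(x, verbose=0))), "")

print(suggest_next_word("how are"))  # e.g. "you"
```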
- Copyright
- © 2025 The Author(s)
- Open Access
- Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Cite this article
TY - CONF
AU - Veera Swamy Pittala
AU - Prasanth Namburi
AU - Madhu Appala Narasimha Golthi
AU - Teja Sri Bheemasetti
PY - 2025
DA - 2025/11/04
TI - Convoconnect: Enabling Bidirectional Communication with Natural Language Processing And Deep Learning Sign Alphabet Conversion Communication
BT - Proceedings of International Conference on Computer Science and Communication Engineering (ICCSCE 2025)
PB - Atlantis Press
SP - 886
EP - 901
SN - 2352-538X
UR - https://doi.org/10.2991/978-94-6463-858-5_75
DO - 10.2991/978-94-6463-858-5_75
ID - Pittala2025
ER -