Dynamic Assistant for Efficient Multiple and Oral Navigation [D.A.E.M.O.N]
- DOI
- 10.2991/978-94-6463-940-7_22
- Keywords
- Voice Assistant; Human-Computer Interaction; NLP; Smart Automation; Context Awareness; Edge AI
- Abstract
D.A.E.M.O.N (Dynamic Assistant for Efficient Multiple and Oral Navigation) is a modular, Python-based, voice-driven assistant designed for efficient, context-aware interaction with digital systems. The framework integrates speech recognition, natural language understanding, system-level automation, and real-time information retrieval, operating entirely offline to preserve privacy and reduce latency. Evaluation demonstrates 90.37% speech recognition accuracy, 92% intent classification success, 97.1% task automation accuracy, and an average response time of 1.75 s. These results highlight D.A.E.M.O.N’s potential as a practical, low-latency solution for accessibility-focused environments, smart computing, and autonomous systems.
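The abstract describes a pipeline of offline speech recognition, intent classification, and system-level task automation. As a rough illustration of that flow only (not the authors' implementation: the keyword-based classifier, intent labels, and handler names below are assumptions made for demonstration), a minimal Python sketch might look like this:

```python
# Illustrative sketch of an offline voice-assistant pipeline.
# The transcribed utterance is hard-coded here; in a full system it would
# come from an on-device speech-recognition engine.

import webbrowser
from datetime import datetime

# --- Intent classification (assumed keyword-matching approach) ---------------
INTENT_KEYWORDS = {
    "open_browser": ["open browser", "launch browser"],
    "tell_time": ["what time", "current time"],
    "shutdown": ["shut down", "power off"],
}

def classify_intent(utterance: str) -> str:
    """Map a transcribed utterance to an intent label (or 'unknown')."""
    text = utterance.lower()
    for intent, phrases in INTENT_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"

# --- Task-automation layer (assumed handlers) --------------------------------
def handle(intent: str) -> str:
    """Execute the action associated with an intent and return a reply."""
    if intent == "open_browser":
        webbrowser.open("https://example.com")
        return "Opening the browser."
    if intent == "tell_time":
        return datetime.now().strftime("The time is %H:%M.")
    if intent == "shutdown":
        # A real assistant would issue a system command here;
        # left as a no-op in this sketch for safety.
        return "Shutdown requested (not executed in this sketch)."
    return "Sorry, I did not understand that."

if __name__ == "__main__":
    utterance = "What time is it right now?"
    print(handle(classify_intent(utterance)))
```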
- Copyright
- © 2025 The Author(s)
- Open Access
- Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
- Cite this article
TY - CONF
AU - Madhan Kairamkonda
AU - Suryaprakash Reddy Musku
AU - Rahul Gogur
AU - V. Ambica
AU - Sony Pakkiru
PY - 2025
DA - 2025/12/31
TI - Dynamic Assistant for Efficient Multiple and Oral Navigation [D.A.E.M.O.N]
BT - Proceedings of the Conference on Social and Sustainable Innovation in Technology & Engineering (SASI-ITE 2025)
PB - Atlantis Press
SP - 300
EP - 312
SN - 1951-6851
UR - https://doi.org/10.2991/978-94-6463-940-7_22
DO - 10.2991/978-94-6463-940-7_22
ID - Kairamkonda2025
ER -