Conversion of Images to Indian Sign Language Using Object Detection
DOI: https://doi.org/10.17010/ijcs/2025/v10/i3/175397
Keywords: Assistive Technology, Avatar Animation, Hearing-Impaired, Indian Sign Language, Object Detection, SiGML, Sign Language Translation, YOLO
Publication Chronology: Paper Submission Date: May 3, 2025; Paper Sent Back for Revision: May 15, 2025; Paper Acceptance Date: May 18, 2025; Paper Published Online: June 5, 2025
Abstract
The present paper introduces a web-based application that automatically converts images into Indian Sign Language (ISL) signs, improving accessibility for the hearing-impaired. The system combines a custom-trained YOLOv8 object detection model, an ISL API for fetching SiGML sign descriptions, and the JASigning engine for animating the ISL signs with the "Marc" avatar. Users can upload an image or select from predefined samples, and the system displays the detected objects together with their respective ISL signs. On a test dataset of 100 images spanning varied categories, the system achieved an average precision of 92% for object detection and a 95% success rate in fetching sign data. With its user-friendly interface and high accuracy, the system is viable both as an educational tool and for daily use, helping bridge the communication gap faced by the hearing-impaired. Future work will expand the variety of detected objects and the coverage of sign-language information.
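The detection-to-sign pipeline described above can be sketched in a few lines: object labels produced by a detector (e.g., a YOLOv8 model) are filtered by confidence, de-duplicated, and turned into requests against an ISL API that serves SiGML data. This is a minimal illustration only; the API base URL, parameter name, and `sigml_requests` helper below are assumptions for demonstration, not the authors' actual interface.

```python
from urllib.parse import quote

# Hypothetical base URL for the ISL API that serves SiGML sign descriptions
# (the paper does not publish its endpoint; this address is an assumption).
ISL_API_BASE = "https://isl-api.example.org/sigml"

def sigml_requests(detections, min_conf=0.5):
    """Turn YOLO-style (label, confidence) detections into SiGML fetch URLs.

    Low-confidence detections are dropped before any network call, and
    duplicate labels are collapsed so each sign is fetched only once.
    """
    seen = set()
    urls = []
    for label, conf in detections:
        if conf < min_conf or label in seen:
            continue
        seen.add(label)
        urls.append(f"{ISL_API_BASE}?word={quote(label)}")
    return urls

# Example: four raw detections from the object detector for one image
dets = [("dog", 0.93), ("person", 0.88), ("dog", 0.91), ("kite", 0.31)]
print(sigml_requests(dets))
# The duplicate "dog" and the low-confidence "kite" are filtered out,
# leaving one request URL each for "dog" and "person".
```

The fetched SiGML documents would then be handed to the JASigning player, which renders the signs through the "Marc" avatar in the browser.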