SignConnect - Bilingual ASL and ISL Gesture Detection System Using Deep Learning Techniques

Authors

  • Mohammed Hizbullah, UG Scholar, Dept. of IT, B S Abdur Rahman Crescent Institute of Science & Technology, Chennai, Tamil Nadu, India.
  • F Ahamed Nawfal, UG Scholar, Dept. of IT, B S Abdur Rahman Crescent Institute of Science & Technology, Chennai, Tamil Nadu, India.
  • Ms. K Subashini, Assistant Professor, B S Abdur Rahman Crescent Institute of Science & Technology, Chennai, Tamil Nadu, India.

DOI:

https://doi.org/10.47392/IRJAEM.2026.0269

Keywords:

Sign Language Recognition, YOLOv11, MediaPipe, LSTM, Gesture Recognition, Text-to-Sign

Abstract

Communication between deaf and hearing people remains difficult because sign languages are not widely understood. Existing systems typically support only one-way translation, from gestures to text or from text to gestures, which constrains conversation. In this context, we propose SignConnect, a two-way translation system that combines a gesture-to-text pipeline with a text-to-sign pipeline. In the gesture-to-text direction, You Only Look Once version 11 (YOLOv11) detects hand gestures in real time, MediaPipe extracts keypoint information, and a Long Short-Term Memory (LSTM) neural network recognizes the gesture sequence. In the text-to-sign direction, the input text is first normalized, then tokenized against a sign language dictionary, and finally rendered as animated gestures using 2D/3D visualization. The two pipelines are integrated with the Open Source Computer Vision Library (OpenCV) behind a frontend application that accepts video input and produces text and animation output in real time. Experimental results confirm the effectiveness of the approach.
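As a rough illustration of the gesture-to-text pipeline described in the abstract, the sketch below pairs MediaPipe hand-keypoint extraction with a small Keras LSTM classifier over an OpenCV video stream. The YOLOv11 detection stage is omitted for brevity, and the sequence length, feature layout, and gesture label set are placeholder assumptions rather than the configuration used by SignConnect.

```python
# Minimal sketch of the gesture-to-text path: MediaPipe extracts hand keypoints
# per frame, a sliding window of keypoint vectors is fed to an LSTM classifier,
# and OpenCV supplies the video frames. All sizes and labels are illustrative.
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

SEQ_LEN = 30          # assumed number of frames per gesture window
N_FEATURES = 21 * 3   # 21 MediaPipe hand landmarks x (x, y, z)
GESTURES = ["hello", "thanks", "yes", "no"]  # placeholder label set

def build_lstm(n_classes: int) -> tf.keras.Model:
    """Small LSTM classifier over keypoint sequences (illustrative sizes)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model

def extract_keypoints(results) -> np.ndarray:
    """Flatten the first detected hand's landmarks; zeros if no hand is found."""
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        return np.array([[p.x, p.y, p.z] for p in lm]).flatten()
    return np.zeros(N_FEATURES)

def run(model: tf.keras.Model) -> None:
    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(0)
    window = []
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        window.append(extract_keypoints(results))
        window = window[-SEQ_LEN:]
        if len(window) == SEQ_LEN:
            probs = model.predict(np.expand_dims(window, axis=0), verbose=0)[0]
            cv2.putText(frame, GESTURES[int(np.argmax(probs))], (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("SignConnect (sketch)", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run(build_lstm(len(GESTURES)))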


Published

2026-05-09