SIGNLINK: Linking Audio, Text, and Hand Gestures Using Machine Learning Algorithms

Authors

  • Naheed Fatima, Department of IT, GNITS, Hyderabad, India
  • M. Sridevi, Department of IT, GNITS, Hyderabad, India

DOI:

https://doi.org/10.47392/IRJAEM.2024.0341

Keywords:

ASL Animations, CNN, Gesture recognition, Multilingual conversion, Speech recognition

Abstract

Sign language, a vital means of communication for the hearing impaired, often encounters barriers in interactions with non-signing people. Our "Sign-Link" system aims to bridge this gap by seamlessly integrating audio and text communication with American Sign Language (ASL) through hand gesture recognition. Using state-of-the-art techniques, the system converts multilingual audio input into text and translates text into ASL sign language animations with English dubbed voices. A convolutional neural network (CNN) accurately identifies and reads aloud hand gestures, enabling real-time recognition and interpretation with accuracy ranging between 98% and 100%. This approach not only improves accessibility and inclusion but also promotes meaningful communication between people, regardless of their hearing abilities.
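The abstract does not detail the CNN architecture, but the core operation such a gesture recognizer performs is 2-D convolution followed by a nonlinearity over image patches of the hand. The sketch below illustrates that single step in plain NumPy; the 5×5 patch, the edge filter, and all layer sizes are illustrative assumptions, not the authors' model.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: zeroes out negative activations."""
    return np.maximum(x, 0.0)

# Hypothetical 5x5 grayscale patch with a vertical boundary,
# standing in for one region of a hand-gesture frame.
patch = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

# A classic vertical-edge filter; in a trained CNN these weights are learned.
edge_filter = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

feature_map = relu(conv2d(patch, edge_filter))
print(feature_map)  # strong responses where the boundary lies under the filter
```

In a full classifier, stacks of such learned filters feed pooling and dense layers that map each frame to a gesture label; frameworks like TensorFlow or PyTorch handle the training loop.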

Published

2024-07-26