SIGNLINK: Linking Audio, Text, and Hand Gestures Using Machine Learning Algorithms
DOI:
https://doi.org/10.47392/IRJAEM.2024.0341

Keywords:
ASL Animations, CNN, Gesture recognition, Multilingual conversion, Speech recognition

Abstract
Sign language, an important means of communication for the hearing impaired, often encounters barriers in interactions with non-signing people. Our "Sign-Link" system aims to bridge this gap by seamlessly integrating audio and text communication with American Sign Language (ASL) through advanced hand gesture recognition technology. Using state-of-the-art techniques, the system converts multilingual audio input into text and translates text into ASL sign language animations with English dubbed voices. A convolutional neural network (CNN) accurately identifies and reads aloud hand gestures, enabling real-time recognition and interpretation with accuracy ranging between 98% and 100%. This approach not only improves accessibility and inclusion but also promotes meaningful communication between people, regardless of their hearing abilities.
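To illustrate the kind of processing the CNN performs on a gesture image, the sketch below implements the two core CNN building blocks, a convolution layer and max pooling, in plain NumPy. This is not the paper's actual model; the 28x28 frame size and the edge-detection kernel are assumptions chosen purely for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linear activation applied after each convolution."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the maximum over non-overlapping windows."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Hypothetical 28x28 grayscale gesture frame (random stand-in for a real image)
frame = np.random.rand(28, 28)
# Simple hand-crafted vertical-edge filter; a trained CNN learns such kernels
edge_kernel = np.array([[1, 0, -1]] * 3, dtype=float)
features = max_pool(relu(conv2d(frame, edge_kernel)))
print(features.shape)  # -> (13, 13)
```

In a full recognition pipeline, several such convolution/pooling stages feed a dense softmax layer that assigns the frame to one ASL gesture class.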
License
Copyright (c) 2024 International Research Journal on Advanced Engineering and Management (IRJAEM)
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.