Sign Language Conversion to Text and Speech Using Machine Learning
DOI:
https://doi.org/10.47392/IRJAEM.2025.0173

Keywords:
CNN, MediaPipe, OpenCV, PYTTSX3, TensorFlow

Abstract
This study introduces a camera-based sign language detection system that bridges communication barriers for deaf and hard-of-hearing individuals. Unlike existing solutions that require specialized sensors, our approach uses standard cameras with OpenCV for image capture and Convolutional Neural Networks for gesture recognition. The system processes hand movements in real time, translating American Sign Language into text and speech through TensorFlow, MediaPipe, and the pyttsx3 library. Experimental results demonstrate high accuracy across varied environmental conditions and users. This accessible technology enables seamless communication between signing and non-signing individuals, promoting greater inclusion in educational, workplace, and public settings without requiring specialized equipment.
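To make the pipeline concrete: MediaPipe's hand tracker reports 21 (x, y) landmarks per detected hand, and before a CNN (or any classifier) sees them they are typically translated so the wrist is the origin and scaled to unit size, making the features invariant to where the hand appears in the camera frame. The following is a minimal sketch of such a pre-processing step; the function name and the exact normalization scheme are illustrative assumptions, not the paper's published implementation.

```python
from typing import List, Tuple


def normalize_landmarks(landmarks: List[Tuple[float, float]]) -> List[float]:
    """Illustrative pre-processing for hand-landmark classification.

    Translates the 21 MediaPipe landmarks so the wrist (index 0) sits at
    the origin, scales by the largest absolute coordinate, and flattens
    the result into a feature vector a classifier could consume.
    """
    wrist_x, wrist_y = landmarks[0]
    # Translate: make every landmark relative to the wrist position.
    shifted = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    # Scale: divide by the largest magnitude so values lie in [-1, 1].
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [coord / scale for point in shifted for coord in point]
```

In a full system, the output vector would be fed to the trained TensorFlow model each frame, and the predicted letter accumulated into text that pyttsx3 then speaks aloud.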
License
Copyright (c) 2025 International Research Journal on Advanced Engineering and Management (IRJAEM)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.