Sign Language Conversion to Text and Speech Using Machine Learning

Authors

  • Dr. M. Laxmaiah, Head of Department, Dept. of Data Science, CMR Engineering College, Medchal, 501401, Telangana, India.
  • V. Harshitha, UG Scholar, Dept. of Data Science, CMR Engineering College, Medchal, 501401, Telangana, India.
  • C. Mahesh, UG Scholar, Dept. of Data Science, CMR Engineering College, Medchal, 501401, Telangana, India.
  • P. Sumanth, UG Scholar, Dept. of Data Science, CMR Engineering College, Medchal, 501401, Telangana, India.
  • CH. Harsha Vardhan, UG Scholar, Dept. of Data Science, CMR Engineering College, Medchal, 501401, Telangana, India.

DOI:

https://doi.org/10.47392/IRJAEM.2025.0173

Keywords:

CNN, MediaPipe, OpenCV, pyttsx3, TensorFlow

Abstract

This study introduces a camera-based sign language detection system that bridges communication barriers for deaf and hard-of-hearing individuals. Unlike existing solutions that require specialized sensors, our approach uses standard cameras with OpenCV for image capture and Convolutional Neural Networks for gesture recognition. The system processes hand movements in real time, translating American Sign Language into text and speech using TensorFlow, MediaPipe, and the pyttsx3 library. Experimental results demonstrate high accuracy across varied environmental conditions and user variations. This accessible technology enables seamless communication between signing and non-signing individuals, promoting greater inclusion in educational, workplace, and public settings without requiring specialized equipment.
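The paper's exact pipeline is not reproduced here, but the stack named in the abstract (MediaPipe landmarks feeding a TensorFlow CNN) typically involves a preprocessing step like the sketch below: MediaPipe Hands emits 21 (x, y, z) landmarks per detected hand, with landmark 0 at the wrist, and a common practice is to translate the landmarks so the wrist is the origin and scale by hand extent, so the classifier sees position- and scale-invariant features. The function name and the toy hand data are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch: normalize MediaPipe-style hand landmarks before
# feeding them to a gesture classifier. MediaPipe Hands emits 21 landmarks
# per hand; landmark 0 is the wrist. Only (x, y) is used here.

def normalize_landmarks(landmarks):
    """Translate landmarks so the wrist is the origin, then scale by the
    largest wrist-relative coordinate, yielding position- and
    scale-invariant features (a common step before CNN/MLP classification)."""
    wx, wy = landmarks[0]
    shifted = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

# Toy example: a fake 21-point hand, offsets from a wrist at (0.5, 0.5)
hand = [(0.5, 0.5)] + [(0.5 + 0.01 * i, 0.5 - 0.01 * i) for i in range(1, 21)]
features = normalize_landmarks(hand)
print(features[0])  # wrist maps to (0.0, 0.0)
```

In a full real-time system, the normalized feature vector for each frame would be passed to the trained recognizer, and the predicted label accumulated into text that pyttsx3 then speaks aloud.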

Published

2025-04-02