Bridging the Communication Gap: A Real-Time Static Sign Language Recognition System Using a Convolutional Neural Network
DOI: https://doi.org/10.47392/IRJAEM.2025.0512

Keywords: Convolutional Neural Network, Deep Learning, Real-Time System, Sign Language Recognition, Static Gesture Recognition

Abstract
Effective communication remains a significant challenge for the global deaf and hard-of-hearing community when interacting with the hearing majority. This paper presents a real-time Sign Language Recognition (SLR) system that translates static hand gestures representing alphabetic and numeric characters into text using a standard webcam. The system employs a deep learning approach based on a Convolutional Neural Network (CNN) architecture. A comprehensive pre-processing pipeline, comprising grayscale conversion, Gaussian blurring, and adaptive thresholding, ensures robust hand-gesture isolation under varying lighting conditions. The implemented CNN model achieved a test accuracy of 98.5% on a dataset of 27 sign classes. The system demonstrates low-latency performance in real-time inference, providing immediate visual feedback by displaying predicted characters and concatenating them into text strings. This work establishes a practical foundation for assistive communication technologies using deep learning.
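The three pre-processing stages named in the abstract can be sketched as below. This is an illustrative NumPy-only sketch, not the authors' implementation: the kernel sizes, sigma values, and threshold offset `C` are assumptions (in practice one would typically use OpenCV's `cv2.cvtColor`, `cv2.GaussianBlur`, and `cv2.adaptiveThreshold`).

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def convolve2d(img, kernel):
    """Same-size 2-D convolution with edge padding (naive, for clarity)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def preprocess(frame_rgb, C=4):
    """Grayscale -> Gaussian blur -> adaptive threshold (parameters assumed)."""
    # 1. Grayscale conversion via standard luminance weights.
    gray = frame_rgb @ np.array([0.299, 0.587, 0.114])
    # 2. Gaussian blurring to suppress sensor noise.
    blurred = convolve2d(gray, gaussian_kernel(5, 1.0))
    # 3. Adaptive thresholding: compare each pixel to the Gaussian-weighted
    #    mean of its neighborhood minus an offset C, which makes the binary
    #    hand mask robust to uneven lighting.
    local_mean = convolve2d(blurred, gaussian_kernel(11, 2.0))
    return ((blurred > local_mean - C).astype(np.uint8)) * 255
```

Because the threshold is computed per-pixel from a local neighborhood rather than globally, the hand silhouette remains separable even when illumination varies across the frame, which is the property the abstract attributes to this stage.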
Copyright (c) 2025 International Research Journal on Advanced Engineering and Management (IRJAEM)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.