Bridging the Communication Gap: A Real-Time Static Sign Language Recognition System Using a Convolutional Neural Network

Authors

  • Rithika PG – Computer Science and Engineering (AI & ML), Jansons Institute of Technology (Autonomous), Coimbatore, Tamil Nadu, India
  • Maragatham – Assistant Professor, Computer Science and Engineering, Jansons Institute of Technology (Autonomous), Coimbatore, Tamil Nadu, India

DOI:

https://doi.org/10.47392/IRJAEM.2025.0512

Keywords:

Convolutional Neural Network, Deep Learning, Real-Time System, Sign Language Recognition, Static Gesture Recognition

Abstract

Effective communication remains a significant challenge for the global deaf and hard-of-hearing community when interacting with the hearing majority. This paper presents a real-time Sign Language Recognition (SLR) system designed to translate static hand gestures representing alphabetic and numeric characters into text using a standard webcam. The system employs a deep learning approach based on a Convolutional Neural Network (CNN) architecture. A comprehensive pre-processing pipeline including grayscale conversion, Gaussian blurring, and adaptive thresholding ensures robust hand gesture isolation under varying lighting conditions. The implemented CNN model achieved a test accuracy of 98.5% on a dataset of 27 sign classes. The system demonstrates low-latency performance in real-time inference, providing immediate visual feedback by displaying predicted characters and concatenating them into text strings. This work establishes a practical foundation for assistive communication technologies using deep learning.
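The pre-processing pipeline named in the abstract (grayscale conversion, Gaussian blurring, adaptive thresholding) can be sketched in pure NumPy. This is a minimal illustration of those three steps, not the paper's implementation: the kernel size, sigma, block size, and offset constant `C` below are illustrative assumptions, and the functions `grayscale`, `convolve_separable`, and `adaptive_threshold` are hypothetical helpers introduced here.

```python
import numpy as np

def grayscale(rgb):
    # Luma-weighted channel sum, approximating standard RGB-to-gray conversion
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_kernel(size=5, sigma=1.0):
    # Normalized 1-D Gaussian; applied twice (rows, then columns) for a 2-D blur
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

def convolve_separable(img, k):
    # Separable convolution with edge padding: horizontal pass, then vertical pass
    pad = len(k) // 2
    padded = np.pad(img, pad, mode="edge")
    h = np.zeros_like(img, dtype=float)
    for i, w in enumerate(k):
        h += w * padded[pad:-pad, i:i + img.shape[1]]
    padded = np.pad(h, pad, mode="edge")
    v = np.zeros_like(img, dtype=float)
    for i, w in enumerate(k):
        v += w * padded[i:i + img.shape[0], pad:-pad]
    return v

def adaptive_threshold(gray, block=11, C=2):
    # Compare each pixel to its local mean (box filter); binarize to 0/255
    box = np.ones(block) / block
    local_mean = convolve_separable(gray, box)
    return np.where(gray > local_mean - C, 255, 0).astype(np.uint8)

def preprocess(frame_rgb):
    # Grayscale -> Gaussian blur -> adaptive threshold, as described in the abstract
    gray = grayscale(frame_rgb)
    blurred = convolve_separable(gray, gaussian_kernel(5, 1.0))
    return adaptive_threshold(blurred, block=11, C=2)
```

Because the threshold adapts to each pixel's neighbourhood mean rather than a single global value, the binarized hand silhouette stays stable under the varying lighting conditions the abstract mentions.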


Published

2025-11-27