Sign Language Prediction Using Deep Learning
DOI: https://doi.org/10.47392/IRJAEM.2024.0274
Keywords: Sign Language Recognition, Image Processing, Inception V3, Deep Learning, Convolutional Neural Network
Abstract
Despite the significant potential benefits for a wide social group, the use of technology for sign language recognition remains largely untapped. Various technologies could help connect this group with the broader community, and a key tool in bridging the communication gap is the ability to interpret sign language automatically. Computers equipped with image classification and machine learning capabilities can recognize sign language gestures, which can then be translated for people who do not sign. This study utilizes convolutional neural networks (CNNs) to detect sign language gestures. The dataset consists of static sign language gestures recorded with RGB cameras and preprocessed to ensure clean input data. The Inception V3 CNN model, which applies multiple convolution filters to a single input, was retrained and tested on this dataset, achieving a validation accuracy exceeding 90%. Additionally, the study reviews prior efforts in sign language detection that utilize machine learning and image depth data, evaluates the challenges inherent in the problem, and discusses potential future developments in the field.
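As a rough illustration of the retraining approach described in the abstract, the sketch below fine-tunes a pretrained Inception V3 with TensorFlow/Keras on a folder of static gesture images. The directory name `signs/`, the class count, the train/validation split, and the hyperparameters are assumptions for illustration only, not details taken from the paper.

```python
# Minimal transfer-learning sketch, assuming a TensorFlow/Keras workflow and a
# hypothetical dataset folder "signs/" with one sub-directory per gesture class.
import tensorflow as tf

NUM_CLASSES = 26          # hypothetical: one class per static sign
IMG_SIZE = (299, 299)     # Inception V3's expected input resolution

# Load preprocessed RGB gesture images and split them into training and
# validation subsets (20% held out for validation).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "signs/", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "signs/", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=32)

# Reuse Inception V3 pretrained on ImageNet as a frozen feature extractor
# and retrain only a new classification head for the gesture classes.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Retrain the head and report validation accuracy after each epoch.
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Freezing the pretrained backbone and training only the new softmax head is one common way to adapt Inception V3 to a small gesture dataset; whether the study also unfreezes deeper layers is not specified in the abstract.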
License
Copyright (c) 2024 International Research Journal on Advanced Engineering and Management (IRJAEM)
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.