An Adam based CNN and LSTM approach for sign language recognition in real time for deaf people

dc.contributor.author: Paul, Subrata Kumer
dc.contributor.author: Walid, Md. Abul Ala
dc.contributor.author: Paul, Rakhi Rani
dc.contributor.author: Uddin, Md. Jamal
dc.contributor.author: Rana, Md. Sohel
dc.contributor.author: Devnath, Maloy Kumar
dc.contributor.author: Dipu, Ishaat Rahman
dc.contributor.author: Haque, Md. Momenul
dc.date.accessioned: 2023-12-15T15:22:04Z
dc.date.available: 2023-12-15T15:22:04Z
dc.date.issued: 2024-02
dc.description.abstract: Hand gestures and sign language are crucial modes of communication for deaf individuals. Since most people do not understand sign language, communication between a deaf person and a hearing person is difficult. Advances in computer vision and deep learning can now help narrow this gap. This paper presents two deep learning approaches to sign language recognition that help hearing people understand sign language and thereby improve communication. Two separate datasets based on American Sign Language (ASL) have been constructed: the first contains 26 signs, and the second contains three important signs captured as sequences of frames (videos) used in everyday communication. The study examines three models: an improved ResNet-based convolutional neural network (CNN), a long short-term memory (LSTM) network, and a gated recurrent unit (GRU) network. The first dataset is used to fit and assess the CNN model; with the adaptive moment estimation (Adam) optimizer, the CNN achieves an accuracy of 89.07%. The second dataset is given to the LSTM and GRU models for comparison: the LSTM outperforms the GRU in every class, reaching 94.3% accuracy versus 79.3% for the GRU. The real-time performance of our preliminary models is also highlighted. (An illustrative sketch of both pipelines follows this record.)
dc.description.uri: https://beei.org/index.php/EEI/article/view/6059
dc.format.extent: 11 pages
dc.genre: journal articles
dc.identifier.citation: Paul, Subrata Kumer, Md Abul Ala Walid, Rakhi Rani Paul, Md Jamal Uddin, Md Sohel Rana, Maloy Kumar Devnath, Ishaat Rahman Dipu, and Md Momenul Haque. “An Adam Based CNN and LSTM Approach for Sign Language Recognition in Real Time for Deaf People.” Bulletin of Electrical Engineering and Informatics 13, no. 1 (February 1, 2024): 499–509. https://doi.org/10.11591/eei.v13i1.6059.
dc.identifier.uri: https://doi.org/10.11591/eei.v13i1.6059
dc.identifier.uri: http://hdl.handle.net/11603/31112
dc.language.iso: en_US
dc.publisher: IAES
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: CC BY-SA 4.0 DEED Attribution-ShareAlike 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-sa/4.0/
dc.title: An Adam based CNN and LSTM approach for sign language recognition in real time for deaf people
dc.type: Text
dcterms.creator: https://orcid.org/0009-0005-5590-1943
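
The abstract describes two pipelines: an Adam-trained CNN for the 26 static ASL signs and an LSTM (compared against a GRU) for the three sequential signs. The paper's own code is not part of this record, so the following is only a minimal TensorFlow/Keras sketch of how such models are typically assembled. A plain convolutional stack stands in for the authors' improved ResNet-based architecture, and every input shape, layer width, and feature size below is an illustrative assumption rather than a value from the paper.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_STATIC_SIGNS = 26       # dataset 1: 26 ASL signs (from the abstract)
NUM_SEQUENCE_SIGNS = 3      # dataset 2: 3 signs captured as frame sequences
FRAMES, FEATURES = 30, 128  # assumed sequence length and per-frame feature size

# Image classifier for static signs, trained with the Adam optimizer.
# (The paper uses an improved ResNet-based CNN; this plain stack is a stand-in.)
cnn = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # assumed input resolution
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_STATIC_SIGNS, activation="softmax"),
])
cnn.compile(optimizer=tf.keras.optimizers.Adam(),
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])

# Sequence classifier for the video dataset. Replacing layers.LSTM with
# layers.GRU (same widths) gives the GRU baseline used in the comparison.
rnn = models.Sequential([
    layers.Input(shape=(FRAMES, FEATURES)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_SEQUENCE_SIGNS, activation="softmax"),
])
rnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])

Each model would then be trained on its respective dataset with model.fit; using Adam for both compilations mirrors the abstract's emphasis on the Adam optimizer.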

Files

Original bundle

Name: 6059-19024-1-PB (1).pdf
Size: 772.53 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission