Please use this identifier to cite or link to this item: https://hdl.handle.net/1959.11/60967
Title: MultiModal Ensemble Approach Leveraging Spatial, Skeletal, and Edge Features for Enhanced Bangla Sign Language Recognition
Contributor(s): Shams, Khan Abrar (author); Rafid Reaz, Md (author); Ur Rafi, Mohammad Ryan (author); Islam, Sanjida (author); Shahriar Rahman, Md (author); Rahman, Rafeed (author); Tanzim Reza, Md (author); Parvez, Mohammad Zavid (author); Chakraborty, Subrata (author); Pradhan, Biswajeet (author); Alamri, Abdullah (author)
Publication Date: 2024-06-20
Open Access: Yes
DOI: 10.1109/ACCESS.2024.3410837
Handle Link: https://hdl.handle.net/1959.11/60967
Abstract: 

Sign language is the predominant mode of communication for individuals with auditory impairment. In Bangladesh, BdSL, or Bangla Sign Language, is widely used among the hearing-impaired population. However, because of the general public's limited awareness of sign language, communicating with them using BdSL can be challenging. Consequently, there is a growing demand for an automated system capable of efficiently understanding BdSL. For automation, various Deep Learning (DL) architectures can be employed to translate Bangla Sign Language into readable digital text. The automation system incorporates live cameras that continuously capture images, which a DL model then processes. However, factors such as lighting, background noise, skin tone, hand orientation, and other imaging conditions may introduce uncertainty. To address this, we propose a procedure that reduces these uncertainties by considering three modalities: spatial information, skeleton awareness, and edge awareness. We introduce three image pre-processing techniques alongside three CNN models. The CNN models are combined using nine distinct ensemble meta-learning algorithms, five of which are modifications of averaging and voting techniques. In the result analysis, our individual CNN models achieved training accuracies of 99.77%, 98.11%, and 99.30%, respectively, higher than most other state-of-the-art image classification architectures, except for ResNet50, which achieved 99.87%. Meanwhile, the ensemble model attained the highest accuracy of 95.13% on the testing set, outperforming all individual CNN models. This analysis demonstrates that considering multiple modalities can significantly improve the system's overall performance in hand pattern recognition.
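The averaging and voting ensemble techniques mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes each of the three modality-specific CNNs (spatial, skeleton, edge) outputs a per-class probability array, and the example inputs are invented for illustration.

```python
import numpy as np

def average_ensemble(probs_list):
    """Soft voting: average the per-class probabilities across models,
    then predict the class with the highest mean probability."""
    return np.mean(probs_list, axis=0).argmax(axis=1)

def majority_vote_ensemble(probs_list):
    """Hard voting: each model casts one vote (its argmax class) per
    sample; the most frequent class wins."""
    votes = np.stack([p.argmax(axis=1) for p in probs_list])  # (models, samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Hypothetical softmax outputs of the three modality models
# for 4 samples over 3 sign classes.
spatial  = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.6, 0.3, 0.1]])
skeleton = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.2, 0.5, 0.3], [0.5, 0.4, 0.1]])
edge     = np.array([[0.8, 0.1, 0.1], [0.3, 0.6, 0.1], [0.1, 0.2, 0.7], [0.4, 0.5, 0.1]])

print(average_ensemble([spatial, skeleton, edge]))       # [0 1 2 0]
print(majority_vote_ensemble([spatial, skeleton, edge])) # [0 1 2 0]
```

Soft voting uses the full probability distributions, so a confident model can outvote two uncertain ones; hard voting treats every model's opinion equally, which is more robust when one modality is unreliable under poor lighting or background noise.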

Publication Type: Journal Article
Source of Publication: IEEE Access, v.12, p. 83638-83657
Publisher: Institute of Electrical and Electronics Engineers
Place of Publication: United States of America
ISSN: 2169-3536
Fields of Research (FoR) 2020: 4601 Applied computing
Peer Reviewed: Yes
HERDC Category Description: C1 Refereed Article in a Scholarly Journal
Appears in Collections: Journal Article
School of Science and Technology

Files in This Item: 2 files
openpublished/MultiModalChakraborty2024JournalArticle.pdf (Published Version, 3.61 MB, Adobe PDF)

This item is licensed under a Creative Commons License.