Real-Time ASL Single-Handed Video Gesture Recognition: A Deep Neural Network Approach
Keywords:
American Sign Language, Deep Neural Network, Feature Extraction, Hand Gesture Recognition, Radon Features, Stacked Auto-Encoder, SURF, Zernike Moment

Abstract:
American Sign Language (ASL) is one of the most comprehensive sign languages in the world. This paper presents the transliteration of real-time, single-handed video hand gestures of ASL into humanoid/device-recognizable text (in English) using a Deep Neural Network (DNN). Real-time single-handed video gestures are captured through an ordinary web or mobile camera. The proposed work involves two phases: a cognition phase and a recognition phase. In the cognition phase, 15 single-handed video gestures spanning several invariants, such as signer, location, background, angle, and illumination, are trained using DNNs together with various feature extraction techniques. For recognition, 32 real-time single-handed ASL video gesture datasets are used; these are completely signer-independent and differ in all of the above invariants from the cognition phase. Basic pre-processing operations are carried out in both phases for efficient results. The proposed work achieves an average recognition success rate of 97.3%, an improvement over earlier research works.
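To make the summarized pipeline concrete, the sketch below illustrates one possible arrangement of its stages: per-frame pre-processing, hand-crafted feature extraction (Radon projections and Zernike moments, with SURF descriptors appended analogously via opencv-contrib), and a stacked-auto-encoder-style DNN classifier. This is a minimal, hypothetical illustration rather than the authors' implementation; the image size, angle grid, layer widths, and the choice of scikit-image, mahotas, and Keras are all assumptions.

```python
# Hypothetical sketch of the abstract's pipeline; not the paper's code.
import numpy as np
from skimage.transform import radon, resize
from mahotas.features import zernike_moments  # assumption: mahotas for Zernike moments
from tensorflow import keras

def extract_features(frame_gray: np.ndarray) -> np.ndarray:
    """Fixed-length feature vector from one grayscale video frame."""
    img = resize(frame_gray, (64, 64))                # normalize scale
    theta = np.linspace(0.0, 180.0, 18, endpoint=False)
    sinogram = radon(img, theta=theta, circle=False)  # Radon projections
    radon_feat = sinogram.mean(axis=0)                # one feature per angle
    zern_feat = zernike_moments(img, radius=32)       # rotation-invariant moments
    # SURF descriptors (opencv-contrib's cv2.xfeatures2d) could be concatenated here.
    return np.concatenate([radon_feat, zern_feat])

def build_classifier(input_dim: int, n_classes: int) -> keras.Model:
    """Encoder layers topped with a softmax classifier; greedy layer-wise
    auto-encoder pre-training is omitted here for brevity."""
    model = keras.Sequential([
        keras.layers.Input(shape=(input_dim,)),
        keras.layers.Dense(128, activation="relu"),   # encoder layer 1
        keras.layers.Dense(64, activation="relu"),    # encoder layer 2
        keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In such a design, the hand-crafted features compress each frame into a short, invariance-friendly vector, so the DNN on top can stay small; per-gesture classification over a video would then aggregate per-frame predictions, for example by majority vote.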