Volume 6, Issue 4, April – 2021 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Methodologies for Sign Language Recognition: A Survey
Ayushi N. Patani, Varun S. Gawande, Jash V. Gujarathi, Vedant K. Puranik, Tushar A. Rane
Department of Information Technology
Society for Computer Technology and Research's Pune Institute of Computer Technology
Affiliated to Savitribai Phule Pune University (formerly known as Pune University)
Survey No. 27, Dhankawadi, Pune - 411043, Maharashtra, India

Abstract:- Interpreting deaf-mute people has always been a problem for others, as they primarily rely on sign language for communication. Active participation of the deaf-mute community still remains at an elementary stage, despite multiple nations providing resources for it, such as the sign language interpreter who accompanies news broadcasts in New Zealand. Perturbing situations such as kidnapping, deception, fire breakouts or other moments of general agony can further exacerbate this communication barrier: mute people try their best to communicate, but the majority remains oblivious to their language. Bridging the gap between these two worlds is therefore of utmost necessity. This paper aims to briefly acquaint the reader with how sign language communication works, puts forward research conducted in this field on capturing and recognizing sign language, and attempts to suggest a systemized solution.

Keywords:- Hilbert Curve, Support Vector Machines, Random Forests, Artificial Neural Network, Feed-forward Backpropagation, Hough Transform, Convolutional Neural Networks, Stacked Denoising Autoencoders, Multilayer Perceptron Neural Network, Adaline Neural Network.

I. INTRODUCTION

According to statistics of the World Health Organization, there are 466 million hearing-disabled people and a million people who are speech impaired. This rounds up to over 5% of the world's population that cannot be communicated with using conventional speech-based approaches.

Sign languages have been the most widespread method of communicating with members of the deaf-mute community throughout history, even being mentioned by Socrates in Plato's Cratylus.

Multiple books and scholarly articles were written from the 16th to the 18th century in European countries with instructions on how to communicate with and teach deaf-mute people. These books formed the basis of multiple sign languages, like British Sign Language (BSL), French Sign Language (FSL), American Sign Language (ASL, based on the FSL), New Zealand Sign Language, and the sign languages used in Spain and Mexico. However, until the 19th century most of these sign languages were mainly based on fingerspelling systems that transferred spoken language to a sign language and vice versa. Sign languages have since evolved to develop more complex relations with the languages spoken in the land, hence developing multiple dialects and variations from country to country.

Some notes about sign language that readers should be aware of:
a.) Sign languages have an equally vibrant vocabulary as spoken languages and exhibit all fundamental structures that exist in spoken languages.
b.) Just as words in spoken languages have no onomatopoeic relation to the referent they describe, signs have no visual relevance to what they convey.
c.) Just as spoken languages use grammar to turn words into meaningful sentences, sign languages have semantics that organize elementary meaningless units into meaningful units/phrases.
d.) Unlike spoken languages, sign languages convey meaning simultaneously through the main articulators, i.e. the head and the hands.

Given the significant percentage of people that rely upon sign languages as their primary mode of communication, it is imperative for the wider public outside the deaf-mute communities to be aware of sign languages to at least some extent. However, hearing individuals have little incentive to learn even basic sign language. For example, in India there are only 250 certified sign language interpreters translating for a community of up to 7 million people. The current situation creates an overwhelmingly exclusionary society for the deaf-mute community. Given the increasing importance of communication skills in the workforce, the deaf-mute community is presented with a very high barrier of entry to participating in society as fully functioning members. These people depend on the few people close to them who have taken the effort to understand and converse with them in order to interact with society.

It has been increasingly evident that a technological solution is needed to bridge the communication gap that exists between the members of the speech-impaired community and society, as this community is most
vulnerable to being left behind in the technical revolution of the past few decades.

Section II highlights the different approaches that can be adopted for capturing videos and/or images for interpretation. Sections III and IV discuss the literature survey and the proposed methodology respectively. The paper is concluded in Section V.

II. APPROACHES

Camera-based image or video capturing has been one of the most widely implemented and effective methods used in sign language interpretation systems. Using this technique, researchers have successfully interpreted sign language by capturing gestures of one hand or both hands, in static or dynamic images. The signs can therefore be either isolated or continuous. Videos are first captured and broken down into frames of images that can then be passed to the system for further analysis and interpretation. Hence, overall, a stream of images is passed to the system, after which different techniques, as per the application, are applied to obtain results.
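
As a concrete illustration of this frame-splitting step, the short Python sketch below (our addition, not from any surveyed paper) uses OpenCV to turn a captured video into the stream of images described above; the file name and sampling rate are assumptions.

import cv2

def video_to_frames(path, step=5):
    # Read a video file and return every `step`-th frame as an image.
    capture = cv2.VideoCapture(path)
    frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:              # end of the stream
            break
        if index % step == 0:   # subsample: consecutive frames are near-duplicates
            frames.append(frame)
        index += 1
    capture.release()
    return frames

frames = video_to_frames("sign_clip.mp4")   # hypothetical input video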

Using Kinect is another approach that has started receiving recognition from the research community. Microsoft Kinect is a motion camera device that captures users' movements in real time, and has primarily been used for gaming in the recent past. It [1] provides a significant advantage over the camera-based approach, as it is not restricted to 2D image/video capturing but can also effectively capture depth information. However, maintenance and overall costs pose a higher overhead than the camera method, so it is not commonly adopted for commercial purposes.

The armband is a technique that depends on electromyography (EMG) signals. These signals are generated in our muscles whenever there is any movement. The data is collected [2] from the signer's arm through a band in the form of signals and then processed to interpret sign language. One of the greatest advantages this method assures over camera- and Kinect-based methods is zero dependency on light. However, to detect signals effectively, many wires need to be connected to the band; portability is also an issue and proves to be a setback compared to the former two approaches.

A glove can also be used, which primarily relies on the path-breaking innovation [3] from 1993 called the Cyber Glove. To gather data, signers wear this glove, which comes with a number of sensors attached for each finger. A motion tracker [4] is also employed along with the glove to track the orientation and position of the hands, and is connected to a computer via serial ports. It provides an easy way of detecting sign language; however, a lot of equipment needs to be appropriately set up and configured for use. This is not feasible in real-world situations such as on roads, on ships, in shopping centers, etc. Moreover, it is unable to capture facial features and symbols, which camera-based systems can easily do.

The leap motion technique [5] makes use of a system based on a cost-effective sensor called the Leap Motion Controller. Information about hand and finger movements is captured by this sensor via APIs designed for the same, with the movements performed a few feet above the horizontally positioned sensor. This data is then sent to a computer via USB. This approach is a cheaper solution than the glove- and Kinect-based approaches, but still faces the same challenges as those two.

Brain-Computer Interfacing is an advanced approach to identifying sign language. Electroencephalogram (EEG) [6] brain activity is recorded for the recognition of sign language. This approach goes one step further by completely eliminating the need for any physical movement to detect sign language. Here, brain waves are used and transmitted directly to a computer via Bluetooth. Other techniques like functional magnetic resonance imaging (fMRI) [7] and electrocorticography [8] are used in a similar fashion. They face a major problem of implementation complexity and still rely on devices connected to the head to detect signals.

III. LITERATURE SURVEY

Q. Munib et al. developed a system [9] for automatically translating static gestures of American Sign Language (ASL). To facilitate natural interaction with the system, they performed recognition on hand images using neural networks and the Hough transform. The vector representation of each image was compared with the training set. Transformations such as shearing, rotation, and scaling added small amounts of noise to the model, making it robust to the variations inherent in real-life input and fostering flexibility. The system was implemented and tested against 300 samples of hand gestures, with 15 images for each sign, and an accuracy of 92.33% was achieved.
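
The authors of [9] do not publish code, but their Hough-based feature extraction can be sketched roughly as follows; the edge thresholds and the angle-histogram binning are our assumptions, not their exact design.

import cv2
import numpy as np

def hough_features(gray, n_bins=32):
    # Edge map of the segmented hand image.
    edges = cv2.Canny(gray, 50, 150)
    # Detect straight-line structure with the standard Hough transform.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=60)
    feature = np.zeros(n_bins)
    if lines is not None:
        for rho, theta in lines[:, 0]:
            feature[int(theta / np.pi * n_bins) % n_bins] += 1
    # Normalized histogram of line orientations, usable as a network input.
    return feature / max(feature.sum(), 1.0)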

Hardik Rewari et al. worked on directly processing video input to generate the relevant audio output. Their sign language interpreter [10] worked on Indian Sign Language (ISL) to aid deaf and mute Indian people. They harnessed the hardware capability of the system using components like the MPU6050, flex sensors, and the HC-05, and worked on 90 words from the ISL.

Microsoft Kinect was used by Rajaganapathy S. et al. [11], who relied on motion capture and gesture interpretation to recognize sign language and subsequently convert it to audio. The device captured 20 human joints and gestures. It kept track of the human gestures, and the data was eventually matched against user-defined gestures to yield an outcome. The range of motion they could identify was from 40 centimeters to 4 meters, and gestures of at most 2 people at a time could be identified. Accuracy of up to 90% was achieved in this process.

Sarabjeet Kaur et al. provided a solution [12] to interpret Indian Sign Language (ISL) which involves alphabet
recognition. An image of hand gestures is captured, processed, and converted to an eigenvector. These eigenvectors are then compared with those of the training set of signs. MATLAB code is used for feature extraction in the form of eigenvectors. A dataset of 650 samples of hand signs was used for implementation and testing, with 25 images for each sign. Almost 100% accuracy was obtained in this experiment.
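
The eigenvector matching of [12] was done in MATLAB; a rough Python equivalent using PCA and nearest-neighbour comparison is sketched below. The image size, component count, and placeholder data are assumptions.

import numpy as np
from sklearn.decomposition import PCA

# 650 flattened hand-sign images: 26 signs x 25 images each (shapes assumed).
train_images = np.random.rand(650, 64 * 64)        # placeholder pixel data
train_labels = np.repeat(np.arange(26), 25)

pca = PCA(n_components=50).fit(train_images)       # learn the eigenvectors
train_proj = pca.transform(train_images)

def classify(test_image):
    # Project the test image onto the eigenvectors and return the label
    # of the nearest training sample in the reduced space.
    test_proj = pca.transform(test_image.reshape(1, -1))
    return train_labels[np.argmin(np.linalg.norm(train_proj - test_proj, axis=1))]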

Amira Ragab et al. explored a new method [13] for the representation of hand-based images, based on the Hilbert space-filling curve. After segmentation of the hands, the Hilbert space-filling curve was applied for feature vector extraction. These gestures were then classified using classifiers such as Random Forests and Support Vector Machines. The accuracy was 99% for images with uniform backgrounds but fell to 69% when noisy, non-uniform backgrounds were introduced.
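
To make the Hilbert-curve idea concrete, the sketch below (our reconstruction, not the authors' code) flattens a square hand mask along a Hilbert curve so that spatially nearby pixels stay nearby in the feature vector; the resulting vectors can then be fed to a Random Forest or SVM as in [13].

import numpy as np

def hilbert_d2xy(n, d):
    # Map a 1D Hilbert index d to (x, y) on an n x n grid (n a power of two),
    # using the standard iterative construction.
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                              # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_features(image):
    # Read a square image along the Hilbert curve into a 1D feature vector.
    n = image.shape[0]
    return np.array([image[y, x] for x, y in
                     (hilbert_d2xy(n, d) for d in range(n * n))])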
method offers over other techniques.
W. Tangsuksant, S. Adhan and C. Pintavirooj [14] discussed the following procedure: one or more Standard Definition (SD) cameras captured the subject, and the DLT algorithm was used to extract 3D marker coordinates. They then used the 3D coordinate triplets to compute triangular area patches. For training the model, an Artificial Neural Network was used with feed-forward backpropagation training. The training process used around 2,100 images, and the average accuracy of the algorithm turned out to be 95%.
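
The triangular area patch of [14] reduces to elementary geometry: the area of the triangle spanned by a triplet of 3D marker coordinates, computed from the cross product. A minimal worked sketch:

import numpy as np

def triangle_area(p1, p2, p3):
    # Half the magnitude of the cross product of two edge vectors.
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))

# Example with three hypothetical marker positions (arbitrary units):
a, b, c = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
print(triangle_area(a, b, c))   # 0.5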

Md. Mohiminul Islam et al. proposed a real-time hand gesture recognition system [15] that worked on an American Sign Language dataset. It achieves higher accuracy through a novel approach in the feature extraction step that combines the K-curvature and convex hull algorithms, allowing better detection of fingertips in sign language gestures. This allows their artificial neural network to recognize 37 signs of the ASL with 94.32% accuracy.
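
A simplified sketch of convex-hull-based fingertip counting in the spirit of [15] is given below; the K-curvature refinement is omitted, and the defect-depth threshold is our assumption.

import cv2

def count_fingers(mask):
    # mask: binary image of the segmented hand.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # largest blob = the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep convexity defects are the valleys between extended fingers.
    valleys = sum(1 for i in range(defects.shape[0])
                  if defects[i, 0, 3] / 256.0 > 20)  # depth threshold (assumed)
    return valleys + 1 if valleys else 0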

In contrast to earlier research, which focuses on identifying American Sign Language hand gestures from sets of fairly distinguishable gestures that make classification easier and seem more robust, Oyebade K. Oyedotun and Adnan Khashman [16] work on distinguishing 24 signs that are modelled into sets of relatively similar gestures. By deploying Convolutional Neural Networks and stacked denoising autoencoders, they achieve an accuracy of 92.8% on test data that the model has not been trained on. They further opine that the problems of using CNNs at increasing depth can be overcome by using rectified linear activations in the hidden layers, thereby controlling the effects of neuron saturation and vanishing gradients.
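
A minimal PyTorch sketch of a CNN with rectified linear hidden activations, the remedy suggested in [16]; the layer sizes and the 32x32 grayscale input are illustrative, not the authors' architecture.

import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # grayscale hand image in
    nn.ReLU(),                                    # rectified linear activation
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 24),                    # 24 relatively similar signs
)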

A Multilayer Perceptron neural network was also used by Karayılan and Kılıç [17] on the Marcel Static Hand Posture Database, an American Sign Language dataset. They used the camera-based approach and successfully extracted histogram and raw features from their data. These raw and histogram features were used with two different classifiers, which obtained 70% and 85% accuracy respectively.
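
The two-classifier comparison in [17] can be sketched with scikit-learn as below; the hidden-layer size and histogram bin count are assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

def histogram_features(images, n_bins=64):
    # One normalized grayscale-intensity histogram per image.
    return np.stack([np.histogram(im, bins=n_bins, range=(0, 255))[0] / im.size
                     for im in images])

# Raw pixels vs. histogram features, same MLP architecture for both:
# clf_raw  = MLPClassifier(hidden_layer_sizes=(128,)).fit(raw_vectors, labels)
# clf_hist = MLPClassifier(hidden_layer_sizes=(128,)).fit(histogram_features(images), labels)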

Saha et al. [18] proposed a novel approach to sign language recognition using a modification of the traditional Adaline neural network. They performed recognition on English alphabets. Adaline networks are not capable of non-linear classification on their own, so they were extended by using multiple Adaline neurons. Classification was done using a voting technique, finally achieving an accuracy of about 94% with an F-score of almost 90.
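
A toy sketch of the MAdaline idea in [18]: several Adaline units trained with the Widrow-Hoff rule and combined by voting. The training details here are simplified; in practice each unit would see a different part of the feature space.

import numpy as np

class Adaline:
    def __init__(self, n_features, lr=0.01, epochs=100):
        self.w = np.zeros(n_features + 1)            # weights plus bias
        self.lr, self.epochs = lr, epochs

    def fit(self, X, y):                             # labels y in {-1, +1}
        Xb = np.hstack([X, np.ones((len(X), 1))])
        for _ in range(self.epochs):
            self.w += self.lr * Xb.T @ (y - Xb @ self.w) / len(X)  # LMS update
        return self

    def predict(self, X):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return np.where(Xb @ self.w >= 0, 1, -1)

def madaline_vote(units, X):
    # Majority vote over the individual Adaline decisions.
    return np.where(np.sum([u.predict(X) for u in units], axis=0) >= 0, 1, -1)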


IV. PROPOSED METHODOLOGY

Our project aims to capture sign language performed by signers in real time and interpret it to produce textual and audio output, the audio supporting users who cannot read. For this, a camera-based approach will be used, owing to the ease of portability and movement that it offers over the other techniques.

The video of the signer will first be captured by a camera-enabled device. This video will then be processed by our application: it will be divided into a number of frames, converting it into a raw image sequence. This image sequence will then be processed to identify boundaries, which will be useful for separating the body parts captured by the camera into two major subparts - head and hands.

The head subpart will be further categorized into pose and movements as well as facial expressions. Postures and gestures will be extracted from the movement of the hands. All of the data will then be matched against the WLASL dataset, which will be used for classification purposes. The classification will result in the generation of words.

Words generated from sign language will not adhere to the grammatical rules of English. Hence, semantically correct sentences will be generated by the sentence generation module. For this, Google's T5 model [19] will be put to use. Finally, this output will be sent through an audio generator to produce speech. This provides support for illiterate people, who cannot understand written text.
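
A hedged sketch of this sentence-generation step with Hugging Face Transformers follows. The paper only cites T5 [19], so the checkpoint name, the prompt format, and the assumption that the model has been fine-tuned for gloss-to-sentence generation are all ours.

from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")            # placeholder checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small") # would need fine-tuning

glosses = "STORE I GO WANT"                  # raw words emitted by the classifier
inputs = tokenizer("make grammatical: " + glosses, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))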

V. CONCLUSION

In this survey paper, we go over a wide variety of camera-based implementations for one hand, two hands, and static and dynamic images. We also go over other novel approaches such as Microsoft Kinect, electromyography (EMG) signals from armbands, gloves, motion trackers, the cost-effective Leap Motion Controller, and much more. As most people do not know sign language, we believe that our research could pave the way for making society more inclusive towards the historically isolated and disenfranchised speech-impaired people. Applications of this research at scale would provide a simple, seamless, and highly available means for them to communicate with other members of society.

REFERENCES

[1]. Sun C, Zhang T, Bao BK, Xu C (2013) Latent support vector machine for sign language recognition with Kinect. In: 20th IEEE International Conference on Image Processing (ICIP), pp 4190-4194.
[2]. Savur C, Sahin F (2016) American Sign Language recognition system by using surface EMG signal. In: IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp 002872-002877.
[3]. Fels SS, Hinton GE (1993) Glove-Talk: a neural network interface between a data-glove and a speech synthesizer. IEEE Transactions on Neural Networks 4(1):2-8.
[4]. Oz C, Leu MC (2011) American Sign Language word recognition with a sensory glove using artificial neural networks. Eng Appl Artif Intell 24(7):1204-1213.
[5]. Chuan CH, Regina E, Guardino C (2014) American Sign Language recognition using leap motion sensor. In: 13th IEEE International Conference on Machine Learning and Applications (ICMLA), pp 541-544.
[6]. AlQattan D, Sepulveda F (2017) Towards sign language recognition using EEG-based motor imagery brain computer interface. In: 5th IEEE International Winter Conference on Brain-Computer Interface (BCI), pp 5-8.
[7]. Mehta NA, Starner T, Jackson MM, Babalola KO, James GA (2010) Recognizing sign language from brain imaging. In: 20th International Conference on Pattern Recognition (ICPR), pp 3842-3845.
[8]. Bleichner MG, Ramsey NF (2014) Give me a sign: studies on the decodability of hand gestures using activity of the sensorimotor cortex as a potential control signal for implanted brain computer interfaces. In: Guger C, Vaughan T, Allison B (eds) Brain-Computer Interface Research. Springer International Publishing, pp 7-17.
[9]. Munib Q, Habeeb M, Takruri B, Al-Malik H (2007) American Sign Language (ASL) recognition based on Hough transform and neural networks. Expert Systems with Applications 32:24-37. doi:10.1016/j.eswa.2005.11.018.
[10]. Rewari H, Dixit V, Batra D, Nagaraja H (2018) Automated sign language interpreter. pp 1-5. doi:10.1109/IC3.2018.8530658.
[11]. Rajaganapathy S, Aravind B, Keerthana B, Sivagami M (2015) Conversation of sign language to speech with human gestures. Procedia Computer Science 50. doi:10.1016/j.procs.2015.04.004.
[12]. Kaur S, Banga V (2014) Vision based static hand pose, hand movement recognition system for sign language using eigenvector theory in MATLAB. viXra.
[13]. Ragab A, Ahmed M, Chau SC (2013) Sign language recognition using Hilbert curve features. 7950:143-151. doi:10.1007/978-3-642-39094-4_17.
[14]. Tangsuksant W, Adhan S, Pintavirooj C (2014) American Sign Language recognition by using 3D geometric invariant feature and ANN classification. pp 1-5. doi:10.1109/BMEiCON.2014.7017372.
[15]. Islam MM, Siddiqua S, Afnan J (2017) Real time hand gesture recognition using different algorithms based on American Sign Language. In: 2017 IEEE International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Dhaka, pp 1-6. doi:10.1109/ICIVPR.2017.7890854.
[16]. Oyedotun OK, Khashman A (2017) Deep learning in vision-based static hand gesture recognition. Neural Computing and Applications 28. doi:10.1007/s00521-016-2294-8.
[17]. Karayılan T, Kılıç Ö (2017) Sign language recognition. In: IEEE International Conference on Computer Science and Engineering (UBMK), pp 1122-1126.
[18]. Saha S, Lahiri R, Konar A, Nagar AK (2016) A novel approach to American Sign Language recognition using MAdaline neural network. In: IEEE Symposium Series on Computational Intelligence (SSCI), pp 1-6.
[19]. Roberts A, Raffel C (2020) Exploring transfer learning with T5: the Text-to-Text Transfer Transformer. Google AI Blog, 24 Feb 2020. https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html
