SILENT CONVERSATION: REFLECTION OF A SIGN LANGUAGE INTERPRETER

Mangalambigai. D
Assistant Professor, Dept. of Computer Science and Engineering
Kings College of Engineering, Punalkulam, Thanjavur, Tamilnadu, India

Deepak Kumar. D, Krishnakumar. G, Rajkumar. K, Suriyaprakash. M
IV Year, Dept. of Computer Science and Engineering
Kings College of Engineering, Punalkulam, Thanjavur, Tamilnadu, India

Abstract: Sign language is the only tool of communication for people who are unable to speak or hear. This project aims to develop a sign language interpreter system using machine learning techniques. The system captures hand gestures and movements through a camera and translates them into text. Machine learning algorithms are used to recognize and classify the hand gestures and movements, which are then mapped to corresponding words or phrases. The project involves collecting and annotating a large dataset of sign language gestures, training and fine-tuning a deep learning model, and building a user-friendly interface for the interpreter system.

Keywords: Machine Learning, Hand Sign Recognition, Image Processing

I. INTRODUCTION
Sign language is the primary means of communication for the deaf and hard of hearing. However, communication between deaf and hearing people can be a challenge, as hearing people often have difficulty understanding sign language. Hence there is a need for a system that recognizes the different signs and gestures and conveys the information to hearing people. To address this issue, we propose a sign language interpreter using machine learning techniques. The system uses a camera to capture sign language gestures and translates them into spoken or written language for the hearing person. The project has the potential to improve communication and accessibility for the deaf and hard of hearing and to reduce communication barriers.

II. BACKGROUND
For a sign language interpreter project, the ideal background is a plain, solid-colored background with good lighting. A solid-colored background helps to reduce distractions and makes it easier for the computer vision system to detect the hands and facial expressions of the signer. Good lighting is also important, to ensure that the colors of the signer's hands and face are accurately captured by the camera. A green screen can also be used, which is useful for removing the background entirely and replacing it with a virtual one; however, this is more complicated to set up and may require additional software or hardware. It is important to avoid busy or cluttered backgrounds, as well as backgrounds with patterns or colors similar to the signer's clothing, since these make it more difficult for the computer vision system to accurately detect and track the signer's hands and facial expressions.
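As an illustration of the green-screen option described above, the following is a minimal chroma-key sketch using OpenCV and NumPy. The file names and the HSV green range are assumptions chosen for demonstration, not values from the project, and the bounds usually need tuning for the actual lighting.

import cv2
import numpy as np

# Hypothetical inputs: a frame of the signer in front of a green screen,
# and a virtual background resized to match.
frame = cv2.imread("signer_green_screen.jpg")
background = cv2.imread("virtual_background.jpg")
background = cv2.resize(background, (frame.shape[1], frame.shape[0]))

# Mask the green pixels in HSV space.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
green_mask = cv2.inRange(hsv, np.array([35, 60, 60]), np.array([85, 255, 255]))

# Keep the signer where the mask is empty, the new background where it is green.
signer = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(green_mask))
replaced = cv2.bitwise_and(background, background, mask=green_mask)
composite = cv2.add(signer, replaced)

cv2.imwrite("composite.jpg", composite)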
III. PROPOSED SYSTEM
Sign Language Recognition (SLR) systems have been studied widely for years. This project analyzes and compares the methods employed in SLR systems, particularly the classification methods that have been used, and suggests the most promising method for future research. Due to recent advances in classification methods, many recently proposed works contribute mainly to the classification stage, using techniques such as hybrid methods and deep learning. This paper focuses on the classification methods used in prior sign language recognition systems. Based on our review, HMM-based approaches, including their modifications, have been explored extensively in prior research. Our proposed system is a sign language recognition system using convolutional neural networks, which recognizes various hand gestures by capturing video and converting it into frames. The hand pixels are then segmented, and the resulting image is sent for comparison to the trained model. Thus our system is more robust in producing exact text labels of letters.
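The following is a minimal sketch of the frame-capture, segmentation, and classification pipeline just described. The model file name (sign_model.h5), the 64x64 input size, the A-Z label set, and the simple HSV skin-color mask are illustrative assumptions, not the project's actual artifacts.

import cv2
import numpy as np
import string
import tensorflow as tf

# Hypothetical trained model and label set (A-Z), used here for illustration.
model = tf.keras.models.load_model("sign_model.h5")
labels = list(string.ascii_uppercase)

def segment_hand(frame):
    """Segment hand pixels with a simple HSV skin-color mask (illustrative)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([25, 180, 255]))
    return cv2.bitwise_and(frame, frame, mask=mask)

cap = cv2.VideoCapture(0)  # capture video and convert it into frames
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    hand = segment_hand(frame)
    # Resize to the (assumed) 64x64 input size and normalize to [0, 1].
    x = cv2.resize(hand, (64, 64)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    letter = labels[int(np.argmax(probs))]
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("Silent Conversation", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()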
IV. METHODOLOGY
The methodology for building a sign language interpreter typically involves several steps (a minimal sketch of the training and validation steps follows this list):

1) Data Collection: The first step is to collect a large dataset of sign language gestures. This may involve recording videos of people performing different signs, or using pre-existing datasets.

2) Data Preprocessing: The collected data needs to be preprocessed to extract the key features that will be used for recognition. This may involve techniques like segmentation to isolate the hand and arm, and normalization to ensure consistency across different recordings.

3) Feature Extraction: Once the data has been preprocessed, the next step is to extract the relevant features that will be used for recognition. These may include factors like hand shape, orientation, and movement.

4) Training a Model: With the features extracted, the next step is to train a machine learning model to recognize different signs. This may involve techniques like support vector machines, neural networks, or decision trees.

5) Testing and Validation: Once the model has been trained, it needs to be tested and validated on a separate dataset to ensure that it is accurate and reliable. This may involve techniques like cross-validation or holdout testing.

6) Deployment: Finally, once the model has been validated, it can be deployed in the real world. This may involve integrating it into a larger application or system, such as a mobile app or website.
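As a minimal sketch of steps 4 and 5, the following trains a support vector machine on pre-extracted feature vectors and evaluates it with cross-validation and a holdout split. The feature array X and label array y would come from the preprocessing stage; they are generated randomly here only so the example runs, and the feature dimension (21 landmarks x 2 coordinates) is an assumption.

import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

# Stand-in features: 500 gesture samples, 42 features each, 26 letter classes.
# Real extracted features replace this block.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 42))
y = rng.integers(0, 26, size=500)

# Holdout testing: keep a separate test set untouched during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1.0)

# Cross-validation on the training portion estimates generalization.
scores = cross_val_score(clf, X_train, y_train, cv=5)
print("cross-validation accuracy:", scores.mean())

clf.fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))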

Overall, building a sign language interpreter requires a combination of technical skills in areas like computer vision, machine learning, and data science, as well as an understanding of the linguistic and cultural aspects of sign language.
V. MODULES

a) TensorFlow
TensorFlow is an open-source, end-to-end platform for creating machine learning applications. It is a symbolic math library that uses dataflow and differentiable programming to perform various tasks focused on training and inference of deep neural networks. It allows developers to create machine learning applications using various tools, libraries, and community resources. Google's TensorFlow is currently the most widely used deep learning library in the world; Google uses machine learning across its products to improve the search engine, translation, image captioning, and recommendations.
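To make the training step concrete, here is a minimal TensorFlow/Keras sketch of a small CNN for classifying gesture frames. The 64x64 input size, layer sizes, and 26-letter output are illustrative assumptions rather than the project's published architecture.

import tensorflow as tf

# A small illustrative CNN: 64x64 RGB frames in, 26 letter classes out.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(26, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then be model.fit(train_images, train_labels, epochs=10),
# followed by model.save("sign_model.h5") for use by the interpreter.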
b) MediaPipe
MediaPipe is a framework for building machine learning pipelines for processing time-series data such as video and audio. This cross-platform framework works on desktop/server, Android, iOS, and embedded devices like the Raspberry Pi and Jetson Nano. The MediaPipe perception pipeline is called a Graph. Take the example of the first solution, Hands: we feed in a stream of images as input, and it comes out with the hand landmarks rendered on the images.
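Below is a minimal sketch of that Hands pipeline, reading webcam frames and rendering the detected hand landmarks. The single-hand limit and the 0.5 confidence threshold are illustrative choices.

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

# Feed a stream of images through the Hands graph and draw its landmarks.
cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for landmarks in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(frame, landmarks, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("MediaPipe Hands", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()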

c) OpenCV
OpenCV is a huge open-source library for computer vision, machine learning, and image processing, and it now plays a major role in real-time operation, which is very important in today's systems. Using it, one can process images and videos to identify objects, faces, or even human handwriting. When it is integrated with libraries such as NumPy, Python can process the OpenCV array structure for analysis. To identify an image pattern and its various features, we use vector space and perform mathematical operations on these features.
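As a small illustration of the OpenCV/NumPy interplay described above, this sketch thresholds a frame and finds the bounding box of the largest contour, a common first step when isolating a hand region. The file names and threshold value are assumptions.

import cv2
import numpy as np

# Hypothetical input frame; any BGR image works.
frame = cv2.imread("gesture_frame.jpg")

# OpenCV images are NumPy arrays, so NumPy operations apply directly.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
print("mean intensity:", np.mean(gray))

# Threshold and locate the largest contour (assumed to be the hand).
_, binary = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(hand)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("hand_region.jpg", frame)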
d) NumPy
NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more.
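As one concrete use in this project, gesture features such as hand landmark coordinates can be normalized with a few NumPy array operations. This sketch uses made-up coordinates; it centers 21 landmarks on the wrist and scales them so that position and hand size in the frame do not matter.

import numpy as np

# Made-up (x, y) coordinates for 21 hand landmarks, as a hand tracker
# like MediaPipe would supply.
rng = np.random.default_rng(1)
landmarks = rng.uniform(0.2, 0.8, size=(21, 2))

# Center on the wrist (landmark 0) to remove the hand's position in the frame.
centered = landmarks - landmarks[0]

# Scale by the farthest landmark distance to remove the hand's size
# (guarding against division by zero for degenerate input).
scale = max(np.max(np.linalg.norm(centered, axis=1)), 1e-6)
normalized = centered / scale

# Flatten to a 42-element feature vector for a classifier.
features = normalized.flatten()
print(features.shape)  # (42,)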

VI. CONCLUSION
Our project aims to bridge the gap by introducing an inexpensive computer into the communication path, so that sign language can be automatically captured, recognized, and translated to speech for the benefit of deaf and mute people. In the other direction, speech must be analyzed and converted to either a sign or a textual display on the screen for the benefit of the hearing impaired.

REFERENCES
1. Rafiqul Zaman Khan and Noor Adnan Ibraheem, "Hand gesture recognition: a literature review", International Journal of Artificial Intelligence & Applications, vol. 3, no. 4, p. 161, 2012.

2. Zhi-Hua Chen, Jung-Tae Kim, Jianning Liang, Jing Zhang and Yu-Bo Yuan, "Real-time hand gesture recognition using finger segmentation", The Scientific World Journal, vol. 2014, 2014.

3. Mokhtar M. Hasan and Pramod K. Mishra, "Features fitting using multivariate gaussian distribution for hand gesture recognition", International Journal of Computer Science & Emerging Technologies (IJCSET), vol. 3, no. 2, 2012.

4. Nasser H. Dardas and Nicolas D. Georganas, "Real-time hand gesture detection and recognition using bag-of-features and support vector machine techniques", IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 11, pp. 3592-3607, 2011.

5. Ayman El-Sawah, Nicolas D. Georganas and Emil M. Petriu, "A prototype for 3-D hand tracking and posture estimation", IEEE Transactions on Instrumentation and Measurement, vol. 57, no. 8, pp. 1627-1636, 2008.

6. J. David Rios-Soria, Satu E. Schaeffer and Sara E. Garza-Villarreal, "Hand-gesture recognition using computer-vision techniques", 2013.

7. Youngwook Kim and Brian Toomajian, "Hand gesture recognition using micro-Doppler signatures with convolutional neural network", IEEE Access, vol. 4, pp. 7125-7130, 2016.

8. Gongfa Li et al., "Hand gesture recognition based on convolution neural network", Cluster Computing, pp. 2719-2729, 2017.

9. Pavlo Molchanov et al., "Hand gesture recognition with 3D convolutional neural networks", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1-7, 2015.

10. Anita Chaudhary and Sonit Sukhraj Singh, "Lung cancer detection on CT images by using image processing", 2012 International Conference on Computing Sciences, pp. 142-146, 2012.

11. Siddharth S. Rautaray and Anupam Agrawal, "Interaction with Virtual Game through Hand Gesture Recognition", International Conference on Multimedia Signal Processing and Communication Technologies, 2011.

12. V. M. Sethu Janaki, Satish Babu and S. S. Sreekanth, "Real Time Recognition of 3D Gestures in Mobile Devices", Recent Advances in Intelligent Computational Systems (RAICS), 2013.
