Silent Conversation 1
SIGN LANGUAGE INTERPRETER
Mangalambigai. D
Assistant Professor
Dept of Computer Science and Engineering
Kings College of Engineering
Punalkulam, Thanjavur, Tamilnadu, India
Abstract: Sign language is the only tool of communication for people who are unable to speak or hear. This project aims to develop a sign language interpreter system using machine learning techniques. The system will capture hand gestures and movements through a camera and translate them into text. Machine learning algorithms are used to recognize and classify the hand gestures and movements, which are then mapped to corresponding words or phrases. The project involves collecting and annotating a large dataset of sign language gestures, training and fine-tuning a deep learning model, and building a user-friendly interface for the interpreter system.

Keywords: Machine Learning, Hand Sign Recognition, Image Processing

I. INTRODUCTION

Sign language is the primary means of communication for the deaf and hard of hearing. However, communication between deaf and hearing people can be a challenge, because hearing people often have difficulty understanding sign language. Hence there is a need for a system that recognizes the different signs and gestures and conveys the information to hearing people. To address this issue, we propose a Sign Language Interpreter using machine learning techniques. The system uses a camera to capture sign language gestures and translates them into spoken or written language for the hearing person. The project has the potential to improve communication and accessibility for the deaf and hard of hearing and to reduce communication barriers.

II. BACKGROUND

For a sign language interpreter project, the ideal background is a plain, solid-colored one with good lighting. A solid-colored background helps reduce distractions and makes it easier for the computer vision system to detect the hands and facial expressions of the signer. Good lighting is also important to ensure that the colors of the signer's hands and face are accurately captured by the camera. A green screen can also be used; it allows the background to be removed entirely and replaced with a virtual one, but it is more complicated to set up and may require additional software or hardware. It is important to avoid busy or cluttered backgrounds, as well as backgrounds with patterns or colors similar to the signer's clothing, since these make it more difficult for the computer vision system to accurately detect and track the signer's hands and facial expressions.

III. PROPOSED SYSTEM

Sign Language Recognition (SLR) systems have been widely studied for years. This project analyzes and compares the methods employed in SLR systems, reviews the classification methods that have been used, and suggests the most promising direction for future research. Owing to recent advances in classification, many recently proposed works contribute mainly to the classification stage, using approaches such as hybrid methods and deep learning. This paper focuses on the classification methods used in prior sign language recognition systems; our review shows that HMM-based approaches, including their many modifications, have been explored extensively. Our proposed system is a sign language recognition system based on convolutional neural networks. It recognizes various hand gestures by capturing video and converting it into frames. The hand pixels are then segmented, and the resulting image is passed to the trained model for classification. As a result, our system is more robust in producing exact text labels for letters.

IV. METHODOLOGY

The methodology for building a sign language interpreter typically involves several steps:

1) Data Collection: The first step is to collect a large dataset of sign language gestures. This may involve recording videos of people performing different signs, or using pre-existing datasets.

2) Data Preprocessing: The collected data needs to be preprocessed to extract the key features that will be used for recognition. This may involve techniques like segmentation to isolate the hand and arm, and normalization to ensure consistency across different recordings.

3) Feature Extraction: Once the data has been preprocessed, the next step is to extract the relevant features that will be used for recognition. These may include hand shape, orientation, and movement.

4) Training a Model: With the features extracted, the next step is to train a machine learning model to recognize different signs. This may involve techniques like support vector machines, neural networks, or decision trees.

5) Testing and Validation: Once the model has been trained, it needs to be tested and validated on a separate dataset to ensure that it is accurate and reliable. This may involve techniques like cross-validation or holdout testing.
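The hand segmentation used in the preprocessing step can be sketched as a simple color-threshold mask. This is a minimal NumPy illustration on a synthetic frame: the `segment_hand` helper and its RGB thresholds are hypothetical, and a real system would tune them (or work in a color space such as HSV or YCrCb) for actual footage.

```python
import numpy as np

def segment_hand(frame_rgb, lower=(95, 40, 20), upper=(255, 230, 200)):
    """Return a binary mask of pixels whose RGB values fall inside a
    rough skin-color range. Thresholds are illustrative only."""
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    mask = np.all((frame_rgb >= lower) & (frame_rgb <= upper), axis=-1)
    return mask.astype(np.uint8)

# Synthetic 8x8 "frame": a skin-colored square on a plain blue background,
# mimicking the solid-colored background recommended in Section II.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[...] = (30, 30, 200)            # solid background
frame[2:6, 2:6] = (180, 120, 90)      # hand-colored region
mask = segment_hand(frame)
print(mask.sum())  # 16 pixels flagged as hand
```

A plain, uniformly lit background makes this kind of thresholding far more reliable, which is exactly the point made in Section II.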
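The feature-extraction step can be illustrated with a toy computation over a binary hand mask. The `extract_features` helper and its specific features (centroid, bounding-box aspect ratio, fill ratio) are illustrative stand-ins for the hand shape and orientation cues mentioned above, not the system's actual feature set.

```python
import numpy as np

def extract_features(mask):
    """Compute a few simple shape features from a binary hand mask:
    centroid (row, col), bounding-box aspect ratio, and fill ratio."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect = w / h                 # wide vs. tall hand pose
    fill = len(ys) / (h * w)       # fraction of the bounding box that is hand
    return np.array([cy, cx, aspect, fill])

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:8, 3:6] = 1                 # a tall 6x3 hand-like blob
feats = extract_features(mask)
print(feats)  # centroid (4.5, 4.0), aspect 0.5, fill 1.0
```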
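As one concrete instance of the model-training step, a support vector machine (one of the classifier families named above) can be fit on feature vectors. This sketch assumes scikit-learn is available and uses two synthetic, well-separated clusters in place of real gesture features; the sign labels "A" and "B" are placeholders.

```python
import numpy as np
from sklearn.svm import SVC   # assumption: scikit-learn is installed

rng = np.random.default_rng(0)
# Synthetic 4-D feature vectors for two pretend signs, one tight
# cluster per sign (standing in for real extracted gesture features).
X_a = rng.normal(loc=0.0, scale=0.1, size=(50, 4))
X_b = rng.normal(loc=1.0, scale=0.1, size=(50, 4))
X = np.vstack([X_a, X_b])
y = np.array(["A"] * 50 + ["B"] * 50)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.05, -0.02, 0.1, 0.0]]))  # → ['A']
```

Swapping `SVC` for a decision tree or a small neural network changes only the classifier line; the fit/predict workflow stays the same.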
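The cross-validation mentioned in the testing step can be sketched as a plain k-fold loop: train on k-1 folds, evaluate on the held-out fold, and average the accuracies. The nearest-centroid classifier here is only a lightweight stand-in so the example stays self-contained in NumPy.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Tiny stand-in classifier: one centroid per class."""
    labels = np.unique(y)
    return labels, np.stack([X[y == c].mean(axis=0) for c in labels])

def nearest_centroid_predict(model, X):
    labels, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return labels[d.argmin(axis=1)]

def kfold_accuracy(X, y, k=5, seed=0):
    """k-fold cross-validation: each fold serves once as the held-out
    test set while the remaining folds are used for training."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = nearest_centroid_fit(X[train], y[train])
        accs.append((nearest_centroid_predict(model, X[test]) == y[test]).mean())
    return float(np.mean(accs))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (30, 4)), rng.normal(1, 0.1, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
print(kfold_accuracy(X, y))  # well-separated clusters give accuracy 1.0 here
```

Holdout testing is the k=1 special case of this idea: a single train/test split instead of k rotating ones.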