Sign Language Recognition
Submitted in partial fulfillment of the requirements for the award of degree of
BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE & ENGINEERING
Submitted to: Dr. Raman Chadha
Submitted by: Rajan Singh
19BCS2094
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
Chandigarh University, Gharuan
June 2022
INTRODUCTION:
Communication is crucial to human beings, as it enables us to express ourselves. We
communicate through speech, gestures, body language, reading, writing, or visual aids,
speech being the most commonly used among them. Unfortunately, for the speech- and
hearing-impaired minority, there is a communication gap. Visual aids or an interpreter are
used to communicate with them, but these methods are cumbersome and expensive, and
cannot be used in an emergency. Sign language chiefly uses manual communication to convey
meaning: it simultaneously combines hand shapes, orientations, and movements of the hands,
arms, or body to express the speaker's thoughts.
The project aims at building a machine learning model that can classify the various hand
gestures used for fingerspelling in sign language. In this user-independent model,
classification algorithms are trained on one set of image data and tested on a completely
different set. Depth images are used for the image dataset; they gave better results than some
of the previous work in the literature, owing to the reduced pre-processing time. Various
machine learning algorithms are applied to the dataset, including a Convolutional Neural
Network (CNN). An attempt is made to increase the accuracy of the CNN model by pre-training
it on the ImageNet dataset.
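The pre-training idea can be sketched in Keras as an ImageNet backbone with a new classification head on top. This is a minimal sketch, not the project's actual code: the MobileNetV2 backbone, the 224x224 input size, and the 26 letter classes are all assumptions, and `weights=None` keeps the sketch runnable offline (the real experiment would pass `weights="imagenet"`).

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 26  # one class per fingerspelled letter (assumption)

# Backbone normally pre-trained on ImageNet; weights=None keeps this sketch
# offline, while weights="imagenet" would load the pre-trained filters.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None, pooling="avg"
)
base.trainable = False  # freeze the backbone, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# One dummy image (a depth map replicated to 3 channels) to check the shapes.
probs = model.predict(np.random.rand(1, 224, 224, 3), verbose=0)
```

Freezing the backbone means only the final dense layer is trained on the fingerspelling images, which is the usual way to exploit ImageNet pre-training on a small dataset.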
Sign language consists of fingerspelling, which spells out words character by character, and
word-level association, which involves hand gestures that convey a word's meaning.
Fingerspelling is a vital tool in sign language, as it enables the communication of names,
addresses, and other words that do not carry a meaning in word-level association. In spite of
this, fingerspelling is not widely used, as it is challenging to understand and difficult to use.
Moreover, there is no universal sign language and very few people know one, which makes it
an inadequate alternative for communication.
A sign language recognition system that classifies fingerspelling can help solve this problem.
Various machine learning algorithms are used, and their accuracies are recorded and compared
in this report.
Technologies and Tools Used:
Machine Learning: Machine learning is a method of data analysis that automates
analytical model building.
Jupyter Notebook: Jupyter Notebook is an open-source, web-based environment used to create
and share documents that contain live code, visualizations, and narrative text.
NumPy: NumPy is a library for the Python programming language that adds support for large,
multi-dimensional arrays and the mathematical functions to operate on them.
OpenCV: OpenCV is a library used for image processing and performing computer vision
tasks.
TensorFlow: TensorFlow provides a collection of workflows to develop and train machine
learning models using Python.
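As a tiny illustration of how these pieces fit together, a depth image is simply a NumPy array of pixel values that can be normalised before being fed to a model. This is a sketch; the 64x64 image size is an assumption, and random values stand in for real camera data.

```python
import numpy as np

# A depth image is a 2-D array of intensities; random values stand in here
# (the 64x64 size is an assumption for the sketch).
depth = np.random.randint(0, 256, size=(64, 64)).astype(np.float32)

# Scale values to [0, 1] and add the channel axis a CNN expects.
x = (depth / 255.0)[..., np.newaxis]
print(x.shape)  # (64, 64, 1)
```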
Feasibility study: The main feature of this project is to predict and translate sign language
into written form, so that a person who does not know sign language can still understand what
the signer is trying to say. This can be achieved using various machine learning modules. A
dataset must first be created and then used to train a model, so that prediction or translation
can happen in real time. Datasets of this kind are available on the internet, but in this project
the dataset is created from scratch.
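Creating the dataset from scratch amounts to recording several short clips per sign and saving one keypoint array per frame. The sketch below shows only the folder setup and save step under stated assumptions: the sign labels, clip counts, `MP_Data` directory name, and the 1662-value keypoint vector (pose, face, and two hands flattened together) are all hypothetical, and a random vector stands in for real camera keypoints.

```python
import os
import numpy as np

# Hypothetical signs to collect; the real project would use its own label set.
ACTIONS = ["hello", "thanks", "iloveyou"]
SEQUENCES = 5        # clips recorded per sign (small for the sketch)
DATA_DIR = "MP_Data"  # assumed output directory name

# One folder per sign per clip, mirroring the "Setup Folder for Data
# Collection" step of the methodology.
for action in ACTIONS:
    for seq in range(SEQUENCES):
        os.makedirs(os.path.join(DATA_DIR, action, str(seq)), exist_ok=True)

# In the real pipeline each frame's keypoints come from the camera; here a
# random vector of 1662 values (an assumption) stands in for one frame.
keypoints = np.random.rand(1662)
np.save(os.path.join(DATA_DIR, "hello", "0", "0.npy"), keypoints)
```

Saving one `.npy` file per frame keeps each clip as a folder of fixed-length arrays, which is easy to reload later when building the training sequences.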
Steps / Methodology to Build the Project:
Install and Import Dependencies
Detect Face, Hand and Pose Landmarks
Extract Keypoints
Set Up Folders for Data Collection
Collect Keypoint Sequences
Preprocess Data and Create Labels
Build and Train an LSTM Deep Learning Model
Make Sign Language Predictions
Save Model Weights
Evaluate Using a Confusion Matrix
Test in Real Time
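The model-building step above can be sketched as a small Keras LSTM that consumes one keypoint sequence per gesture. This is a minimal sketch, not the project's actual architecture: the layer sizes, the 30-frame clip length, the 1662-value keypoint vector, and the 3 sign classes are all assumptions.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

SEQ_LEN = 30       # frames per gesture clip (assumption)
N_FEATURES = 1662  # keypoint values per frame (assumption)
N_CLASSES = 3      # signs in the sketch (assumption)

# Stacked LSTMs read the keypoint sequence; dense layers classify the sign.
model = Sequential([
    Input(shape=(SEQ_LEN, N_FEATURES)),
    LSTM(64, return_sequences=True, activation="relu"),
    LSTM(128, return_sequences=False, activation="relu"),
    Dense(64, activation="relu"),
    Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])

# One dummy keypoint sequence to check the shapes end to end.
probs = model.predict(np.random.rand(1, SEQ_LEN, N_FEATURES), verbose=0)
```

An LSTM is used here because a sign is a motion over time: the model must see the whole sequence of keypoints, not a single frame, before the softmax layer picks the most probable sign.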