American Sign Language Detection System

The American Sign Language Detection System project aims to develop a real-time ASL recognition system using TensorFlow.js and a webcam to bridge communication gaps between ASL users and non-users. It classifies hand gestures into 29 categories and provides an interactive web interface for users. Future enhancements include improving accuracy, extending recognition to phrases, and creating a mobile version.


AMERICAN SIGN LANGUAGE DETECTION SYSTEM
Machine Learning Internship

Faza Ulfath – 1DB21CI022


UNID - UMIP25141
American Sign Language Detection System

1. Introduction

American Sign Language (ASL) is a widely used visual language that enables communication among
individuals with hearing impairments. However, understanding ASL can be challenging for those
unfamiliar with the language. With advancements in deep learning and computer vision, real-time ASL
recognition has become feasible, providing a bridge between the deaf and hearing communities. This
project presents an ASL Detection System that recognizes hand gestures representing the letters of
the ASL alphabet, using TensorFlow.js and a webcam.

2. Problem Statement

Communication barriers exist between people who are deaf or hard of hearing and those who do not
understand ASL. Traditional methods such as text-based communication or interpreters may not
always be available. An automated, real-time ASL detection system is therefore needed to recognize
and translate ASL signs into readable text.

3. Objectives

The primary objectives of this project are:

• To develop a real-time ASL recognition system using TensorFlow.js and Handpose models.

• To accurately classify hand gestures into 29 categories (26 letters A-Z and 3 additional signs:
SPACE, DELETE, NOTHING).

• To integrate a webcam-based gesture recognition system that detects and translates ASL
signs.

• To enhance accessibility and bridge communication gaps between ASL users and non-ASL
users.

4. Outcomes

By the end of this project, the system will:

• Accurately recognize ASL signs and translate them into corresponding letters.

• Provide a real-time, web-based ASL recognition interface.

• Utilize deep learning models to enhance detection accuracy.

• Offer an interactive and engaging user experience using React.js and Chakra UI.

5. The Dataset

5.1 Dataset Overview

The dataset used for training and testing consists of 29 classes:

• 26 letters (A-Z) of the ASL alphabet

• 3 additional classes (SPACE, DELETE, and NOTHING)

5.2 Dataset Structure

• The dataset is pre-sorted into train and test sets.

• Each class folder contains multiple images of hand gestures representing the respective ASL
sign.

• The model is trained to detect hand positions and finger shapes to classify signs accurately.
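The 29-class label set described above can be written out directly; this is a small illustrative snippet (the constant name `ASL_CLASSES` is an assumption, not taken from the project's code):

```javascript
// The 29 class labels: the 26 ASL alphabet letters plus the three
// control signs described in the dataset overview.
const ASL_CLASSES = [
  ...'ABCDEFGHIJKLMNOPQRSTUVWXYZ', // spreads the string into 26 letters
  'SPACE', 'DELETE', 'NOTHING',
];
```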

6. Implementation

6.1 Technology Stack

The project is developed using the following technologies:

• React.js – Frontend framework for creating an interactive user interface.

• TensorFlow.js – Machine learning library for browser-based deep learning.

• Handpose Model – Pre-trained model for detecting hand landmarks.

• Fingerpose Library – Used to recognize custom hand gestures.

• Chakra UI – Styling framework for UI components.

6.2 Model Development and Integration

• The system uses Handpose, a pre-trained model from TensorFlow.js, to detect hand
landmarks. The Fingerpose library helps classify ASL gestures into predefined categories.

• The model processes real-time webcam input and estimates hand positions. The detected
hand landmarks are mapped to predefined sign language gestures, which are then classified
into their respective letters.
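To make the mapping step concrete, here is a toy illustration of how template-based gesture matching of this kind works. This is not Fingerpose's actual code: the finger-curl templates for the letters A and B are simplified assumptions, and real matching uses per-finger curls and directions with weighted scores.

```javascript
// Toy gesture matcher: each template expects a curl state per finger,
// and the score is the fraction of fingers whose observed curl matches.
const FINGERS = ['thumb', 'index', 'middle', 'ring', 'pinky'];

// Hypothetical, simplified templates for two ASL letters.
// 'full' = finger fully curled, 'none' = finger extended.
const TEMPLATES = {
  A: { thumb: 'none', index: 'full', middle: 'full', ring: 'full', pinky: 'full' },
  B: { thumb: 'full', index: 'none', middle: 'none', ring: 'none', pinky: 'none' },
};

// Score one template against the observed curls (0..1, higher is better).
function matchScore(template, observedCurls) {
  const hits = FINGERS.filter(f => template[f] === observedCurls[f]).length;
  return hits / FINGERS.length;
}

// Return the best-matching letter, or null if nothing clears the threshold.
function classifyCurls(observedCurls, threshold = 0.8) {
  let best = null;
  let bestScore = 0;
  for (const [name, template] of Object.entries(TEMPLATES)) {
    const score = matchScore(template, observedCurls);
    if (score > bestScore) { best = name; bestScore = score; }
  }
  return bestScore >= threshold ? best : null;
}
```

The threshold is what lets the system report NOTHING when the hand shape is ambiguous rather than forcing a low-confidence letter.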

6.3 Gesture Detection and Classification

The detect() function captures webcam input, processes hand positions, and predicts gestures using
the Fingerpose library.

6.3.1 Detecting Hand Gestures


• Captures real-time video feed using the webcam.

• Uses Handpose to estimate hand positions.

• Passes hand landmarks to Fingerpose for gesture classification.

• Sets the detected sign to setSign(), updating the UI with the recognized letter.
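The last step above, turning the Fingerpose estimate into a single recognized letter, can be sketched as a small helper. The `gestures`, `name`, and `score` field names follow the shape documented in the fingerpose README and should be treated as assumptions (older versions of the library used `confidence` instead of `score`):

```javascript
// Pick the highest-scoring gesture from a Fingerpose-style estimate
// result, ignoring weak matches below minScore (Fingerpose scores
// range up to 10).
function bestGesture(result, minScore = 8) {
  if (!result || !Array.isArray(result.gestures) || result.gestures.length === 0) {
    return null; // no gesture matched at all
  }
  const top = result.gestures.reduce((a, b) => (b.score > a.score ? b : a));
  return top.score >= minScore ? top.name : null;
}
```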

6.4 Code
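
A minimal sketch of the detect() routine described in section 6.3.1. Here the loaded Handpose model (`net`), the Fingerpose `GestureEstimator` (`estimator`), the React webcam ref, and the `setSign` state setter are passed in as parameters; these names and the dependency-injected shape are illustrative assumptions, not the project's exact code.

```javascript
// One detection step: read a frame from the webcam, estimate hand
// landmarks with Handpose, classify them with Fingerpose, and push the
// recognized letter into the UI via setSign.
async function detect(net, estimator, webcamRef, setSign) {
  const video = webcamRef && webcamRef.current && webcamRef.current.video;
  if (!video) return; // webcam not ready yet

  const hands = await net.estimateHands(video); // Handpose landmark detection
  if (hands.length === 0) return; // no hand in frame

  // Fingerpose matches the 21 hand landmarks against the gesture
  // templates; 8 is a minimum-score cutoff (scores range up to 10).
  const result = estimator.estimate(hands[0].landmarks, 8);
  if (result.gestures && result.gestures.length > 0) {
    // Keep the highest-scoring gesture and update the displayed sign.
    const top = result.gestures.reduce((a, b) => (b.score > a.score ? b : a));
    setSign(top.name);
  }
}
```

In the running app this function would be called repeatedly (for example from a `setInterval` or a `requestAnimationFrame` loop) so the displayed letter tracks the live video feed.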

6.5 User Interface (React.js + Chakra UI)

The front-end interface is built using React.js and Chakra UI for a clean and user-friendly design.

The user interface is designed to provide an intuitive and seamless experience. It includes:

• Webcam Feed: Displays the live video stream from the user’s webcam for gesture recognition.

• Real-Time Gesture Detection: Recognized gestures are displayed in text format.

• Camera Toggle Feature: Users can enable or disable the webcam for privacy.

• Interactive UI Elements: Built using Chakra UI to ensure a clean and responsive design.

6.6 Snapshots

7. Conclusion

The American Sign Language Detection System successfully classifies ASL hand gestures into 29
categories using deep learning models in TensorFlow.js. The system enables real-time ASL recognition
through a web-based interface, bridging the communication gap between ASL users and non-ASL
users. Future improvements could involve expanding the dataset, improving recognition accuracy,
and adding sentence-level ASL translation.

8. Future Scope

• Improved Accuracy: Fine-tune the model using more data and transfer learning techniques.

• Sentence Recognition: Extend recognition beyond single letters to detect words and phrases.

• Mobile Support: Develop a mobile-friendly version for on-the-go ASL translation.

9. References

1. TensorFlow.js Documentation - https://www.tensorflow.org/js

2. Handpose Model - https://github.com/tensorflow/tfjs-models/tree/master/handpose

3. Fingerpose Library - https://github.com/andypotato/fingerpose
