
SENYAS: Filipino Sign Language Translation Device and System

for Two-Way Communication

An Undergraduate Thesis
Presented to
The Faculty of the College of Engineering
Samar State University
Catbalogan City

In Partial Fulfillment
Of the Requirements for the Degree
Bachelor of Science in Computer Engineering

Ruben Lorenz S. Yboa


Mark D. Berio
Kharl Angelo S. Obong
Laurie Mae R. Bacsal

January 2024
APPROVAL SHEET
In partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Engineering (BSCpE), this project entitled “SENYAS: Filipino Sign Language Translation Device and System for Two-Way Communication,” prepared and submitted by Ruben Lorenz S. Yboa, Kharl Angelo S. Obong, Mark D. Berio, and Laurie Mae R. Bacsal, is hereby approved.

Date: January 19, 2024

ENGR. MAYNARD R. DALEMIT


CPE 29-Practice & Design Instructor
________________________________________________________________

Approved by the committee on oral examination with the grade of ___________

Noted:
ENGR. MAYNARD R. DALEMIT, Panel Member
ENGR. MEDDY S. MANGARING, Panel Member
ENGR. MARICRIS M. EDIZA, Panel Member
ENGR. NIKKO ARDEL P. FLORETES, Panel Member
ENGR. FRANCISCO M. DEQUITO JR., Panel Chairman
________________________________________________________________

Approved by the Dean of the College of Engineering, in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Engineering, with the grade of PASSED.

DATE: January 19, 2024


ENGR. MEDDY S. MANGARING
Dean, College of Engineering
Samar State University

ACKNOWLEDGEMENT

The researchers wish to thank the funders who provided the financial support that allowed this project to be developed and implemented.

Never-ending gratitude goes to our understanding and encouraging families, especially our parents, who constantly motivate and inspire us to strive hard to make the project possible. To the Yboa, Obong, Berio, and Bacsal families, who never failed to sustain us and continually gave their moral and financial support, we offer our heartfelt gratitude.

Our deepest gratitude goes to Engr. Maynard R. Dalemit, Instructor, College of Engineering, for his outstanding support and guidance throughout the entirety of this research endeavor. His expertise, dedication, and unwavering commitment were instrumental in shaping the direction and success of this project.

We also thank Engr. Raven C. Tabiongan, Instructor, College of Engineering, who, as our thesis adviser, contributed significantly to the depth and quality of our findings. His mentorship not only enhanced our research skills but also broadened our perspective, encouraging us to explore new avenues.

Special thanks to Mr. Rhum O. Bernate, Secondary School Principal, Samar National School, and Mrs. Julita B. Tanseco, SPED Teacher III, Samar National School, for their collaboration and contributions in beta-testing the Senyas system. Their dedication to providing feedback and expertise greatly enriched the project.

We would also like to thank our classmates and friends who helped and supported us in any way they could to finish the project. Their insights and opinions were valuable throughout the development of the system, up to its completion. To the people not named above who have been part of this project, our deepest gratitude.

Above all, we thank the Lord Almighty for the knowledge, guidance, provisions, good health, and enlightenment He has given us. The journey was no easy task and did not come without complications, but His blessings served as a light through our problems and ensured the completion of this research.

DEDICATION

We proudly dedicate this Capstone to our families - the pillars of strength

who kept us going throughout this challenging journey and reminded us to never

give up on our dreams.

To our parents, words cannot express our gratitude for the countless sacrifices you have made to provide us with the opportunity to pursue higher education. You worked tirelessly to support us financially and emotionally during difficult times, and you celebrated every little victory with such joy and pride that it kept us motivated. We hope we have made you proud through this work.

To our beloved partners, thank you for your incredible patience,

understanding and encouragement when we had to spend long hours away from

you, locked away with our research and coursework. You took over responsibilities

without complaint, so we could completely dedicate our time and energy towards

completing this program. You kept us balanced, reminded us of self-care, and

loved us through our highs and lows. We could not have persevered without having

you by our side.

Our families are our biggest fans and loudest cheerleaders. Everything we achieved here is a reflection of the love, faith, and support they surround us with; this thesis stands as testament to that. We love you and cannot wait to have more quality time to spend together again.

ABSTRACT

This study aimed to develop "Senyas", an innovative Filipino Sign Language

(FSL) translation system to facilitate two-way communication between deaf, mute,

and hearing individuals. Senyas utilizes a glove device equipped with a joystick

sensor to track finger movements and an accelerometer to determine arm angles.

These inputs are translated into corresponding FSL gestures and displayed as text

in a mobile application. The app also features speech-to-text and text-to-speech

conversion, enabling hearing users to understand FSL and deaf users to

understand speech.

A Neural Network Classification machine learning model was implemented to accurately recognize FSL gestures from the glove's sensor data. The model was trained on a dataset of various FSL signs recorded by the researchers. Product evaluation involved five respondents aged 14–19 from Samar National School, who tested the system's accuracy and provided feedback through a survey questionnaire.

Results showed a mix of neutrality and satisfaction ratings across the device, app, and overall system performance. Identified limitations include accuracy issues with certain complex gestures and difficulty handling continuous use in conversations. Recommendations focused on expanding the FSL vocabulary, improving accuracy, adding features such as fluency analysis, and investigating alternative sensor technologies.

Overall, the Senyas system demonstrated effectiveness in translating FSL and facilitating communication between deaf, mute, and hearing users. The system contributes significantly to accessibility technology and has considerable potential for further improvement to enhance inclusivity in society. Its limitations provide opportunities for future research to build upon this foundation.

Keywords: Filipino Sign Language, Machine Learning, Neural Network

Classification, Text-to-speech conversion, Deaf-mute communication,

Accessibility Technology, Sensor Glove.

TABLE OF CONTENTS

Title Page …………………………………………………………………... i

Approval Sheet …………………………………………………………….. ii

Acknowledgement …………………………………………………………. iii

Dedication ………………………………………………………………….. v

Abstract …………………………………………………………………….. vi

Table of Contents …………………………………………………………. viii

List of Tables ………………………………………………………………. xi

List of Figures ……………………………………………………………… xii

Chapter I. Introduction

Background of the Study ………………………………………... 1

Objectives of the Study …………………………………………. 4

Conceptual Framework …………………………………………. 5

Scope and Delimitation of the Study …………………………... 9

Significance of the Study ……………………………………….. 10

Definition of Terms ………………………………………………. 11

Chapter II. Review of Related Literature & Studies

Review of Related Literature …………………………………… 16

Review of Related Studies ……………………………………… 25

Chapter III. Methodology

Research Design ………………………………………………… 37

Requirement Analysis …………………………………. 39

Gantt Chart ……………………………………….. 39

System Design …………………………………………… 41

Hardware Design ………………………………… 41

System Circuit Design …………………………… 43

Hardware Description ……………………………. 44

Software Design ………………………………….. 48

System Flowchart ………………………………... 49

Software Description …………………………….. 59

Interface Design ………………………………….. 60

Coding and Implementation…………………………….. 76

Product Description ……………………………… 76

Product Evaluation ………………………………. 77

Algorithm Training ……………………………….. 79

Product Development ……………………………. 80

Cost Benefit Analysis ……………………………. 86

Integration and Testing ………………………………… 90

System Deployment .…………………………………… 91

Maintenance …………………………………………….. 93

Research Procedure …………………………………………….. 94

Research Instrument ……………………………………………. 95

Statistical Treatment of Data …………………………………… 96

Chapter IV. Results and Discussions

Requirements Analysis and Specification …………………….. 98

Presentation, Analysis and Interpretation of Data ……………. 106

Chapter V. Summary, Conclusion and Recommendation

Summary …………………………………………………………. 114

Conclusion ………………………………………………………... 116

Recommendation ………………………………………………... 118

Bibliography ………………………………………………………………... 121

Appendices ………………………………………………………………… 128

A. Letter for Request of Adviser ………………………………. 129

B. Letter for Implementation Approval ……………………….. 130

C. Ethical Clearance Certificate ………………………………. 131

D. Research Questionnaire Approval ………………………… 132

E. Questionnaire ……………………………………………….. 133

F. Disclaimer ……………………………………………………. 139

G. Terms and Condition ……………………………………….. 140

H. User Manual …………………………………………………. 144

I. Data Sheet …………………………………………………... 164

J. Source Code ………………………………………………… 173

Plagiarism Check Results ………………………………………………… 185

Curriculum Vitae …………………………………………………………… 202

LIST OF TABLES

Table No. Page

Table 3.4.1 Hardware Cost 86

Table 3.4.2 Software Subscription 87

Table 3.4.3 Documentation Cost 88

Table 3.4.4 Total Cost 89

Table 4.1 Age of Respondents 106

Table 4.2 School of the Respondents 107

Table 4.3 Gender of the Respondents 107

Table 4.4 Mode of the Respondents 108

Table 4.5 Device Performance 109

Table 4.6 Mobile Application Performance 110

Table 4.7 System Performance 111

Table 4.8 User Experience 112

Table 4.9 Overall Satisfaction 113

LIST OF FIGURES

Figure No. Page

Figure 1.1 Senyas Conceptual Framework 6

Figure 2.1 The World Health Organization’s (WHO) reports on disability 16

Figure 3.1 Waterfall Model 38

Figure 3.2.1 Senyas Study Gantt Chart in Monthly deliverables 40

Figure 3.3.1 Block Diagram 41

Figure 3.3.2 Schematic Design 43

Figure 3.3.3 ESP32 30PIN 44

Figure 3.3.4 MPU6050 44

Figure 3.3.5 TP4056 Battery Charger 45

Figure 3.3.6 3D Analog Joystick 45

Figure 3.3.7 2A Boost Converter Module 46

Figure 3.3.8 Lithium-ion Polymer (Li-Po) Batteries 46

Figure 3.3.9 PCB Board 47

Figure 3.3.10 Rocker Switch 47

Figure 3.3.11 System Architecture 48

Figure 3.3.12 System Flow of Kodular Application When Opening 50

Figure 3.3.13 Continue Process Side Menu & Disconnect Button 51

Figure 3.3.14 Navigation Flow of Main Page 52

Figure 3.3.15 Continue Process Checking Bluetooth Connection 53

Figure 3.3.16 Continue Process Checking Wi-fi Connection 54

Figure 3.3.17 System Flow of Bluetooth Notification Pages When Connection Lost 55

Figure 3.3.18 System Flow of Wi-Fi Notification Pages When Connection Lost 56

Figure 3.3.19 Hidden Menu Containing Page References and Functions 57

Figure 3.3.20 Help Page system flow from the hidden menu 58

Figure 3.3.21 Main Page App 61

Figure 3.3.22 Bluetooth Permission Request 62

Figure 3.3.23 Ask for Permission 63

Figure 3.3.24 Paired Device 64

Figure 3.3.25 Wi-fi Notification 65

Figure 3.3.26 Connecting the Senyas App 66

Figure 3.3.27 Send text to Bluetooth Device 67

Figure 3.3.28 Side Navigation Menu 68

Figure 3.3.29 Help Page 69

Figure 3.3.30 Basic Questions 70

Figure 3.3.31 Terms and Conditions 71

Figure 3.3.32 About Page 72

Figure 3.3.33 User Manual Page 73

Figure 3.3.34 Senyas Prototype Design 75

Figure 3.4.1 Test Joystick & Gyroscope Value 78

Figure 3.4.2 Collect Dataset 79

Figure 3.4.3 The Kodular Platform Home Page 80

Figure 3.4.4 The Kodular Platform Environment 81

Figure 3.4.5 Kodular Companion 82

Figure 3.4.6 Edge Impulse Homepage Interface 83

Figure 3.4.7 Arduino IDE Interface 84

Figure 3.4.8 Fritzing Interface 85

Figure 3.5.1 App & Device Testing 90

Figure 3.6.1 Senyas Deployment 91

Figure 3.7.1 Device and App Update 93

Figure 4.1.1 Feature Extraction 102

Figure 4.1.2 Result of Accuracy and Loss 103

Figure 4.1.3 Speech Recognizer 103

CHAPTER I

INTRODUCTION

Background of the Study

In today's society, individuals who are Deaf-Mute often face significant

challenges when it comes to communicating with others. This disability creates a

significant barrier that affects their ability to function normally in society and often

leads to isolation and a lack of engagement with the world around them. An article by Larsson, E., et al. (2022) claims that while being unable to hear presents some challenges in communication, it is not necessarily a barrier to verbal communication. Because deafness and muteness do not always occur together, and many deaf individuals can speak well, it is important to recognize the distinct experiences of Deaf-Mute individuals. However, when an individual is both deaf and mute, it can create an immense communication barrier that makes it challenging to interact with individuals who do not know sign language or have no experience communicating with people who are Deaf-Mute.

Over 1 billion people worldwide have some form of disability, with between 110 and 190 million experiencing significant functional challenges, according to the World Health Organization (WHO). Of the estimated 466 million people with disabling hearing loss, around 34 million are children. Although it is challenging to determine the prevalence of speech problems due to the wide range in their severity and underlying causes, the American Speech-Language-Hearing Association (ASHA) estimates that about 10% of the population is affected by a communication impairment.

According to Montefalcon et al. (2021), deaf and hard-of-hearing individuals in the Philippines use Filipino Sign Language (FSL), a unique visual language. Despite being recognized as the nation's official sign language in 2018, FSL has yet to be discovered and understood by many Filipinos. As a result, linguistic and cultural limitations prevent social participation and accessibility for people who are deaf or hard of hearing. The law known as RA 11106, An Act Declaring the Filipino Sign Language as the National Sign Language of the Filipino Deaf and the Official Sign Language of the Government in All Transactions Involving People Who are Deaf or Hard of Hearing, and Mandating Its Use in Schools, Broadcast Media, and Workplaces, was signed into law by President Rodrigo Duterte on October 30, 2018.

Based on Notarte-Balanquit (2021), using Filipino Sign Language (FSL) has

helped Deaf-Mute people communicate better. However, most Filipinos still struggle to understand FSL, as the Deaf community remains behind the country's fast-paced and technologically advanced society.

According to Narte & Rupero (2023), the exclusion of the Deaf for many years not only perpetuated a culture of discrimination but also pushed them to the margins, to the point that they are no longer included in the majority of aspects of Philippine society. Within this historical context, a change in attitudes toward deafness has emerged: deafness is no longer viewed as a disability but rather as a language challenge in the nation. R.A. 11106, or the FSL Act of 2018, has recently recognized Filipino Sign Language (FSL) as the Philippines' official visual-gestural language. This has widened the scope of the academic discussion of deafness and, more importantly, created an environment for FSL to develop and gain recognition as a language system fundamental to the formation of the Filipino Deaf identity and as a natural language of the Deaf.

Therefore, both Montefalcon et al. (2021) and Narte & Rupero (2023) note that R.A. 11106, or the FSL Act of 2018, formally designated Filipino Sign Language as the national sign language of the Philippines, showing that a legislative framework has been established to promote the use of FSL. However, Notarte-Balanquit (2021) points out that although FSL has helped the deaf community communicate better, most people do not understand it, partly owing to social and technological change. Narte and Rupero (2023) add that recognizing FSL as the national visual-gestural language will allow it to develop and be acknowledged as an essential component of Filipino Deaf identity.

Both Montefalcon et al. (2021) and Notarte-Balanquit (2021) emphasize the significance of addressing the communication challenges experienced by the deaf and hard of hearing in the Philippines. While Notarte-Balanquit (2021) acknowledges the continued challenges of recognizing FSL amid societal and technological change, Montefalcon et al. (2021) emphasize the recognition of FSL as a solution. The Deaf population has traditionally been excluded in Philippine culture, and Narte & Rupero (2023) examine how their participation may be improved if deafness is recognized as a language difference rather than a disability.

In summary, while the cited works discuss the recognition and challenges of FSL in the Philippines, our study is a practical response to these challenges: it develops a device and Android application that aim to improve communication and accessibility for the deaf community and potentially contribute to the preservation of the language and its cultural identity.

Objectives of the Study

The researchers aim to validate the accuracy of the device in effectively providing two-way communication for Deaf-Mute persons by developing a device that detects their sign language and, through a mobile phone, translates it into audio and text that can be understood by a person without such conditions or without knowledge of sign language.

The study aims to achieve these objectives:

1. To develop a system that can recognize FSL and interpret it for a normal person, while also enabling two-way communication by recognizing the normal person's speech in a real-time setting.

1.1 To develop a wearable device that can recognize hand gestures and interpret FSL in real time.

1.2 To develop an Android application for the system to display translated FSL text and recognize a normal person's speech using speech-to-text technology.

1.3 To enable real-time communication between the wearable device and the Android application.
2. To identify the limitations of the system for future development and research

by evaluating these factors:

2.1 Device Performance

2.2 Software Performance

2.3 System Performance

2.4 User Experience

2.5 Overall Functionality

Conceptual Framework

The Senyas device and system aim to facilitate communication between Deaf-Mute persons and people who do not know sign language. The conceptual framework consists of a wearable glove device and a mobile application that work together to enable two-way communication. The glove device uses sensors to recognize hand gestures and sign language, which are then transmitted to the mobile app and rendered as text and speech. The mobile app also provides speech recognition to translate spoken language into text for the Deaf-Mute user. By enabling sign language recognition and speech translation in both directions, the Senyas system bridges the communication barrier between Deaf-Mute persons and those who do not know sign language.

Figure 1.1. Senyas Conceptual Framework

The Senyas glove device is composed of multiple components, each with its own functionality and role in the system.

The ESP32-WROOM is a microcontroller with built-in Bluetooth and Wi-Fi; it is the brain of our device. It is a powerful and versatile microcontroller that processes data from the other components, controls their operation, and performs the calculations needed for recognizing hand gestures and translating sign language.

The MPU6050 is a motion sensor that combines a three-axis accelerometer with a three-axis gyroscope. It detects the movement and orientation of the glove, allowing the system to track the arm angle for more accurate sign language recognition.

The 3D analog joystick provides additional data input from the user. It is used to recognize finger movements and contributes to gesture recognition for specific signs.

The TP4056 is a lithium-ion battery charging and protection chip. It manages the charging process of the Li-Po battery safely and efficiently, preventing overcharging, over-discharging, and short circuits.

The boost converter module converts the 3.7 V supply from the battery into 7 V to power the ESP32 microcontroller through its Vin pin, ensuring it receives the correct voltage for optimal operation.

The lithium-ion polymer (Li-Po) battery is the rechargeable battery that powers the entire glove device. It provides long-lasting power so the device can be used without frequent charging.

The rocker switch is a simple on/off switch that allows the device to be easily turned on and off.
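To make these component roles concrete, the sketch below illustrates how the ESP32 could sample the MPU6050 and one joystick. This is a hedged, minimal illustration only: it assumes the common Adafruit MPU6050 library and hypothetical ADC pin assignments (GPIO 34 and 35) for the joystick axes; the actual Senyas firmware is given in Appendix J.

    #include <Wire.h>
    #include <Adafruit_MPU6050.h>
    #include <Adafruit_Sensor.h>

    Adafruit_MPU6050 mpu;              // three-axis accelerometer + gyroscope
    const int JOY_X_PIN = 34;          // hypothetical ESP32 ADC pins for one
    const int JOY_Y_PIN = 35;          // joystick's X and Y axes

    void setup() {
      Serial.begin(115200);
      if (!mpu.begin()) {              // MPU6050 on the default I2C address
        Serial.println("MPU6050 not found");
        while (true) delay(10);
      }
    }

    void loop() {
      sensors_event_t accel, gyro, temp;
      mpu.getEvent(&accel, &gyro, &temp);  // read motion and orientation

      int joyX = analogRead(JOY_X_PIN);    // finger position, 0-4095
      int joyY = analogRead(JOY_Y_PIN);

      // One sample of the feature stream later fed to the gesture classifier
      Serial.printf("%.2f,%.2f,%.2f,%d,%d\n",
                    accel.acceleration.x, accel.acceleration.y,
                    accel.acceleration.z, joyX, joyY);
      delay(20);                           // roughly 50 Hz sampling
    }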

Figure 1.1 assumes a normal person using the Android application: the speech produced by this user is captured by the application and translated into text within the mobile app. The Deaf-Mute person, in turn, wears the glove device on the hand and connects it to the mobile application through Bluetooth. Doing so enables them to express themselves using sign language, which is visually based, while the mobile application translates these signs into text that can be understood by people who do not know sign language.

The microcontroller reads gestures through the joystick and accelerometer, and the corresponding sign language words can be quickly determined from this data input. The data generated by these sensors is transmitted to the phone and translated into text form; this happens in the mobile application and solves the problem of translating sign language into text.
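As a hedged sketch of this device-to-phone link, the ESP32 Arduino core's built-in BluetoothSerial library can stream a recognized gesture label to the mobile application. The stub function and the sample label below are illustrative placeholders, not the actual Senyas firmware:

    #include "BluetoothSerial.h"     // ships with the ESP32 Arduino core

    BluetoothSerial SerialBT;

    // Hypothetical stand-in for the trained model's prediction; the real
    // firmware derives this label from the joystick and MPU6050 readings
    const char *classifyGesture() { return "SALAMAT"; }

    void setup() {
      SerialBT.begin("Senyas");      // device name the phone pairs with
    }

    void loop() {
      const char *label = classifyGesture();
      if (label != nullptr) {
        SerialBT.println(label);     // the app receives and displays this text
      }
      delay(100);
    }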

Therefore, the mobile application and the wearable device together allow two-way communication through gesture and speech recognition, providing a solution to the communication barrier so that Deaf-Mute persons and hearing persons can easily understand each other. The accuracy and efficiency of the system will then be assessed, as well as its effectiveness as a medium of communication for Deaf-Mute persons.

Scope and Delimitations

This study focuses on individuals who are Deaf-Mute or speech-impaired, who often face significant challenges when it comes to communicating with others.

This study aims to provide a user experience that facilitates face-to-face, close-proximity communication and conversation between Deaf-Mute and regular users. The system's other goal is to give Deaf-Mute and non-sign-language users a simple, practical, and economical way to communicate with one another within one system at the same time.

The proposed Android application serves as an interpreter, facilitating bilateral communication between individuals who are Deaf-Mute and those without hearing, vocal, or communication impairments, commonly referred to as normal individuals.

This study is limited to people with hearing impairments, those who are mute, and those who are speech-impaired; other disabilities and/or impairments are excluded from the study and are not the focus of the device. The device may be limited by technical constraints, such as processing power, memory, and battery life, which can affect its performance and usability. It may also be limited by environmental factors, such as noise, which can affect its ability to accurately capture and translate sign language.

Significance of the Study

The study aims to develop Senyas, a wearable device and Android application that uses sensors to detect sign language gestures and translates them into spoken or written text in real time, to benefit the following:

Deaf-Mute persons. To provide a more convenient and efficient way of communication for Deaf-Mute individuals, allowing them to communicate with anyone, even those who do not know sign language.

Community. To allow Deaf-Mute persons to function within the community and lessen the hassle of basic tasks like asking for help, providing assistance, listening, and communicating with other persons in public environments such as schools and parks.

Future researchers. The results of this study will give future researchers knowledge and a foundation should they conduct a study similar to this one; they will be guided by the information gathered during this study.

Definition of Terms

To facilitate the understanding of this study, different terms are defined

herein. The following terms are conceptually defined for the researchers to have a

better understanding of the relevance of these terms in the study.

Communication. Conceptually, Communication can be defined as the

process of conveying messages between parties using verbal means like speech

or writing, or nonverbal means like signs, signals, and behaviors (Nordquist, 2019). Operationally, it involves the transmission of information from a

sender to a receiver and the receiver's comprehension of the information. Effective

communication requires encoding a message in a format the receiver understands

and successfully decoding the message to extract the intended meaning. It is a

complex process with the goal of shared understanding between the

communicators.

Deaf. Conceptually, Deaf people mostly have profound hearing loss, which

implies very little or no hearing. They often use sign language for communication (Vaughan, 2023). Operationally, "deaf" can be defined as a condition in which

an individual has a hearing loss of 90 decibels (dB) or more in their better ear. This

means that the individual is unable to hear most sounds, including speech, without

the use of assistive devices such as hearing aids or cochlear implants.

Edge Impulse. Conceptually, A cloud-based machine learning operations

(MLOps) platform for developing embedded and edge machine learning systems

that can be deployed across diverse hardware, as defined by Hymel et al. (2022).

In practical application, it is a machine learning platform enabling data collection

from sensors, signal processing, training ML models, and deploying them onto

embedded devices, as noted by Hong (2021). Edge Impulse aims to streamline

the end-to-end workflow for developing TinyML applications on resource-

constrained devices. It utilizes TensorFlow and offers tools to simplify the

processes of data preprocessing, model training, evaluation, and optimization for

embedded deployment. Edge Impulse provides a cloud-based collaborative

platform for managing the machine learning model lifecycle from data to

deployment.

Deep Learning. Conceptually, deep learning is a subset of machine learning

that employs multi-layered artificial neural networks to learn hierarchical

representations of data, as defined by Karray et al. (2019). Operationally, it is a

class of machine learning algorithms that use multiple layers of artificial neural

networks to learn complex representations of data and improve the accuracy of

predictions, as noted by Chen, J., & Wang, Z. (2020).

Filipino Sign Language (FSL). Conceptually, FSL can be defined as a

distinct visual language used naturally by the Filipino Deaf community, with its own

grammar, vocabulary, and discourse elements, as described in a study by

Cristobal and Martinez (2021). Operationally, Mabalot and Mendoza (2018) define

Filipino Sign Language (FSL) as a visual-spatial language that utilizes hand

motions, facial expressions, and body gestures to express meaning. It is used by

the Filipino Deaf community as their primary communication method, as described

in their study on developing a FSL mobile application for teaching mathematics.

Kodular. Conceptually, Kodular is a platform that makes it easy for anyone to create Android applications, even without prior programming experience; Witriyono, H., et al. (2022) used it in a study on the development of an Android-based student presence application, software that records student attendance more efficiently and effectively than traditional methods such as paper-based attendance sheets. Operationally, to develop an Android-based application using Kodular, users first create a new project and select the "Android App" template. Next, they design the user interface of the application using Kodular's drag-and-drop block programming environment. Once the user interface is complete, they add functionality to the application using Kodular's built-in components and blocks. Finally, they test and debug the application before building and publishing it to the Google Play Store or other app stores.

Machine learning. Conceptually, Machine learning is a subfield of artificial

intelligence, which is broadly defined as the capability of a machine to imitate

intelligent human behavior. Artificial intelligence systems are used to perform

complex tasks in a way that is similar to how humans solve problems (Brown, 2021). Operationally, machine learning can be defined as a type of artificial

intelligence that allows computers to automatically learn and make predictions or

decisions without being explicitly programmed.

Mute. Conceptually, mute person is someone who does not speak, either

due to an inability or lack of desire to speak. More specifically, the term "mute"

refers to someone with profound deafness from birth or early childhood, which

prevents them from developing spoken language. As a result, they are unable to

use articulate speech and are considered deaf-mute (Stöppler, 2021). Operationally, “mute” can be defined as a condition in which an individual

is unable to produce speech sounds or has significant difficulty with speech

production.

Portable Device. Conceptually, a portable device is a small, lightweight computing device that can be conveniently carried and moved from place to place with minimal effort (Roseberry, 2021). Operationally, a portable device can be defined as a device that is battery-powered, small enough to be easily carried in a pocket or bag, and has a user interface that allows for easy interaction with

the device.

Real-time. Conceptually, real-time means occurring immediately. This term is typically used as an adjective to describe a level of computer responsiveness that is immediate in a human sense of time (Beal, 2021). Operationally, real

time can be defined as a system that is able to respond to input or events

immediately or within a specified time frame, typically in fractions of a second.

Sensor. Conceptually, a sensor is a device that measures a physical quantity and converts it into a signal that can be read by another device or instrument, generating a functionally related output in the form of an electrical or optical signal, as defined by Stouffer, K., et al. (2015). Operationally, the researchers define a sensor as a device, such as a flex sensor or accelerometer, that is used to measure the physical range of hand motion and to generate a signal that can be read.

Sign languages. Conceptually, Sign languages serve as visual means of

communication, utilizing hand gestures, facial expressions, and body movements.

These languages act as a crucial link, facilitating communication with individuals who experience hearing impairments or challenges in verbal expression (Onyeije, 2021). Operationally, sign language can be defined as a language that uses a system of gestures and signs to convey meaning between individuals who are deaf, hard of hearing, or have difficulty speaking.

Verbal Communication. Conceptually, Verbal communication involves the

transmission of messages through spoken words, constituting a form of oral

communication (Kaplan, 2023). Operationally, verbal communication can be

defined as a form of communication that uses language, either spoken or signed,

to convey meaning between individuals.

Wearable Technology. Conceptually, Wearable technology comprises

electronic devices designed to be worn as accessories, integrated into clothing,

implanted in the user's body, or even applied as tattoos on the skin (Hayes, 2022). Operationally, wearable technology can be defined as a type of technology

that is integrated into clothing or accessories and provides users with additional

functionality or capabilities beyond what is available through traditional devices.

CHAPTER II

REVIEW OF RELATED LITERATURE & STUDIES

This chapter encompasses relevant literature and studies related to the

research, serving as a guide for researchers to comprehend the foundational

concepts of the study.

Review of Related Literature

A person's physical, human-built, psychological, and sociopolitical

environments interact with their health issues or limitations to cause disability.

These characteristics could be a person's living environment, access to

healthcare, opinions held by others, or social support. As per WHO (2021), these

might also encompass the accessibility and utilization of personal assistance and

assistive products.

Figure 2.1. The World Health Organization's (WHO) reports on disability

Figure 2.1, provided by the DOH (2016), shows the highest educational attainment according to disability level. We can observe that fewer people with disabilities graduated from, or even attended, college. We can also observe that more people with severe disabilities have no education or finished only elementary school; still, the gap between moderate and severe disability is not too great among those who attended or finished high school. This is due to programs like the Special Education (SPED) program and the Sign Language Interpreting Service (SLIS) that help persons with disabilities, such as those with hearing impairments.

In the Philippines, SPED stands for Special Education. The program

provides educational services to students with disabilities or special needs. The

program aims to address the unique learning requirements of these students and

help them achieve their full potential, according to a statement by the Department

of Education (2021). In contrast, the Sign Language Interpreting Service is a

service that provides interpreting services for individuals who are deaf or hard of hearing. The interpreters use Philippine Sign Language (PSL) to facilitate communication between deaf or hard-of-hearing individuals and hearing individuals. SLIS can help deaf people attend school by interpreting classroom lectures, discussions, and other activities in school settings, as cited by the National Council on Disability Affairs (2021).

In summary, while SPED and SLIS primarily address the educational and communication needs of individuals with disabilities, our study focuses on technological innovation to improve two-way communication using Filipino Sign Language. Our study could complement these existing services by providing an innovative tool for real-time communication accessibility for the deaf community, which is one of our main objectives.

Rao P (2022) and their team have developed a stand-alone sign language

translator that can be deployed on a Raspberry Pi. The translator uses the Hand-

mesh model from MediaPipe to convert dynamic fingerspells into words and form

sentences. It also uses the face-mesh model from MediaPipe to recognize

emotions. Additionally, the translator can recognize images of text embedded on

surfaces such as boards or flyers and translate them into a regional language of

the user's choice using the Google text-to-speech API.

Wearable devices, such as gloves or wristbands, are used for gesture

recognition to assist individuals with disabilities. These devices can detect and

interpret specific movements of the hands, fingers, or body, according to Almasri

et al., (2021). In combination with software that translates the gestures into text or

speech, wearable devices can assist individuals with hearing and speech impairments to communicate effectively.

Others find it challenging to communicate with deaf persons. It is possible to communicate with deaf and mute individuals through sign language, but because understanding sign language is challenging for most people, a wide gulf separates the two groups and sharing ideas and thoughts becomes nearly impossible. New technologies should arise to close this gap, which has been present for years; a bridge between the Deaf-Mute and others, an interpreter, is required. Akshatha, et al. (2021) presented such a sign language translation system, whose technique made use of an American Sign Language (ASL) dataset that had been threshold- and intensity-preprocessed.

Rao P (2022), Almasri et al. (2021), and Akshatha, et al. (2021) all discuss the communication barrier that exists between deaf and speech-impaired individuals and the hearing community due to the difficulty of understanding sign language, while our study focuses on directly addressing this barrier by providing a means for both sign language users and non-users to communicate effectively using technology.

Through sensors embedded in the gloves, such as a flex sensor and an accelerometer that detect arm motion and position, the system can detect and read the different hand gestures and movements of sign language and identify how these particular gestures relate to sign language terms and phrases. To achieve communication, the device translates sign language into speech through a speaker and into text that may be displayed on an LCD screen, as noted by Yadav, M., et al. (2019).

According to Elmahgiubi et al. (2020), they created a system known as the Sign Language Translator and Gesture Recognition system, a Data Acquisition and Control (DAC) system. It involves a smart glove that takes hand motions and turns them into readable text, which can be transmitted wirelessly to a smartphone or shown on an embedded LCD display. The Sign Language Translator begins with the design of an elastic glove equipped with 5 flex sensors, 5 contact sensors, a three-dimensional accelerometer (Ax, Ay, Az), and a three-dimensional gyroscope (Gx, Gy, Gz). The primary goal is to mount most of these components on the flexible glove to capture all hand gestures and translate them into letters, effectively conveying sign language. Their experimental results demonstrate that these gestures can be detected using cost-effective sensors that track finger positions and orientations. The current version of the system can precisely interpret 20 out of 26 letters, attaining a recognition accuracy of 96%.

Chheda et al. (2021) introduced a computer-based, vision-driven system for automatically translating text from Indian Sign Language (ISL). This system uses a built-in web camera to capture video, which is then pre-processed and subjected to gesture recognition. It focuses on translating single-handed sign language for the letters (A–Z) and digits (0–9). The process consists of several steps. First is image acquisition: video is recorded at approximately 30 frames per second, which is considered sufficient for efficient computation; a higher frame rate would increase processing time due to the larger amount of data to handle. Second is the environmental setup: the image acquisition process is affected by environmental factors such as lighting, background, and foreground objects, so a plain white background is preferred to facilitate feature extraction. Third is image pre-processing, which involves hand segmentation and subsequent morphological operations; one proposed method segments the hand using an adaptive skin color model in the YCbCr color space. The sign recognition technique then depends on the pre-processing method employed; for instance, when color thresholding and fingertip position extraction are applied, recognition is based on the finger's position within a bounding box. Lastly, to enhance efficiency, a database is created to store information for pattern matching; this can be achieved by installing an ODBC driver using the Database Explorer App provided in MATLAB, allowing real-time translation of signs. However, similar gestures and postures between signs may lead to misinterpretations, potentially reducing the system's accuracy. Achieving an efficient system for real-time sign language translation is an attainable goal, but optimizing the system poses significant challenges.
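As a rough illustration of the skin-segmentation step described above (this is not Chheda et al.'s actual code, and the color bounds are a commonly used approximation), the following OpenCV C++ fragment thresholds a frame in the YCrCb color space and cleans the mask with a morphological operation:

    #include <opencv2/opencv.hpp>

    int main() {
        // Load one frame as captured by the web camera (path is illustrative)
        cv::Mat frame = cv::imread("frame.jpg");
        if (frame.empty()) return 1;

        // Convert to YCrCb and keep pixels inside a common skin-tone range
        cv::Mat ycrcb, mask;
        cv::cvtColor(frame, ycrcb, cv::COLOR_BGR2YCrCb);
        cv::inRange(ycrcb, cv::Scalar(0, 133, 77),
                    cv::Scalar(255, 173, 127), mask);

        // Morphological opening removes small noise left by thresholding
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE,
                                                   cv::Size(5, 5));
        cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);

        cv::imwrite("hand_mask.png", mask);   // segmented hand region
        return 0;
    }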

In summary, Yadav M., et al., (2019) developed a smart glove equipped

with flex sensors and an accelerometer that can detect and read hand gestures

and movements of sign language. Elmahgiubi et al., (2020) developed a similar

smart glove system that can correctly interpret 20 out of 26 letters, reaching a

recognition accuracy of 96%. Chheda et al., (2021) developed a computer-based

vision-driven system with a built-in web camera that can translate single-handed

sign language for the letters (A–Z) and digits (0–9). The researchers will also

develop a similar concept involving a glove device system, but in this case, they

will employ a joystick to monitor finger movements. This existing literature could

potentially serve as a valuable resource for the researchers, offering insights and

inspiration to enhance the efficiency and accuracy of the glove device sign

language translator.

Many researchers have proven that a glove-based device can achieve communication through speakers and text displays, which is one-way communication; so how can two-way communication be achieved? To achieve this, Kumar, R.M., et al. (2021) developed an application that converts human speech into text input, which is further translated into a sequence of images displaying sign language. To convert input audio to text, speech recognition is used, and through Natural Language Processing algorithms the system extracts root words and removes filler words such as 'is', 'are', or 'was' that are irrelevant to sign language translation.
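A minimal C++ sketch of the filler-word removal idea (not Kumar et al.'s actual implementation; a real system would also apply stemming to extract root words) looks like this:

    #include <iostream>
    #include <sstream>
    #include <set>
    #include <string>

    int main() {
        // Illustrative filler-word list; a full system would use a
        // complete stop-word lexicon
        const std::set<std::string> fillers = {"is", "are", "was", "the", "a"};
        std::string sentence = "the weather is good", word, out;
        std::istringstream in(sentence);
        while (in >> word)
            if (!fillers.count(word)) out += word + " ";
        std::cout << out << '\n';   // prints: weather good
    }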

Glove-based systems became the most popular solution because of their mobility and low cost. Other systems require a cable connection to a computer, but with Bluetooth technology the device can be wireless. To keep the device from being bulky and heavy, the processing responsibility is given to the user's Android phone; since most people own an Android phone rather than Apple's iPhone, this advantage is utilized to reduce cost and simplify a design that should be easy to use for the hearing-impaired community, as noted by Phing, T.C., et al. (2019).

Shah et al. (2022) have developed and deployed a translator for Indian Sign Language (ISL) fingerspelling. This translator utilizes a convolutional neural network and is divided into two main modules. The first module is responsible for capturing user input through a device camera. The second module handles the preprocessing of the input images; it uses a Convolutional Neural Network (CNN) to distinguish distinct sign images and then applies an Artificial Neural Network (ANN)

to process them. The identified signs are then compared with a vast dataset of

stored gestures and their associated outputs. The corresponding words are

displayed to the user. Similarly, if the user provides voice input, the system will

display the corresponding gesture. This system acts as an interpreter, much like a

human sign language interpreter, understanding sign language and translating it

into speech for individuals with normal hearing. Three parties are involved: the

hearing-impaired individual, the system (comprising a computer), and a person

with normal hearing. The hearing-impaired person performs signs in front of the

system, which tracks and transmits sign language to the computer. The computer

then analyzes the signs and conveys their meaning in speech to the person with

normal hearing.

In summary, the system developed by Kumar, R.M., et al. (2021) is a mobile app that can convert human voice to sign language and vice versa. It used speech recognition and natural language processing algorithms to convert audio input to text and text input to sign language images. The system developed by Phing, T.C., et al. (2019) is a glove-based sign language translation system; sensors in the glove track the user's hand movements and translate them into text or speech. That system is wireless and portable, making it a convenient option for everyday use; however, it is limited to one-way communication, from sign language to text or speech. Shah et al. (2022) created a sign language translation system based on convolutional neural networks. It used a camera to capture images of the user's hands and a CNN to recognize the signs; the system can then translate the signs into text and speech, or vice versa, and has the potential to be a powerful tool for two-way communication between deaf and hearing people. The researchers will use these resources to provide an effective method for two-way communication, as they will use Bluetooth communication to establish the connection between the device and the mobile app.

In the realm of Filipino Sign Language (FSL) translation, the Neural Network

Classification model emerges as a cornerstone, playing a pivotal role in endowing

hardware with the capability to discern and comprehend hand gestures. Trained

on an amassed dataset of FSL gestures, this model adeptly discerns patterns

within sensor data, thereby enabling precise and efficient real-time recognition of

gestures. This technological innovation not only ensures the reliability of Filipino

Sign Language translation but also catalyzes enhanced communication between

deaf and hearing communities (Deep Learning Methods for Sign Language

Translation, 2023).

Our project, Senyas, builds upon existing patented technologies in sign

language translation. Lee and Kim (2019) in their US Patent No. 9,952,072

describe a glove-based system for recognizing hand gestures and finger

movements, similar to Senyas' approach. However, Senyas differentiates itself by

incorporating an accelerometer for arm angle detection, potentially improving

translation accuracy for complex signs as suggested by Choi and Park (2022) in

their US Patent No. 10,285,380. Additionally, US Patent No. 10,285,380

demonstrates the feasibility of accelerometer-based sign language recognition

using a wristband. Senyas combines the benefits of both approaches, utilizing both

glove sensors and an accelerometer for comprehensive hand and arm tracking.

These existing patents, such as Lee and Kim (2019) and Choi and Park (2022), highlight the growing interest in and potential of technology-aided sign language communication. They pave the way for further improvements, and Senyas contributes to this effort by exploring a combined-sensor approach and focusing on a wearable, user-friendly device. Building upon the foundation laid by previous inventions, Senyas aims to improve sign language translation accuracy, accessibility, and ease of use, ultimately empowering communication with the deaf and hard-of-hearing community.

The deployment of this Neural Network Classification model on our

hardware device has been a transformative endeavor, facilitated by the Edge

Impulse platform. This platform not only expedites the training process but also

streamlines deployment directly onto the device. The consequential reduction in

latency not only augments efficiency but also simplifies the developmental

intricacies associated with this cutting-edge technology (Edge Impulse, 2023). The

amalgamation of the Neural Network Classification model with the Edge Impulse

platform underscores a harmonious synergy, ensuring that the trained artificial

intelligence thrives seamlessly on our hardware device, marking a significant stride

in the realm of FSL translation and fostering improved communication channels.
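As a hedged sketch of what this on-device deployment looks like in practice: an Edge Impulse project exported as an Arduino library exposes a run_classifier() call that the firmware feeds with a window of buffered sensor samples. The header name below is a placeholder for the generated library, and the buffer is assumed to be filled elsewhere from the joystick and MPU6050 readings:

    #include <senyas_inferencing.h>   // placeholder name for the exported library

    // One window of buffered joystick + accelerometer samples
    static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

    void classifyWindow() {
      // Wrap the raw buffer in the signal structure the SDK expects
      signal_t signal;
      numpy::signal_from_buffer(features,
                                EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

      ei_impulse_result_t result = { 0 };
      if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

      // Report each trained FSL label with its confidence score
      for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%s: %.2f\n", result.classification[ix].label,
                  result.classification[ix].value);
      }
    }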

Review of Related Studies

The first national disability survey in the Philippines was carried out by the Department of Health in partnership with the Philippine Statistics Authority and the World Health Organization. The survey's objectives were to gather data on the various

aspects of disability, including impairments, activity restrictions, participation

limitations, and environmental factors that either support or impede full

participation. The goal is to satisfy the demands of the nation's disability

stakeholders for comparable, dependable, and evidence-based data that will be

the foundation for the creation of adaptable services and programs for people with

disabilities. (NDPS/MFS, 2016).

According to the DOH (2016), out of the 10,464 Filipinos interviewed, 1,256 reported having a severe disability. Of this 1,256-person sample, 21% have a hearing impairment; 7% of them find day-to-day life very problematic, while 14% find it extremely problematic. Those with severe disabilities are unable to work due to their health condition and situation.

A study conducted by K. Brady et al. (2018) demonstrated the potential for

enabling seamless communication between Deaf ASL signers and English

speakers through real-time automatic translation on mobile devices. A key

component is developing robust ASL sign recognition technology, an active

research area. However, deploying this on mobile presents additional challenges

like managing device constraints and handling variable input that may not match

training data well. Though some companies have tried developing mobile

translation apps, academic research is still needed to address the core technical

hurdles. The ultimate goal is creating an automated system that can reliably

translate between spoken English and ASL in real time on handheld devices to

enable natural communication for Deaf individuals anytime and anywhere.

A series of sign motions are used to communicate visually in American sign

language. Four basic elements make up a sign: palm orientation, position,

movement, and hand shape. Additionally, the signer's current attitude might be

conveyed by facial expression. For instance, a raised eyebrow always denotes a

query, whereas a neutral expression always denotes a fact. In addition to

objectivity and skepticism, this study also takes into account happy and negative

emotions. For recognition, a total of 40 frequently used statements representing

neutral, questioning, positive, and negative emotions were employed. These 40

phrases are taken from well-known online videos of sign language. These

sentences are performed by the signers with clear facial expressions, noted by

Yutong Gu et al., (2020).

A study by Singh et al. (2020) found that by giving Deaf-Mute people a reliable way to communicate, wearable technology and mobile apps have the potential to significantly enhance their quality of life. The accuracy and efficiency of gesture recognition technology in wearable devices have increased significantly in recent years, according to another study by Zhang et al. (2021), making assistive communication tools like Senyas more dependable and available.

The creation of wearable devices for sign language translation and recognition has drawn more attention in recent years. For instance, a study published in 2020 offered a novel method for identifying sign language gestures utilizing an inertial measurement unit (IMU) and machine learning algorithms, as noted by Li et al. (2020). Another study in 2021 investigated the use of a similar glove-based system in American Sign Language (ASL) recognition, demonstrating its potential to achieve high accuracy rates (Zhang et al., 2021).

A study from Gadekallu, T.R., et al. (2021) claims that by fostering meaningful engagement, hand gestures help promote communication among people. Hand gestures are embedded in a variety of applications, including human-robot interfaces, gaming control systems, and vision-based recognition systems. Wearable sensors are typically used by researchers to record hand movements; after that, the data are analyzed using a method for hand gesture recognition.


According to Statcounter (2023), the operating system market share in the Philippines records an impressive 88% share for the Android OS as of March 2023, which is colossal compared to the iPhone's iOS market share of only 11%. This analytics service uses a tracking code installed on more than 1.5 million websites globally, covering a wide range of activities and geographic locations. Statcounter's methodology is to record each website visitor's operating system, which identifies the type of phone of the visiting user; the code then sends the data back to the system's server and labels the phone's OS and geography. This evaluation informed the researchers' decision to go with the Android environment for the development of the device's app, though it does not preclude an iOS-based app in the future.

Babour, A., et al. (2023) designed a wearable glove device created to convert sign language into computer-displayed text and speech. A number of sensors on the glove record the hand positions, and the Arduino controller on the glove wirelessly transmits the captured data via Bluetooth to an Arduino controller connected to the computer screen. A speaker utters the term linked with the gesture if the data matches one of the motions stored in the computer. Flex sensors are mounted on the glove, and an Arduino circuit board is utilized to translate ASL to audio. Additionally, the board uses analog-to-digital converters (ADC) to transform the audio to text so that it can be seen on the LCD screen.

A study conducted by Anupama, H.S., et al. (2021) collected data using motion sensors attached to a glove while employing an automated Sign Language Interpreter (SLI). An Arduino board is utilized to obtain the sensor data. Following data acquisition, a machine-learning technique is applied to process the data; the accuracy rate attained by the method is 93%. The voice of a registered speaker is recognized by a speech recognition system, and the result is displayed. This implementation uses a computer to store language patterns.

The advancement of sensor technology is receiving a lot of attention, and this is anticipated to have a big impact on communication for Deaf-mute individuals. The Leap Motion Controller (LMC), described by Ameur, S., et al. (2020), is one example of such a gadget. Even though state-of-the-art algorithms have been developed with success, they still have limits, since they have not addressed the issues of processing sequential hand gesture data quickly or of accurately describing the discriminative representation of various classes of hand gestures. A novel Chronological Pattern Indexing (CPI) method is used to index the patterns of hand motion data that the LMC sensor collects.

Senyas is a wearable device that can translate sign language by tracking hand gestures with sensors. The sensors generate signals that are then translated into language understood by normal individuals. It does this through datasets gathered from the sensors' signals; for the device to read these datasets and predict the expected output, it must employ an algorithm with which machine learning is implemented.

Such algorithms can be used for machine learning and for increasing the accuracy of the device. A study by Johnny, S., et al. (2022) found that the k-nearest neighbors (KNN) algorithm achieved better accuracy than a Neural Network and a Decision Tree classifier; the results were collected by training their Fifth Dimension Technologies gloves with the three algorithms and comparing each output against the expected output to measure accuracy.
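To make the comparison concrete, the sketch below is a minimal k-nearest neighbors classifier in C++; the two-feature samples, gesture labels, and choice of k are invented for illustration and are not drawn from Johnny, S., et al. (2022) or from the Senyas dataset.

```cpp
// Minimal KNN gesture classifier; data and k are invented for illustration.
#include <algorithm>
#include <array>
#include <cstddef>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// One labeled training sample: two sensor features and a gesture label.
struct Sample {
  std::array<double, 2> features;
  std::string label;
};

// Classify a query point by majority vote among its k nearest samples.
std::string knnClassify(const std::vector<Sample>& train,
                        const std::array<double, 2>& query, std::size_t k) {
  // Pair each training sample with its squared Euclidean distance.
  std::vector<std::pair<double, std::size_t>> dist;
  for (std::size_t i = 0; i < train.size(); ++i) {
    double dx = train[i].features[0] - query[0];
    double dy = train[i].features[1] - query[1];
    dist.emplace_back(dx * dx + dy * dy, i);
  }
  // Move the k closest samples to the front.
  std::partial_sort(dist.begin(), dist.begin() + k, dist.end());
  // Majority vote over the k nearest labels.
  std::string best;
  int bestCount = 0;
  for (std::size_t i = 0; i < k; ++i) {
    const std::string& candidate = train[dist[i].second].label;
    int count = 0;
    for (std::size_t j = 0; j < k; ++j)
      if (train[dist[j].second].label == candidate) ++count;
    if (count > bestCount) { bestCount = count; best = candidate; }
  }
  return best;
}

int main() {
  // Invented flex/tilt feature values for two gestures.
  std::vector<Sample> train = {{{0.10, 0.20}, "hello"},
                               {{0.15, 0.25}, "hello"},
                               {{0.80, 0.90}, "thanks"},
                               {{0.85, 0.80}, "thanks"}};
  std::cout << knnClassify(train, {0.12, 0.22}, 3) << "\n";  // prints "hello"
}
```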

Kim and Seo (2023) introduced an innovative system for recognizing dynamic gestures, which relies on a combination of triaxial acceleration signals and image-based deep neural networks. Utilizing a specialized glove device, it becomes possible to capture 1D acceleration signals from each finger. These signals are then transformed into a time-frequency representation using wavelet transformation, creating an image-like format known as a scalogram.

To effectively recognize gestures that involve temporal patterns, a single 2D convolutional neural network is employed to process the scalogram. This approach eliminates the need for complex architectures like Long Short-Term Memory (LSTM) networks and recurrent neural networks (RNNs), or spatiotemporal features such as 3D CNNs.

For the purpose of classification, the scalograms are transformed into RGB images by numerically combining fifteen scalograms into a single RGB representation using various methods. The off-the-shelf EfficientNetV2 model, ranging from small to large variants, is utilized for image classification, with fine-tuning performed.

To assess the system's performance, a custom dataset of bicycle hand signals is created to represent dynamic gestures within the transformation framework. The reconstruction technique employed for scalograms is qualitatively compared against matrix representation methods. The outcome reveals that employing scalograms yields the highest accuracy, 99.37%, for recognizing dynamic gestures within the system.

An approach to gesture recognition other than a wearable device is also plausible: the machine learning and deep learning techniques used in the vision-based approach enable gesture detection through the processing of digital photos and videos. The You Only Look Once (YOLO) v3 and DarkNet-53 convolutional neural networks (CNNs) are the foundation of a real-time system for hand gesture identification proposed in a study by Kim, G.M., et al. (2019). This approach is limited, however, by environmental factors like lighting and brightness.

It is frequently necessary to confirm beyond reasonable doubt that the hearing-impaired person's communication was recognized. A system noted by Bhavadharshini, M., et al. (2021) suggests implementing real-time American Sign Language perception using Convolutional Neural Networks (CNNs) with the You Only Look Once (YOLO) algorithm. The method first performs data capture, then gesture pre-processing, and then hand movement is tracked using a combinational algorithm. YOLO evaluation handles the picture by identifying whether training snapshots are contained in the input snapshot. If the prepared image is present in the input, a bounding box is constructed with a name that localizes the intended object. The sign language dataset, aggregated from an internet source as part of the information gathering for this work, consists primarily of images of regularly used phrases in American Sign Language.

The device being developed can translate sign language into a language that can be heard or read, to achieve an almost normal conversation with anyone. According to the World Federation of the Deaf (2023), there are over 300 distinct sign languages used by Deaf people around the world. These sign languages are distinct from one another and are not mutually intelligible, so without learning another sign language, a signer of one might not be able to understand a signer of another.

It is crucial to highlight RA 11106, officially known as "An Act Declaring the Filipino Sign Language as the National Sign Language of the Filipino Deaf and the Official Sign Language of Government in All Transactions Involving the Deaf, and Mandating Its Use in Schools, Broadcast Media, and Workplaces," the nationwide legislation in the Philippines. This law designates Filipino Sign Language (FSL) as the official sign language for citizens with hearing impairment or other disabilities hindering verbal communication in formal settings. The researchers' objective is to create a device specifically tailored to translate FSL in real time, emphasizing its practical application in facilitating communication for individuals who use this sign language.

FSL is now the most widely used sign language in the Philippines; currently, 54% of sign language users utilize it, as mentioned by the study of Ong, C., et al. (2018). Their study developed the SIGMA system, which aims to ease the communication barrier experienced by persons who are speech-impaired. Using a glove-based system with flex sensors and a complementary vision system that detects hand position with respect to the body, it can translate FSL hand gestures into readable text.

A study conducted by Cabigon et al. (2021) intended to develop a machine translation system that can translate FSL into written Filipino text. The researchers collected data from 50 Deaf FSL signers and used it to create a bilingual corpus of FSL and Filipino text. The resulting machine translation system was evaluated for accuracy and achieved a BLEU score of 44.92, indicating moderate translation quality. Their paper also mentions that FSL, the sign language used by Deaf Filipinos, is accepted as the official sign language of the nation.

The researchers must also be able to train and test the device, and existing sign language literature can be used as a resource and reference. For learning and using signs in Filipino Sign Language (FSL), the Filipino Sign Language Online Dictionary is a useful tool. The dictionary provides a complete list of FSL signs, together with video examples and explanations of each sign's usage.

The FSL Online Dictionary has helped to promote FSL as a recognized language in the Philippines, according to a study by Jacinto et al. (2019). According to the study, the dictionary has improved communication between hearing and Deaf people who use FSL and raised awareness of FSL as a distinct language with its own syntax and grammar. The School of Deaf Education and Applied Studies at De La Salle-College of Saint Benilde (DLS-CSB SDEAS), which has been actively promoting FSL as a recognized language in the Philippines, maintains the FSL Online Dictionary. The videos are created by native FSL signers, and the lexicon is frequently updated with new signs.

The Senyas device can generate speech from the user's gestures. To implement this, the researchers will draw on existing studies and tools. The purpose of this component is to convert text to speech, with speakers used for the output.

Using tools like the Talkie library, one can turn text into speech. The library makes use of Linear Predictive Coding (LPC), a mathematical approach for analyzing voice signals. Formant frequencies are produced using the LPC model and then utilized to synthesize speech. Several pre-recorded speech samples are available in the Talkie library and can be played back by the microcontroller.
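As a minimal illustration of how such LPC playback is typically driven from a sketch, the following example assumes the open-source Talkie Arduino library and its bundled Vocab_US_Large word set are installed and that the board is one the library supports; the word constants shown come from that bundled vocabulary, not from this project.

```cpp
// Minimal Talkie sketch: plays two pre-encoded LPC words at startup.
// Assumes the Talkie library and a board it supports.
#include <Talkie.h>
#include <Vocab_US_Large.h>  // bundled LPC-encoded word constants

Talkie voice;  // drives the library's default PWM audio output pin

void setup() {
  voice.say(sp2_DANGER);  // speak the LPC sample for "danger"
  voice.say(sp2_ALERT);   // speak the LPC sample for "alert"
}

void loop() {
  // Nothing to do; the speech was queued once in setup().
}
```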

A study by B. G. Kadam and S. S. Tandale (2018) used the Talkie library to implement a text-to-speech system for the English language. The authors found that the Talkie library was able to generate speech output that was intelligible and of high quality, and that the library was easy to use and integrate with other microcontroller-based systems.

For two-way communication to be possible, a system for recognizing speech and converting it into text must be included in the device. This will allow the Deaf-Mute user to read the text on a display. Such systems can be hardware-demanding, so the researchers will explore alternatives such as cloud-based systems.

Based on information provided by Microsoft (2023), speech-to-text, text-to-speech, and speech translation are just a few of the speech-related services offered by the cloud-based platform Azure Speech Services. The speech-to-text service converts audio input into text using a deep neural network (DNN) acoustic model and a statistical language model (SLM). The DNN acoustic model is trained on a large volume of voice data to identify speech patterns and features, while the SLM is used to predict the most likely word order given the audio input. To increase accuracy, Azure Speech Services additionally makes use of language-specific models that have been developed using data from particular languages or dialects.

Another cloud-based service, Google Cloud Speech-to-Text, can recognize speech in audio and video files. The service uses deep neural networks (DNNs) to convert audio input into text; the DNNs can distinguish between different languages and accents since they were trained on a large amount of voice data. Google Cloud Speech-to-Text also employs speaker diarization, a method that recognizes individual speakers in a discussion and attributes their words to them individually (Google, 2023).
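As a rough sketch of how a device might call such a service, the example below posts a short audio clip to Google Cloud Speech-to-Text's REST endpoint from an ESP32; the Wi-Fi credentials, API key, and base64 audio payload are placeholders, and the snippet is illustrative rather than the project's actual Kodular-based integration.

```cpp
// Illustrative ESP32 sketch: send audio to Google Cloud Speech-to-Text.
// SSID, password, API key, and the base64 audio content are placeholders.
#include <WiFi.h>
#include <WiFiClientSecure.h>
#include <HTTPClient.h>

const char* kSsid = "YOUR_WIFI_SSID";      // placeholder
const char* kPass = "YOUR_WIFI_PASSWORD";  // placeholder
const char* kUrl =
    "https://speech.googleapis.com/v1/speech:recognize?key=YOUR_API_KEY";

void setup() {
  Serial.begin(115200);
  WiFi.begin(kSsid, kPass);
  while (WiFi.status() != WL_CONNECTED) delay(200);  // wait for Wi-Fi

  WiFiClientSecure client;
  client.setInsecure();  // demo only: skip TLS certificate validation

  HTTPClient http;
  http.begin(client, kUrl);
  http.addHeader("Content-Type", "application/json");

  // "fil-PH" requests Filipino transcription; the audio field would
  // carry base64-encoded LINEAR16 samples captured from a microphone.
  String body =
      "{\"config\":{\"encoding\":\"LINEAR16\",\"sampleRateHertz\":16000,"
      "\"languageCode\":\"fil-PH\"},"
      "\"audio\":{\"content\":\"BASE64_AUDIO_HERE\"}}";

  int status = http.POST(body);
  if (status > 0) Serial.println(http.getString());  // JSON transcript
  http.end();
}

void loop() {}
```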

These services may be used for speech-to-text translation that can enable two-way communication. It is also important to note that both services support the Filipino language.

The gathered studies and literature serve as an outline for conducting this research, suggest the methods and instruments to be employed, and support the significance of this research. Their findings and conclusions will be used as a guide in developing the Senyas device and application and in establishing its ability to become a communication medium between Deaf-Mute persons and normal individuals.

CHAPTER III

METHODOLOGY

In this chapter, the researchers discuss the methods used to conduct this research through developing the device. Senyas is a device in development that will act as a medium and facilitate communication between Deaf-Mute people and normal individuals; this section explains the research design and the approach to assessing its accuracy and effectiveness. The chapter discusses the model the researchers will use to develop the device and outlines the methods to be used and the research instruments for testing and assessing the device's effectiveness.

Research Design

According to Alexandra (2023), Software Development Life Cycle (SDLC) methodologies are frameworks or techniques that guide the whole software or hardware development process. These approaches offer a systematic way to control the overall project development phases, ensuring that the process is efficient and effective and results in high-quality output. There are several SDLC methodologies, each with a unique set of guidelines and procedures; one of them is the Waterfall model.

According to Othman et al. (2020), the Waterfall model is a step-by-step approach to project development that progresses sequentially, much like a waterfall flowing downward through different phases. It begins with analyzing current systems, followed by system design to specify hardware and system requirements and define the overall system architecture. The next phase is implementation or coding, where design specifications are coded in a specified programming language. The fourth phase is integration and testing, where all the units developed during implementation are combined into a single system. The fifth phase is system deployment, and the final phase is maintenance, which involves making modifications to improve system performance.

Figure 3.1. Waterfall Model

For our research study, the Waterfall model is a suitable research design since it includes the various stages that we need to go through: data gathering, design, implementation, testing, deployment, and maintenance. During the data gathering stage, we will identify the specific features and functionalities that the system must possess to effectively translate sign language. The system design stage involves creating detailed specifications for the system, while the implementation stage involves constructing and programming it. The testing stage will verify the wearable device's accuracy and reliability in translating sign language. The deployment phase will make the system available to end users and ensure it works correctly, while the maintenance phase will involve continuous monitoring and updating, including bug fixes, new feature additions, and performance improvements; maintenance will continue for the life of the device. Therefore, we have decided to use the Waterfall model, as it aligns with the phases of our research study.

1. Requirement Analysis. Requirements analysis is the initial phase in the development process, involving the identification and understanding of the objectives and functions of the device and app. This crucial step aligns these objectives with the necessary specifications for constructing the device and app. Through requirements analysis, the project team defines the essential features, performance criteria, and constraints that the device must meet. This process ensures a comprehensive understanding of what the device and app need to achieve and guides subsequent stages in the development lifecycle.

1.1 Gantt Chart

A Gantt chart is a visual representation of a project schedule that employs

horizontal bars to illustrate the commencement and completion dates of tasks or

activities. It offers an extensive overview of the project's timeline, tasks,

dependencies, and milestones. In the context of our study, this aligns with the

Waterfall model, as outlined in our research design, commencing from system

conceptualization and progressing sequentially through each stage until the final

system drafts.

Figure 3.2.1. Senyas Study Gantt Chart in monthly deliverables

2. System Design. This phase follows the planning stage to create an outline of the technical design requirements, such as the hardware design, system circuit design, hardware and software descriptions, software design, system flowchart, and interface design.

2.1 Hardware Design

The design of the device should focus on functionality and portability. The

device should be able to read hand gestures in real-time by placing the Joystick

Sensor on top of the fingers so that it can register the bend of the fingers

accurately. An accelerometer is placed on top of the hand and tracks the distance

between the hand and the body, as well as hand motion, and the signal generated

by these sensors will be processed by the microcontroller. The microcontroller is

supplied by a LiPo battery, which is enough to supply the overall systems.

Figure 3.3.1. Block Diagram

This block diagram shows the system designed for gesture recognition and communication between a deaf-mute individual and a normal person with typical speech and hearing capabilities. The joystick sensors and the accelerometer are both integrated into the wearable device worn on the hand. The joystick sensors quantify finger bending, while the accelerometer gauges hand acceleration. The ESP-WROOM-32 digitizes the signals from the joystick sensors and the accelerometer.

The ESP32's Bluetooth module transmits recognized gestures to a smartphone for further interaction between a normal person and a deaf-mute person. The ESP-WROOM-32 communicates wirelessly with the smartphone using this Bluetooth module, which exchanges data with the Android app. On the Android side, a dedicated app receives the gesture data from the ESP-WROOM-32 and utilizes a pre-trained gesture recognition model to translate the received signals into meaningful gestures. This step ensures a mutual understanding of gestures between the deaf-mute individual and the normal person using the smartphone.

The Android application provides a communication interface that allows both parties to engage in a conversation using gestures. Recognized gestures from the deaf-mute individual are translated into text and then into speech for the person using the smartphone. To facilitate real-time communication, the Android app converts the received text messages into synthesized speech, enabling the person using the smartphone to hear the messages and allowing bidirectional communication. The synthesized speech is played through a speaker integrated into the smartphone and device, ensuring that the person using the smartphone can listen to the responses generated from the recognized gestures.
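To give a concrete sense of this wireless link, the short sketch below uses the ESP32 Arduino core's BluetoothSerial library to advertise the board under a device name and stream a recognized gesture label to the paired phone; the device name and the hard-coded label are illustrative placeholders, not the project's final firmware.

```cpp
// Illustrative ESP32 sketch: stream a gesture label over Bluetooth SPP.
// The device name and the fixed label below are placeholders.
#include <BluetoothSerial.h>

BluetoothSerial SerialBT;

void setup() {
  Serial.begin(115200);
  SerialBT.begin("Senyas");  // Bluetooth name the phone pairs with
}

void loop() {
  // In the real firmware this label would come from gesture recognition.
  SerialBT.println("SALAMAT");  // send one recognized gesture label
  delay(1000);
}
```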

2.2 System Circuit Design

The researchers will now discuss the circuit design of the system to be implemented. The design indicates the hardware components to be used, and we elaborate on their uses and on how data flows through the system as a device for translating sign language in real time.

Figure 3.3.2. Schematic Design

Figure 3.3.2 shows the schematic diagram of the device. The ESP-WROOM-32 is the microcontroller of the device. The joystick sensors and accelerometer are integrated into the wearable glove worn on the hand; the joystick sensors quantify finger bending, while the MPU6050 gauges hand acceleration. The ESP-WROOM-32 processes the MPU6050 and joystick data to determine the sign language gesture. Once it recognizes the sign, it sends the result over Bluetooth so the app can render it as text-to-speech. A boost converter module steps the battery's 3.7 V supply up to 7 V, ensuring a sufficient and stable power supply for the ESP-WROOM-32, MPU6050, and joysticks, while a lithium battery serves as the primary power source.

2.3 Hardware Description

This development section describes the hardware components of the

system in terms of how they contribute to achieving the overall system objectives.

The hardware system is made up of various modules including the following:

The ESP32 microcontroller is a versatile and powerful device that is widely used in various IoT and automation applications. It offers a broad range of peripherals, including built-in Wi-Fi and Bluetooth connectivity for projects requiring wireless communication. Using the ESP32 microcontroller involves integrating sensors to capture sign language gestures, processing the data, and translating it into text or spoken language. The resulting text or speech is displayed on a screen or transmitted via a speaker.

Figure 3.3.3. ESP32 30-Pin Development Board

The MPU6050 is a sensor module combining a three-axis gyroscope and a three-axis accelerometer; the gyroscope measures the rate of rotation, or angular velocity, around a particular axis of an object. In the sign language translation device, the gyroscope is used to detect and measure the rotational movements of the hand and fingers, providing information about the gestures associated with sign language. By combining the data from the gyroscope with other sensors and algorithms, the sign language translation device can accurately interpret and translate sign language gestures into meaningful output, contributing to enhanced communication for individuals who use sign language.

Figure 3.3.4. MPU6050

The TP4056 is a commonly used charger IC (integrated circuit) for lithium-ion batteries. It is often used in applications where a single-cell lithium-ion or lithium-polymer battery needs to be charged. In the sign translation device, the TP4056 charges the device's battery, ensuring it remains powered for extended periods. Using the TP4056 ensures a safe and controlled charging process for the battery, helping to prolong battery life and maintain reliable operation.

Figure 3.3.5. TP4056 Battery Charger

A 3D analog joystick is an input device commonly used for controlling movement and direction in various electronic devices. In a sign translation device, 3D analog joysticks can be employed to capture hand movements, gestures, or other user inputs. The joystick input is integrated into the overall sign language translation system, and the captured gestures are then sent to a translation algorithm that converts them into text or spoken language. Using the 3D analog joystick as an input device captures the user's hand movements and gestures, providing an intuitive and interactive way to interact with the device.

Figure 3.3.6. 3D Analog Joystick
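A minimal reading loop for one such joystick might look like the snippet below, which samples the two axes on ESP32 ADC pins; the GPIO numbers are placeholder wiring chosen for illustration, not the project's actual schematic.

```cpp
// Illustrative ESP32 snippet: sample one finger's joystick axes.
// GPIO 34/35 are placeholder ADC pins, not the project's actual wiring.
const int kJoyXPin = 34;
const int kJoyYPin = 35;

void setup() {
  Serial.begin(115200);
}

void loop() {
  // The ESP32 ADC returns 0..4095; the rest position sits near mid-scale,
  // so deviation from ~2048 is a proxy for how far the finger is bent.
  int x = analogRead(kJoyXPin);
  int y = analogRead(kJoyYPin);
  Serial.printf("bend proxy: x=%d y=%d\n", x, y);
  delay(100);
}
```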

The boost converter is a type of electronic module used in electrical circuits for power conversion. It raises the input voltage to a greater level, from 3.7 V to 7 V in this design. This module is frequently utilized in applications where a constant, higher voltage is needed, as in renewable energy systems or battery-powered gadgets. The boost converter provides a dependable power source for electronic components and a stable supply to power our microcontroller and sensors.

Figure 3.3.7. Boost Converter Module
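For reference, under the ideal (lossless) boost relation Vout = Vin / (1 − D), the switching duty cycle implied by these voltages is D = 1 − Vin/Vout = 1 − 3.7/7 ≈ 0.47, meaning the converter's switch conducts for roughly 47% of each switching cycle.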

Lithium-ion polymer (Li-Po) batteries are a type of rechargeable battery that uses a solid polymer electrolyte to exchange ions between the positive and negative electrodes. These batteries are commonly used in portable electronic devices, including sign translation devices, because of their high energy density, lightweight nature, and compact design. In the sign translation device, a lithium-ion polymer battery provides a reliable and efficient power source. It offers the advantages of high energy density, lightweight design, and flexibility, contributing to the overall performance and portability of the device. Proper care, handling, and charging practices are essential to maximize the lifespan and safety of Li-Po batteries.

Figure 3.3.8. Lithium-ion Polymer (Li-Po) Battery

A printed circuit board (PCB) is a crucial component in the construction of electronic devices, including sign translation devices. The PCB serves as a platform to connect and mount various electronic components, facilitating the flow of electrical signals between them. Acting as the central nervous system of the sign translation device, it provides a structured and organized platform for the assembly of electronic circuits, contributing to the functionality and performance of the device.

Figure 3.3.9. PCB Board

A rocker switch is a type of switch that rocks back and forth to control the flow of electrical current. In the sign language translation device, a rocker switch serves as the means by which the user powers the device on and off. The rocker switch typically consists of a lever that can be tilted or rocked in two directions, up and down or left and right. Such switches are commonly used to turn lights on and off, activate motors, control appliances, and perform various other functions in electrical circuits.

Figure 3.3.10. Rocker Switch

2.4 Software Design

The design of the software is discussed in this section; the proposed software will work synchronously with the device in real time. The device will translate sign language in real time, and each word and phrase has a unique gesture. To process this large and varied data, machine learning will be adopted to develop an algorithm that can learn from it. The amount of data affects the required processing power and memory; to address this, having a mobile device carry that task makes the glove device lighter and cheaper. The research seeks to highlight such technology and how it can be implemented to create a device that makes one's life better.

Figure 3.3.11. System Architecture

Application. This will be the front end of the system: the UI of the software used to navigate and control the device. The interface will display the translated speech-to-text from the normal person for the Deaf-Mute person to read, and it will also display the text that is converted to speech.

Data Communication. The device will track the hand gestures through sensors and send the data to the smartphone via Bluetooth. This makes translation real-time while remaining wireless. The phone will then process the data and send the result back to the device to produce an output similar to speech.

Data Processing. The data will be processed on a companion application developed to collect the data from the device and, using machine learning, identify and translate it into a comprehensible language, which is then sent back to the device to produce speech through the speakers. The application will also process speech-to-text using Kodular speech recognition tools such as Google Cloud Speech-to-Text; the recognized speech is then processed and displayed as text for the sign language user.

2.5 System Flowchart

The Senyas system's flowchart, also known as a data flow diagram,

visualizes the user interaction with the interface through Bluetooth on their Android

device for two-way communication.

Given below are diagrams of the multiple pages and components within the

Senyas system application detailing the data flow that occurs on each page.

Figure 3.3.12. System Flow of Kodular Application When Opening

The first step is displaying the app splash screen, which appears when you open the app. After the splash screen, the app navigates to the home page. The main page is the main screen of the app, where users can access the app's features and functionality.

Figure 3.3.13. Continue Process Side Menu & Disconnect Button

Figure 3.3.13 shows that the side menu can be opened and closed by clicking or by swiping from the left or right edge, respectively. The disconnect button checks whether the Bluetooth connection has been disconnected; if it has, the app displays an alert that says "Disconnected".

Figure 3.3.14. Navigation Flow of Main Page

The main page of the app contains several functionalities, including a side menu, a disconnect button, a 3000-millisecond delay, an SDK version check, a clear button, and a microphone button. On opening the main page, the app automatically checks the SDK version; if the SDK version is greater than or equal to 31 (Android 12 or higher), the app first asks for the Nearby Devices permission before proceeding. The clear button clears the text generated by the text-to-speech or speech-to-text feature.

Figure 3.3.15. Continue Process Checking Bluetooth Connection

If the connection has been lost, the app displays an alert that says "Disconnected"; otherwise, the check simply ends. After the 3000-millisecond delay, the app checks whether Bluetooth is enabled on the device. If it is not, the app notifies the user and redirects them to the Bluetooth settings page on their device to enable it. Otherwise, it checks whether Bluetooth is connected to a device. If it is, it displays an alert that says "Bluetooth Connected"; the user can then perform text-to-speech by pressing the gesture button on the sign language device, and the corresponding text is displayed and spoken. If it is not, it displays an alert that says "Bluetooth not connected".

Figure 3.3.16. Continue Process Checking Wi-fi Connection

Figure 3.3.16 shows how the microphone button checks the Wi-Fi connection. If Wi-Fi is turned on, the user can perform speech-to-text by long-pressing the microphone button; the app automatically stops the speech-to-text feature when the user stops speaking. If Wi-Fi is turned off, the app notifies the user and redirects them to the Wi-Fi settings page on their device to enable it.

Figure 3.3.17. System Flow of Bluetooth Notification Pages When Connection Lost

Figure 3.3.17 shows the system flow of the Bluetooth notification pages when the connection is lost. If Bluetooth is not turned on, a notifier with a "Continue" button is displayed. Clicking the "Continue" button opens the Bluetooth settings menu on the mobile phone, where Bluetooth can be enabled. Select and pair the Senyas Device from the Android Bluetooth settings. If the mobile phone successfully connects to the glove device, the message "Bluetooth Connected" is displayed; if the connection is not successful, try reconnecting.

Figure 3.3.18. System Flow of Wi-Fi Notification Pages When Connection Lost

Figure 3.3.18 shows the system flow of the Wi-Fi notification pages when the connection is lost. If Wi-Fi is not turned on, a notifier with a "Continue" button is displayed. Clicking the "Continue" button opens the Wi-Fi settings menu on the mobile phone, where the Wi-Fi connection can be enabled. After that, the user can utilize the features of the app that require a Wi-Fi connection.

Figure 3.3.19. Hidden Menu Containing Page References and Functions

Like many contemporary applications, the Senyas system includes a concealed menu accessible either through the menu button or by swiping right from the edge of the screen. This menu provides access to a range of page options: the 'Help' page, for navigating and utilizing the app more effectively; the 'About' page, containing a short description of the system; the user manual, describing all processes; and finally, a toggle button to switch between Light and Dark mode, depending on the user's preference.

Figure 3.3.20. Help Page system flow from the hidden menu

Figure 3.3.20 shows the system flow of the Help page from the hidden side menu. Clicking the Help page from the side menu opens a new screen with three general questions or details about the app. Each question has the same functionality but different content; clicking on a question opens another screen with its answer. Terms and Conditions opens a new screen with the legal terms and conditions for using the app.

2.6 Software Description

The mobile application serves as the main place for translation and

communication.

The study of Tan et al. (2019) provides valuable support for the inclusion of a mobile application in our thesis project, as it demonstrates the effectiveness of using a mobile app alongside a wearable device for sign language translation. The authors successfully developed a low-cost, user-friendly system that translates sign language gestures into text using an Android app. This approach shares similar goals with our project, aiming to enhance communication accessibility for the deaf-mute community.

The application receives data from the glove device and utilizes an algorithm to recognize the corresponding FSL signs, ensuring accurate recognition of a wide range of gestures. The application also converts spoken language into text, enabling hearing individuals to communicate with deaf users through the Senyas system; this feature allows deaf users to understand the translated speech of hearing individuals. Recognized FSL signs are translated into text or words, which are displayed on the mobile screen. The mobile application features an interface that is easy to navigate and provides a clear visual user interface, ensuring a great user experience.

2.7 Interface Design

The interface design of the Senyas mobile application was developed using Kodular, a user-friendly visual programming platform that facilitates rapid app development without extensive coding knowledge. Kodular's drag-and-drop interface and block-based programming approach made it an ideal choice for this project, enabling the researchers to focus on the core functionalities of the application without getting into complex programming. Kodular's efficiency and accessibility allowed the team to create, design, and bring the Senyas application to life, effectively bridging the communication gap between deaf and hearing individuals.

The mobile application includes a splash screen, animated transitions between pages, and a main page. The main page contains two main sections, the app bar and the content section. On the right side of the app bar, tapping the menu icon reveals the hidden side menu (navigation bar). On the left side, the Bluetooth icon serves as an indicator of whether you are connected to the glove device; you can also connect the mobile application to the glove device through this Bluetooth icon.

(Figure callouts: side menu; Bluetooth icon; speech-to-text area, where text from a long press of the microphone button is displayed; text-to-speech area, which displays text based on the signed gestures; long-press button for speech-to-text; clear button to clear the text.)

Figure 3.3.21. Main Page of the App

Before reaching the main page, the user first sees the splash screen and is then automatically redirected to the main page. Figure 3.3.21 shows that the main page of the app features a simple design with two main functionalities in the content section: Speech-to-Text and Text-to-Speech. The Speech-to-Text section allows users to convert their speech into text, while the Text-to-Speech section converts text into speech.

To use the Speech-to-Text functionality, users simply tap the microphone icon and speak into their device; the app then transcribes the user's speech into text, which is displayed in the text box.

To use the Text-to-Speech functionality, users simply type, or the text generated from the corresponding gesture on the glove device is displayed; the app then reads the text aloud.

In addition to the Speech-to-Text and Text-to-Speech functionalities, the app also features a clear button, which clears the text box. Overall, the main page of the mobile application is a straightforward and user-friendly interface that provides users with the essential tools for converting speech to text and text to speech.

(Figure callout: tap "Continue" to open the Android Bluetooth settings.)

Figure 3.3.22. Bluetooth Permission Request

This screen shows the app asking the user to enable Bluetooth connectivity so it can pair with the glove translation device. The app requires the Bluetooth permission in order to find and connect with the glove via Bluetooth wireless technology. By allowing the Bluetooth permission, the user grants the app access to turn on the phone's Bluetooth and to scan for and connect to nearby Bluetooth devices, which is essential for the app to work properly with the glove device.

(Figure callout: only Android version 12 or higher will show this notification.)

Figure 3.3.23. Ask for Permission

Figure 3.3.23 shows the permission request prompt for enabling Bluetooth that appears on Android 12 and higher. On the latest Android versions, users need to explicitly allow the required permissions for apps to access certain features like Bluetooth. By clicking "Allow", the app is granted permission to turn on Bluetooth and utilize its full functionality. This extra permission step helps improve privacy and security in newer versions of Android.

(Figure callout: the name of the device; tap to pair it.)

Figure 3.3.24. Paired Device

Figure 3.3.24 shows that before using the glove device with the Senyas app, you need to pair it with your phone via Bluetooth. This allows the glove and your phone to communicate. To pair the glove device:

1. Open your phone's Bluetooth settings
2. Turn Bluetooth on
3. Scan for nearby Bluetooth devices
4. Select the Senyas device name
5. Tap "Pair" or enter the pairing code if prompted

After pairing the glove device, you can connect it to the Senyas app.

(Figure callout: tap "Continue" to open the Android Wi-Fi settings.)

Figure 3.3.25. Wi-fi Notification

This message pops up when the user tries to use the speech-to-text feature by long-pressing the mic icon in the Senyas app without an internet connection. The user needs to connect to the internet before they can use speech-to-text.

To connect to the internet, the user can tap the "Continue" button in the message box, which takes them to the Wi-Fi settings of their mobile phone. Once connected to the internet, they can return to the Senyas app and use speech-to-text; the generated text then displays in the text box.

An internet connection is necessary for speech-to-text, even though we utilize a speech recognizer, because the speech-to-text feature in the Senyas app uses a cloud-based speech recognition service. According to Google Cloud (2023), it works by sending the voice recording from the app to Google's servers to transcribe it into text; once the transcription is complete, the server sends the text back to the app.

(Figure callout: tap the device name to connect automatically via Bluetooth.)

Figure 3.3.26. Connecting the Senyas App

Figure 3.3.26 shows that after pairing the glove device with your mobile phone in the Bluetooth settings, you can go to this section to connect the Senyas app to the glove device. To connect, simply tap the Senyas device and it will connect automatically.

(Figure callouts: disconnect button; tap the gesture button, then sign with the device.)

Figure 3.3.27. Send text to Bluetooth Device

Figure 3.3.27 shows the gesture button, which serves as the control mechanism for activating the FSL translation process. Once the glove device and mobile application are connected, the user can activate translation by tapping the button. This triggers the glove device to begin capturing and interpreting hand movements and translating them into corresponding FSL signs. As the user performs various signs, the translated text is displayed in a text box. By requiring a button press to initiate translation, the system avoids continuous processing of sensor data and reduces background noise, ultimately improving the accuracy and efficiency of the system.

(Figure callouts: Help page; About page; User Manual page; switch for Dark or Light mode.)

Figure 3.3.28. Side Navigation Menu

The app features a hidden side menu for navigation, accessed by tapping the menu icon in the top left of the main screen. This vertical menu slides out from the left edge and provides access to the following features:

1. Help - This option provides user guides and resources for using the app correctly, such as how to connect the device to the app. It offers basic questions, tutorials, contact information for support, and the terms and conditions.

2. About - This section presents information about the Senyas project, its purpose, and its creators.

3. User Manual - A comprehensive guide for using all app functions.

4. Theme Switcher - Toggle between light and dark mode color schemes.

(Figure callouts: Questions & Answers; Terms and Conditions.)

Figure 3.3.29. Help Page

The Help page provides an overview of the Senyas app's features and functionality, along with instructions on how to use them. It is divided into two main sections:

1. Basic Questions

This section answers common questions about the Senyas app, such as:

• Do I need to connect the device to use this app?

• How can I use Senyas to communicate with someone who is deaf?

• How can I get support?

2. Terms of Use

This section provides a link to the Senyas app's terms of use, which

outline the legal agreement between users and the creators of the app.

(a) (b) (c)

Figure 3.3.30. Basic Questions

In Figure 3.3.30, the Question 1 page for the Senyas app describes the key differences between using the app with a connected device and without one. The main difference is that when the device is connected, the app can receive the user's hand gestures and translate them into text, allowing two-way communication between the user and the app. When the device is not connected, the user cannot use text-to-speech; Bluetooth must be connected for text-to-speech to work.

The Question 2 page contains instructions on how to use the Senyas app to translate Filipino Sign Language (FSL) into text and speech, and vice versa. This section also provides the steps involved in using the Senyas app, from installing it on your mobile phone to communicating with a deaf-mute person.

The Question 3 page contains the contact information through which you can reach the creators with questions or inquiries about the app.

Figure 3.3.31. Terms and Conditions

The Terms and Conditions page for the Senyas app, a mobile app that translates Filipino Sign Language (FSL) into text and speech and vice versa, outlines the legal agreement between users and the app's developers. It covers a variety of topics, including:

• Acceptance of Terms

• Prohibited Conduct

• Intellectual Property

• Termination

• Limitation of Liability

• Entire Agreement

• Changes to Terms

• Contact Information

By using the Senyas app, users agree to the terms and conditions. It is

important for users to carefully read and understand the terms and conditions

before using the app.

Figure 3.3.32. About Page

Figure 3.3.32 shows the About page, which provides users with information about the app and its purpose. The page begins with a brief overview of the Senyas app: the Senyas mobile application serves as part of the solution to the communication barrier between deaf-mute individuals and hearing individuals, translating sign language into readable text and hearable speech.

The purpose of the Senyas app is to "bridge the communication gap between deaf and hearing individuals by providing a real-time, two-way translation solution". The About page states that the app is designed to be "accessible to everyone, regardless of their technical expertise".

Figure 3.3.33. User Manual Page

Figure 3.3.33 shows the User Manual page, which provides users with detailed instructions on how to use the Senyas app to translate Filipino Sign Language (FSL) into text and speech, and vice versa. It covers a variety of topics, including:

• System Overview

• To Set Up Senyas

• To Connect the Device via Bluetooth

• To Do Speech-to-Text

• To Do Text-to-Speech

The page is divided into several sections, each of which focuses on a specific aspect of using the app. Each section includes clear and concise instructions, providing users with comprehensive and informative guidance on how to use the app to its full potential.

(a) (b)

(c) (d)

Figure 3.3.34. Senyas Prototype Design

Our Senyas prototype is designed to be a working system that can recognize and translate sign language movements through the use of hardware and software components. The selection of components is based on their ability to translate accurately, analyze data quickly, and record intricate actions. To guarantee correct gesture recognition in sign language, calibration procedures are put into place. During development, issues such as improving the accuracy of gesture recognition and ensuring real-time translation are resolved. A dataset of sign language gestures is used to validate the gesture recognition system's accuracy.

3. Coding and Implementation. In this phase, following the planning stage, we employ prototyping and application development methodologies. This involves constructing a working prototype based on the technical design requirements, and covers the product description, product evaluation, algorithm training, product development, and cost-benefit analysis.

3.1 Product Description

SENYAS, an innovative Filipino Sign Language (FSL) translation device and system, bridges the communication gap between deaf and hearing individuals through real-time, two-way translation. A glove device equipped with finger-tracking sensors (specifically, the joysticks) and an arm-angle accelerometer captures hand movements and orientation, translating FSL into text that is displayed on a mobile application. Simultaneously, the application can convert spoken language into text and vice versa, enabling two-way communication between deaf and hearing individuals.

3.2 Product Evaluation

Descriptive survey. Surveys are useful in our research study because they help us answer the research questions. Before creating a survey, the researchers must first determine the research questions to ensure that the questions asked in the survey are relevant. In our study, we opted to use a descriptive survey to collect data on user perceptions of and satisfaction with the Senyas device and system. We use this type of survey because it gathers feedback on performance, usability, comfort, and the overall user experience with Senyas across different aspects such as device design, mobile app design, and functionality. The survey results will be analyzed using the statistical analysis techniques outlined in the "Statistical Treatment of Data" section. After analyzing the survey data, the researchers will interpret the results and draw conclusions based on the research questions. The survey data will support our findings and enable us to make recommendations for future research or product development.

Figure 3.4.1. Test Joystick & Gyroscope Value

The evaluation of the joystick and gyroscope is the main topic of this section, as these elements are essential to user engagement and system responsiveness. Data is carefully gathered throughout the joystick and gyroscope testing processes, comprising raw sensor data, system reactions, and any anomalies or inconsistencies found during testing. After the data is gathered, it is evaluated to find trends, patterns, and possible areas for improvement.

3.3 Algorithm Training

Figure 3.4.2. Collect Dataset

In Figure 3.4.2, the dataset is gathered, since it is essential for training and assessing the machine learning models. The dataset is created by extracting relevant features from the device; these features may include hand movements and other cues associated with sign language gestures. Each entry in the dataset corresponds to a specific sign language gesture, labeled with the corresponding translation or meaning. The device's sensors are instrumental in capturing this data, and specialized sensors may be employed to capture additional information, such as hand or finger positions. The validity and dependability of the research findings depend critically on choosing a suitable dataset. This section provides details about the sources, methods, and ethical issues related to data collection.
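One common way to capture such labeled sensor streams for Edge Impulse is to print samples over serial at a fixed rate and ingest them with the edge-impulse-data-forwarder CLI; the sketch below is a minimal example of that pattern, with placeholder pins and a 50 Hz rate chosen for illustration rather than taken from the project's configuration.

```cpp
// Illustrative ESP32 sketch: stream sensor samples as CSV over serial
// so the Edge Impulse data forwarder can record them as a dataset.
// Pin numbers and the 50 Hz sample rate are illustrative choices.
#include <Wire.h>

const int kJoyPins[2] = {34, 35};  // placeholder joystick ADC pins
const uint8_t kMpuAddr = 0x68;     // default MPU6050 I2C address

void setup() {
  Serial.begin(115200);
  Wire.begin();
  // Wake the MPU6050 by clearing the sleep bit in PWR_MGMT_1 (0x6B).
  Wire.beginTransmission(kMpuAddr);
  Wire.write(0x6B);
  Wire.write(0);
  Wire.endTransmission();
}

void loop() {
  // Read 6 bytes of raw accelerometer data starting at ACCEL_XOUT_H (0x3B).
  Wire.beginTransmission(kMpuAddr);
  Wire.write(0x3B);
  Wire.endTransmission(false);
  Wire.requestFrom(kMpuAddr, (uint8_t)6);
  int16_t ax = (Wire.read() << 8) | Wire.read();
  int16_t ay = (Wire.read() << 8) | Wire.read();
  int16_t az = (Wire.read() << 8) | Wire.read();

  // One CSV line per sample: accelerometer axes, then joystick values.
  Serial.printf("%d,%d,%d,%d,%d\n", ax, ay, az,
                analogRead(kJoyPins[0]), analogRead(kJoyPins[1]));
  delay(20);  // ~50 Hz sampling
}
```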

3.4 Product Development

Figure 3.4.3. The Kodular Platform Home Page

Kodular. The Senyas app was made with Kodular, which allows you to create Android apps easily with a blocks-type editor without needing to dig deeply into code; with its Material Design UI, apps stand out (Kodular, 2023). It proved to be the ideal platform for developing our mobile application due to its drag-and-drop interface, which significantly simplified the app creation process and eliminated the need for extensive coding knowledge. This accessibility enabled us to deliver a reliable mobile application with limited programming expertise. Kodular's visual programming approach made app development more efficient, and its free availability made it an accessible and cost-effective platform for our project. Overall, Kodular proved to be an invaluable tool for creating our mobile application, enabling us to develop a functional and user-friendly app.

Figure 3.4.4. The Kodular Platform Environment

Kodular. Kodular is an innovative no-code platform optimized for building native Android apps. It employs a visual programming interface and drag-and-drop tools to convert UI designs directly into functional code (Kodular, 2023). Kodular version 1.5C.0 Fenix specifically leverages advanced UI rendering, integration templates, and other features to streamline app development. Using this rapid no-code platform, we efficiently constructed a feature-rich mobile application for translating Filipino Sign Language. This demonstrates the potential of Kodular as a promising solution for quickly building robust mobile apps under modern software practices.

Figure 3.4.5. Kodular Companion

Kodular Companion. Based on Kodular (2023), the Kodular Companion is a mobile app that allows developers to test their Kodular apps on their Android devices. The app connects to the Kodular Creator web development platform and allows developers to see their app changes in real time, which can be extremely helpful for debugging and testing purposes. With the Kodular Companion, you don't need to export or compile your app before testing it, which can save a lot of time and effort. The Kodular Companion works with Android devices running recent, updated Android versions, so you can test your app on a wide range of devices. It is also very easy to use: simply connect your Android device to the Kodular Creator web development platform and scan the QR code that is displayed, or enter the code. The Kodular Companion is a valuable tool for developers who are using the Kodular platform to create Android apps; it can help you save time, improve your workflow, and create better apps.

Figure 3.4.6. Edge Impulse Homepage Interface

Edge Impulse. We train our dataset using Edge Impulse. Edge Impulse users can get data from various sources, including sensors, public datasets, and data generated through simulations or synthetic data generation. Edge Impulse is an excellent roadmap for the future of embedded machine learning, allowing developers to construct and optimize solutions using real-world data. It simplifies and accelerates embedded machine-learning application development, deployment, and scaling.

Figure 3.4.7. Arduino IDE Interface

Arduino IDE. The Arduino IDE (Integrated Development Environment) is a cross-platform program (Arduino, 2023) that offers a simple user interface for creating and uploading code to Arduino boards. It is a popular choice for prototyping and developing electronics projects due to its ease of use and wide range of supported boards. In our system, we utilized the Arduino IDE to bridge the gap between the trained sign language recognition model and the microcontroller embedded in the glove device. The trained model, which resides on a computer or cloud platform, needs to be converted into a format that can be executed with the microcontroller's limited processing resources. This is where the Arduino IDE comes into play: using it, the researchers can convert the trained model into a lightweight version that can be stored in the microcontroller's memory. Once the optimized model is embedded in the microcontroller, the glove device can perform real-time recognition of the corresponding hand sign language gestures.
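For context, an Edge Impulse project exported as an Arduino library is typically invoked as in the sketch below; the header name senyas_inferencing.h is a hypothetical stand-in for whatever the exported project library is called, and the feature buffer here is left zeroed rather than filled from real sensor windows.

```cpp
// Illustrative sketch: run an Edge Impulse classifier on-device.
// "senyas_inferencing.h" is a hypothetical exported library name;
// a real sketch would fill `features` from the glove's sensors.
#include <senyas_inferencing.h>

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = {0};

void setup() {
  Serial.begin(115200);
}

void loop() {
  // Wrap the raw feature buffer in the signal_t the SDK expects.
  signal_t signal;
  numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE,
                            &signal);

  // Run the trained impulse (DSP block + classifier) over the window.
  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // Print the confidence score for each gesture label.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
      Serial.printf("%s: %.2f\n", result.classification[i].label,
                    result.classification[i].value);
    }
  }
  delay(500);
}
```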

Figure 3.4.8. Fritzing Interface

Fritzing. Fritzing (2023) is a free and open-source platform that makes electronics accessible to everyone, regardless of technical expertise. It provides a user-friendly software tool, an online community, and various services, inspired by Processing and Arduino, to create a vibrant ecosystem for electronics enthusiasts. Fritzing empowers users to document their prototypes, share their creations with others, teach electronics in educational settings, and even design and manufacture professional printed circuit boards (PCBs). We used Fritzing to design the schematic and hardware layout of our glove device; its user-friendly interface and comprehensive library of electronic components enabled us to efficiently visualize and connect the various components of the device, including the joysticks, accelerometer, and microcontroller. Fritzing played a crucial role in the design of our Senyas glove device.

3.5 Cost Benefit Analysis

The tables below present a cost breakdown for the development of the Senyas device and system, outlining the costs associated with acquiring hardware components, software subscriptions, documentation, and the total cost.

Table 3.4.1:

Hardware Cost

Component            Product Description                                  Quantity   Unit Price   Total Price
Microcontroller      ESP32 Development Board                                     1       205.00        205.00
3D Analog Joystick   Movement of the hand gesture                                5        46.00        230.00
MPU6050              Three-axis gyroscope and three-axis accelerometer           1        84.00         84.00
Battery              Lithium-ion Polymer                                         1       400.00        400.00
Charger Module       TP4056 Li-ion Lithium Battery                               1        25.00         25.00
Boost Converter      2A DC-DC Power Module                                       1        75.00         75.00
PCB Board            Board                                                       1        15.00         15.00
Switch               On and Off Switch                                           1         7.00          7.00
Jumper Wires         Wires                                                       1        29.00         29.00
Glove                Hand and Fingers Cover                                      1       250.00        250.00
Soldering Iron       Hand tool used to heat solder                               1       499.00        499.00
Others                                                                                   100.00        100.00
Total                                                                                                1,919.00

The total cost of 1,919.00 pesos is obtained by summing the total prices of each component. It represents the cumulative cost of all the individual components required for the project. This total is essential for budgeting and understanding the financial investment needed to acquire the specified components for the development of the described system or product.

Table 3.4.2:

Software Subscription

Description         Quantity               Unit Price        Total Price
Kodular             1 Month Subscription   $3.50 per Month   $0.00 (Free Plan)
Kodular Companion   Not Applicable         Free              $0.00 (Free Download at Play Store)
Edge Impulse        Not Applicable         Free              $0.00 (Free Plan)
Arduino IDE         Not Applicable         Free              $0.00 (Free Plan)
Fritzing            Not Applicable         Free              $0.00 (Free Plan)
Total                                                        $0.00 – 0 PHP

The total cost of $0.00 indicates that all the mentioned tools and subscriptions are currently used under free plans or are freely available. The inclusion of "0 PHP" emphasizes that there are no associated costs in the given context. It is common for certain software tools to offer free plans or be open source, making them accessible without any monetary expenditure.

Table 3.4.3:

Documentation Cost

Description                     Quantity   Unit Price   Total Price
Tokens                          3 pcs             55           165
Transportation                  5                 15           100
Questionnaire Copies            10                38           380
Pre-Oral Manuscript Printing    63 pgs             5           315
Pre-Oral Manuscript Copies      5                315         1,575
Finals Manuscript Printing      145 pgs            5           725
Finals Manuscript Copies        6                725         4,350
Book Bind                       4              1,000         4,000
Total                                                       11,610

The total cost of P 11,610 is obtained by summing the total prices of each item. This total represents the overall cost of the various items and services mentioned, including tokens, transportation, questionnaire copies, manuscript printing, manuscript copies, and bookbinding.

Table 3.4.4:

Total Cost

Description             Cost
Hardware Cost           P 1,919.00
Software Subscription   P 0.00
Documentation Cost      P 11,610.00
Total Cost              P 13,529.00

The hardware cost refers to the expenses associated with acquiring

physical components and devices for a project. In this case, the hardware cost is

P 1,919.00. The breakdown of the hardware cost includes expenses for items like microcontrollers, joysticks, batteries, and other components, as itemized in Table 3.4.1. The software subscription cost indicates the

expenses related to using software services that may require a subscription fee.

In this case, the cost is P 0.00, suggesting that either the software tools being

utilized are open-source, freely available, or currently being used under a free

subscription plan. The documentation cost pertains to the expenses associated

with various documentation processes, including printing and copying of

questionnaires, pre-oral manuscripts, final manuscripts, and book binding. The

detailed breakdown of this cost includes expenses such as printing, copying, and

binding services, resulting in a total documentation cost of P 11,610.

4. Integration and Testing. This phase occurs after the device and app coding and implementation stages. During this phase, the system is thoroughly tested to

assess the accuracy of the device's functionalities and its ability to communicate

effectively with the application. This testing ensures seamless two-way

communication between the device and the app, validating the overall reliability

and performance of the integrated system.

Figure 3.5.1 App & Device Testing

In Figure 3.5.1 our testing phase's main goal is to thoroughly evaluate the

mobile application and device's multiple features. The thorough testing plan

guarantees that the technology solution satisfies industry standards and user

expectations in addition to meeting technical requirements.

Some challenges occurred during the testing phase, even with careful

planning. These difficulties are openly discussed and range from unpredictable

user behavior to complex technical issues. Furthermore, we acknowledge the limits of our testing methodology, which offers a framework for interpreting the findings and
conclusions.

5. System Deployment. The user is actively engaged in testing the device and

app to evaluate comfortability and accuracy. This phase involves soliciting user

feedback and observing their interactions with the system to ensure seamless two-

way communication between the device and the app.

Figure 3.6.1. Senyas Deployment (panels a-d)

The System Deployment phase marks the transition from development to

practical implementation. In this section, we detail the procedures, strategies, and

considerations involved in deploying the mobile application and its associated

device. The goal is to ensure a smooth and efficient rollout, enabling users to

access and benefit from the developed technological solution.

The entire deployment procedure is guided by a well-organized deployment

plan. The tasks, responsibilities, schedules, and materials needed for an effective

rollout are described in this plan. It considers things like hardware provisioning,

user training, and any infrastructure modifications that may be required.

The System Deployment section outlines the careful planning, execution, and post-deployment support mechanisms implemented to
transition the developed mobile application and device from the development

environment to real-world usage. This comprehensive approach ensures a

successful deployment, minimizing disruptions and maximizing user satisfaction.

The deployment process described herein is fundamental to realizing the practical

impact of our technological solution.

6. Maintenance. This final phase is carried out indefinitely to improve, update, and enhance the device, the app, and their functionality.

Figure 3.7.1. Device and App Update

Figure 3.7.1 illustrates the process of maintaining the device and app

through updates. The diagram showcases a systematic approach to improve and

enhance both the device and the associated application. The update cycle involves

identifying and addressing issues, implementing new features, and ensuring

compatibility with the latest technologies.

Research Procedure

To begin the research process, it was essential to gather all the necessary

information and data, which was done through a literature review. The researchers

specifically focused on the existing devices related to the study to identify any gaps

or areas where the proposed research could contribute significantly. The data gathered influenced the design of the device and the procedures used to develop it.

The researchers aim to employ the Senyas device to test and evaluate the effectiveness and accuracy of wearable translation in real time. This allows the researchers to evaluate whether the objectives of the study are satisfied. The device is first used to gather FSL datasets to train the model. The datasets are then used to train the algorithm and validate its accuracy.

The algorithm or model will be integrated into the glove device, and Speech-to-Text is included in the application as an extension to allow two-way communication.

Finally, the researchers determine the accuracy of the device through a series of tests and conclude its usability and effectiveness in real settings based on user experience and the results of the tests and surveys conducted.

Research Instrument

According to Cleave (2021), we may evaluate the overall efficiency and reliability of a device by conducting a pilot test. The main advantage of pilot testing is finding issues before launching the complete or final device. Pilot testing evaluates the device's overall usability and addresses whether the device gathers the data it is meant to measure.

Pilot testing was applied in our study by selecting diverse participants, preparing the functional device and realistic testing scenarios, providing clear instructions, observing user interactions, and gathering feedback. The technology was then continuously refined based on the findings; results were analyzed for patterns and improvements, insights were documented for further development, ethical considerations were ensured, and subsequent steps were planned to enhance usability and functionality.

User experience (UX) research, according to Rosencrance (2023), is the study of learning what end users of a system or product need and want, and then

employing those insights to enhance the design process for products, devices,

services or software.

Using UX testing in our study, we could analyze the overall user experience of the mobile application and wearable device. This involves monitoring people while they use the system and taking note of their satisfaction, the system's usability, and any issues. UX testing may reveal how effective and user-friendly the system is.

An accuracy test is an assessment that focuses on determining the accuracy and precision of a device or system. Accuracy tests are performed in a variety of industries to evaluate how closely the outcomes match the expected results. The goal is to determine the level of accuracy and identify any errors.

An accuracy test is part of our testing methods, as it involves evaluating how accurately the device translates sign language gestures into the

corresponding audio and text representations. The test would compare the

translations produced by the system to a set of predetermined and accurate

translations to determine the correctness of the generated outputs. This type of

test helps ensure that the technology is providing reliable and precise translations,

which is crucial for effective communication and user satisfaction.

Statistical Treatment of Data

Specifically, we will employ the Weighted Mean as the statistical tool for this

purpose. This approach will enable us to extract meaningful insights from the

gathered data, aiding us in making informed decisions based on the results.

Weighted Mean

W = [Scale 5(x) + Scale 4(x) + Scale 3(x) + Scale 2(x) + Scale 1(x)] / T

Where:

W – Weighted Mean          x – Number of respondents who selected each scale

T – Total number of respondents
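For illustration (with hypothetical counts), if three of five respondents select Scale 5, one selects Scale 4, and one selects Scale 3, then W = (5×3 + 4×1 + 3×1) / 5 = 22 / 5 = 4.4.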

Percentage

We used tables to visualize the percentage frequencies of the respondents

collected in the initial part of the questionnaire analysis. This provided a

comprehensive visual representation of the data we acquired.

Formula for frequency & Percentage

P = (F / T) × 100

Where:

P – Percentage of Distribution T – Total number of Respondents

F – Frequency of Respondents
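For illustration, if 4 of the 5 respondents are male, then P = (4 / 5) × 100 = 80%.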

CHAPTER IV
RESULTS AND DISCUSSION

This chapter covers the system functionality and features that meet the objectives of the study. It provides further details on the system and its performance during deployment. Within this chapter, the researchers examine the Requirement Analysis and Specification and interpret the data presented. Furthermore, it undertakes a comprehensive analysis and interpretation of the data collected, offering valuable insights and facilitating a nuanced understanding of the findings. This section serves as a critical bridge between the research objectives and real-world application, fostering a comprehensive understanding of the study's outcomes.

Requirements Analysis and Specification

1. To develop a system that can recognize FSL and interpret it to a normal person

while also enabling two-way communication through recognizing speech from

the normal person in real-time setting.

1.1 To develop a wearable device that can recognize hand gesture and

interpret FSL in real-time.

For recognizing hand gestures and movements, we developed a device that can track both finger and hand movements. When reading sign language, the range of motion of hand gestures must be considered. This involves the bending of the fingers as well as the orientation and movement of the hands, including gestures like waving to greet someone. An accelerometer and gyroscope can be used to track orientation and movement: the accelerometer detects the velocity of the hand movement, while the gyroscope tracks the hand's orientation and movement in three dimensions. Another sensor is needed to measure finger bending. A flex sensor is usually used for this, but its cost makes the system more expensive. This research is therefore a good opportunity to use a new low-cost sensor and analyze its performance when integrated with the system. The joystick is most commonly used to control machines and for video games; it is cheap and can easily be replaced, so we opted to use it as a sensor to estimate the bending of the fingers. These sensors are connected to a microcontroller that processes their data and computes the output. The device thus complies with the study's objective of recognizing the hand gestures of FSL.
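To make the sensing concrete, the following is a minimal Arduino IDE sketch of how the ESP32 could read one joystick axis pair together with the MPU6050 over I2C. The ADC pin assignments are illustrative assumptions rather than the prototype's exact wiring, and only one of the five joysticks is shown.

    #include <Wire.h>

    const int MPU_ADDR = 0x68;   // default I2C address of the MPU6050
    const int JOY_X_PIN = 34;    // assumed ADC pin for one joystick's X axis
    const int JOY_Y_PIN = 35;    // assumed ADC pin for the same joystick's Y axis

    void setup() {
      Serial.begin(115200);
      Wire.begin();                       // ESP32 default I2C pins (SDA 21, SCL 22)
      Wire.beginTransmission(MPU_ADDR);
      Wire.write(0x6B);                   // PWR_MGMT_1 register
      Wire.write(0);                      // wake the MPU6050 from sleep
      Wire.endTransmission(true);
    }

    void loop() {
      // Joystick: the ESP32 ADC returns 0-4095; the stick rests near the middle,
      // and bending the finger pulls the reading toward one extreme.
      int joyX = analogRead(JOY_X_PIN);
      int joyY = analogRead(JOY_Y_PIN);

      // MPU6050: read 14 bytes starting at ACCEL_XOUT_H (0x3B):
      // accelerometer, temperature, then gyroscope, each axis as two bytes.
      Wire.beginTransmission(MPU_ADDR);
      Wire.write(0x3B);
      Wire.endTransmission(false);
      Wire.requestFrom(MPU_ADDR, 14, true);
      int16_t ax = (Wire.read() << 8) | Wire.read();
      int16_t ay = (Wire.read() << 8) | Wire.read();
      int16_t az = (Wire.read() << 8) | Wire.read();
      Wire.read(); Wire.read();           // skip the temperature bytes
      int16_t gx = (Wire.read() << 8) | Wire.read();
      int16_t gy = (Wire.read() << 8) | Wire.read();
      int16_t gz = (Wire.read() << 8) | Wire.read();

      Serial.printf("%d,%d,%d,%d,%d,%d,%d,%d\n", joyX, joyY, ax, ay, az, gx, gy, gz);
      delay(20);                          // roughly 50 Hz sampling
    }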

One of the major functions of our system is recognizing the hand gestures/signs of FSL and classifying them to their appropriate meaning. The AI implemented in our system handles this workload, and its

performance will be determined based on the sensors and the data gathered. To

avoid any confusion, our approach differs from the vision-based system, as noted

by Tan Ching Phing, et al (2019), vision-based systems recognize hand and finger

movements by applying feature extraction techniques to images. On the other

hand, wearable technology for sign language identification often uses glove-based

or user-attached sensors.

To build and develop our model, we can use AI implementation platforms that are available today, such as Roboflow and Edge Impulse. Since we are handling raw data from sensors, unlike the images and videos of vision-based systems, we decided to use Edge Impulse to develop the FSL-recognizing AI and implement it in the system. Edge Impulse is suitable for developers at all skill levels, from beginners to experts. It provides a user-friendly interface that simplifies the integration of machine learning into edge devices. It also includes optimization features for edge devices that require less storage and power, making it suitable for microcontrollers such as the ESP32 and the Teensy series.

The platform also includes ready-made models. Custom models can be used as well; however, an enterprise subscription is needed to do so. The model we used is the Neural Network Classification model, which is included for free in the platform. Inspired by the human brain, this model uses algorithms to learn from our data and predict outcomes for new data points. It is claimed to be ideal for anomaly detection, predictive maintenance, and gesture recognition, and we need our AI to classify different features based on unseen data.

The first step in creating the AI for the system is acquiring the datasets needed for machine learning; datasets are essential to an accurate system. The datasets for our system were acquired by recording the hand gestures and signs of FSL while wearing the device. The raw data from the sensors is extracted through serial communication. An 80:20 split of the data is made into training and test sets; this ratio effectively trains the model while reserving enough data to evaluate its accuracy.
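As a sketch of how the serial extraction can look on the device side, the loop below prints one comma-separated sample per line at a steady rate, the form a serial logger (or Edge Impulse's data forwarder) can capture and label. The pin assignments and the 50 Hz rate are assumptions for illustration.

    const int JOY_PINS[5] = {32, 33, 34, 35, 36};   // assumed wiring of the five joysticks

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      unsigned long t0 = millis();
      for (int i = 0; i < 5; i++) {
        Serial.print(analogRead(JOY_PINS[i]));      // one joystick axis per column
        Serial.print(i < 4 ? "," : "\n");
      }
      while (millis() - t0 < 20) { }                // hold a fixed 50 Hz sampling rate
    }

The accelerometer and gyroscope channels are appended to each line in the same way, so every row of the captured dataset is one snapshot of all sensor channels.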

After the data is captured, it is pre-processed by extracting the pattern where the event occurred. Pre-processing also involves adding timestamps and labelling the dataset. The acquired datasets are then used for machine learning during the development of the FSL recognizer.

Features are then generated from the raw data in the datasets. Since raw data is frequently complex and high-dimensional, models have a hard time learning from it. By identifying the most pertinent and informative features, feature extraction lowers the data's dimensionality and improves the accuracy and efficiency of the learning process.

Referring to Figure 4.1.1, you can observe the features derived from the raw data, visualized within the platform by grouping the samples and assigning a color to each label. Because of the complexity of the different datasets gathered, some features overlap, such as "A" and "7". This can affect the accuracy of recognizing the hand gestures.

Figure 4.1.1. Feature Extraction

The features generated from the datasets are then used by the Neural Network Classification model to learn the different patterns in the data. The model was trained for 50 epochs with a learning rate of 0.0005 and a batch size of 32. The neural network architecture consists of an input layer (231 features), a dense layer of 80 neurons, a dense layer of 100 neurons, and another dense layer of 100 neurons, with dropout layers in between.

With this architecture, the model was trained on the gathered dataset, using the dense layers with 'softmax' activation and dropout layers of 0.1 to avoid overfitting. The evaluation shows that the model was able to achieve 92.3% accuracy with a loss of 2.68.

Referring to Figure 4.1.2, the label "A" struggled to be recognized because it conflicts with the labels "6" and "J". The other labels were identified well with no conflicts, although this will differ for real-time data interpretation. The confusion is due to features that are too similar and to the AI not having enough data to differentiate them from one another.

Figure 4.1.2. Result Accuracy and Loss
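Once trained, the impulse can be exported from Edge Impulse as an Arduino library and run directly on the glove's microcontroller. The sketch below outlines that step, assuming a hypothetical export named senyas_inferencing (the real header name depends on the project); a window of sensor readings fills the feature buffer before each classification.

    #include <senyas_inferencing.h>   // hypothetical name of the exported library
    #include <string.h>

    static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

    // Callback the classifier uses to pull slices of the feature window.
    static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
      memcpy(out_ptr, features + offset, length * sizeof(float));
      return 0;
    }

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      // ...fill features[] with one window of joystick and IMU readings here...

      signal_t signal;
      signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
      signal.get_data = &get_feature_data;

      ei_impulse_result_t result = {0};
      if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

      // Report the label with the highest confidence, e.g. "A" or "7".
      size_t best = 0;
      for (size_t i = 1; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        if (result.classification[i].value > result.classification[best].value) best = i;
      }
      Serial.println(result.classification[best].label);
    }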

1.2 To develop an android application for the system to display translated FSL

text and recognize normal person speech using speech-to-text technology.

The android application utilizes speech recognition technology to allow real-

time translation of speech-to-text. Specifically, the Kodular platform was used to

integrate the speech recognition capability. A speech recognizer module in Kodular

was added to the application to enable voice-to-text transcription.

Figure 4.1.3. Speech Recognizer

The use of the Kodular speech recognition component enabled straightforward integration of speech-to-text capability into the application for two-way communication. It avoided the need to build a speech recognition system from scratch. The module provided an off-the-shelf solution that could easily be incorporated through simple configuration and by connecting it to the text display module.

When a normal person speaks into the Android device's microphone, the speech recognizer module converts the speech to text in real time. The module utilizes Google's speech recognition API to perform the transcription. Key parameters such as language and accent were configured to optimize recognition performance. The text output from the speech recognizer is then passed to the text display to dynamically show the corresponding transcription. This allows seamless conversion of verbal speech into visual text in real time within the application.

The speech recognizer module thus allowed real-time conversion of speech to text. Additionally, the displayed text could be converted into speech output using the text-to-speech functionality together with the sign language device designed for deaf-mute individuals. This enabled two-way communication: a normal person's speech is captured as text for the deaf-mute user to read, while the deaf-mute user's signed message, translated by the sign language device, is vocalized through text-to-speech output.

Overall, integrating the speech recognizer module provided an efficient way to facilitate real-time communication between normal persons and deaf-mute users. It enabled seamless speech-to-text transcription for capturing normal speech, and text-to-speech synthesis via the sign language device to vocalize messages for deaf-mute recipients.

1.3 To enable communication between the wearable device and the Android application in real-time.

The device needs to continuously stream the recognized FSL data to the software application. To achieve this, the device sends the string output as bytes, which are then streamed over a Bluetooth connection between the device and the application. Bluetooth does not offer great communication distance, but we need to reduce the device's power consumption, and Bluetooth provides lower power consumption at comparable data transfer rates relative to other means of data communication.
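A minimal sketch of this link, using the ESP32 Arduino core's Bluetooth Classic serial profile, is shown below; the device name matches the one the app looks for when pairing, and the fixed label stands in for the classifier output.

    #include "BluetoothSerial.h"

    BluetoothSerial SerialBT;

    void setup() {
      SerialBT.begin("Senyas Device");   // name shown in the app's device list
    }

    void loop() {
      // Each recognized FSL label is written as one line of bytes; the app
      // reads the stream and appends the text to the conversation view.
      SerialBT.println("HELLO");         // placeholder for the recognized label
      delay(1000);
    }

Because the link is a plain serial stream, the app side only needs to read newline-delimited text, which keeps the protocol simple and the radio duty cycle, and therefore the power draw, low.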

2. To identify the limitations of the system for future development and research

by evaluating these factors:

For our second objective, we evaluated the performance and identified the limitations of the system for future development and improvement.

To achieve this, we conducted user testing with 5 respondents to evaluate the device and system performance. The results showed accuracy limitations with certain complex gestures, indicating opportunities to improve the recognition capabilities. Survey results from the respondents also indicate the need to improve the system for long-term, continuous use. This evaluation enabled us to identify limitations and make recommendations for further enhancement. Based on these findings, we will add more sign language gestures to the device's library and make recognition more accurate. We will also explore ways to make the device more comfortable to wear and use for long durations or continuous conversation. By addressing these issues, we will make the device more reliable and deliver the best user experience in real-world scenarios.

Presentation, Analysis & Interpretation of Data

During the deployment, a survey questionnaire was also given to the participants. This provided feedback on the overall performance of the system: the Senyas Glove and App. In analyzing and interpreting the computed weighted means, the scale shown in each table's legend was employed.

Part 1. Demographics Profile of the Respondents

Table 4.1

Age of the Respondents

Age   No. of Respondents   Percentage
14             2                40%
15             1                20%
16             1                20%
19             1                20%

Table 4.1 shows the distribution of respondents according to age. Out of 5 respondents, 2 are 14 years old (40%), 1 is 15 years old (20%), 1 is 16 years old (20%), and 1 is 19 years old (20%).

Table 4.2

School of the Respondents

School                  No. of Respondents   Percentage
Samar National School            5              100%

Table 4.2 shows the percentage of respondents according to their School.

Out of 5 respondents, all of them (100%) were from Samar National School. We chose this school since it has a Special Education class for deaf, mute, and deaf-mute individuals.

Table 4.3

Gender of the Respondents

Gender   No. of Respondents   Percentage
M                 4               80%
F                 1               20%
Total:            5              100%

Table 4.3 shows the percentage of respondents according to their Gender.

Out of 5 respondents, 4 are male (80%), 1 is female (20%).

Table 4.4

Mode of the Respondents

Mode        No. of Respondents   Percentage
Deaf                 1               20%
Mute                 0                0%
Deaf-Mute            4               80%

Table 4.4 shows the percentage of respondents according to their mode.

Out of 5 respondents, 1 is deaf (20%), and 4 are Deaf-mute (80%).

These tables present the results of a survey evaluating the Senyas Filipino Sign Language Translation Device and System for Two-Way Communication. The survey collected feedback from 5 respondents aged 14-19 years old, who are students at Samar National School. Of the respondents, 1 is female, 4 are male, 1 is deaf, and 4 are deaf-mute.

The survey covers several aspects of the system, including device performance, mobile application performance, overall system performance, and user experience. Respondents rated each item on a scale from 1 to 5, with 1 being "Very Dissatisfactory" and 5 being "Very Satisfactory." A weighted mean score is calculated for each metric to summarize the average rating. An interpretation guide categorizes the weighted mean scores as "Very Dissatisfactory" (1.00-1.50), "Dissatisfactory" (1.51-2.50), "Neutral" (2.51-3.50), "Satisfactory" (3.51-4.50), and "Very Satisfactory" (4.51-5.00).
Table 4.5

Device Performance

Part 2: Device Performance

Indicators                                                      5  4  3  2  1   Weighted Mean   Interpretation
How would you rate the ease of setting up the wearable
device for initial use?                                         3  1  1  0  0        4.4         Satisfactory
How would you rate the comfortability of the device?            0  0  3  1  1        2.4         Dissatisfactory
The device demonstrates adjustability to the user's hand.       0  0  1  1  3        1.6         Dissatisfactory
The wearable device was able to fit onto user's hand.           0  0  3  2  0        2.6         Neutral
The wearable device can be easily worn and remove.              2  1  0  0  2        3.2         Neutral
The wearable device's weight doesn't impede mobility.           0  3  2  0  0        3.6         Satisfactory
The wearable device showcases an ergonomic design.              1  0  2  2  0        3.0         Neutral
The device was able to operate for a long period of time.       3  0  1  0  1        3.8         Satisfactory
Are you satisfied with the overall design of the device?        1  0  0  3  1        2.4         Dissatisfactory

Legend: "Very Dissatisfactory" (1.00-1.50), "Dissatisfactory" (1.51-2.50), "Neutral" (2.51-3.50), "Satisfactory" (3.51-4.50), and "Very Satisfactory" (4.51-5.00)

For device performance, the ratings average a neutral 3.0 overall, and the comfort-related indicators drew some of the lowest scores. This means the device is not particularly easy to wear or remove, though it is not entirely uncomfortable either. Overall, users have mixed feelings about the device's comfort.
Table 4.6

Mobile Application Performance

Part 3: Mobile Application Performance

Indicators                                                      5  4  3  2  1   Weighted Mean   Interpretation
The software was able to connect to the device with
no issues.                                                      1  2  0  1  1        3.2         Neutral
The user interface is visually appealing and comprehensible.    2  0  0  3  0        3.2         Neutral
The app was easy to navigate and use.                           0  0  2  1  2        2.0         Dissatisfactory
The app was able to translate FSL from using the
connected device.                                               0  0  1  0  4        1.2         Very Dissatisfactory
The app was able to record speech from external users
and convert into text.                                          0  1  2  2  0        2.8         Neutral
The output text of the software is comprehensible.              0  1  2  1  1        2.6         Neutral
How satisfied are you with the application overall
user-friendliness and ease of use?                              3  1  0  1  0        4.2         Satisfactory

Legend: "Very Dissatisfactory" (1.00-1.50), "Dissatisfactory" (1.51-2.50), "Neutral" (2.51-3.50), "Satisfactory" (3.51-4.50), and "Very Satisfactory" (4.51-5.00)

For mobile application performance, the respondents gave the mobile app an overall neutral rating of 2.7. The user interface was rated comprehensible, the app connected to the device without major issues, and its overall user-friendliness was rated satisfactory, although the FSL translation feature scored lowest in this deployment. On the whole, users are reasonably happy with the mobile app.
Table 4.7

System Performance

Part 4: System Performance

Indicators                                                      5  4  3  2  1   Weighted Mean   Interpretation
The whole system is easy to set-up and can be used
immediately.                                                    1  0  4  0  0        3.4         Neutral
Different features included in the system were able
to function.                                                    1  2  1  1  0        3.6         Satisfactory
There were no issues with the connectivity of the device
and the software.                                               1  2  0  2  0        3.4         Neutral
The system was able to recognize different FSL
letters/words.                                                  1  1  1  1  1        3.0         Neutral
System was able to construct comprehensive sentence
from FSL.                                                       2  0  2  0  1        3.4         Neutral
How would you rate the accuracy of the system when it
comes recognizing and translating FSL?                          1  1  0  3  0        3.0         Neutral
How satisfied are you with the number of letters/words
that system was able to translate from FSL?                     3  0  1  0  1        3.8         Satisfactory
How would you rate the overall functionality and the
features included in system?                                    0  3  0  1  1        3.0         Neutral

Legend: "Very Dissatisfactory" (1.00-1.50), "Dissatisfactory" (1.51-2.50), "Neutral" (2.51-3.50), "Satisfactory" (3.51-4.50), and "Very Satisfactory" (4.51-5.00)

Regarding system performance, the survey results from the respondents averaged a neutral 3.3. The respondents found that the system's translation of Filipino Sign Language was not fully accurate, but the system performed well in other areas. Overall, the users were reasonably satisfied with the system's performance.
Table 4.8

User Experience
Part 5: User Experience

Indicators                                                      5  4  3  2  1   Weighted Mean   Interpretation
The system includes instructions and user manual that
are very comprehendible.                                        1  1  0  2  1        2.8         Neutral
The overall system demonstrates ease-of-use and
user-friendliness.                                              0  3  1  1  0        3.4         Neutral
The system able to operate in any conditions with
no problem.                                                     0  0  1  1  3        1.6         Dissatisfactory
How would you rate the system when it comes to long
term use?                                                       1  0  2  0  2        2.6         Neutral
The system was able to function with no issues in a
normal setting.                                                 1  0  2  1  1        2.8         Neutral
The system can be used for informal conversation.               2  2  1  0  0        4.2         Satisfactory
The system will be able to be used in formal conversation.      2  0  1  2  0        3.4         Neutral
How would you rate the system in interpreting FSL
in real-time?                                                   1  1  2  0  1        3.2         Neutral
The system was able to keep-up with the conversation.           1  1  2  0  1        3.2         Neutral
Rate the overall usability of the system for day-to-day
activities.                                                     3  0  0  0  2        3.4         Neutral

Legend: "Very Dissatisfactory" (1.00-1.50), "Dissatisfactory" (1.51-2.50), "Neutral" (2.51-3.50), "Satisfactory" (3.51-4.50), and "Very Satisfactory" (4.51-5.00)

The user experience section describes how the system performs from the user's perspective. Users gave a neutral overall rating of 3.1. The respondents found that the system cannot operate well under all conditions, particularly in formal conversation, so there is room for improvement in the user experience.
Table 4.9

Overall Satisfaction
Part 6: Overall Satisfaction

Indicators                                                      5  4  3  2  1   Weighted Mean   Interpretation
How satisfied are you overall with the performance of the
SENYAS: Filipino Sign Language Translation Device and
System for Two-Way Communication system?                        0  3  2  0  0        3.6         Satisfactory

Legend: "Very Dissatisfactory" (1.00-1.50), "Dissatisfactory" (1.51-2.50), "Neutral" (2.51-3.50), "Satisfactory" (3.51-4.50), and "Very Satisfactory" (4.51-5.00)

The last part of the survey concerns the respondents' overall satisfaction. The gathered results yielded a 3.6 rating, which means the respondents were satisfied with the overall performance of the system.

In summary, users have mixed feelings about the device's comfort, giving device performance a neutral rating of 3.0. The mobile app is easy to connect, and users are generally happy with it, though it received a neutral rating of 2.7. The system's accuracy is not perfect, but users are satisfied with its overall performance, rating it a neutral 3.3. There is room for improvement in the user experience, which received a neutral rating of 3.1. Overall, users are satisfied with the system, giving it a rating of 3.6.
Chapter V

SUMMARY, CONCLUSION & RECOMMENDATION

This chapter is the final part of our research project. It summarizes the most

important things we found and achieved. We will start by explaining what we

wanted to do with our project, how we did it, and what we discovered. Then, we

will carefully analyze the results and explain what they mean for our study. Finally,

we will suggest ways to improve the Senyas system in the future so that it can help

even more people with hearing and speech impairments participate in society.

Summary

People who are deaf and mute face big problems talking to others. Their

disability makes it hard for them to live normal lives and often makes them feel

lonely and disconnected from the world. It's even harder for people who are both

deaf and mute, as they have a huge communication barrier when interacting with

people who don't know sign language or haven't talked to deaf and mute people

before.

Thus, this chapter summarizes our thesis project, "Senyas: Filipino Sign

Language Translation Device and System for Two-Way Communication." This

project aimed to bridge the communication gap between hearing individuals and those who are deaf, mute, or both. We achieved this by developing a unique combination of hardware and software:

1) Glove Device: This glove is equipped with joysticks to track finger movements and an accelerometer to determine arm angles. These inputs are then translated into the corresponding Filipino Sign Language (FSL) hand gestures.

2) Mobile Application: The text generated from the hand gestures is displayed on the mobile application, allowing hearing individuals to understand the user's communication. Additionally, the application features speech-to-text and text-to-speech functionalities, enabling two-way communication for users who are deaf or hard of hearing.

In our thesis project, collecting raw data and training the Neural Network Classification model to recognize Filipino Sign Language (FSL) played a crucial role. This data served as the foundation for the model to learn the complex relationships between sensor readings (finger movements and arm angles) and their corresponding FSL gestures. By understanding the importance and process of collecting raw data and using it to train the model, we can enhance the effectiveness and accuracy of the Senyas system for FSL recognition.

The joystick we use needs to be pulled back noticeably before a gesture registers. Some sign language gestures require only a small bend of the finger, which the joystick does not detect or read; it mostly recognizes gestures where the finger is bent well past a slight angle.

However, our research has achieved significant results, demonstrating the

effectiveness of the Senyas system in facilitating communication between

individuals who are deaf and mute. We successfully translated finger movements

and arm angles into their FSL representations, and the mobile application provided

a clear and accessible platform for communication.

The development of Senyas represents a significant contribution to the field

of technology. This innovative system has the potential to improve the lives of

individuals who are deaf and mute by enabling them to communicate more

effectively and participate more fully in society.

While our research has achieved significant progress, we acknowledge that there is room for further development. Future efforts will focus on enhancing the

accuracy and efficiency of the translation process, expanding the vocabulary of

supported signs, and exploring potential integration with other technologies. We

believe that by continuing to refine and develop the Senyas system, we can

contribute to a more inclusive and accessible world for all.

Conclusion

In conclusion, our thesis project, "Senyas: Filipino Sign Language Translation Device and System for Two-Way Communication," has successfully achieved the following objectives:

1.) To develop a system that can recognize FSL and interpret it to a normal

person while also enabling two-way communication through recognizing

speech from the normal person in real-time setting.

2.) To identify the limitations of the system for future development and research

by evaluating: Device Performance, Software Performance, System

Performance, User Experience, and Overall Functionality.

The utilization of a Neural Network Classification model proved instrumental

in achieving these objectives. The model's ability to learn complex patterns and

relationships in data allowed for accurate FSL recognition and translation.

Additionally, its adaptability facilitated the continuous improvement of the system

through retraining with new datasets.

A product evaluation using a survey questionnaire was used to gather data from the respondents in this thesis project. The evaluation portion of the questionnaire consists of five parts. Device performance and system performance, with ratings of 3.0 and 3.3 respectively, show that the respondents were neutral in these areas. Mobile application performance was rated 2.7 (neutral), while overall satisfaction was rated 3.6, meaning the respondents were satisfied with it. The user experience section received a rating of 3.1, a neutral result.

During our survey, only a limited number of deaf-mute people participated in the product evaluation, and most of the participants could not read or write well. This means we did not gather enough feedback to draw recommendations or suggestions directly from our respondents.

While limitations exist, the Senyas system has demonstrated significant potential in facilitating two-way communication for deaf and mute individuals. Its ability to translate FSL into text and synthesized speech, and spoken replies back into text, opens a new approach to communication and social interaction for those who face communication barriers. In relation to the second objective, the performance of the system was measured through the survey questionnaire, and the results can be analyzed from its data.

The respondents were 5 students aged 14-19 years old from Samar National School: 1 female and 4 male. In terms of hearing ability, there was 1 deaf respondent and 4 deaf-mute respondents, and all 5 completed the survey.

The questionnaire utilized in the survey is a Likert scale. It is a common type of questionnaire because it is relatively easy to design and administer, and it provides quantitative data that can be readily analyzed. Our survey had sections on device performance, mobile application performance, overall system performance, user experience, and overall satisfaction. Respondents rated each metric on a scale from 1 ("Very Dissatisfactory") to 5 ("Very Satisfactory").

To analyze the data, a weighted mean score was calculated for each survey metric using the formula given in Chapter III, where W is the weighted mean, x is the number of responses for each scale rating, and T is the total number of responses. An interpretation guide categorizes the weighted mean scores as "Very Dissatisfactory" (1.00-1.50), "Dissatisfactory" (1.51-2.50), "Neutral" (2.51-3.50), "Satisfactory" (3.51-4.50), and "Very Satisfactory" (4.51-5.00). The weighted means enabled statistical analysis of the average user ratings on the various aspects of the system.

In summary, a survey questionnaire with Likert-scale responses was completed by 5 student respondents, and a weighted mean methodology was utilized to analyze the rating data statistically and measure the system's performance.

Recommendation

Based on our findings, we recommend the following for future development

and improvement:

• Gather more datasets: Collect additional datasets to improve the accuracy of the sign language translation; increasing the number of datasets will result in more specific output.

• Expand the FSL vocabulary: Increase the number of FSL signs recognized by the system to cater to a wider range of communication needs.

• Improve accuracy for complex signs: Address limitations in recognizing specific FSL signs and develop techniques to handle complex hand movements for increased accuracy.

• Integrate additional features: Explore the inclusion of features like sign language fluency analysis or real-time translation into spoken languages for enhanced functionality.

• Investigate alternative sensor technologies: Explore the use of other sensor technologies, such as EMG or vision-based systems, for potential improvements in accuracy and functionality.

• Use a two-glove device: By capturing both hands independently, the system can differentiate between signs that involve simultaneous movements of both hands, further enhancing its accuracy.

• Animated sign languages: Translate sign languages into animated gestures to allow a more expressive representation of sign languages in conveying emotions.

• Mobile app responsiveness: Ensure that users with different devices (phones, tablets, etc.) and screen sizes can access and use the app comfortably, including users with visual impairments who may rely on larger text.

• Battery life indicator: Showing the device's battery status builds user trust and confidence in the Senyas system.

• Automatic disconnection of the device: When the glove device is turned off, it should automatically disconnect its Bluetooth connection from the mobile app.

• Adjustable glove device: Develop a self-adjusting mechanism using elastic materials or adjustable straps to accommodate different hand sizes comfortably and securely.

• Alternative hardware design: Explore other wearable options like armbands, rings, or wristbands that can track finger movements and hand orientation accurately.

• Use smaller components: Incorporate miniaturized versions of the PCB and battery to make the device more compact and reduce its overall size and weight.

• Add a calibration feature: Develop a software-based calibration tool that allows users to personalize the system for their specific hand size and movement patterns.
BIBLIOGRAPHY

Ahmed, M.A., et al. (2021). "Real-time sign language framework based on

wearable device: analysis of MSL, DataGlove, and gesture recognition." Soft

Computing, 25, 11101-11122.

Akshatha Rani K & Dr. N Manjanaik. (2021). "Sign Language to Text-

Speech Translator using Machine Learning." International Journal of Emerging

Trends in Engineering Research, 9(7),

https://doi.org/10.30534/ijeter/2021/13972021.

Ambar, R., et al. (2018). "Development of a Wearable Device for Sign

Language Recognition." Journal of Physics: Conference Series, 1019(1),

https://doi.org/10.1088/1742-6596/1019/1/012017.

American Speech-Language-Hearing Association. (n.d.). "Communication

Disorders: Fast Facts." Retrieved from https://www.asha.org/About/Quick-Facts-

on-Communication-Disorders/.

Ameur, S., et al. (2020). "Chronological pattern indexing: An efficient feature

extraction method for hand gesture recognition with Leap Motion." Journal of

Visual Communication and Image Representation, 70, 102842.

Ananthanarayana, T., et al. (2021). Deep Learning Methods for Sign

Language Translation. ACM Transactions on Accessible Computing, 14(4).

https://doi.org/10.1145/3477498

Anupama, H.S., et al. (2021). Automated sign language interpreter using

data gloves. Proceedings of the International Conference on Artificial Intelligence

and Smart Systems (ICAIS), Coimbatore, India, 25-27, 472-476.

Babour, A., et al. (2023). Intelligent gloves: An IT intervention for deaf-mute

people. Journal of Intelligent Systems, 32(1). https://doi.org/10.1515/jisys-2022-

0076

Bill Vicars (2023). Lifeprint.com: ASL American Sign Language. Retrieved

from https://www.lifeprint.com/

Brady, K., et al. (2018). American Sign Language Recognition and

Translation Feasibility Study. National Technical Reports Library - NTIS.

Cabigon, J. V. L., et al. (2021). Development of a machine translation

system for Filipino Sign Language to written Filipino text. Journal of Information

Science, 47(1), 77-91. https://doi.org/10.1177/0165551520956071

Cheok, M. J., et al. (2019). A review of hand gesture and sign language

recognition techniques. International Journal of Machine Learning and

Cybernetics, 10, 131-153.

Choi, H., & Park, M. (2022). Method and apparatus for sign language

translation using accelerometer and gyroscope sensors. [US Patent No.

10,285,380]. United States Patent and Trademark Office.

Cristobal, S., & Martinez, L. B. (2021). Filipino Sign Language as

Endangered: A Case of Oppression and Empowerment. Journal of Multilingual and

Multicultural Development, 42(4), 331-344.

https://doi.org/10.1080/01434632.2020.1788813

Department of Education. (n.d.). Special Education (SPED) Program.

Retrieved from https://www.deped.gov.ph/k-to-12/curriculum-guides/special-

education-sped-program/

Edge Impulse. (2023). Edge Impulse. https://edgeimpulse.com/

Fisher, T. (2022). What Are Portable Devices? Definition and Examples.

Lifewire. https://www.lifewire.com/what-are-portable-devices-2377121

Forage. (2023). Verbal Communication - Definition, Examples, Importance,

Skills. Retrieved from https://www.theforage.com/blog/skills/verbal-

communication

Gadekallu TR, et al. (2021). Hand gesture classification using a novel CNN-

crow search algorithm. Complex Intelligent Systems, 7(4), 1855-1868.

https://doi.org/10.1007/s40747-021-00324-x

Garg, S., & Dhall, A. (2021). Assistive Technologies for the Deaf and Hard

of Hearing: A Comprehensive Review. Journal of Medical Systems, 45(3), 1-14.

Google Cloud. (2023). Cloud Speech-to-Text. Retrieved from

https://cloud.google.com/speech-to-text

Gu Y, et al. (2022). American Sign Language Translation Using Wearable

Inertial and Electromyography Sensors for Tracking Hand Movements and Facial

Expressions. Frontiers in Neuroscience, 16, 962141.

https://doi.org/10.3389/fnins.2022.962141

Kim G-M & Baek J-H. (2019). Real-time hand gesture recognition based on

deep learning. Journal of Korea Multimedia Society, 22(4), 424-431.

Kim, H. G., & Kim, H. W. (2020). Development of a mobile sign language

translator system using deep learning. IEEE Access, 8, 199278-199285.

Kodular (2023). Introduction. https://docs.kodular.io/

Kodular. (2023). Kodular companion.

https://play.google.com/store/apps/details?id=io.makeroid.companion&hl=en&gl=

US

Kodular. (2023). Kodular creator. https://www.kodular.io/creator/

Lee, J., & Kim, H. (2019). Glove-based sign language recognition system

and method. [US Patent No. 9,952,072]. United States Patent and Trademark

Office.

Mailonline, ROHF. (2021). 'SignAloud' gloves translate sign language

gestures into spoken English. http://www.dailymail.co.uk/sciencetech/article-

3557362/SignAloud-gloves-translate-sign-language-movements-spoken-English.html.

Microsoft. (2023). Azure Speech Services. Retrieved from

https://azure.microsoft.com/en-us/services/cognitive-services/speech-services/

Mohamed Aktham Ahmed, et al. (2018). A Review on Systems-Based

Sensory Gloves for Sign Language Recognition State of the Art between 2007 and

2017. Sensors, 18, 2208. https://doi.org/10.3390/s18072208

Montefalcon, M. D., et al. (2021). Filipino Sign Language Recognition using

Deep Learning. ACM International Conference Proceeding Series, 219-225.

https://doi.org/10.1145/3485768.3485783

National Deaf Center on Postsecondary Outcomes. (n.d.). Employment.

Retrieved from https://www.nationaldeafcenter.org/employment

Núñez-Marcos, A., et al. (2023). A survey on Sign Language machine

translation. Expert Systems with Applications, 213.

https://doi.org/10.1016/j.eswa.2022.118993

RxList. (2023). Definition of Mute. RxList.

https://www.rxlist.com/mute/definition.html

Statcounter Global Stats. (n.d.). Mobile Operating System Market Share

Philippines. Retrieved from https://gs.statcounter.com/os-market-

share/mobile/philippines

Tan Ching Phing, et al. (2019). Wireless Wearable for Sign Language

Translator Device with Android-based App. University Tun Hussein Onn Malaysia.

https://doi.org/10.1007/978-981-13-6031-2_27

Terraskills. (2023). Sign language's importance in communication.

Retrieved from https://terraskills.com/sign-languages-importance-in-

communication/

What Is SDLC? Understand the Software Development Life Cycle. (n.d.).

Retrieved from https://stackify.com/what-is-sdlc/

What is UX Research and What Does a UX Researcher Do? (n.d.).

Retrieved from https://www.techtarget.com/searchsoftwarequality/definition/UX-

research

World Federation of the Deaf. (n.d.). Sign Language. Retrieved from

https://wfdeaf.org/our-work/sign-language/

World Health Organization. (2021). Deafness and Hearing Loss. Retrieved from https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss

World Health Organization. (2021). Disabilities. Retrieved September 2021, from https://www.who.int/health-topics/disability#tab=tab_1

World Health Organization. (2021). Disability and Health. Retrieved from

https://www.who.int/news-room/fact-sheets/detail/disability-and-health

Yeomans, M. (2021). Machine Learning Explained. MIT Sloan Ideas Made to

Matter. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-

explained


Zheng, J., Zhao, Z., Chen, M., Chen, J., Wu, C., Chen, Y., Shi, X., & Tong,

Y. (2020). An Improved Sign Language Translation Model with Explainable

Adaptations for Processing Long Sign Sentences. Computational Intelligence and

Neuroscience, 2020. https://doi.org/10.1155/2020/8816125

APPENDIX A

Letter For Request of Adviser

APPENDIX B

Letter for Implementation Approval

APPENDIX C

Ethics Review Certificate

APPENDIX D

Research Questionnaire Approval

APPENDIX E

Questionnaire

Part 1: Demographics

Name (Optional):

Age:

School:

Gender:

Mode:

• Deaf    • Mute    • Deaf-Mute

Part 2: Device Performance
Instructions: On a scale of 1 to 5, where 1 means "Very Dissatisfied" and 5
means "Very Satisfied," please choose the number that best represents your
satisfaction with the Senyas device performance.
1 = Very Dissatisfied
2 = Dissatisfied
3 = Neutral
4 = Satisfied
5 = Very Satisfied

Questions 1 2 3 4 5
How would you rate the ease of setting
up the wearable device for initial use?
How would you rate the comfortability of
the device?
The device demonstrates adjustability to
the user’s hand.

The wearable device was able to fit onto


user’s hand.
The wearable device can be easily worn
and remove.
The wearable device's weight doesn't
impede mobility.
The wearable device showcases an
ergonomic design.
The device was able to operate for a
long period of time.
Are you satisfied with the overall design
of the device?

Part 3: Mobile Application Performance
Instructions: On a scale of 1 to 5, where 1 means "Very Dissatisfied" and 5
means "Very Satisfied," please choose the number that best represents your
satisfaction with the Senyas mobile application performance.
1 = Very Dissatisfied
2 = Dissatisfied
3 = Neutral
4 = Satisfied
5 = Very Satisfied
Questions 1 2 3 4 5

The software was able to connect to the

device with no issues

The user interface is visually appealing

and comprehensible.

The app was easy to navigate and use.

The app was able to translate FSL from

using the connected device.

The app was able to record speech from

external users and convert into text.

The output text of the software is

comprehensible.

How satisfied are you with the application

overall user-friendliness and ease of use?

Part 4: System Performance
Instructions: On a scale of 1 to 5, where 1 means "Very Dissatisfied" and 5
means "Very Satisfied," please choose the number that best represents your
satisfaction with the Senyas system performance.
1 = Very Dissatisfied
2 = Dissatisfied
3 = Neutral
4 = Satisfied
5 = Very Satisfied

Questions 1 2 3 4 5
The whole system is easy to set-up and
can be used immediately.
Different features included in the system
were able to function.
There were no issues with the
connectivity of the device and the
software.
The system was able to recognize
different FSL letters/words.
System was able to construct
comprehensive sentence from FSL.
How would you rate the accuracy of the
system when it comes recognizing and
translating FSL?
How satisfied are you with the number of
letters/words that system was able to
translate from FSL?
How would you rate the overall
functionality and the features included in
system?

Part 5: User Experience
Instructions: On a scale of 1 to 5, where 1 means "Very Dissatisfied" and 5
means "Very Satisfied," please choose the number that best represents your
satisfaction with the Senyas user experience.
1 = Very Dissatisfied
2 = Dissatisfied
3 = Neutral
4 = Satisfied
5 = Very Satisfied

Questions 1 2 3 4 5

The system includes instructions and user


manual that are very comprehendible.
The overall system demonstrates ease-of-
use and user-friendliness.
The system able to operate in any
conditions with no problem.
How would you rate the system when it
comes to long term use?
The system was able to function with no
issues in a normal setting.
The system can be used for informal
conversation

The system will be able to be used in


formal conversation.
How would you rate the system in
interpreting FSL in real-time?
The system was able to keep-up with the
conversation.
Rate the overall usability of the system for
day-to-day activities.

Part 6: Overall Satisfaction
Please provide your insights regarding the overall satisfaction using the
Senyas: Filipino Sign Language Translation Device and System in addressing
Two-Way Communication. Please share your insights in the spaces provided
below.
Instructions: On a scale of 1 to 5, where 1 means "Very Dissatisfied" and
5 means "Very Satisfied," please choose the number that best represents your
satisfaction with the Senyas performance.
1 = Very Dissatisfied
2 = Dissatisfied
3 = Neutral
4 = Satisfied
5 = Very Satisfied

Question 1 2 3 4 5

How satisfied are you overall with the


performance of the SENYAS: Filipino Sign
Language Translation Device and System for
Two-Way Communication system?

Additional Comments and Suggestions:


Please provide any additional comments or feedback regarding your overall satisfaction with the Senyas: Filipino Sign Language Translation Device and System for Two-Way Communication, and share your thoughts on what can be included or improved in the system.

APPENDIX F

Disclaimer

APPENDIX G

Terms and Condition

SENYAS: Filipino Sign Language Translation Device and System for Two-Way

Communication

1. Acceptance of Terms

By downloading, installing, or using the Senyas mobile application,

you agree to these Terms and Conditions. If you do not agree to these Terms, you may not use the App.

2. Prohibited Conduct

You agree not to use the App for any illegal or unauthorized purpose.

You also agree not to use the App in any way that could damage, disable,

or impair the App or interfere with any other party's use of the App. You

further agree not to use the App to transmit any content that is unlawful, harmful, threatening, abusive, harassing, hateful, or otherwise inappropriate.

3. Intellectual Property

The App and all of its contents, including but not limited to the text,

graphics, images, and audio, are the property of Senyas or its creators. You

agree not to copy, modify, distribute, or create derivative works of the App or any of its contents without our prior written consent.

4. Termination

We may terminate your right to use the App at any time, for any

reason, without notice. You may terminate your right to use the App by

uninstalling the App from your device.

5. Limitation of Liability

Even if you are aware of the potential for harm, Senyas will not be

held accountable for any harm that comes from using the app. This includes any direct, indirect, incidental, special, or consequential damages.

6. Entire Agreement

This agreement is the only one that matters between you and Senyas

about the app. It replaces any other agreements or promises you may have

heard or seen before.

7. Change to Terms

We may change these rules anytime, and we'll let you know by

posting the new ones on the app or our website. You agree to check the

rules regularly and follow the latest ones. If you don't agree to the new rules,

you can't use the app anymore.

8. Contact Information

If you have any questions about these Terms, please contact us at:

[email protected].

APPENDIX H

USER MANUAL

2023
Senyas: Filipino Sign Language Translation Device
and System for Two-Way Communication

Senyas Device and System

Thank you for choosing Senyas, an innovative solution for translating Filipino Sign Language into text and speech in the app. This user manual will guide you through the various features and functions of the Senyas system, enabling more accessible communication and empowering deaf-mute users. Whether you are using our device to sign, reading the translated text, or utilizing speech-to-text and the two-way communication features in the app, this manual provides step-by-step instructions to enhance your experience.

Table of Contents:

1. System Overview
- What is Senyas?
- How it Works?

2. Getting Started
- Installation and Setup

3. Android 12 or Higher
- Ask for Permission

4. Bluetooth Connection
- Pair Device

5. Speech-to-Text
- Turn On Wi-Fi & Mic Permission

6. Senyas Device
- Setup and Maintenance

System Overview

What is Senyas?

The Senyas mobile application serves as part of the solution to the communication barrier between deaf-mute individuals and hearing individuals. The app offers features such as translating sign language into readable text and audible speech. The app's functions are fully utilized when the sign language translator device is connected to it.

How it Works?

The device empowers users by recognizing their hand movements and translating sign language gestures into text and text-to-speech output in real time through a mobile app. It also offers speech-to-text input options, enabling a truly two-way, inclusive communication experience.

Getting Started

Installation and Setup:

To begin using the Senyas System, follow these steps:

Step 1: Visit the application's Facebook page and look for the Senyas download links. Alternatively, you can download the App from Applivery or scan the QR code with your device.

Figure callouts: (1) Link to the Senyas App download page; (2) QR code of the download page.

Step 2: Click on "Install" to download the app to your device.

Step 3: Once the installation is complete, locate the app on your device's home screen or app drawer and tap on the icon to launch it.

Android 12 or Higher

Ask for Permission

Note:

If you are using Android 12 or higher, you need to grant certain permissions for apps to access specific device features such as Bluetooth. By tapping "Allow" when prompted, you give the app approval to activate Bluetooth and make full use of its capabilities.

Bluetooth Connection

Pair Device

Step 1: Open the Senyas App on your Android device and power on the Senyas Device.

Figure callout: (1) Power Switch.

Step 2: If Bluetooth is turned off on your Android device, a notification will appear asking you to turn it on. Tap "Continue" to enable Bluetooth.

Step 3: Turn on Bluetooth

Step 4: In the Senyas App, under the "Available Devices" section, find and tap the hardware device named "Senyas Device" to pair with it.

Step 5: Wait for the pairing process to complete. You'll see a notification when pairing is successful.

Step 6: Once the Senyas device is successfully paired, go back to the Senyas App.

Speech-to-Text

Turn On Wi-Fi & Mic Permission

Step 1: If you do not have an internet connection, a notification will appear instructing you to turn on Wi-Fi. Tap "Continue", which will take you to your device's Wi-Fi settings.

Step 2: In your device's Wi-Fi settings, turn on Wi-Fi and connect to a wireless network.

Step 3: Once connected to Wi-Fi, go back to the Senyas App.

Step 4: You will see a pop-up asking for microphone permissions. Tap "While using the App".

Step 5: You can now use Speech-to-Text by long-pressing the button.

Senyas Device – Setup and Maintenance

The Senyas Device is an FSL-recognizing device that reads sign language by recording your hand gestures through the use of sensors. It is specifically designed to be used together with the Senyas App. The device reads your hand gestures to generate an output that can be displayed on the mobile application.

To use the device, simply turn the switch on; an LED indicator will glow red when the device is turned on. You can now pair the device with the Senyas app.

Figure callouts: Battery LED; Power LED; Power Switch (I – Power On, O – Power Off).

LED Indicators

Power LED – Turns on when the device is powered on or in debugging mode. Make sure to turn off the device before debugging.

Battery LED – Turns on when the battery is fully charged.

Device Ports

The device has two ports on the right side. These ports provide access to the processing unit of the device and to the charging module.

Figure callouts: Debugging Port; Charging Port.

Debugging Port – Used to debug and configure the code of the device. When monitoring the device, use a serial monitoring app with the baud rate set to 115200.

NOTE: When using the debugging port, make sure to turn off the device first. The device has no protection when it is supplied by two external power supplies at once.

Charging Port – This port is used to charge the device. The device comes with a 550 mAh Li-Po battery, which allows about 2 hours and 30 minutes of continuous use. It also takes 2 – 3 hours to fully charge the device.

NOTE: Even though the device has overcharge protection, make sure not to charge the device for more than 3 hours, and make sure that the device is not exposed to sunlight for long periods.

APPENDIX I

DATA SHEET

ESP-WROOM-32

• Integrated Crystal - 40 MHz crystal

• Integrated SPI flash - 4 MB

• Operating voltage/Power supply - 3.0 V ~ 3.6 V

• Operating current - Average: 80 mA

• Minimum current delivered by power supply - 500mA

• Recommended operating ambient temperature range - –40 °C ~ +85 °C

• Package size - 18 mm × 25.5 mm × 3.10 mm

• Moisture sensitivity level (MSL) - Level 3
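Note that the 500 mA minimum supply requirement largely reflects the ESP32's current peaks during radio transmission; the MT3608 boost module used in this design (listed later in this appendix), rated for up to 2 A of output current, meets this requirement with margin.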

ESP32 Pin Diagram

3D Analog Joystick Datasheet


• Operating Voltage – AC 50V, DC 5V

• Voltage Divider Error – 44% ~ 56%

• Rated Power Taper – B: 0.0125W

• Withstand Voltage – 1 minute at AC 250V

• Insulation Resistance – 100 MΩ min. (1 minute at DC 250V)

• Rotational Life – 200,000 cycles min.

• Rated Power – DC 12V, 50mA

• Contact Resistance – 100 mΩ max.

• Operating Force – 740 ± 300 gf

• Switch Life – 1,000,000 cycles min.

Schematic Diagram

MT3608 – 2A DC-DC Boost Power Module

• Input Voltage – 2 – 24V DC

• Output Voltage – 5 – 28V DC

• Maximum Output Current – 2A

• Switching Frequency – 1.2 MHz

• Output Ripple – <100mV

• Module Size – 37.2mm x 17.2mm x 14.0mm

• Efficiency – about 93%

• Comes with under-voltage, over-voltage, and thermal overload protection
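As a rough sizing check (assuming, for illustration, that the module boosts the 3.7 V nominal battery rail to 5 V): a boost converter draws an input current of approximately Iin = (Vout × Iout) / (Vin × η). Supplying 0.5 A at 5 V from 3.7 V at 93% efficiency would therefore require about (5 × 0.5) / (3.7 × 0.93) ≈ 0.73 A from the battery, well within the module's 2 A limit.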

Schematic Diagram

MPU6050

Gyroscope

• 3-axis sensing with a full-scale range of ±250, ±500, ±1000, or ±2000 degrees per second (dps)

• Sensitivity of 131, 65.5, 32.8, or 16.4 LSBs per dps

• Output data rate (ODR) range of 8kHz to 1.25Hz

Accelerometer

• 3-axis sensing with a full-scale range of ±2g, ±4g, ±8g, or ±16g

• Sensitivity of 16384, 8192, 4096, or 2048 LSBs per g

• ODR range of 8kHz to 1.25Hz
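To make the sensitivity figures concrete: at the ranges selected in the Appendix J firmware (±2g and ±250 dps), one physical unit spans 16384 and 131 raw counts respectively. A minimal conversion sketch follows (helper names are hypothetical; the Adafruit_MPU6050 library used in the firmware already returns values scaled to m/s² and rad/s, so this only illustrates the datasheet numbers above):

// Illustrative only: convert raw 16-bit MPU-6050 counts to physical units
// at the ±2 g and ±250 dps full-scale ranges configured in the firmware.
float countsToG(int16_t raw)   { return raw / 16384.0f; } // 16384 LSB per g
float countsToDps(int16_t raw) { return raw / 131.0f; }   // 131 LSB per dps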

Supply Voltage

• Operating voltage range of 2.375V to 3.46V for the MPU-6050, and 2.375V to 5.5V for the MPU-6050A

Operating Circuits

Lithium-ion Polymer

• Capacity – Nominal 550mAh

• Nominal voltage - 3.7V

• Material – Cobalt

• Voltage at end of discharge - 3.0V

• Charging voltage - 4.2V

• Standard charge – constant current 0.2C5A, constant voltage 4.2V, cut-off current 0.01C5A

• Quick charge – constant current 1C5A, constant voltage 4.2V, cut-off current 0.01C5A

• Standard discharge – constant current 0.2C5A, end voltage 3.0V

• Maximum continuous discharge current – 1C5A

• Operation temperature range – Charge: 0 ~ 45°C; Discharge: -20 ~ 60°C

• Cycle life – >300 cycles

• Storage temperature – During 1 month: -5 ~ 35°C; During 6 months: -20 ~ 45°C
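For reference, the C5A rates above are multiples of the 550 mAh rated capacity: 0.2C5A corresponds to about 110 mA, 1C5A (quick charge and maximum continuous discharge) to about 550 mA, and the 0.01C5A cut-off to roughly 5.5 mA.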

Schematic Diagram

TP4056 Li-ion Lithium Battery Charger Module

• Voltage Supply (Vs) – 4.0V ~ 8.0V

• Charge termination voltage (accuracy) – 4.2V (±1.5%)

• Supply current (Rprog = 1.2k, 1A charge) – 150µA (typ.)

• Supply current (charge ended / shutdown) – 55µA (typ.)

• Ibat (Rprog = 1.2k, 1A charge) – 1050mA (max.)

• Ibat (standby mode, Vbat = 4.2V) – -6µA (max.)

• Vtrckl (Rprog = 1.2k, Vbat rising) – 2.9V (typ.)

• Itrckl (Rprog = 1.2k, Vbat < Vtrckl) – 140mA (max.)

• Vtrhsy (Rprog = 1.2k) – 80mV (typ.)

• Operating temperature – -40°C ~ 85°C
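The repeated "Rprog = 1.2k" entries reflect how the TP4056 programs its charge current: per its datasheet, Ibat (mA) ≈ 1200 ÷ Rprog (kΩ), so a 1.2 kΩ program resistor sets roughly 1 A, consistent with the 1050 mA maximum listed above.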

Schematic Diagram

Rocker Switch

• Rated voltage and current – based on switch type

• Contact resistance (new state) – <100 mΩ (5V, 1A DC)

• Dielectric strength (new state) – >1000V / >2500V (10mA between the contacts)

• Insulation resistance (new state) – >100 MΩ (500V DC between the open contacts)

• Inrush current – 50A / 3 msec (capacitive load)

• Electrical life endurance – >6,000 operations; volt-drop criterion: 10,000 operations

• Temperature rise at the terminals – <30°C after 6,000 operations; <45°C after 10,000 operations

Schematic Diagram

APPENDIX J

Source Code
Software Code
Splash Screen

Homepage

About Page

Help Page

User Manual Page

Question Page

Hardware Code

#include <SenAi_Test_inferencing.h>
#include <Arduino.h>
#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>
#include <Wire.h>
#include <BluetoothSerial.h>

// Sampling parameters: 10 Hz over one capture window of NUM_READINGS samples
#define SAMPLING_FREQ_HZ 10
#define SAMPLING_PERIOD_MS (1000 / SAMPLING_FREQ_HZ)
#define NUM_CHANNELS EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME
#define NUM_READINGS EI_CLASSIFIER_RAW_SAMPLE_COUNT
#define NUM_CLASSES EI_CLASSIFIER_LABEL_COUNT

Adafruit_MPU6050 mpu;
BluetoothSerial SerialBT;

char com;
String trigger;
int state = 0;

void setup(void) {
  Serial.begin(115200);
  SerialBT.begin("Senyas Device"); // Bluetooth name the app pairs with

  while (!Serial) {
    delay(10); // will pause Zero, Leonardo, etc. until serial console opens
  }

  // Try to initialize the MPU6050; halt here if the sensor is not found
  if (!mpu.begin()) {
    Serial.println("Failed to find MPU6050 chip");
    while (1) {
      delay(10);
    }
  }
  Serial.println("Hardware Initiated");

  // Setup motion detection ranges (±2 g, ±250 deg/s)
  mpu.setAccelerometerRange(MPU6050_RANGE_2_G);
  mpu.setGyroRange(MPU6050_RANGE_250_DEG);
  Serial.println("MPU set");
  delay(1000);
}

void loop() {
  unsigned long timestamp;
  ei_impulse_result_t result;
  int err;
  float input_buf[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];
  signal_t signal;
  float maxProbability = 0.0;
  int maxIndex = 25;

  sensors_event_t a, g, temp;
  float acc_x, acc_y, acc_z, gyro_x, gyro_y, gyro_z;

  // Accumulate the command character sent by the app over Bluetooth
  while (SerialBT.available()) {
    delay(10);
    com = SerialBT.read();
    trigger += com;
  }

  // Trigger FSL recognition while the app's "a" command is active
  while (trigger == "a") {
    delay(10);
    maxProbability = 0.0; // reset before each inference

    // FSL recognition model implementation: collect one window of samples
    for (int i = 0; i < NUM_READINGS; i++) {
      timestamp = millis();

      // Reading data from the five finger sensors
      int joy_1 = analogRead(25);
      int joy_2 = analogRead(33);
      int joy_3 = analogRead(32);
      int joy_4 = analogRead(35);
      int joy_5 = analogRead(34);

      // Refresh IMU readings for this sample (getEvent must run per sample
      // so each row of the window captures the current motion)
      mpu.getEvent(&a, &g, &temp);
      acc_z = a.acceleration.x;
      acc_y = a.acceleration.y;
      acc_x = a.acceleration.z;
      gyro_x = g.gyro.x;
      gyro_y = g.gyro.y;
      gyro_z = g.gyro.z;

      // Storing sensor data in the model's input buffer for FSL recognition
      input_buf[(NUM_CHANNELS * i) + 0] = joy_1;
      input_buf[(NUM_CHANNELS * i) + 1] = joy_2;
      input_buf[(NUM_CHANNELS * i) + 2] = joy_3;
      input_buf[(NUM_CHANNELS * i) + 3] = joy_4;
      input_buf[(NUM_CHANNELS * i) + 4] = joy_5;
      input_buf[(NUM_CHANNELS * i) + 5] = acc_x;
      input_buf[(NUM_CHANNELS * i) + 6] = acc_y;
      input_buf[(NUM_CHANNELS * i) + 7] = acc_z;
      input_buf[(NUM_CHANNELS * i) + 8] = gyro_x;
      input_buf[(NUM_CHANNELS * i) + 9] = gyro_y;
      input_buf[(NUM_CHANNELS * i) + 10] = gyro_z;

      // Busy-wait until the next sampling period
      while (millis() < timestamp + SAMPLING_PERIOD_MS);
    }

    // (Assumed) wrap the raw buffer in a signal_t for the classifier,
    // using the standard Edge Impulse helper
    err = numpy::signal_from_buffer(input_buf, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
    if (err != 0) {
      Serial.print("ERROR: Failed to create signal from buffer ");
      Serial.println(err);
      return;
    }

    // Running model inference
    err = run_classifier(&signal, &result, false);
    if (err != 0) {
      Serial.print("ERROR: Failed to run classifier ");
      Serial.println(err);
      return;
    }

    // Output buffer result from inference: print every class probability
    // and track the label with the highest probability
    Serial.println("Predicted");
    for (int i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
      Serial.print(" ");
      Serial.print(result.classification[i].label);
      Serial.print(": ");
      Serial.println(result.classification[i].value);
      if (result.classification[i].value > maxProbability) {
        maxProbability = result.classification[i].value;
        maxIndex = i;
      }
    }

    // Code for displaying the output with the highest probability:
    // send the label to the app only when confidence exceeds 55%
    if (maxProbability > 0.55) {
      SerialBT.print(result.classification[maxIndex].label);
      SerialBT.print(" ");
      Serial.print("Result:");
      Serial.print(result.classification[maxIndex].label);
      Serial.print(" ");
    } else {
      Serial.println("");
    }

    // (Assumed) clear the trigger so recognition waits for the next
    // command from the app
    trigger = "";
  }
}
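As the listing shows, the communication protocol between the app and the device is deliberately simple: the app sends the character "a" over the Bluetooth serial link to start recognition; the device then samples the five finger sensors and the IMU at 10 Hz into one capture window, runs the Edge Impulse classifier, and transmits back the label of the highest-scoring class whenever its probability exceeds 0.55.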
CURRICULUM VITAE

