SENYAS: Filipino Sign Language Translation Device and System for Two-Way Communication
An Undergraduate Thesis
Presented to
The Faculty of the College of Engineering
Samar State University
Catbalogan City
In Partial Fulfillment
Of the Requirements for the Degree
Bachelor of Science in Computer Engineering
January 2024
APPROVAL SHEET
In partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Engineering, this thesis entitled "SENYAS: Filipino Sign Language Translation Device and System for Two-Way Communication" has been prepared and submitted by Ruben Lorenz S. Yboa, Kharl Angelo S. Obong, and their co-researchers.
Noted:
ENGR. MAYNARD R. DALEMIT ENGR. MEDDY S. MANGARING
Panel Member Panel Member
ENGR. MARICRIS M. EDIZA ENGR. NIKKO ARDEL P. FLORETES
Panel Member Panel Member
ENGR. FRANCISCO M. DEQUITO JR.
Panel Chairman
________________________________________________________________
PASSED.
ACKNOWLEDGEMENT
The researchers wish to thank the funders who provided financial support, and our families, especially our parents, who constantly motivated and inspired us to strive hard to make the project possible. To the Yboa, Obong, Berio, and Bacsal families, who never failed to sustain us and continually gave their moral and financial support, we offer our heartfelt gratitude.

We thank our adviser from the College of Engineering for his outstanding support and guidance throughout the entirety of the study. Serving as our adviser, he contributed significantly to the depth and quality of our findings. His mentorship not only enhanced our research skills but also broadened our perspectives.

We are likewise grateful to Samar National School and Mrs. Julita B. Tanseco, SPED Teacher III, Samar National School, for their help in evaluating the Senyas Device and System. Their dedication in providing feedback and expertise greatly enriched the project.
We also would like to thank our classmates and friends who helped and supported us in any way they could to finish the project. Their insights and opinions were valuable throughout the development of the system and until its conclusion. To those not named above who have been part of this project, we offer our deepest gratitude.

Above all, we thank the Lord Almighty for the knowledge, guidance, provisions, good health, and enlightenment He has given us. The journey was no easy task and did not come without complications. His blessings served as the light to our problems and ensured the completion of this research.
DEDICATION
This work is dedicated to everyone who kept us going throughout this challenging journey and reminded us to never give up.

To our parents, words cannot express our gratitude for the countless sacrifices you have made to provide us the opportunity to pursue higher education. You stood by us through the most difficult times and celebrated every little victory with such joy and pride that kept us going. Thank you for your understanding and encouragement when we had to spend long hours away from you, locked away with our research and coursework. You took over responsibilities without complaint so we could completely dedicate our time and energy to this work, and you loved us through our highs and lows. We could not have persevered without you.

Everything we have achieved here is a reflection of the love, faith, and support our families surround us with. This thesis stands as testament to that. We love you and cannot wait to share this achievement with you.
ABSTRACT
This study developed Senyas, a Filipino Sign Language (FSL) translation device and system for two-way communication between Deaf-Mute and hearing individuals. Senyas utilizes a glove device equipped with a joystick sensor and an accelerometer to capture hand gestures. These inputs are translated into corresponding FSL gestures and displayed as text that can also be spoken aloud, while the mobile application converts spoken language into text so Deaf-Mute users can understand speech.

A Neural Network Classification model was trained on the Edge Impulse platform to accurately recognize FSL gestures from the glove's sensor data. The model was then deployed to the glove's microcontroller for real-time recognition. System evaluation involved 5 respondents aged 14-19 from Samar National School testing the device, app, and overall system performance. Identified limitations include issues with accuracy for certain complex gestures and handling continuous use in varied environments. Overall, the system proved effective in translating FSL and facilitating communication between deaf, mute, and hearing users. The system contributes significantly to accessibility technology and has much potential for future development.
TABLE OF CONTENTS
Dedication
Abstract
Chapter I. Introduction
Chapter III. Methodology
System Design
Maintenance
Chapter V. Summary, Conclusion and Recommendation

LIST OF TABLES

LIST OF FIGURES
Figure 2.1 … disability
Figure 3.3.16 Continue Process Checking Wi-Fi Connection
Figure 3.3.17 System Flow of Bluetooth Notification Pages When Connection Lost
Figure 3.3.18 System Flow of Wi-Fi Notification Pages When Connection Lost
Figure 3.3.19 Hidden Menu Containing Page References and Functions
Figure 3.3.20 Help Page System Flow from the Hidden Menu
Figure 3.4.2 Collect Dataset
CHAPTER I
INTRODUCTION
Deaf-Mute individuals face a significant barrier that affects their ability to function normally in society and often leads to isolation and a lack of engagement with the world around them. An article by Larsson, E., et al. (2022) claims that while being unable to hear presents some difficulties in itself, it is the communication barrier that makes it challenging to interact with individuals who do not know sign language or have no experience with communicating with people who are Deaf-Mute.

Over 1 billion people worldwide have some disability, with between 110 and 190 million adults experiencing significant difficulties in functioning, according to the World Health Organization (WHO). Around 34 million of the estimated 466 million people with disabling hearing loss are children. It is challenging to determine the prevalence of speech problems due to the wide range of speech and language disorders, but the American Speech-Language-Hearing Association (ASHA) estimates that about 10% of the population is affected by a communication impairment.
Filipino Sign Language (FSL) is the Filipino Deaf community's unique visual language. FSL still needs to be discovered and understood by many Filipinos despite being recognized as the nation's official sign language in 2018. That recognition was meant to improve accessibility for people who are deaf or hard of hearing. The law known as RA 11106, An Act Declaring the Filipino Sign Language as the National Sign Language of the Filipino Deaf and the Official Sign Language of the Government in All Transactions involving People Who are Deaf or Hard of Hearing, and Mandating its Use in Schools, Broadcast Media, and Workplaces, was signed in 2018 and has helped Deaf-Mute people communicate better. However, most Filipinos need help understanding FSL because the community remains behind the country's fast-paced development.

According to Narte & Rupero (2023), the exclusion of the Deaf for many years not only continued a culture of discrimination but also pushed them to the margins, to the point that they are no longer included in the majority of aspects of society, and their language remains a challenge in the nation. The R.A. 11106, or the FSL Act of 2018, has recently recognized Filipino Sign Language (FSL) as the Philippines' official visual-gestural language. This has widened the scope of the language, which is fundamental to the formation of the Filipino Deaf identity and is a natural language of the Deaf.
Montefalcon et al. (2021) and Narte & Rupero (2023) note that R.A. 11106, or the FSL Act of 2018, formally designates Filipino Sign Language as the national sign language of the Philippines. This shows that a legislative framework has been established to promote the use of FSL. But Notarte-Balanquit (2021) points out that although FSL has helped the deaf community communicate better, most people do not understand it, probably due to social and technical barriers. Wider adoption follows, in which Narte and Rupero (2023) explain that it will allow FSL to develop further. The Deaf population has traditionally been excluded in Philippine culture, and Narte & Rupero (2023) examine how their participation may be improved if FSL is more widely used and supported.

In summary, while the cited claims discuss the recognition of and challenges facing FSL, this study aims to improve communication and accessibility for the deaf community and potentially contribute a wearable device able to detect their sign language and, through the use of a mobile phone, translate it into text and speech.

The study's objectives include the following:
1.3. To enable communication between the wearable device and the mobile application.
2. To identify the limitations of the system for future development and research.
Conceptual Framework
The study aims to bridge the communication gap between deaf-mute persons and people who do not know sign language. The conceptual framework consists of a wearable glove device and mobile application that work together. The glove uses sensors to recognize hand gestures and sign language, which is then transmitted to the mobile app and translated into text-to-speech. The mobile app also provides speech recognition to translate spoken language into text for the hearing user. By enabling sign language recognition and speech translation in both directions, the system facilitates two-way communication.
Figure 1.1. Senyas Conceptual Framework
The Senyas glove device is composed of multiple components, each with its own function. The Esp-Wroom-32 microcontroller is the brain of our device. It is a powerful and versatile microcontroller that processes data from the other components, controls their operation, and performs the calculations needed for recognizing hand gestures and sign language translation.

The MPU6050 accelerometer and gyroscope module measures hand motion and orientation, allowing the device to track the arm angle for more accurate sign language recognition.

The 3D analog joystick is an additional data input from the user. It can be used to capture finger movements when forming signs.

The TP4056 charger module manages the charging process of the Li-Po battery safely and efficiently.

The Boost Converter Module converts the 3.7V power supply from the battery into 7V to power the ESP32 microcontroller through the Vin pin. This ensures the components receive the correct voltage for optimal operation.

The Li-Po battery powers the entire glove device. It provides long-lasting power so our device can operate for extended periods.

The Rocker switch is a simple on/off switch that allows our device to be turned on and off.
Figure 1.1 assumes a normal person using the Android application with the device; the speech generated by this user is transmitted and translated into text. This action happens in the mobile application and makes speech readable for the Deaf-Mute user. As for the Deaf-Mute person using the wearable device worn on the hand, the user connects the wearable device through Bluetooth to the mobile application; by doing so, the device can capture hand gestures, and the mobile application will translate these signs into text that can be spoken aloud. We can quickly determine the corresponding sign language by reading the sensors' data input. The data generated by these sensors is transmitted to the phone and translated into text form, completing the sign-language-to-text translation in the mobile application.

Therefore, the mobile application and the wearable device will allow two-way communication between Deaf-Mute persons and hearing individuals, bridging the gap between the two as they converse with each other. The accuracy and efficiency of the system will then be assessed, as well as its limitations.
Scope and Delimitations

This study focuses on Deaf-Mute persons, who often face significant challenges when it comes to communicating with others.

The system is intended for both Deaf-Mute and regular users. This system's other goal is to give Deaf-Mute and hearing individuals a means of bilateral communication, that is, between individuals who are Deaf-Mute and those without hearing or speech impairments, referred to in this study as normal individuals.

This study is limited to people who are hearing-impaired, mute, or speech-impaired; other disabilities and/or impairments are excluded from the study, and the device will not focus on them. The device may be limited by its battery life, which can affect its performance and usability. The device may also be limited by environmental factors, such as noise, which can affect its ability to recognize speech accurately.
Significance of the Study

Deaf-Mute Persons. This study developed a device and mobile application that uses sensors to detect sign language gestures and translates them into text and speech. It will ease communication and lessen the hassle of basic tasks like asking for help, providing assistance, listening, and communicating with other people in public environments like schools and parks.

Future Researchers. The results of this study will help future researchers gain knowledge and a foundation if they conduct a study similar to this one. They will be guided by the information gathered during this study.
Definition of Terms
The key terms used in this study are defined herein. The following terms are conceptually and operationally defined for the researchers to have a better understanding of the study.

Communication. Conceptually, communication is the process of conveying messages between parties using verbal means like speech or non-verbal means like gestures between two or more communicators.

Deaf. Conceptually, Deaf people mostly have profound hearing loss, which implies very little or no hearing. They often use sign language for communication. Operationally, an individual is considered deaf when they have a hearing loss of 90 decibels (dB) or more in their better ear. This means that the individual is unable to hear most sounds, including speech, without amplification.

Edge Impulse. Conceptually, Edge Impulse is a machine learning operations (MLOps) platform for developing embedded and edge machine learning systems that can be deployed across diverse hardware, as defined by Hymel et al. (2022). In practical application, it is a machine learning platform enabling data collection from sensors, signal processing, training ML models, and deploying them onto embedded devices. Operationally, it is the platform used in this study for managing the machine learning model lifecycle from data to deployment.

Deep learning. Conceptually, deep learning is a class of machine learning algorithms that use multiple layers of artificial neural networks to learn representations of data.

Filipino Sign Language (FSL). Conceptually, FSL is a distinct visual language used naturally by the Filipino Deaf community, with its own grammar and syntax, as described by Cristobal and Martinez (2021). Operationally, Mabalot and Mendoza (2018) define it as the sign language used by Deaf Filipinos in everyday communication.

Kodular. Conceptually, Kodular is a platform that makes it easy for anyone to create Android apps without writing code. Operationally, it has been used, for example, to record student attendance in a more efficient and effective way than manual methods. To create a Kodular-based student presence application, users should first create a new project and select the "Android App" template. Next, they can design the user interface in the drag-and-drop environment. Once the user interface is complete, users can add functionality to the application using Kodular's built-in components and blocks. Finally, users should test and debug the application before building and publishing it.

Machine learning. Conceptually, machine learning is a branch of artificial intelligence that enables computers to solve complex tasks in a way that is similar to how humans solve problems (Sara Brown, 2021).

Mute. Conceptually, a mute person is someone who does not speak, either due to an inability or lack of desire to speak. More specifically, the term "mute" refers to someone with profound deafness from birth or early childhood, which prevents them from developing spoken language. As a result, they are unable to use articulate speech and are considered deaf-mute (Melissa Conrad Stöppler, MD). Operationally, mute refers to a user of the system who is unable to use speech production.

Mobile device. Conceptually, a mobile device is a small handheld computing device that can be conveniently carried and moved from place to place. It typically has a compact form that can be carried in a pocket or bag, and a user interface that allows for easy interaction with the device.

Real-time. Conceptually, real-time describes computing that is immediate in a human sense of time (Vangie Beal, 2021). Operationally, real-time refers to the system translating gestures and speech as they are produced.

Sensor. Conceptually, a sensor is a device that detects a physical quantity and converts it into a signal that can be read by another device or observer. Operationally, a sensor, such as the accelerometer, is used to measure the physical range of the motion of the hand.

Sign languages. Conceptually, sign languages serve as visual means of communication. A sign language uses a system of gestures and signs to convey meaning between individuals who cannot communicate through speech.

Wearable technology. Conceptually, wearable technology refers to electronic devices that can be worn as accessories, embedded in clothing, implanted in the user's body, or even applied as tattoos on the skin (Adam Hayes). Operationally, it is technology that is integrated into clothing or accessories and provides users with additional functionality, such as the Senyas glove device.
CHAPTER II
REVIEW OF RELATED LITERATURE AND STUDIES

People with disabilities face barriers in many areas, such as healthcare, opinions held by others, or social support. As per WHO (2021), these might also encompass the accessibility and utilization of personal assistance and assistive products.

The educational attainment of Filipinos with disability is shown in Figure 2.1, which was given by DOH (2016). As we can observe, fewer people with disability graduated from or even attended college. We can also observe that more people with severe disability have no education or have finished only elementary school. Still, the gap between moderate and severe disability is not too great for those who attended or finished high school. This is due to programs like the Special Education (SPED) program and Sign Language Interpreting Service (SLIS) that help persons with a disability, like those who have a hearing impairment.
The SPED program aims to address the unique learning requirements of these students and help them achieve their full potential, according to a statement by the Department of Education. SLIS, meanwhile, is a service that provides interpreting services for individuals who are hard of hearing, facilitating communication between the deaf or individuals with difficulty in hearing and muted individuals. SLIS can help deaf people attend school and facilitate learning in school.

In summary, while SPED and SLIS primarily address the educational and communication needs of learners who use Filipino Sign Language, our study could complement these existing services by providing an innovative tool for real-time communication accessibility for the deaf community, which is one of its goals.
Rao P (2022) and their team have developed a stand-alone sign language translator that can be deployed on a Raspberry Pi. The translator uses the hand-mesh model from MediaPipe to convert dynamic fingerspells into words and form sentences. The system can also read printed text on surfaces such as boards or flyers and translate it into a regional language of the user's choice.

Wearable devices can likewise use gesture recognition to assist individuals with disabilities. These devices can detect and interpret hand gestures, as noted by Almasri et al. (2021). In combination with software that translates the gestures into text or speech, wearable devices can assist individuals with hearing and speech impairments.

While it is possible to communicate with deaf and mute individuals through the use of sign language, for those who do not know it there is a wide gulf, and it is impossible for them to share ideas and thoughts with others. New technologies should arise in order to close this gap, which has been present for years. Therefore, a bridge between the deaf-mute and others, an interpreter, is required. One article presented a sign language translation system whose technique made use of an American Sign Language (ASL) dataset, as described by Akshatha, et al. (2021).
Based on Rao P (2022), Almasri et al. (2021), and Akshatha et al. (2021), these works discuss the communication barrier that exists between deaf and speech-impaired individuals and the hearing. They focus on recognizing and understanding sign language, while our study will focus on directly addressing the communication barrier by providing a means for both sign language users and non-signers to communicate.
Through the use of sensors embedded in gloves, like a flex sensor and an accelerometer to detect arm motion and position, a system can detect and read the different hand gestures and movements of sign language and identify the particular sign language terms and phrases they may relate to. To achieve communication, sign language will be translated by the device into speech through a speaker and into text that may be displayed on an LCD screen, as noted in a smart glove study based on Data Acquisition and Control (DAC). This system involves the creation of a smart glove which takes hand motions and turns them into readable text shown on a display. The Sign Language Translator begins with the design of an elastic glove fitted with flex sensors, a three-dimensional accelerometer (Ax, Ay, Az), and a three-dimensional gyroscope (Gx, Gy, Gz). The primary goal is to mount most of these components onto the flexible glove to capture all hand gestures and translate them into letters, effectively conveying sign language. Their experimental results demonstrate that these gestures can be detected using cost-effective sensors that track finger positions and orientations. The current version of the system can precisely interpret 20 out of 26 letters of the alphabet.
Another study proposed a system for automatically translating Indian Sign Language (ISL) into text. This system uses a built-in web camera to capture video, which is then pre-processed to recognize single-handed sign language for the letters (A–Z) and digits (0–9). The processing must balance speed and detail, as a higher frame rate would increase processing time due to more data to handle. The researchers use an adaptive skin color model that maps captured frames onto the YCbCr color plane. The sign recognition technique then depends on the preprocessing method employed; for instance, color thresholding and fingertip position analysis can be used to segment the hand. Their implementation also involves installing an ODBC driver using the Database Explorer App provided in MATLAB. Noise and lighting variation remain problems, reducing the system's accuracy. Achieving an efficient system for real-time sign language recognition still poses significant challenges.
The reviewed works present a glove with flex sensors and an accelerometer that can detect and read hand gestures, a smart glove system that can correctly interpret 20 out of 26 letters, reaching a high accuracy, and a vision-driven system with a built-in web camera that can translate single-handed sign language for the letters (A–Z) and digits (0–9). The researchers will develop a similar concept involving a glove device system, but in this case, they will employ a joystick to monitor finger movements. This existing literature could potentially serve as a valuable resource for the researchers, offering insights and inspiration to enhance the efficiency and accuracy of the glove device sign language translator.
Communication in the other direction is addressed by a system developed by Kumar R.M., et al. (2021), an application that can convert human speech to text input that can be further translated into a sequence of images that display sign language. To convert input audio to text, speech recognition is used, and natural language processing is applied to extract root words and remove filler words such as 'is', 'are', or 'was' that have no sign equivalent.

Our device prioritizes mobility and low cost. Other systems require a cable connection to a computer, but with Bluetooth technology, the device can be wireless. And to save the device from being bulky and heavy, the processing responsibility will be given to the user's Android phone; since most people own an Android phone rather than Apple's iPhone, we utilize this advantage to reduce the cost and simplify the design, an approach also noted by … et al. (2019).
Shah et al. (2022) have developed and deployed a translator for Indian Sign Language. The system is based on a neural network and is divided into two main modules. The first module is responsible for capturing user input through a device camera. The second module handles the recognition: the system first segments distinct sign images and then applies an Artificial Neural Network (ANN) to process them. The identified signs are then compared with a vast dataset of stored gestures and their associated outputs. The corresponding words are displayed to the user. Similarly, if the user provides voice input, the system will display the corresponding gesture. This system acts as an interpreter, much like a human interpreter translating sign language into speech for individuals with normal hearing. Three parties are involved: the hearing-impaired person, the system, and the person with normal hearing. The hearing-impaired person performs signs in front of the system, which tracks and transmits sign language to the computer. The computer then analyzes the signs and conveys their meaning in speech to the person with normal hearing.
Kumar R.M., et al. (2021) developed an app that can convert the human voice to sign language and vice versa. It used speech recognition to convert voice input to text and text input to sign language images. Then the system developed by Phing, et al. uses a sensor glove: sensors in the glove track the user's hand movements and translate them into text or speech. The system is wireless and portable, making it a convenient option for translating sign language to text or speech. Shah et al. (2022) created a sign language translation system that uses camera images of the user's hands and a CNN to recognize the signs. The system can then translate the signs into text-to-speech, or vice versa. These systems have the potential to bridge communication between deaf and hearing people. The researchers will use these resources to provide effective two-way translation and Bluetooth communication to establish the connection between the device and the mobile app.
In the realm of Filipino Sign Language (FSL) translation, the Neural Network Classification model equips the hardware with the capability to discern and comprehend hand gestures. Trained on recorded gesture data, it learns the patterns within sensor data, thereby enabling precise and efficient real-time recognition of gestures. This technological innovation not only ensures the reliability of Filipino Sign Language translation but also helps bridge the deaf and hearing communities (Deep Learning Methods for Sign Language Translation, 2023).
Several patents also relate to sign language translation. Lee and Kim (2019), in their US Patent No. 9,952,072, describe a glove-based system for sign recognition, though such approaches may have limited translation accuracy for complex signs, as suggested by Choi and Park (2022) in their work on motion tracking using a wristband. Senyas combines the benefits of both approaches, utilizing both glove sensors and an accelerometer for comprehensive hand and arm tracking.

These existing patents, such as Lee and Kim (2018) and Choi and Park (2022), advance technology for sign language communication. They pave the way for further improvements, and Senyas contributes to this effort by exploring a study that combines sensors and machine learning while focusing on a wearable, user-friendly device. Building upon the foundation made by these works, Senyas trains its recognition model on the Edge Impulse platform. This platform not only expedites the training process but also reduces inference latency; the reduced latency not only augments efficiency but also simplifies the developmental intricacies associated with this cutting-edge technology (Edge Impulse, 2023). The amalgamation of the Neural Network Classification model with the Edge Impulse platform underpins the Senyas system.
The first national disability survey in the Philippines was carried out by the Department of Health (DOH) in cooperation with the World Health Organization. The survey's objectives were to gather data on the various types of disabilities among Filipinos and to lay the foundation for the creation of adaptable services and programs for people with disabilities.

According to DOH (2016), out of the 10,464 Filipinos who were interviewed, 1,256 reported having a severe disability. Of the 1,256-sample size, 21% have a hearing impairment; 7% of them find day-to-day life very problematic, while 14% find it extremely problematic.
Translating sign language on mobile devices poses technical challenges, like managing device constraints and handling variable input that may not match training data well. Though some companies have tried developing mobile translation apps, academic research is still needed to address the core technical hurdles. The ultimate goal is creating an automated system that can reliably translate between spoken English and ASL in real time on handheld devices to improve day-to-day accessibility.

Sign language meaning is carried by location, movement, and hand shape. Additionally, the signer's current attitude might be conveyed through facial expressions. To maintain objectivity and skepticism, one study also takes into account happy and negative expressions; its phrases are taken from well-known online videos of sign language, and these sentences are performed by the signers with clear facial expressions, as its authors noted.
Assistive gesture-recognition technologies for the Deaf can significantly enhance their quality of life. The accuracy and efficiency of gesture recognition have drawn more attention in recent years. For instance, a study published in 2020 offered a novel method for identifying sign language gestures utilizing an inertial measuring unit (IMU) and machine learning algorithms, as noted by Li et al. (2020). Another study in 2021 investigated the use of similar glove-based systems. Wearable sensors are typically used by researchers to record hand movements. After that, the data are analyzed using a method for hand gesture recognition.
Several studies have thus demonstrated wearable gadgets that can translate and recognize sign language. One study that used an inertial measurement unit (IMU) and machine learning algorithms to identify sign language gestures was published in 2020 (Li et al., 2020). Glove-based American Sign Language (ASL) recognition was the subject of another study in 2021, demonstrating its feasibility.
Android dominated the Philippine mobile operating system market as of March 2023, which is colossal compared to iPhone's iOS market share of only 11%. This analytics service uses a tracking code that is installed on more than 1.5 million websites globally, covering a wide range of activities and page views. The code detects the website visitor's operating system, which identifies the type of phone of the visiting user, and then sends the data back to the system's servers, where the visits are labeled and aggregated. These figures support our decision to go forward with the Android environment for the development of the device's app. But this doesn't restrict it from having an iOS-based app in the future.
A related glove system used sensors to record the hand positions. The Arduino controller on the glove would wirelessly transmit the captured data to the Arduino controller connected to the computer screen via Bluetooth. A speaker would utter the term linked with the gesture if the data matched one of the motions stored in the computer. Flex sensors are used to capture finger bending, and the output is spoken as audio in a system designed by Babour, A. et al. (2023). Additionally, the board would use analog-to-digital converters (ADC) to transform the audio to text so that it could be displayed.
Anupama, H.S., et al. (2021) conducted a study that collected data using motion sensors for their Sign Language Interpreter (SLI). An Arduino board is utilized to obtain the sensor data. Following processing and classification, the accuracy rate attained by the method is 93%. The voice of a registered speaker is used for the spoken output.

The Leap Motion Controller (LMC) is one example of such a gadget. Even though state-of-the-art algorithms have been developed with success, they still have limits, since they have not addressed the issue of processing sequential hand gesture data quickly or accurately describing the motion patterns. A Chronological-Pattern Indexing (CPI) method is used to sort the pattern of hand motions and the hand motion data that the LMC sensor collected.
The device reads gestures using sensors; these sensors generate signals that are then processed into datasets. For the device to be able to read and predict the expected output using these datasets, it must use an appropriate algorithm.

Those algorithms can be used for machine learning and for increasing the accuracy of the device. One study found that the k-nearest neighbors (KNN) algorithm has better accuracy than a Neural Network and a Decision Tree Classifier, a statement from Johnny S., et al. (2022), with results collected from testing those three algorithms to train their Fifth Dimension Technologies gloves and assessing the accuracy of each. A minimal KNN classifier over glove feature vectors is sketched below.
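To make the comparison concrete, the following is a minimal, illustrative k-nearest-neighbors classifier over small glove feature vectors. It is a sketch only: the five-value feature layout (three accelerometer axes plus two joystick axes), the sample data, and the class labels are assumptions for illustration, not values from Johnny S., et al. (2022).

#include <cstdio>
#include <vector>
#include <utility>
#include <algorithm>

struct Sample {
    float features[5];  // ax, ay, az, joystickX, joystickY (assumed layout)
    int label;          // gesture class index
};

int knnPredict(const std::vector<Sample>& train, const float query[5], int k) {
    // Squared Euclidean distance from the query to every training sample.
    std::vector<std::pair<float, int>> dist;
    for (const Sample& s : train) {
        float d = 0;
        for (int i = 0; i < 5; ++i) {
            float diff = s.features[i] - query[i];
            d += diff * diff;
        }
        dist.push_back({d, s.label});
    }
    // Keep the k closest samples, then take a majority vote of their labels.
    std::partial_sort(dist.begin(), dist.begin() + k, dist.end());
    int votes[16] = {0};  // assumes fewer than 16 gesture classes
    for (int i = 0; i < k; ++i) votes[dist[i].second]++;
    return (int)(std::max_element(votes, votes + 16) - votes);
}

int main() {
    std::vector<Sample> train = {
        {{0.1f, 0.9f, 0.0f, 0.2f, 0.8f}, 0},  // hypothetical gesture class 0
        {{0.8f, 0.1f, 0.1f, 0.9f, 0.1f}, 1},  // hypothetical gesture class 1
        {{0.2f, 0.8f, 0.1f, 0.1f, 0.9f}, 0},
        {{0.9f, 0.2f, 0.0f, 0.8f, 0.2f}, 1},
    };
    float query[5] = {0.15f, 0.85f, 0.05f, 0.15f, 0.85f};
    printf("predicted class: %d\n", knnPredict(train, query, 3));
    return 0;
}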
Kim & Seo (2023) have introduced an innovative system for recognizing sign language from sensor signals converted into scalograms. This approach eliminates the need for complex sequence models like Long Short-Term Memory (LSTM) networks. For the purpose of classification, the scalograms are transformed into RGB images, and a pretrained image-classification network, ranging from small to large variants, is utilized for image classification, with fine-tuning performed.
Building such a device is plausible, but the machine learning and deep learning techniques used in much of the literature rely on digital photos and videos. The You Only Look Once (YOLO) v3 and DarkNet-53 convolutional neural networks (CNNs) are the foundation of a real-time system for hand gesture identification in a study proposed by Kim G.M., et al. (2019). But this approach is camera-based rather than wearable.
It is frequently necessary to establish with high confidence that the detected object is the intended one. One method does this with Convolutional Neural Networks (CNNs) and the You Only Look Once (YOLO) algorithm, as noted by Bhavadharshini, M. (2021). The method first performs data preparation; if the intended object is contained in the input snapshot, the YOLO evaluation will handle the picture, and a bounding box will be constructed with a name that marks the intended object. A sign language dataset is used to train this detector.
The goal of such systems is output that can be heard or read, to achieve an almost normal conversation with anyone. According to the World Federation of the Deaf (2023), there are over 300 distinct sign languages used by Deaf people around the world. These sign languages are distinct from one another and are not mutually intelligible, so without learning another sign language, a signer of one might not be able to communicate with a signer of another.

Republic Act No. 11106, "An Act Designating the Filipino Sign Language as the National Sign Language of the Filipino Deaf and the Official Sign Language of the Government in All Transactions Concerning the Deaf, and Mandating Its Use in Schools, Broadcast Media, and Workplaces," is the nationwide legislation in the Philippines. This law designates Filipino Sign Language (FSL) as the official sign language for citizens with hearing impairment.
FSL is now the most widely used sign language in the Philippines; currently, 54% of sign language users utilize it, as mentioned in the study of Ong C., et al. (2018). Their study developed the SIGMA system, which aims to facilitate the translation of FSL. Using a glove-based system with flex sensors and a complementary vision system that detects hand position with respect to the body, it can translate signed input.

Another work developed a machine translation system that can translate FSL into written Filipino text. The researchers collected data from 50 Deaf FSL signers and used it to create a bilingual corpus of FSL and Filipino text. The resulting machine translation system was evaluated for accuracy and achieved a BLEU score of 44.92, indicating moderate translation quality. It is mentioned in their paper that FSL, the sign language used by Deaf Filipinos, is accepted as the official sign language of the nation.
The researchers must also be able to train and test the device, and literature about ASL can be used as a resource and reference. For learning and using signs in Filipino Sign Language (FSL), the Filipino Sign Language Online Dictionary is a useful tool. A complete list of FSL signs is provided in the dictionary, together with video demonstrations. According to the study, the dictionary has improved communication between hearing and Deaf people who use FSL and raised awareness of FSL as a distinct language having its own syntax and grammar. The School of Deaf Education and Applied Studies developed the FSL Online Dictionary. The videos are created by native FSL signers, and the lexicon continues to grow.
The Senyas device can generate speech from the user's gestures. To implement this, the researchers will use existing studies and tools. The purpose of this concept is to convert text to speech, with the speakers used for output.
Using tools like the Talkie library, a microcontroller can turn text into synthesized speech. The library makes use of a mathematical approach for analyzing voice signals called Linear Predictive Coding (LPC). Formant frequencies are produced using the LPC model and then utilized to synthesize speech. Several pre-recorded speech samples are available in the Talkie library and can be played back by the microcontroller.

One report used the Talkie library to implement a text-to-speech system for the English language. The authors found that the Talkie library was able to generate speech output that was intelligible and of high quality, and that the library was easy to use and integrate with other microcontroller-based systems.
In the other direction, speech that will be converted into text must be handled by the system. This will allow the Deaf-Mute user to read the text on the display. Such systems can be hardware-demanding, so the researchers will explore other methods like using cloud-based systems.
One option is the set of services offered by the cloud-based platform Azure Speech Services. The speech-to-text service converts audio input into text using a deep neural network (DNN) acoustic model and a statistical language model (SLM). To recognize patterns and features, the DNN acoustic model is trained using a large volume of voice data. Given audio input, the SLM is used to anticipate the most likely word sequence. The platform also offers language-specific models that have been developed using data from certain languages or dialects.

Google's speech services can likewise recognize speech in audio and video files. Deep neural networks (DNNs) are used by the service to convert audio input into text. The DNNs can distinguish between different languages and accents since they were trained on a lot of voice data.

These services may be used for speech-to-text translation that can enable two-way communication. It is also important to note that both of these services require an internet connection.
The reviewed literature and studies inform this research, suggest the methods and instruments to be employed, and support the significance of this research. The findings and conclusions will be used as a guide to develop the Senyas device and application and to support its ability to become a medium of communication between Deaf-Mute and hearing individuals.
CHAPTER III
METHODOLOGY
In this chapter, the researchers will discuss the methods used to conduct this study. The proposed device will act as a medium and facilitate communication between Deaf-Mute people and normal individuals, and this section will explain the research design and the approach in the assessment of its accuracy and effectiveness. The chapter will discuss the model the researchers will use to develop the device and will outline the methods to be used and the research instruments for testing and assessment of the system.
Research Design
The research design guides the overall project development phases, ensuring that the process is efficient and systematic. There are several software development methodologies, and each has a unique set of guidelines and procedures. One of the most established is the Waterfall model, which proceeds through sequential phases. The first phase is requirements gathering and analysis. The second phase is system design, where the researchers specify hardware and system requirements and define the overall system architecture. The third phase is implementation, followed by the fourth phase, integration and testing, where all the units developed during implementation are combined into a single system. The fifth phase is system deployment, and the final phase is maintenance, which keeps the system at its expected performance.
For our research study, the Waterfall model is a suitable research design since it includes the various stages that we need to go through, including data gathering, design, implementation, testing, deployment, and maintenance. During the data gathering stage, we will identify the specific features and functionalities that the system must possess to effectively translate sign language. The system design stage involves creating detailed specifications for the system, while the implementation stage involves constructing and programming it. The testing stage will verify the wearable device's accuracy and reliability in translating sign language. The deployment phase will involve making the system available to end-users and ensuring it works correctly, while the maintenance phase will involve continuous monitoring and updating, including bug fixes and new features, for the life of the device. Therefore, we have decided to use the Waterfall model, as it aligns with the phases of our research study.
1. Requirements Analysis. This stage establishes a clear understanding of the objectives and functions of the device and app. This crucial step aligns these objectives with the necessary specifications for constructing the device and app. Through requirements analysis, the project team delves into defining the essential features, performance criteria, and constraints that the system must satisfy. This clarifies what the device and app need to achieve and guides subsequent stages in the development lifecycle.

The project schedule lays out tasks, dependencies, and milestones. In the context of our study, this aligns with the Gantt chart in Figure 3.2.1, starting from conceptualization and progressing sequentially through each stage until the final system drafts.
Figure 3.2.1. Senyas Study Gantt Chart of Monthly Deliverables
2. System Design. This follows the planning stage to create an outline of the technical specifications, system architecture, and interface design.

The design of the device should focus on functionality and portability. The device should be able to read hand gestures in real time by placing the joystick sensor on top of the fingers so that it can register the bend of the fingers accurately. An accelerometer is placed on top of the hand and tracks the distance between the hand and the body, as well as hand motion; the signals generated are processed by the microcontroller.
2.1 System Block Diagram

This block diagram shows the system designed for gesture recognition and translation for people with impaired speech and hearing capabilities. The joystick sensor and accelerometer are both integrated into the wearable device worn on the hand. The joystick sensors quantify finger bending, while the accelerometer gauges hand acceleration. The Esp-Wroom-32 processes these readings and relays them to the smartphone for further interaction between a normal person and a deaf-mute. The Esp-Wroom-32 has a built-in Bluetooth module. This module communicates with the Android app to allow data transfer. On the Android side, a dedicated app receives the gesture data from the Esp-Wroom-32 and translates the received signals into meaningful gestures. This step ensures a mutual exchange: gestures from the deaf-mute individual are translated into text-to-speech for the person using the smartphone, and the app converts the received text messages into synthesized speech. This ensures that the person using the smartphone can listen to the responses generated from the recognized gestures.
2.2 System Circuit Design
This section will elaborate more on the components' uses and how data flows in the system as a device for sign language translation.

Figure 3.3.2 shows the schematic diagram of the device. The Esp-Wroom-32 is the microcontroller of the device. The joystick sensor and accelerometer are integrated into the wearable device worn on the hand. The joystick sensors quantify finger bending, while the MPU6050 gauges hand motion and orientation. The Esp-Wroom-32 processes the MPU6050 and joystick data to determine the sign language gesture. Once it recognizes the sign language, it sends the result over Bluetooth so the app can render it as text and speech. A boost converter module will convert the 3.7V supply from the battery into a 7V supply; this ensures a sufficient and stable power supply for the Esp-Wroom-32, MPU6050, and joystick, while a lithium battery serves as the main power source. A sketch of this sensor data path is shown below.
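The sketch below is a hedged illustration of this data path on the Esp-Wroom-32: it wakes the MPU6050 over I2C, reads its accelerometer registers, samples the joystick axes on the ADC, and prints the readings. The pin assignments are plausible defaults, not values confirmed from the Senyas schematic.

#include <Arduino.h>
#include <Wire.h>

const uint8_t MPU_ADDR = 0x68;  // default MPU6050 I2C address
const int JOY_X_PIN = 34;       // assumed ADC pins for the joystick
const int JOY_Y_PIN = 35;

void setup() {
    Serial.begin(115200);
    Wire.begin();                      // SDA=21, SCL=22 by default on ESP32
    Wire.beginTransmission(MPU_ADDR);  // wake the MPU6050 out of sleep mode
    Wire.write(0x6B);                  // PWR_MGMT_1 register
    Wire.write(0);
    Wire.endTransmission(true);
}

void loop() {
    // Read the six accelerometer bytes starting at register 0x3B.
    Wire.beginTransmission(MPU_ADDR);
    Wire.write(0x3B);
    Wire.endTransmission(false);
    Wire.requestFrom(MPU_ADDR, (uint8_t)6, (uint8_t)true);
    int16_t ax = (Wire.read() << 8) | Wire.read();
    int16_t ay = (Wire.read() << 8) | Wire.read();
    int16_t az = (Wire.read() << 8) | Wire.read();

    int joyX = analogRead(JOY_X_PIN);  // 0..4095 on the ESP32 ADC
    int joyY = analogRead(JOY_Y_PIN);

    Serial.printf("%d,%d,%d,%d,%d\n", ax, ay, az, joyX, joyY);
    delay(20);                         // roughly 50 Hz sampling
}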
This subsection discusses the components of the system in terms of how they contribute to achieving the overall system objectives. The Esp-Wroom-32 works with the sensors to capture sign language gestures, processing the data and translating it into meaningful output.

The MPU6050's gyroscope measures rotational movement, helping capture the gestures associated with sign language. By combining the data from the gyroscope with other sensors and algorithms, the sign language translation device can accurately interpret and translate sign language gestures into meaningful output.
Figure 3.3.5. TP4056 Battery Charger

The TP4056 module can be used to charge the device's battery, ensuring it remains powered for extended periods. Using the TP4056 ensures a safe and controlled charging process for the battery in the sign translation device, helping to prolong the battery life and maintain reliable operation.

Figure 3.3.6. 3D Analog Joystick

The 3D analog joystick input is integrated into the overall sign language translation system. By using the 3D analog joystick as an input device, the system can capture the user's hand movements and gestures, providing an intuitive and natural way to form signs.
The Li-Po battery provides a dependable power source for the electronic components and a stable supply voltage. It uses a solid polymer electrolyte to exchange ions between the positive and negative electrodes, and its light weight and high energy density suit the performance and portability of the device. Proper care, handling, and charging practices are essential to maximize the lifespan and safety of Li-Po batteries.

The printed circuit board acts as the central nervous system of the sign translation device, facilitating the flow of signals among components. It serves as a platform for the assembly of electronic circuits, contributing to the functionality and reliability of the device. The rocker switch toggles up and down or left and right; such switches are commonly used to turn lights on and off and to activate circuits, and here it switches the glove's power.
2.4 Software Design
The design of the software will be discussed in this section; the proposed software will work synchronously with the device in real time. The device will translate sign language in real time, and different words and phrases have unique gestures. To process this large, varying data, machine learning will be adopted to process the data and develop an algorithm that can learn from it. The amount of data can affect the required processing power and memory; to solve this issue, having a mobile device carry that task will make the device lighter and cheaper. The research seeks to highlight such technology and how it can be applied.
Application. This will be the front-end of the system and the UI of the software used to navigate and control the device. The interface will display the translated speech-to-text from the normal person so the Deaf-Mute person can read it.

Data Communication. The device will track the hand gesture through sensors and send the data to the smartphone via Bluetooth. This will make translation real-time while being wireless. The phone will then process the data, and results will be sent back to the device to produce an output similar to speech. A minimal sketch of this Bluetooth link is shown below.
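The following minimal sketch illustrates this wireless link using the ESP32 Arduino core's BluetoothSerial library. The device name "Senyas" and the placeholder gesture label are illustrative assumptions, not confirmed values from the final firmware.

#include <Arduino.h>
#include <BluetoothSerial.h>

BluetoothSerial SerialBT;  // Bluetooth Classic serial link to the phone

void setup() {
    Serial.begin(115200);
    SerialBT.begin("Senyas");  // the name the phone sees when pairing
}

void loop() {
    // Send a recognized gesture label to the paired phone.
    SerialBT.println("HELLO");  // placeholder for the classifier's output

    // Read back anything the app returns (e.g., translated text).
    while (SerialBT.available()) {
        Serial.write(SerialBT.read());
    }
    delay(1000);
}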
Processing. An application will be developed to collect the data from the device and, using machine learning, identify and translate the data into a comprehensible language, which will then be sent back to the device to produce speech through speakers. The application will also process speech-to-text using Kodular speech recognition tools like Google Speech to Text Cloud; the converted speech will then be processed and displayed. The diagrams that follow visualize the user interaction with the interface through Bluetooth on an Android device.
Given below are diagrams of the multiple pages and components within the Senyas system application, detailing the data flow that occurs on each page.

The first step is displaying the app splash screen, which is a screen displayed when you open the app. After the splash screen is displayed, the app navigates to the home page. The main page is the main screen of the app, and it is described below.
Figure 3.3.13. Continue Process Side Menu & Disconnect Button
Figure 3.3.13 shows that the side menu can be opened and closed by clicking or swiping from right or left, respectively. The disconnect button checks whether the glove device is connected; if it is, it disconnects and displays an alert that says "Disconnected".
Figure 3.3.14. Navigation Flow of Main Page
The main page of the app contains several functionalities, including a side menu, disconnect button, 3000-millisecond delay, SDK version check, clear button, and microphone button. On opening the main page, the app automatically checks the SDK version. If the SDK version is greater than or equal to 31, the app asks for the nearby-devices (Bluetooth) permission; otherwise, it proceeds without it. The clear button clears the text box.

The disconnect button checks whether the glove is connected. If it is, it displays an alert that says "Disconnected"; otherwise, it terminates the action. The 3000-millisecond delay checks if Bluetooth is enabled on the device. If it is not, it notifies the user and redirects them to the Bluetooth settings; it then checks whether the app is connected to a device. If it is, it displays an alert that says "Bluetooth Connected". Then you can do text-to-speech by clicking the Gesture button while signing with the device.
Figure 3.3.16 shows how the microphone button checks the Wi-Fi connection. If Wi-Fi is turned on, it allows the user to do speech-to-text by long-pressing the microphone button. The app will automatically stop the speech-to-text feature when the user stops speaking. If the Wi-Fi connection is turned off, it notifies the user and redirects them to the Wi-Fi settings page on their device to enable it.
Figure 3.3.17. System Flow of Bluetooth Notification Pages When Connection Lost
Figure 3.3.17 shows the system flow of Bluetooth notification pages when the connection is lost. If Bluetooth is not turned on, a notifier with a "Continue" button will be displayed. Clicking the "Continue" button will open the Bluetooth settings menu on your mobile phone. You can then enable Bluetooth and select and pair the Senyas Device from the Android Bluetooth settings.
Figure 3.3.18. System Flow of Wi-Fi Notification Pages When Connection Lost
Figure 3.3.18 shows the system flow of Wi-Fi notification pages when the connection is lost. If Wi-Fi is not turned on, a notifier with a "Continue" button will be displayed. Clicking the "Continue" button will open the Wi-Fi settings menu on your mobile phone. Then enable your Wi-Fi connection. After that, you can use the speech-to-text feature.
Figure 3.3.19. Hidden Menu Containing Page References and Functions
Figure 3.3.19 shows the hidden menu, a concealed menu accessible through either the menu button or by swiping right from the edge of the screen. This menu provides access to a range of page options, including the 'Help' page, for navigating and utilizing the app more efficiently. It also links to the About and User Manual pages that describe all processes and, finally, a toggle button to switch between Light and Dark mode.
Figure 3.3.20. Help Page system flow from the hidden menu
Figure 3.3.20 shows the system flow of the Help page from the hidden or side menu. When you click the Help page from the side menu, it will open a new screen with three general questions or details about the app. Each question or detail has the same functionality but different content. Clicking on each question or detail will open another screen with the answer to the question. Terms and Conditions will open a new screen with the legal terms and conditions for using the app.
The mobile application serves as the main place for translation and
communication.
The study of Tan et al. (2019) provides valuable support for the inclusion of a mobile application, demonstrating the effectiveness of using a mobile app alongside a wearable device for sign language translation: their glove translates sign language gestures into text using an Android app. This approach will help us, as it shares similar goals with our project, aiming to enhance communication accessibility.

The application receives data from the glove device and utilizes an embedded recognition model to enable recognition of a wide range of gestures. The application converts spoken language into text, enabling hearing individuals to communicate with deaf users through the Senyas system, and this feature will also allow deaf users to understand the translated speech of hearing individuals. The application translates recognized FSL signs into text or words, which are displayed on the mobile screen. The mobile application will feature an interface that is easy to navigate and provides clear visual feedback.
2.7 Interface Design
Kodular's drag-and-drop interface and block-based programming approach made it an ideal choice for this project. Its accessibility allowed the team to create, design, and bring the Senyas application to life, effectively bridging the communication gap between deaf and hearing individuals.

The app navigates between pages and contains the main page. The main page contains two main sections, the app bar and the content section. On the right side of the app bar, when you tap the menu icon, the hidden side menu (navigation bar) will appear. On the left side, the Bluetooth icon will serve as an indicator of whether you're connected to the glove device or not. You can connect the mobile application to the glove device through this icon.
Figure annotations: the side menu and Bluetooth icon sit in the app bar; speech-to-text (produced by long-pressing the microphone button) is displayed in one box based on your speech, while text-to-speech is displayed in the other based on your sign language.
Before going to the main page, the user will first see the splash screen and then be automatically redirected to the main page. Figure 3.3.21 shows that the main page of the app features a simple design with two main functionalities in the content section: Speech-to-Text and Text-to-Speech. The Speech-to-Text section allows users to convert their speech into text, while the Text-to-Speech section speaks out translated gestures. To use Speech-to-Text, users long-press the microphone icon and speak into their device. The app will then transcribe the user's speech. When the user performs a sign, the corresponding gesture from the glove device will be displayed, and the app will then speak it aloud.

In addition to the Speech-to-Text and Text-to-Speech functionalities, the app also features the clear button, which clears the text box. Overall, the main page offers a straightforward and user-friendly interface that provides users with the essential functions.
Figure annotation: click Continue to navigate to the Android Bluetooth settings.
This screen shows the app asking the user to enable Bluetooth connectivity so it can pair with the glove translation device. The app requires Bluetooth permission in order to find and connect with the glove via Bluetooth wireless technology. By allowing the Bluetooth permission, the user grants the app access to the phone's Bluetooth radio, which is essential for the app to work properly with the glove device.
NOTE: Only Android version 12 or higher will show this notification.
Figure 3.3.23 shows the permission request prompt for enabling Bluetooth that appears on Android 12 and higher. On the latest Android versions, users need to explicitly allow the required permissions for apps to access certain features like Bluetooth. By clicking "Allow", the app will be granted permission to turn on Bluetooth and utilize its full functionality. This extra permission step helps protect user privacy.
Figure annotation: the name of our device; click to pair the device.

Figure 3.3.24 shows that before using the glove device with the Senyas app, you need to pair it with your phone via Bluetooth. This allows the glove and your phone to communicate. To pair, turn Bluetooth on in your phone's settings and select the Senyas device from the list. After pairing the glove device, you can connect it to the Senyas app.
Figure annotation: click Continue to navigate to the Android Wi-Fi settings.
This message will pop up when the user tries to use the speech-to-text
feature by long pressing the mic icon in the Senyas app without an internet
connection. The user will need to connect to the internet before they can use
speech-to-text.
To connect to the internet, the user can tap the redirect button “continue” in
the message box. This will take them to the Wi-Fi settings of their mobile phone.
Once they are connected to the internet, they can return to the Senyas app and
use speech-to-text. The generated text from speech-to-text will then display in the
text box.
To transcribe speech, the Senyas app uses a cloud-based speech recognition service. According to Google Cloud (2023), it works by sending your voice recording from the app to Google's servers, which return the recognized text.
Figure annotation: click the device entry and Bluetooth connects automatically.

Figure 3.3.26 shows that after pairing the glove device in the Bluetooth settings of your mobile phone, you can go to this section to connect the Senyas app to the glove device. To connect, you can simply tap the Senyas Device entry.
Figure annotation: Disconnected button.

Figure 3.3.27 shows the gesture button, which serves as the control mechanism for activating the FSL translation process. Once the glove device and mobile application are connected, the user can activate the translation by tapping the button. This triggers the glove device to begin capturing and interpreting hand movements and translating them into corresponding FSL signs. As the user performs various signs, the translated text will be displayed in a text box. By activating translation only when needed, the button avoids unnecessary processing of sensor data and reduces background noise, ultimately improving the translation experience.
Figure annotations: Help page; About page; switch between Dark and Light mode.

The app features a hidden side menu for navigation, accessed by tapping the menu icon in the top left of the main screen. This vertical menu slides out from the edge and contains the following options:

1. Help - This option provides user guides and resources for using the app correctly, such as how to connect the device to the app. It offers basic questions, tutorials, contact information for support, and terms and conditions.
2. About - Information about the app and its purpose.
3. User Manual - Detailed instructions for using the app.
4. Theme Switcher - Toggle between light and dark mode color schemes.
Figure annotations: Questions & Answers; Terms and Conditions.
The Help page provides an overview of the Senyas app's features and functionality, along with instructions on how to use them. It is divided into two main sections:

1. Basic Questions. This section answers common questions about the Senyas app, such as how the app helps the deaf.

2. Terms of Use. This section provides a link to the Senyas app's terms of use, which outline the legal agreement between users and the creators of the app.
In Figure 3.3.30, the Question 1 page for the Senyas app describes the key differences between using the app with a connected device and without a connected device. The main difference is that when the device is connected, the app can receive the user's hand gestures and translate them into text. This allows for two-way communication between the user and the app. When the device is not connected, the user cannot use text-to-speech; you need to connect the glove via Bluetooth first.
The Question 2 page contains instructions on how to use the Senyas app to translate Filipino Sign Language (FSL) into text and speech, and vice versa. This section also provides the steps involved in using the Senyas app, from installation onward.

The Question 3 page contains the contact information where you can reach the creators with your questions or inquiries about the app.
The Terms and Conditions page for the Senyas app, which is a mobile app that translates Filipino Sign Language (FSL) into text and speech, and vice versa, outlines the legal agreement between users and the app's developers. It covers a variety of topics, including:

• Acceptance of Terms
• Prohibited Conduct
• Intellectual Property
• Termination
• Limitation of Liability
• Entire Agreement
• Changes to Terms
• Contact Information

By using the Senyas app, users agree to the terms and conditions. It is important for users to carefully read and understand the terms and conditions before using the app.
Figure 3.3.32 shows the About page, which provides users with information about the app and its purpose. The page begins with a brief overview of the Senyas project.

The Senyas mobile application will serve as a part of the solution for the communication gap between Deaf-Mute and hearing people. It will translate sign languages into readable text and hearable speech. The About page describes the app as a "solution" and states that it is designed to be accessible to everyone.
Figure 3.3.33 shows the User Manual page, which provides users with detailed instructions on how to use the Senyas app to translate Filipino Sign Language (FSL) into text and speech, and vice versa. It covers a variety of topics, including:

• System Overview
• To Setup Senyas
• To do Speech-to-Text
• To do Text-to-Speech

The page is divided into several sections, each of which focuses on a specific aspect of using the app. Each section includes clear and concise instructions and illustrations.
Our Senyas prototype is designed to be a working system that can recognize and translate movements in sign language through the hardware and software put into place. During development, issues like improving the accuracy of gesture recognition were addressed. The prototype, consisting of the glove device and system, bridges the communication gap between deaf and hearing individuals using tracking sensors, specifically the joystick and an arm-angle accelerometer, which capture hand movements and orientation, translating them into FSL text. The companion app can convert spoken language into text and vice versa, enabling two-way communication.
3.2 Product Evaluation
Descriptive survey. The use of surveys can be useful in our research study for assessing user satisfaction. Before conducting a survey, the researchers must first determine the research questions to ensure that the questions asked in the survey are relevant. In our study, we opted to use a descriptive survey to collect data on user perceptions and satisfaction with the Senyas device and system. We use this type of survey as it aims to gather the respondents' experience with Senyas across different aspects like device design, mobile app design, functionality, etc. The survey data results will be analyzed using statistical measures. After analyzing the survey data, the researchers will interpret the results and draw conclusions based on the research questions. The survey data will support our findings and guide future development.
Figure 3.4.1. Test Joystick & Gyroscope Value
The evaluation of the joystick and gyroscope is the main topic of this figure, which records sensor values during the testing processes. This comprises raw sensor data, system reactions, and any anomalies observed.
3.3 Algorithm Training
This section describes the dataset and machine learning models used in training and assessment. The dataset is created by extracting relevant features from the device. These features may include hand movements and other cues associated with sign language gestures. Each entry in the dataset corresponds to a specific sign language gesture, labeled with additional information, like hand or finger positions. The validity and dependability of the research findings are critically dependent on the suitable dataset being chosen. This section provides details about the sources, methods, and considerations involved in assembling the dataset. A sketch of how such samples could be streamed for collection is shown below.
3.4 Product Development
Kodular. The Senyas app was made through Kodular, which allows you to create Android apps easily with a blocks-type editor without needing to dig deeply into code. With the Material Design UI, your apps will stand out (Kodular, 2023). It proved to be the ideal platform for developing our mobile application due to its drag-and-drop interface, which significantly simplified the app creation process and eliminated the need for extensive coding knowledge. Its free and open nature made it an accessible and cost-effective platform for our project. Overall, Kodular proved to be an invaluable tool for creating our mobile application.
Figure 3.4.4. The Kodular Platform Environment
Kodular is a web-based platform for building native Android apps. It employs a visual programming interface and drag-and-drop tools to convert UI designs directly into functional code (Kodular, 2023). We used it to build the Senyas application for translating Filipino Sign Language. This demonstrates the potential of Kodular as a promising solution for quickly building robust mobile apps under limited time and resources.
Figure 3.4.5. Kodular Companion
The Kodular Companion is a mobile app that allows developers to test their Kodular apps on their Android devices. The app connects to the Kodular Creator web development platform and allows developers to see their app changes in real time, which can be extremely helpful for debugging and testing purposes. With the Kodular Companion, you don't need to export or compile your app before testing it, which can save a lot of time and effort. It works with recent Android versions, so you can test your app on a wide range of devices. The Kodular Companion is very easy to use: simply connect your Android device to the Kodular Creator web development platform and scan the QR code that is displayed, or enter the code. The Kodular Companion is a valuable tool for developers who are using the Kodular platform to create Android apps. It can help you save time, improve your workflow, and create better apps.
Edge Impulse. We trained our dataset using Edge Impulse, a development platform for machine learning on edge devices. Edge Impulse users can get data from various sources, including data from sensors and public datasets.
Figure 3.4.7. Arduino IDE Interface
The Arduino IDE is popular for prototyping and developing electronics projects due to its ease of use and wide range of supported boards. In our system we utilized the Arduino IDE to bridge the gap between the trained sign language recognition model and the microcontroller embedded in the glove device. The trained model, which resides on the training platform, must be deployed to the device, and this is where the Arduino IDE comes into play. Using the Arduino IDE, the researchers can convert the trained model into a lightweight version that can be stored in the microcontroller's memory. Once the optimized model is embedded in the microcontroller, the glove device can perform real-time sign language recognition.
Fritzing. Fritzing is a tool that lets electronics enthusiasts document their circuits, share their creations with others, teach electronics in educational settings, and even design and manufacture professional printed circuit boards (PCBs). We used Fritzing to design the schematic and hardware layout of our glove device. Fritzing's visual editor let us lay out the components of the device, including the joystick, accelerometer, and microcontroller. Fritzing played a key role in the hardware development of the Senyas device and system. The following tables outline the costs of the development.
Table 3.4.1:
Hardware Cost

Component         Product Description                                  Quantity   Unit Price   Total Price
Microcontroller   ESP32 Development Board                              1          205.00       205.00
MPU6050           Three-axis gyroscope and three-axis accelerometer    1          84.00        84.00
Battery           Lithium-ion Polymer                                  1          400.00       400.00
Charger Module    TP4056 Li-ion Lithium Battery                        1          25.00        25.00
Boost Converter   2A DC-DC Power Module                                1          75.00        75.00
Jumper Wires      Wires                                                1          29.00        29.00
The total cost of 1,919.00 pesos is obtained by summing up the total prices of all components required for the project. This total cost is essential for budgeting and planning.
Table 3.4.2:
Software Subscription

Software            1-Month Subscription   Cost
Kodular             $3.50 per month        $0.00 (Free Plan)
Kodular Companion   Not applicable         $0.00 (Free download at Play Store)
Fritzing            Not applicable         $0.00 (Free)
The total cost of $0.00 indicates that all the mentioned tools and subscriptions are currently being used under free plans or are freely available. The inclusion of "0 PHP" emphasizes that there are no associated costs in the given context. It is common for certain software tools to offer free plans or be open-source.
Table 3.4.3:
Documentation Cost

The documentation cost is obtained by summing up the cost of each item. This total represents the overall cost for the various items and services mentioned, including tokens, transportation, questionnaire copies, and manuscript printing.
Table 3.4.4:
Total Cost

Description             Cost
Hardware Cost           P 1,919.00
Software Subscription   P 0.00
Documentation Cost      P 11,610.00
Total Cost              P 13,529.00
Hardware cost covers the physical components and devices for a project; in this case, the hardware cost is P 1,919.00, broken down into the items listed in Table 3.4.1. Software subscription covers expenses related to software services that may require a subscription fee; in this case, the cost is P 0.00, since the software tools being utilized are open-source, freely available, or currently used under a free plan. The detailed breakdown of the documentation cost includes expenses such as printing, copying, and binding.
4. Integration and Testing. This occurs after the device and app coding and assesses the accuracy of the device's functionalities and its ability to communicate with the app. Testing covers the communication between the device and the app, validating the overall reliability of the system.
In Figure 3.5.1, our testing phase's main goal is to thoroughly evaluate the mobile application's and device's multiple features. The thorough testing plan guarantees that the technology solution satisfies industry standards and user expectations.
Some challenges occurred during the testing phase, even with careful planning. These difficulties are openly discussed and range from unpredictable behavior to integration issues, and they are documented as part of our testing methodology, which offers a framework for interpreting the findings and conclusions.
5. System Deployment. The user is actively engaged in testing the device and app to evaluate comfort and accuracy. This phase involves soliciting user feedback and observing their interactions with the system to ensure seamless two-way communication.
Deployment follows a rollout plan to transition the developed mobile application and device from the development environment into actual use. The tasks, responsibilities, schedules, and materials needed for an effective rollout are described in this plan, which considers things like hardware provisioning and user orientation. The goal is to ensure a smooth and efficient rollout, enabling users to adopt the system quickly.
Figure 3.7.1 illustrates the process of maintaining the device and app through an update cycle intended to enhance both the device and the associated application. The update cycle involves identifying and addressing issues, implementing new features, and ensuring continued reliability.
Research Procedure
To begin the research process, it was essential to gather all the necessary
information and data, which was done through a literature review. The researchers
specifically focused on the existing devices related to the study to identify any gaps
or areas where the proposed research could contribute significantly. The data gathered influenced the design of the device and the procedures to be used in the study.
The researchers aim to employ the Senyas device to test and evaluate whether the objectives of the study are satisfied. The device can be used first to gather a dataset of FSL to train the model. The datasets are then used to train the algorithm and validate its accuracy.
The algorithm or model will then be integrated into the glove device and the system to enable two-way communication.
Finally, the researchers would determine the accuracy of the device through a series of tests and conclude its usability and effectiveness in real settings based on user experience and the results of the tests and surveys conducted.
Research Instrument
The research instrument was checked for validity and reliability through a pilot test. The main advantage of pilot testing is to find issues before launching the complete or final device. Pilot testing evaluates the device's overall usability and examines whether the device is gathering the intended data and performing its functionality.
User experience (UX) research is the study of learning what end users of a system or product need and want, then employing those insights to enhance the design process for products, devices, services, or software.
Using UX testing in our study, we could analyze the mobile application's and wearable device's overall user experience. This involves monitoring people while they use the system and taking note of their contentment, usability issues, and any problems. UX testing may reveal information about how effective the system is in actual use.
An assessment known as an accuracy test focuses on determining the accuracy and precision of a device or system. Accuracy tests are performed by comparing the device's output against the expected result. The goal is to determine the level of accuracy and identify any errors.
For Senyas, the accuracy test examines how accurately the device translates sign language gestures into the corresponding audio and text representations. The test compares the device's translations with the expected ones, helping ensure that the technology provides reliable and precise translations.
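To illustrate the idea, the sketch below expresses such an accuracy check in a few lines of Python. It is a minimal illustration written for this document; the function and variable names are ours and not part of the system's code.

def translation_accuracy(predictions, expected):
    # Fraction of translated gestures that match the expected text, in percent
    correct = sum(p == e for p, e in zip(predictions, expected))
    return correct / len(expected) * 100

# Hypothetical test run: three of four gestures translated correctly -> 75.0
print(translation_accuracy(["A", "7", "B", "C"], ["A", "7", "B", "D"]))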
Specifically, we will employ the Weighted Mean as the statistical tool for this purpose. This approach will enable us to extract meaningful insights from the survey data.
Weighted Mean

W = (5x5 + 4x4 + 3x3 + 2x2 + 1x1) / T

Where:
W – Weighted mean
xn – Number of responses at scale n
T – Total number of respondents
Percentage

P = (F / T) × 100

Where:
F – Frequency of respondents
T – Total number of respondents
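As a worked example, the Python sketch below applies both formulas to the response counts of the first indicator in Table 4.8 (1, 1, 0, 2, 1 across scales 5 to 1) and to the gender distribution in Table 4.3. The code is illustrative only; the actual analysis was done as described above.

def weighted_mean(counts_by_scale):
    # counts_by_scale[n] = number of respondents who answered scale n (1-5)
    total = sum(counts_by_scale.values())
    return sum(scale * count for scale, count in counts_by_scale.items()) / total

def percentage(frequency, total):
    # P = (F / T) * 100
    return frequency / total * 100

counts = {5: 1, 4: 1, 3: 0, 2: 2, 1: 1}   # first indicator of Table 4.8
print(weighted_mean(counts))               # -> 2.8 ("Neutral")
print(percentage(4, 5))                    # -> 80.0 (4 of 5 male respondents)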
CHAPTER IV
RESULTS AND DISCUSSION
This chapter covers the system functionality and features that meet the objectives of the study. It provides further details on the system and its development, examines the Requirement Analysis and Specification, and presents the interpretation of the data collected, offering valuable insights and facilitating a nuanced understanding of the findings of the study. This section serves as a critical part of the research.
1. To develop a system that can recognize FSL and interpret it to a normal person
1.1 To develop a wearable device that can recognize hand gestures
The researchers developed a wearable device that can track both the finger and hand movements. When reading sign language, the range of motion for hand gestures must be considered. This involves the bending of the fingers, as well as the orientation and movement of the hand. An accelerometer and a gyroscope can be used to track the hand's orientation and movement.
The accelerometer is used to detect the velocity of the hand movement, while the gyroscope tracks its orientation. Another sensor is utilized to measure finger bending; a flex sensor is usually used in this case, but its cost makes the system more expensive. This research is a great opportunity to use a new low-cost sensor and analyze its performance when integrated with the system. The joystick is most commonly used to control machines and for video games. It is cheap and can easily be replaced, and with that, we opted to use it as a sensor to estimate the bending of the fingers. These sensors are connected to a microcontroller that processes their data and calculates the output. The device thus complies with the study's objective to recognize hand gestures.
The next requirement is the recognition of FSL and the ability to classify it to its appropriate meaning. The AI that will be implemented in our system will handle this workload, and its performance will be determined by the sensors and the data gathered. To avoid any confusion, our approach differs from a vision-based system: as noted by Tan Ching Phing, et al. (2019), vision-based systems recognize hand and finger gestures through cameras, while wearable technology for sign language identification often uses glove-based or user-attached sensors.
Several AI development platforms are available today, like Roboflow and Edge Impulse. Since we are handling
raw data from sensors, unlike the images and videos of vision-based systems, we decided to use Edge Impulse for developing the FSL-recognizing AI and implementing it in the system. Edge Impulse is suitable for developers at all skill levels because it simplifies the integration of machine learning into edge devices. It also includes optimization features that let models run on edge devices with less storage and compute. The platform also includes models that are available to users.
The model we used is the Neural Network Classification model, which is included for free in the platform. Inspired by the human brain, this model uses algorithms to learn from our data and predict possible outcomes based on new data points. This model is claimed to be ideal for anomaly detection and predictive classification.
As for the datasets needed for machine learning, datasets are very important to having an accurate system. Acquiring the datasets for our system is achieved by recording the hand gestures and signs of FSL while wearing the device. The raw data from the sensors is extracted through serial communication. An 80:20 split of the data is made into training and test sets. This ratio effectively trains the model while also reserving unseen data for validating its performance.
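Edge Impulse performs this split when data is uploaded; the Python sketch below shows the equivalent 80:20 operation, assuming the recordings have been exported as a list of labeled samples (the names are illustrative, not the platform's code).

import random

def train_test_split(samples, train_ratio=0.8, seed=42):
    # Shuffle labeled recordings and split them 80:20 into train and test sets
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# samples: list of (label, sensor_window) tuples exported from the device
# train_set, test_set = train_test_split(samples)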
After the data is captured, it is pre-processed by extracting the timestamps and labelling the dataset. The datasets acquired are then used for training.
Features are then generated from the raw data in the datasets. Since raw data is frequently complex and high-dimensional, models have a hard time learning from it. By identifying the most pertinent and instructive features, feature extraction lowers the data's dimensionality and improves the accuracy and efficiency of the learning process.
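Edge Impulse generates such features with its built-in processing blocks. As a simplified stand-in for what the platform does internally, the sketch below reduces each raw sensor window to a few summary statistics per channel; it is a minimal illustration, not the platform's actual DSP code.

import math

def extract_features(window):
    # window: list of per-sample channel readings, e.g. 21 samples x 11 channels
    # Returns mean, standard deviation, and RMS for each channel
    features = []
    num_channels = len(window[0])
    for ch in range(num_channels):
        values = [sample[ch] for sample in window]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        rms = math.sqrt(sum(v * v for v in values) / len(values))
        features.extend([mean, math.sqrt(var), rms])
    return features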
Referring to Figure 4.1.1, you will observe the features derived from the raw data, visualized within the platform by grouping them and assigning a different color to each label. Because of the complexity of the different datasets gathered, you can see some features overlapping, like "A" and "7". This can affect the accuracy of recognizing these signs.
Figure 4.1.1. Feature Extraction
The features generated from the datasets are then used by the Neural Network Classification model to learn the different patterns in the data. The model was trained for 50 epochs with a learning rate of 0.0005 and a batch size of 32. The neural network architecture consists of an input layer (231 features), a dense layer of 80 neurons, a dense layer of 100 neurons, and another dense layer of 100 neurons, with dropout layers in between.
With this architecture, the model was trained on the gathered dataset using several dense layers, a 'softmax' output activation, and dropout layers of 0.1 to avoid overfitting. The results of the evaluation are then produced by the platform.
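Edge Impulse generates the training code internally; the Keras sketch below approximates the architecture described above. The number of output classes, the ReLU activation on hidden layers, and the exact placement of the dropout layers are assumptions, since only the layer sizes, dropout rate, and training hyperparameters are stated.

import tensorflow as tf

num_classes = 36  # assumed: the FSL letters and digits in the dataset

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(231,)),           # 21 samples x 11 channels
    tf.keras.layers.Dense(80, activation="relu"),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(X_train, y_train, epochs=50, batch_size=32,
#           validation_data=(X_test, y_test))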
Refer to Figure 4.2 for the evaluation results of each label, such as "A".
1.2 To develop an android application for the system to display translated FSL
The application uses the built-in Android speech recognizer to support two-way communication. This avoided the need to build a speech recognition system from scratch; the recognizer was incorporated through simple configuration and by connecting it to the text display module.
When a normal person speaks into the Android device's microphone, the speech recognizer converts the spoken words into text. Key parameters like language and accent were configured to optimize speech recognition, and the recognized speech is displayed as translated text. This allows seamless conversion of verbal speech into visual text in real-time within the application through the integration of the speech recognizer module.
Additionally, the displayed text can be converted into speech output using the text-to-speech module. In this way, a hearing person's speech can be captured as text, and text-to-speech synthesis can vocalize messages on behalf of deaf-mute users.
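The app itself reaches these services through Kodular's speech recognizer and text-to-speech components. The Python sketch below mimics the same round trip on a desktop using the third-party SpeechRecognition and pyttsx3 packages; the library choice and language code are illustrative, not what the app actually runs.

import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

# Speech-to-text: capture the hearing person's voice and show it as text
with sr.Microphone() as source:
    audio = recognizer.listen(source)
text = recognizer.recognize_google(audio, language="fil-PH")  # Filipino locale
print("Recognized:", text)

# Text-to-speech: vocalize a message for the hearing person
tts.say(text)
tts.runAndWait()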
1.3 To enable the device to communicate with the application in real-time
The device needs to continuously stream recognized FSL data to the software application. To achieve this, the device sends the string output as bytes that are then streamed through the Bluetooth connection between the device and the application. Bluetooth is known for its limited communication range, but we need to reduce the power consumption of the device, and Bluetooth offers lower power consumption at comparable data transfer rates compared to other wireless options.
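On the app side this stream is handled by Kodular's Bluetooth client component. The sketch below shows the same idea in Python with pyserial over a serial-over-Bluetooth port; the port name is a placeholder and the 115200 baud rate is an assumption carried over from the device's debugging settings.

import serial

# "/dev/rfcomm0" is a placeholder for the bound Bluetooth serial port
with serial.Serial("/dev/rfcomm0", 115200, timeout=1) as port:
    while True:
        data = port.read(64)          # recognized FSL labels arrive as bytes
        if data:
            label = data.decode("utf-8", errors="ignore").strip()
            print("Recognized FSL:", label)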
2. To identify the limitations of the system for future development and research
We evaluated the device and system performance. The results showed accuracy limitations in the system's recognition capabilities. Survey results from respondents also indicate that the system should be improved for long-term continuous use. This evaluation enabled us to identify limitations and areas to work on. In future development, we will add more sign language gestures to the device's library and make it more accurate at recognizing them. We'll also explore ways to make it more comfortable to wear and use for a long duration of time or for continuous conversation. By fixing these things, we'll make the device more reliable and deliver the best user experience.
A product evaluation was conducted with five participants. This provides feedback on the overall performance of the system: Senyas Glove and App. When analyzing and interpreting the computed values, refer to the legend provided with each table.
Table 4.1
Table 4.1 shows the percentage of respondents according to their age. Out of 5 respondents, all (100%) were aged 14-19 years old.
Table 4.2
Out of 5 respondents, all of them (100%) were from Samar National School. We chose this school since it has a Special Education class for deaf and mute students.
Table 4.3

Gender   Frequency   Percentage
M        4           80%
F        1           20%
Total    5           100%
Table 4.4
This table presents the results of a survey evaluating the Senyas Filipino Sign Language Translation Device and System for Two-Way Communication. The survey collected feedback from 5 respondents aged 14-19 years old who are deaf or deaf-mute. A weighted mean was calculated for each metric to summarize the average rating. An interpretation guide is provided with each table.
Table 4.5
Device Performance

Indicators   5   4   3   2   1   Weighted Mean   Interpretation

Regarding the device's overall comfort in the hands of the user, the results show a neutral rating of 3 for comfort. This means that the device is not especially easy to wear or remove, but it is not uncomfortable either. Overall, users have mixed feelings about the device's comfort.
Table 4.6
Mobile Application Performance

Indicators   5   4   3   2   1   Weighted Mean   Interpretation

Users gave a neutral rating of 2.7 for the mobile app. The app interface is easy to understand and navigate, and the app connects to the device without any issues. Users are generally happy with the mobile app.
Table 4.7
System Performance

Indicators   5   4   3   2   1   Weighted Mean   Interpretation

Regarding the system performance, the survey results from the respondents were neutral, with a total rating of 3.3 for the system's accuracy. The translations of Filipino Sign Language were not totally accurate, but the system performs well in other areas. Thus, the users were satisfied with the system's overall performance.
Table 4.8
User Experience

Part 5: User Experience

Indicators                                                      5   4   3   2   1   Weighted Mean   Interpretation
The system includes instructions and a user manual
that are very comprehensible.                                   1   1   0   2   1   2.8             Neutral
The overall system demonstrates ease-of-use and
user-friendliness.                                              0   3   1   1   0   3.4             Neutral
The system is able to operate in any conditions
with no problem.                                                0   0   1   1   3   1.6             Dissatisfactory
How would you rate the system when it comes to
long-term use?                                                  1   0   2   0   2   2.6             Neutral
The system was able to function with no issues
in a normal setting.                                            1   0   2   1   1   2.8             Neutral
The system can be used for informal conversation.               2   2   1   0   0   4.2             Satisfactory
The system will be able to be used in formal
conversation.                                                   2   0   1   2   0   3.4             Neutral
How would you rate the system in interpreting FSL
in real-time?                                                   1   1   2   0   1   3.2             Neutral
The system was able to keep up with the
conversation.                                                   1   1   2   0   1   3.2             Neutral
Rate the overall usability of the system for
day-to-day activities.                                          3   0   0   0   2   3.4             Neutral

Legend: "Very Dissatisfactory" (1.0-1.7), "Dissatisfactory" (1.8-2.5), "Neutral" (2.6-3.4), "Satisfactory" (3.5-4.2), and "Very Satisfactory" (4.3-5.0)
The user experience section covers how the system performs from the perspective of the user. Users gave a neutral rating of 3.1 for the user experience. This is because the respondents found that it could not function well during formal conversation; there is room for improvement in the user experience.
Table 4.9
Overall Satisfaction

Indicators   5   4   3   2   1   Weighted Mean   Interpretation
The last part of the survey is about the overall satisfaction of the respondents. The survey results gathered from the respondents amount to a 3.6 rating, which means they were satisfied with the overall performance of the system.
In summary, users have mixed feelings about the device's comfort, giving it
a neutral rating of 3. The mobile app is easy to use and connect, and users are
generally happy with it. It received a neutral rating of 2.7. The system's accuracy
is not perfect, but users are satisfied with its overall performance. It received a
neutral rating of 3.3. There is room for improvement in the user experience, but
users are satisfied with the overall performance of the system. It received a neutral
rating of 3.1. Users are overall satisfied with the system, giving it a rating of 3.6.
Chapter V
This chapter is the final part of our research project. It summarizes the most important points: what we wanted to do with our project, how we did it, and what we discovered. Then, we carefully analyze the results and explain what they mean for our study. Finally, we suggest ways to improve the Senyas system in the future so that it can help even more people with hearing and speech impairments participate in society.
Summary
People who are deaf and mute face big problems talking to others. Their
disability makes it hard for them to live normal lives and often makes them feel
lonely and disconnected from the world. It's even harder for people who are both
deaf and mute, as they have a huge communication barrier when interacting with
people who don't know sign language or haven't talked to deaf and mute people
before.
Thus, this chapter summarizes our thesis project, "Senyas: Filipino Sign Language Translation Device and System for Two-Way Communication." The project aimed to bridge the communication gap between hearing individuals and those who are deaf or mute, and for both. We achieved this by developing a glove device that captures finger movements and arm angles, which are then translated into corresponding Filipino Sign Language (FSL) hand gestures.
In our thesis project, collecting raw data and training on it using the Neural Network Classification model played a crucial role. This data serves as the foundation for the model to learn the different FSL gestures. By refining the collection of raw data and the process of using it to train the model, we can enhance the effectiveness and accuracy of our Senyas system for FSL recognition.
One limitation we found is that the joystick struggles with subtle gestures. This is because some sign language gestures only require a small bend of the finger, but the joystick doesn't detect or read that; it mostly recognizes larger movements. Nevertheless, the system serves individuals who are deaf and mute: we successfully translated finger movements and arm angles into their FSL representations, and the mobile application provided the text and speech output for two-way communication.
The development of Senyas represents a significant contribution to the field of assistive technology. This innovative system has the potential to improve the lives of individuals who are deaf and mute by enabling them to communicate more effectively, although there is room for further development. Future efforts will focus on enhancing the system's accuracy and comfort. We believe that by continuing to refine and develop the Senyas system, we can help even more people.
Conclusion
Our thesis project, "Senyas: Filipino Sign Language Translation Device and System for Two-Way Communication," has successfully achieved the following objectives:
1.) To develop a system that can recognize FSL and interpret it to a normal person
2.) To identify the limitations of the system for future development and research
The Neural Network Classification model was instrumental in achieving these objectives. The model's ability to learn complex patterns made accurate FSL recognition possible. Additionally, its adaptability facilitated the continuous improvement of the system.
A survey questionnaire was used as the product evaluation instrument to gather data from the respondents in this thesis project. The survey questionnaire mainly consists of five parts. The device and system performance ratings of 3 and 3.3, respectively, mean that the respondents were neutral in these parts, as was the mobile application performance rating of 2.7. The overall satisfaction rating of 3.6 means that they were satisfied with the system. The user experience section has a total rating of 3.1, which means that the respondents were neutral about it.
However, only a few respondents participated in our product evaluation, and most of the participants could not read or write well. This means that we did not get enough feedback to make broad generalizations. Even so, the system shows potential in facilitating two-way communication for deaf and mute individuals and offers a new approach to communication and social interaction for those who face communication barriers. The performance of the system was measured through a survey questionnaire, and we can analyze the results as follows.
The respondents were 5 students aged 14-19 years old from Samar National School. There were 1 female and 4 male respondents. In terms of hearing ability, there was 1 deaf respondent and 4 deaf-mute respondents. All 5 answered the survey questionnaire. We chose this type of questionnaire because it is relatively easy to design and administer, and it can provide quantitative data that can be easily analyzed.
To analyze the data, a weighted mean score was calculated for each survey metric. The formula for the weighted mean is provided above, where W is the weighted mean, x is the number of responses for each scale rating, and T is the total number of respondents. The computed weighted means enabled statistical analysis of the average user ratings on the various metrics, allowing us to analyze the user rating data statistically and measure the system's performance.
Recommendation
The researchers recommend the following for future development and improvement:
• More dataset gathering: Gather more datasets to improve the accuracy of the model and provide enhanced functionality.
• Conveying emotions: Explore ways of conveying emotions alongside the translated text.
• Accessibility: Ensure that users with different devices (phones, tablets, etc.) and screen sizes can access and use the app comfortably. This includes users with visual impairments who may rely on larger text.
• Battery life indicator: Knowing the device's battery status keeps the user aware of the remaining usage time. In addition, when the device is turned off, it should automatically disconnect its Bluetooth connection from the app.
• Alternative form factors: Explore armbands, rings, or wristbands that can track finger movements and hand orientation accurately.
• Hardware miniaturization: Redesign the PCB and battery to make the device more compact and reduce its overall size and weight.
• Calibration: Add a calibration feature that allows users to personalize the system for their specific hand size and movement patterns.
BIBLIOGRAPHY
https://doi.org/10.30534/ijeter/2021/13972021.
https://doi.org/10.1088/1742-6596/1019/1/012017.
on-Communication-Disorders/.
extraction method for hand gesture recognition with Leap Motion." Journal of
https://doi.org/10.1145/3477498
Anupama, H.S., et al. (2021). Automated sign language interpreter using
0076
from https://www.lifeprint.com/
system for Filipino Sign Language to written Filipino text. Journal of Information
Cheok, M. J., et al. (2019). A review of hand gesture and sign language
Choi, H., & Park, M. (2022). Method and apparatus for sign language
Cristobal, S., & Martinez, L. B. (2021). Filipino Sign Language as
https://doi.org/10.1080/01434632.2020.1788813
education-sped-program/
Lifewire. https://www.lifewire.com/what-are-portable-devices-2377121
communication
Gadekallu TR, et al. (2021). Hand gesture classification using a novel CNN-
https://doi.org/10.1007/s40747-021-00324-x
Garg, S., & Dhall, A. (2021). Assistive Technologies for the Deaf and Hard
https://cloud.google.com/speech-to-text
Gu Y, et al. (2022). American Sign Language Translation Using Wearable
Inertial and Electromyography Sensors for Tracking Hand Movements and Facial
https://doi.org/10.3389/fnins.2022.962141
Kim G-M & Baek J-H. (2019). Real-time hand gesture recognition based on
https://play.google.com/store/apps/details?id=io.makeroid.companion&hl=en&gl=
US
Lee, J., & Kim, H. (2019). Glove-based sign language recognition system
and method. [US Patent No. 9,952,072]. United States Patent and Trademark
Office.
3557362/SignAloudglovestranslate-sign-language-movements-spoken
English.html.
Microsoft. (2023). Azure Speech Services. Retrieved from
https://azure.microsoft.com/en-us/services/cognitive-services/speech-services/
Sensory Gloves for Sign Language Recognition State of the Art between 2007 and
https://doi.org/10.1145/3485768.3485783
https://doi.org/10.1016/j.eswa.2022.118993
https://www.rxlist.com/mute/definition.html
share/mobile/philippines
Tan Ching Phing, et al. (2019). Wireless Wearable for Sign Language
Translator Device with Android-based App. University Tun Hussein Onn Malaysia.
https://doi.org/10.1007/978-981-13-6031-2_27
Terraskills. (2023). Sign language's importance in communication.
communication/
research
https://wfdeaf.org/our-work/sign-language/
from https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-
from https://www.who.int/health-topics/disability#tab=tab_1
https://www.who.int/news-room/fact-sheets/detail/disability-and-health
Matter. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-
explained
Zheng, J., Zhao, Z., Chen, M., Chen, J., Wu, C., Chen, Y., Shi, X., & Tong,
APPENDIX A
APPENDIX B
APPENDIX C
APPENDIX D
APPENDIX E
Questionnaire
Part 1: Demographics
Name (Optional):
•
Age:
•
School:
Gender:
•
Mode:
Part 2: Device Performance
Instructions: On a scale of 1 to 5, where 1 means "Very Dissatisfied" and 5
means "Very Satisfied," please choose the number that best represents your
satisfaction with the Senyas device performance.
1 = Very Dissatisfied
2 = Dissatisfied
3 = Neutral
4 = Satisfied
5 = Very Satisfied
Questions 1 2 3 4 5
How would you rate the ease of setting
up the wearable device for initial use?
How would you rate the comfortability of
the device?
The device demonstrates adjustability to
the user’s hand.
Part 3: Mobile Application Performance
Instructions: On a scale of 1 to 5, where 1 means "Very Dissatisfied" and 5
means "Very Satisfied," please choose the number that best represents your
satisfaction with the Senyas mobile application performance.
1 = Very Dissatisfied
2 = Dissatisfied
3 = Neutral
4 = Satisfied
5 = Very Satisfied
Questions 1 2 3 4 5
and comprehensible.
comprehensible.
Part 4: System Performance
Instructions: On a scale of 1 to 5, where 1 means "Very Dissatisfied" and 5
means "Very Satisfied," please choose the number that best represents your
satisfaction with the Senyas system performance.
1 = Very Dissatisfied
2 = Dissatisfied
3 = Neutral
4 = Satisfied
5 = Very Satisfied
Questions 1 2 3 4 5
The whole system is easy to set-up and
can be used immediately.
Different features included in the system
were able to function.
There were no issues with the
connectivity of the device and the
software.
The system was able to recognize
different FSL letters/words.
System was able to construct
comprehensive sentence from FSL.
How would you rate the accuracy of the
system when it comes recognizing and
translating FSL?
How satisfied are you with the number of
letters/words that system was able to
translate from FSL?
How would you rate the overall
functionality and the features included in
system?
Part 5: User Experience
Instructions: On a scale of 1 to 5, where 1 means "Very Dissatisfied" and 5
means "Very Satisfied," please choose the number that best represents your
satisfaction with the Senyas user experience.
1 = Very Dissatisfied
2 = Dissatisfied
3 = Neutral
4 = Satisfied
5 = Very Satisfied
Questions 1 2 3 4 5
Part 6: Overall Satisfaction
Please provide your insights regarding the overall satisfaction using the
Senyas: Filipino Sign Language Translation Device and System in addressing
Two-Way Communication. Please share your insights in the spaces provided
below.
Instructions: On a scale of 1 to 5, where 1 means "Very Dissatisfied" and
5 means "Very Satisfied," please choose the number that best represents your
satisfaction with the Senyas performance.
1 = Very Dissatisfied
2 = Dissatisfied
3 = Neutral
4 = Satisfied
5 = Very Satisfied
Question 1 2 3 4 5
APPENDIX F
Disclaimer
APPENDIX G
SENYAS: Filipino Sign Language Translation Device and System for Two-Way
Communication
1. Acceptance of Terms
By using the App, you agree to these terms and conditions. If you do not agree to these terms, do not use the App.
2. Prohibited Conduct
You agree not to use the App for any illegal or unauthorized purpose.
You also agree not to use the App in any way that could damage, disable,
or impair the App or interfere with any other party's use of the App. You
further agree not to use the App to transmit any content that is unlawful or harmful, or to take other inappropriate actions.
3. Intellectual Property
The App and all of its contents, including but not limited to the text,
graphics, images, and audio, are the property of Senyas or its creators. You
agree not to copy, modify, distribute, or create derivative works of the App without permission.
4. Termination
We may terminate your right to use the App at any time, for any
reason, without notice. You may terminate your right to use the App by uninstalling it.
5. Limitation of Liability
Even if you are aware of the potential for harm, Senyas will not be
held accountable for any harm that comes from using the App. This includes direct and indirect damages.
6. Entire Agreement
This agreement is the only one that matters between you and Senyas
about the App. It replaces any other agreements or promises you may have received before.
7. Change to Terms
We may change these rules anytime, and we'll let you know by
posting the new ones on the app or our website. You agree to check the
rules regularly and follow the latest ones. If you don't agree to the new rules, stop using the App.
8. Contact Information
If you have any questions about these Terms, please contact us at:
APPENDIX H
USER MANUAL
2023
Senyas: Filipino Sign Language Translation Device
and System for Two-Way Communication
Table of Contents:
1. System Overview
- What is Senyas?
- How it Works?
2. Getting Started
3. Android 12 or Higher
4. Bluetooth Connection
- Pair Device
5. Speech-to-text
6. Senyas Device
System Overview
What is Senyas?
The Senyas mobile application serves as part of the solution for communication between a deaf-mute person and a hearing person. The app is capable of features such as translating sign language into readable text and hearable speech. The app fully utilizes the Senyas glove device.
How it works?
The system works by translating sign language gestures into text and text-to-speech output in real time.
Getting Started
Step 1: Visit the application's Facebook page and look for the links for the Senyas download. Alternatively, you can download the App from Applivery or scan the QR code of the download page.
Step 2: Click on “Install” to download the app to your device
Step 3: Once the installation is complete, locate the app on your device's home screen.
Android 12 or Higher
Ask Permission
Note:
If you are using Android 12 or higher, you need to grant certain permissions for the app to work with the device. By tapping "Allow" when prompted, you give the app approval to activate Bluetooth and discover nearby devices.
Bluetooth Connection
Paired Device
Step 1: Open the Senyas App on your Android device and Senyas Device.
Step 2: If Bluetooth is turned off on your Android device, a notification will appear asking you to turn it on.
Step 3: Turn on Bluetooth
Step 4: In the Senyas App, under the "Available Devices" section, find and tap the entry named "Senyas Device" to pair with the device.
Step 5: Wait for the pairing process to complete. You'll see a notification when
pairing is successful.
Step 6: Once the Senyas device is successfully paired, go back to the Senyas
App.
Speech-to-Text
Step 1: A prompt will appear instructing you to turn on Wi-Fi. Tap "Continue", which will take you to your device's Wi-Fi settings.
Step 2: In your device's Wi-Fi settings, turn on Wi-Fi and connect to a wireless
network.
Step 3: Once connected to Wi-Fi, go back to the Senyas App.
Step 4: You will see a pop-up asking for microphone permissions. Tap "While using the app".
Step 5: You can now use Speech-to-Text by long-pressing the button.
Senyas Device – Setup and Maintenance
Power Switch
I – Power On
O – Power Off
LED Indicators
Device Ports
Debugging Port – Used to debug and configure the code of the device. When monitoring the device, use a serial monitoring app with the baud rate set to 115200.
NOTE: When using the debugging port, make sure to turn off the device first.
There is no protection in the device when it is supplied by two external power supplies.
Charging Port – This port is used to charge the device. The device comes with a 550mAh Li-Po battery with which the device can be used continuously for 2 hours and 30 minutes. It also takes 2 – 3 hours to fully charge the device.
NOTE: Even though the device has overcharge protection, make sure not to
charge the device for more than 3 hours. And make sure that the device isn’t
exposed to sunlight for a very long time.
APPENDIX I
DATA SHEET
ESP-WROOM-32
ESP32 Pin Diagram
Schematic Diagram
MT3608 – 2A DC-DC Boost Power Module
overload protection
Schematic Diagram
MPU6050
Gyroscope
Accelerometer
Supply Voltage
Operating Circuits
Lithium-ion Polymer
• Material – Cobalt
Discharge: -20~60°C
Schematic Diagram
TP4056 Li-ion Lithium Battery Charger Module
Schematic Diagram
Rocker Switch
Schematic Diagram
APPENDIX J
Source Code
Software Code
Splash Screen
Homepage
About Page
Help Page
User Manual Page
Question Page
Hardware Code
// NOTE: Lines lost in printing were reconstructed to make the sketch compile.
// The joystick ADC pin numbers below are placeholders and must match the
// actual wiring of the glove.
#include <SenAi_Test_inferencing.h>   // Edge Impulse model exported for this project
#include <Arduino.h>
#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>
#include <Wire.h>
#include <BluetoothSerial.h>

#define SAMPLING_FREQ_HZ 10
#define SAMPLING_PERIOD_MS (1000 / SAMPLING_FREQ_HZ)
// Channels per frame (5 joysticks + 3 accel axes + 3 gyro axes = 11)
#define NUM_CHANNELS EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME

// Placeholder ADC pins for the five joystick axes (one per finger)
const int JOY_PINS[5] = {32, 33, 34, 35, 36};

Adafruit_MPU6050 mpu;
BluetoothSerial SerialBT;

char com;
String trigger;

void setup(void) {
  Serial.begin(115200);
  SerialBT.begin("Senyas Device");   // Bluetooth name seen by the app
  while (!Serial) {
    delay(10); // will pause Zero, Leonardo, etc until serial console opens
  }

  // Try to initialize the MPU6050; halt if it is not wired correctly
  if (!mpu.begin()) {
    Serial.println("Failed to find MPU6050 chip");
    while (1) {
      delay(10);
    }
  }
  Serial.println("Hardware Initiated");

  mpu.setAccelerometerRange(MPU6050_RANGE_2_G);
  mpu.setGyroRange(MPU6050_RANGE_250_DEG);
  Serial.println("MPU set");
  delay(1000);
}

void loop() {
  ei_impulse_result_t result;
  int err;
  float input_buf[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];
  signal_t signal;
  sensors_event_t a, g, temp;

  // Accumulate commands sent by the app over Bluetooth
  while (SerialBT.available()) {
    delay(10);
    com = SerialBT.read();
    trigger += com;
  }

  // "a" from the app triggers one recognition cycle
  if (trigger == "a") {
    // Sample all channels until the model's input frame is full
    for (int i = 0; i < EI_CLASSIFIER_RAW_SAMPLE_COUNT; i++) {
      unsigned long timestamp = millis();

      mpu.getEvent(&a, &g, &temp);   // read accelerometer and gyroscope

      // Axes are remapped to match the orientation used during training
      float acc_z = a.acceleration.x;
      float acc_y = a.acceleration.y;
      float acc_x = a.acceleration.z;
      float gyro_x = g.gyro.x;
      float gyro_y = g.gyro.y;
      float gyro_z = g.gyro.z;

      // Joystick readings estimate the bend of each finger
      float joy_1 = analogRead(JOY_PINS[0]);
      float joy_2 = analogRead(JOY_PINS[1]);
      float joy_3 = analogRead(JOY_PINS[2]);
      float joy_4 = analogRead(JOY_PINS[3]);
      float joy_5 = analogRead(JOY_PINS[4]);

      // Storing sensor data to input buffer of the model for FSL recognition
      input_buf[(NUM_CHANNELS * i) + 0] = joy_1;
      input_buf[(NUM_CHANNELS * i) + 1] = joy_2;
      input_buf[(NUM_CHANNELS * i) + 2] = joy_3;
      input_buf[(NUM_CHANNELS * i) + 3] = joy_4;
      input_buf[(NUM_CHANNELS * i) + 4] = joy_5;
      input_buf[(NUM_CHANNELS * i) + 5] = acc_x;
      input_buf[(NUM_CHANNELS * i) + 6] = acc_y;
      input_buf[(NUM_CHANNELS * i) + 7] = acc_z;
      input_buf[(NUM_CHANNELS * i) + 8] = gyro_x;
      input_buf[(NUM_CHANNELS * i) + 9] = gyro_y;
      input_buf[(NUM_CHANNELS * i) + 10] = gyro_z;

      // Keep the sampling rate at SAMPLING_FREQ_HZ
      while (millis() < timestamp + SAMPLING_PERIOD_MS) {
        delay(1);
      }
    }

    // Wrap the buffer in a signal and run the Edge Impulse classifier
    err = numpy::signal_from_buffer(input_buf, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
    if (err != 0) {
      Serial.print(err);
      return;
    }
    err = run_classifier(&signal, &result, false);
    if (err != 0) {
      Serial.print(err);
      return;
    }

    // Find the label with the highest probability
    float maxProbability = 0.0;
    int maxIndex = 0;
    Serial.println("Predicted");
    for (int i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
      Serial.print(" ");
      Serial.print(result.classification[i].label);
      Serial.print(": ");
      Serial.println(result.classification[i].value);
      if (result.classification[i].value > maxProbability) {
        maxProbability = result.classification[i].value;
        maxIndex = i;
      }
    }

    // Send the recognized FSL label to the mobile app
    SerialBT.print(result.classification[maxIndex].label);
    SerialBT.print(" ");
    Serial.print("Result:");
    Serial.print(result.classification[maxIndex].label);
    Serial.print(" ");

    trigger = "";   // wait for the next trigger from the app
  } else {
    Serial.println("");
  }
}
CURRICULUM VITAE