Eye Blink Detection: Integrated - Master of Computer Applications

The document describes an eye blink detection project. It was submitted by three students - Anudeep Dasari, Abhijna P S, and Navneet Kumar Singh - under the guidance of Mr. Akshay S at Amrita Vishwa Vidyapeetham University. The project involved developing an application that uses the camera to detect eye blinks and aims to remind users to take breaks when using computers in order to protect eye health. It utilized technologies like OpenCV, Dlib, PyCharm, PyQt5 and calculated the eye aspect ratio (EAR) to determine blinks.


EYE BLINK DETECTION

A project report submitted

in partial fulfilment of the requirement for the award of the degree of

Integrated - MASTER OF COMPUTER APPLICATIONS

by

ANUDEEP DASARI [MY.SC.I5MCA18019]


ABHIJNA P S [MY.SC.I5MCA18024]
NAVNEET KUMAR SINGH [MY.SC.I5MCA18033]

under the guidance of


Mr. AKSHAY S
Assistant Professor, Department of Computer Science,
Amrita Vishwa Vidyapeetham, Mysuru Campus

AMRITA VISHWA VIDYAPEETHAM


MYSURU CAMPUS

January 2021

1
AMRITA VISHWA VIDYAPEETHAM

MYSURU CAMPUS

BONAFIDE CERTIFICATE

This is to certify that the project entitled "EYE BLINK DETECTION" submitted by,

ANUDEEP DASARI [MY.SC.I5MCA18019]


ABHIJNA P S [MY.SC.I5MCA18024]
NAVNEET KUMAR SINGH [MY.SC.I5MCA18033]

for the award of the degree of Integrated - Master of Computer Applications at Amrita
School of Arts and Sciences is a bonafide record of the work carried out by them under my
guidance and supervision at Amrita Vishwa Vidyapeetham, Mysuru.

SUPERVISOR PROJECT CO-ORDINATOR (UG)

Mr. AKSHAY S Mr. AKSHAY S

Assistant Professor Assistant Professor


Department of Computer Science Department of Computer Science
Amrita School of Arts and Sciences Amrita School of Arts and Sciences

CHAIRPERSON

Mr. Adwitiya Mukhopadhyay

(i)

2
AMRITA VISHWA VIDYAPEETHAM

MYSURU CAMPUS

DECLARATION

We,
ANUDEEP DASARI [MY.SC.I5MCA18019]
ABHIJNA P S [MY.SC.I5MCA18024]
NAVNEET KUMAR SINGH [MY.SC.I5MCA18033]

hereby declare that this project report, entitled “EYE BLINK DETECTION” is a record of
the original work done by us under the guidance of Mr. AKSHAY S, Assistant Professor,
Department of Computer Science, Amrita School of Arts and Sciences, Mysuru and that to
the best of our knowledge, this work has not formed the basis for the award of any
degree/diploma/associate-ship/fellowship or a similar award, to any candidate in any
University.

Signature of the Student

1. Abhijna PS

2. Navneet Kumar Singh

3. Anudeep Dasari

Place: Mysuru Date: 09/01/2021

(ii)

3
ACKNOWLEDGEMENT

We would like to express our sincere thanks to Amma, our beloved chancellor
“Mata Amritanandamayi Devi”.

We would like to express our sincere thanks to Br. Sunil Dharmapal, Director,
who supported us with continuous encouragement and motivation and gave us the right
direction throughout our tenure of studies.

We would like to express our sincere thanks to Br. Venugopal, Correspondent, for
providing us with a well-supported, peaceful studying environment with excellent
infrastructure and support.

We would like to express our sincere thanks to our beloved principal


Dr. Rekha Bhat for giving us moral support and continuous encouragement which has
been the key for the successful completion of the project.

We are pleased to acknowledge Mr. Adwitiya Mukhopadhyay, Chairperson,


Department of Computer Science, for his encouragement and support throughout the project.

We would like to express our heart-felt gratitude to our Project Co-Ordinator


Mr. Akshay S, Assistant Professor, Department of Computer Science for his valuable
suggestions and excellent guidance rendered throughout this project.

We would like to express our heart-felt gratitude to our guide


Mr. Akshay S, Assistant Professor, Department of Computer Science for his valuable
suggestions and excellent guidance rendered throughout this project.

Further, we extend our thanks to all the faculty members and technical staff of our
department and ICTS for their suggestions, support and providing resources at needed
times.

Finally, we express our sincere gratitude to our parents and friends who have been
the embodiment of love and affection which helped us to carry out the project in a smooth
and successful way.

(iii)

4
ABSTRACT

With a growing number of computing devices around us, the more time we spend interacting
with them, the greater the adverse effect on our health. The aim of this project is to remind
the user to take care of their own eyes and thereby avoid computer vision syndrome. Nowadays
almost all computers come with a camera, so we make use of it to build an eye blink detector
application. Advancing technology produces more workaholic people, so we use technology itself
to serve human healthcare. This is not the first application built to track the human eye blink
count, but our application provides a user-friendly and efficient way of doing so. Since the
application runs in the background, the user can work in other applications while being
reminded, through periodic notifications, to take care of their eyes.

(iv)

5
Table of Contents
CERTIFICATES……………………….. i
ACKNOWLEDGEMENT……………….. ii
DECLARATION………………………..iii
ABSTRACT……………………………. iv
LIST OF FIGURES…………………….. v
LIST OF TABLES……………………… vi

Chapter No. Chapter Title Page No.


1. Introduction 9
1.2 Motivation………………………….....10
1.3 Problem definition…………………….10
1.4 Objective of Project…………………...10
1.5 Limitations of Project………………….10
2. Analysis 11
2.1 Introduction……………………………12
2.2 Software Requirement………………...12
2.3 Hardware Requirements……………….12
2.4 Software Requirements………………...12
2.5 Content Diagram of project…………….15
2.6 Flowchart of blink detection module…...16
3. Design 17
3.1 Architecture diagram………………......18
3.2 Data Flow diagram…………………….18
3.3 Use case diagram………………………19
3.4 Module design…………………………20
4. Implementation and Results 21
4.1 Introduction…………………………….22

6
4.2 Explanation of key function……………22
4.3 Result analysis………………………….25
4.4 Method of implementation……………..26
4.5 Conclusion……………………………...28
5. Testing and Validation 29
5.1 Introduction……………………………..30
5.2 Design of test case and scenarios……......30
5.3 Validation……………………………….30
5.4 Conclusion………………………………31
6. Conclusion 32
6.1 Future Enhancement……………………..33
7. References 34

7
LIST OF FIGURES

Fig No Figure Description Page Number


2.4.1 Open CV 13
2.4.2 PyCharm 13
2.4.3 Dlib & Facial landmark indices 14
2.4.4 PyQt5 14
2.5.1 Content Diagram 15
2.6.1 Flow chart 16
3.1 Architecture Diagram 18
3.2 Data flow Diagram 18
3.3 Use case diagram 19
3.4 Module design 20
4.3.1 Application 25
4.3.2 Application and blink detection 26
4.3.3 Application and blink detection 26
4.4.1 Coordinates used to calculate EAR 27
4.4.2 Formula to calculate EAR 27
4.4.3 EYE Landmarks 28
4.4.4 EYE Closed 28
4.4.5 Graph plotting EAR VS Time 28

LIST OF TABLES

Table No Table Description Page Number


5.2.1 Test case and scenarios 30

(v)

8
CHAPTER 1
INTRODUCTION

9
1.1 Introduction
Viewing a computer or digital screen often makes the eyes work harder. As a result, the unique
characteristics and high visual demands of computer and digital screen viewing make many
individuals susceptible to developing vision-related symptoms. This project focuses on
protecting one's own eyes by avoiding such vision problems. After hours of computer use we
tend to forget the stress we put on our eyes; this project is a small effort to avoid that
and keep our vision better.

1.2 Motivation
Watching a screen for a long time can lead to many eye vision syndromes. Keeping that in
mind, we intend this project to help individuals avoid such eye syndromes in the future
and to take care of their eyes.

1.3 Problem definition


 The tendency to forget to blink while working on a computer is leading many individuals
to vision problems such as short- and long-sightedness.
 A small notification reminding users to take care of their own eyes would help them
avoid eye problems in the future.

1.4 Objective of Project


 Notify the user if their eye blinks fall below the average blink rate.
 Notify the user to take a rest after a set number of minutes of computer usage.
 Store statistics of the number of blinks per day.

1.5 Limitations of Project


This project focused solely on using the eye aspect ratio as a quantitative metric to
determine whether a person has blinked in a video stream.
However, due to noise in the video stream, subpar facial landmark detection, or fast changes
in viewing angle, a simple threshold on the eye aspect ratio can produce a false-positive
detection, reporting that a blink has taken place when in reality the person had not blinked.

10
CHAPTER 2
ANALYSIS

11
2.1 Introduction
This chapter describes the software used to build the project and the hardware requirements.
It also explains how the project works and presents the flowchart of the eye blink detection
module.

2.2 Software Requirement Specification


Software requirements specification establishes the basis for an agreement between customers
and contractors or suppliers (in market-driven projects, these roles may be played by the
marketing and development divisions) on what the software product is to do as well as what it
is not expected to do. Software requirements specification permits a rigorous assessment of
requirements before design can begin and reduces later redesign. It should also provide a
realistic basis for estimating product costs, risks, and schedules. Used appropriately, software
requirements specifications can help prevent software project failure.

2.3 Hardware Requirements


Camera
The camera is the main component of this project. It can be a normal web camera built into
a laptop, an external camera, a Raspberry Pi camera module, or an IP camera. The camera
should be able to record in at least average quality.

2.4 Software Requirements


OpenCV
OpenCV (Open Source Computer Vision Library) is released under a BSD license and hence
it’s free for both academic and commercial use. It has C++, Python and Java interfaces and
supports Windows, Linux, Mac OS, iOS and Android. OpenCV was designed for
computational efficiency and with a strong focus on real-time applications. Written in
optimized C/C++, the library can take advantage of multi-core processing.
Enabled with OpenCL, it can take advantage of the hardware acceleration of the underlying
heterogeneous compute platform.
We use OpenCV's Python bindings for our project. OpenCV uses the camera to capture images,
converts them into an appropriate format, and then detects the eye blink, with help from
Dlib to detect the facial landmarks.

12
Figure 2.4.1- OpenCV
PyCharm
PyCharm is an integrated development environment used in computer programming,
specifically for the Python language. It is developed by the Czech company
JetBrains. PyCharm is a dedicated Python Integrated Development Environment (IDE)
providing a wide range of essential tools for Python developers, tightly integrated to create a
convenient environment for productive Python, web, and data science development.

Figure 2.4.2- PyCharm

Dlib
Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating
complex software in C++ to solve real world problems. It is used in both industry and
academia in a wide range of domains including robotics, embedded devices, mobile phones,
and large high-performance computing environments. Dlib's open source licensing allows
you to use it in any application, free of charge. We build Python binaries for Dlib using
the CMake tool.
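The 68-point scheme used by shape_predictor_68_face_landmarks.dat assigns fixed index ranges to each face part; the sketch below shows how the two eye regions are sliced out (eye_slices is a hypothetical helper, and plain Python lists stand in for dlib output):

```python
# Index ranges of the eyes in the standard 68-point facial landmark scheme
# (0-indexed), matching imutils.face_utils.FACIAL_LANDMARKS_IDXS.
RIGHT_EYE = (36, 42)  # landmarks 36..41
LEFT_EYE = (42, 48)   # landmarks 42..47

def eye_slices(landmarks):
    """Given all 68 (x, y) points, return the left- and right-eye regions."""
    return (landmarks[LEFT_EYE[0]:LEFT_EYE[1]],
            landmarks[RIGHT_EYE[0]:RIGHT_EYE[1]])

# Dummy points for illustration; with dlib these would come from
# face_utils.shape_to_np(predictor(gray, rect)).
points = [(i, i) for i in range(68)]
left, right = eye_slices(points)
print(len(left), len(right))  # 6 6
```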

Figure 2.4.3- Dlib and its facial landmarks indices

PyQt5
PyQt5 is a comprehensive set of Python bindings for Qt v5. It is implemented as more than
35 extension modules and enables Python to be used as an alternative application
development language to C++ on all supported platforms including iOS and Android. PyQt5
is a Python binding of the cross-platform GUI toolkit Qt, implemented as a Python plug-in.
PyQt5 is free software developed by the British firm Riverbank Computing.

Figure 2.4.4- PyQt5 Designer

14
2.5 Content diagram of Project

Figure 2.5.1- content diagram

15
2.6 Flowchart of blink detection module.

Figure 2.6.1 -Flowchart

16
CHAPTER 3

DESIGN

17
3.1 ARCHITECTURE DIAGRAM

Figure 3.1: Architecture Diagram

3.2 DATA FLOW DIAGRAM

Figure 3.2: Dataflow Diagram

18
3.3 USE CASE DIAGRAM

Figure 3.3: Use Case Diagram

19
3. 4 Module design

Figure 3.4: Module Design

20
Chapter 4
IMPLEMENTATION & RESULTS

21
4.1 Introduction
Unlike traditional image processing methods for computing blinks which typically involve
some combination of:
1. Eye localization.
2. Thresholding to find the whites of the eyes.
3. Determining if the "white" region of the eyes disappears for a period of time (indicating a
blink).
The eye aspect ratio is instead a much more elegant solution, involving a very simple
calculation based on the ratio of distances between the facial landmarks of the eyes. This
method for eye blink detection is fast, efficient, and easy to implement.
In the first part we’ll discuss the eye aspect ratio and how it can be used to determine if a
person is blinking or not in a given video frame.

4.2 Explanation of Key functions


import sys
from PyQt5 import QtWidgets
from PyQt5.QtWidgets import QDialog, QApplication
from PyQt5.uic import loadUi
from scipy.spatial import distance as dist
from imutils.video import VideoStream
from imutils import face_utils
import numpy as np
import imutils
import time
import dlib
import cv2

All packages except Dlib are simple to install, via conda commands or through PyCharm itself
(Dlib is built separately with CMake, as described in Section 2.4). PyQt5 is used for the
layout design, and the .ui file is loaded in the program to run the application.
Now that all the necessary packages are imported, we need to define a function that
implements the EAR calculation.

class startb(QDialog):

    def __init__(self):
        print("Application initialization...")
        super(startb, self).__init__()
        loadUi("untitled.ui", self)
        self.startbutton.clicked.connect(self.blinkdetect)

Starting with the class: __init__ is the function where the .ui file is loaded to build the
application layout, and startbutton (the object name of the START button) is connected to
the blink detection handler.

22
    def eye_aspect_ratio(self, eye):
        # compute the euclidean distances between the two sets of
        # vertical eye landmarks (x, y)-coordinates
        A = dist.euclidean(eye[1], eye[5])
        B = dist.euclidean(eye[2], eye[4])

        # compute the euclidean distance between the horizontal
        # eye landmark (x, y)-coordinates
        C = dist.euclidean(eye[0], eye[3])

        # compute the eye aspect ratio
        ear = (A + B) / (2.0 * C)

        # return the eye aspect ratio
        return ear
This function accepts a single required parameter: the (x, y)-coordinates of the facial
landmarks for a given eye. The first two distance calls compute the lengths of the two sets
of vertical eye landmarks, while the third computes the distance between the horizontal eye
landmarks. Finally, the numerator and denominator are combined to arrive at the final eye
aspect ratio, which is returned to the calling function.

    def blinkdetect(self):
        print("START Button.....")

        # blink-detection constants: EAR threshold and consecutive-frame count
        EYE_AR_THRESH = 0.2
        EYE_AR_CONSEC_FRAMES = 2

        # initialize the frame counter and the total number of blinks
        COUNTER = 0
        TOTAL = 0

        print("[INFO] loading facial landmark predictor...")
        detector = dlib.get_frontal_face_detector()
        predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
        (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
        (rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

        print("[INFO] starting video stream thread...")
        vs = VideoStream(src=0).start()
        fileStream = False
        time.sleep(1.0)

        while True:
            if fileStream and not vs.more():
                break
            frame = vs.read()
            frame = imutils.resize(frame, width=800)
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            rects = detector(gray, 0)
            j = False
            for rect in rects:
                shape = predictor(gray, rect)
                shape = face_utils.shape_to_np(shape)
                leftEye = shape[lStart:lEnd]
                rightEye = shape[rStart:rEnd]
                leftEAR = self.eye_aspect_ratio(leftEye)
                rightEAR = self.eye_aspect_ratio(rightEye)
                ear = (leftEAR + rightEAR) / 2.0
                leftEyeHull = cv2.convexHull(leftEye)
                rightEyeHull = cv2.convexHull(rightEye)
                cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
                cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
                if ear < EYE_AR_THRESH:
                    COUNTER += 1
                else:
                    if COUNTER >= EYE_AR_CONSEC_FRAMES:
                        TOTAL += 1
                    COUNTER = 0
                cv2.putText(frame, "Blinks: {}".format(TOTAL), (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                self.blinkcount.setText("Blinks: {}".format(TOTAL))
                cv2.putText(frame, "EAR: {:.2f}".format(ear), (300, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                self.earvalue.setText("EAR: {:.2f}".format(ear))
                cv2.imshow("Frame", frame)
                key = cv2.waitKey(1) & 0xFF
                if key == ord("q"):
                    j = True
                    cv2.destroyAllWindows()
                    vs.stop()
                    break
            if j:
                break

So when the Button is clicked the above function will be accessed and will be processed.
So when the START button is clicked, the function above is executed.
If the eye aspect ratio falls below a certain threshold and then rises back above it, we
register a "blink" — EYE_AR_THRESH is that threshold value. We default it to 0.2, as this
is what worked best for our application.
We then have an important constant, EYE_AR_CONSEC_FRAMES — this value is set to 2 to
indicate that two successive frames with an eye aspect ratio below EYE_AR_THRESH must occur
for a blink to be registered.
Inside the for loop, array slicing extracts the (x, y)-coordinates of the left and right
eyes respectively. From there, the eye aspect ratio of each eye is computed with
leftEAR = self.eye_aspect_ratio(leftEye) and rightEAR = self.eye_aspect_ratio(rightEye).
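The threshold-and-counter logic described above can be isolated as a small pure function (the constants are taken from the listing; count_blinks is a helper name introduced here for illustration):

```python
EYE_AR_THRESH = 0.2        # EAR below this value counts as a closed eye
EYE_AR_CONSEC_FRAMES = 2   # closed frames required to register one blink

def count_blinks(ear_series):
    """Count blinks in a sequence of per-frame EAR values, mirroring
    the COUNTER / TOTAL bookkeeping in blinkdetect()."""
    counter = total = 0
    for ear in ear_series:
        if ear < EYE_AR_THRESH:
            counter += 1            # the eye is closed in this frame
        else:
            if counter >= EYE_AR_CONSEC_FRAMES:
                total += 1          # eye reopened after a long-enough closure
            counter = 0
    return total

# A one-frame dip is ignored; a two-frame dip registers as one blink.
print(count_blinks([0.3, 0.1, 0.3, 0.3, 0.15, 0.1, 0.3]))  # 1
```

Requiring two consecutive low-EAR frames is what filters out single-frame noise in the video stream.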

app = QApplication(sys.argv)
mainwindow = startb()
widget = QtWidgets.QStackedWidget()
widget.addWidget(mainwindow)
widget.setFixedWidth(971)
widget.setFixedHeight(533)
widget.show()
app.exec_()

24
This block comes at the end of the file, outside the loop and the class. It creates and
shows the user interface initialized in __init__, and it runs before the blink detection
module starts.

4.3 Result Analysis

When the program starts, it initializes and displays the user interface shown in the figure
below. When the START button is pressed, the pre-trained
shape_predictor_68_face_landmarks.dat facial landmark file is loaded into the program, the
camera is accessed, and the camera window pops up for the user.

Figure 4.3.1- Application

25
Figure 4.3.2- Application and blink detection

Figure 4.3.3- Application and blink detection

4.4 Method of Implementation


We can apply facial landmark detection to localize important regions of the face, including
eyes, eyebrows, nose, ears, and mouth:
This also implies that we can extract specific facial structures by knowing the indexes of the
particular face parts:
In terms of blink detection, we are only interested in two sets of facial structures — the eyes.
Each eye is represented by 6 (x, y)-coordinates, starting at the left-corner of the eye (as if you
were looking at the person), and then working clockwise around the remainder of the region:

26
Figure 4.4.1- Figure Describing the coordinates used to calculate the EAR
Based on this image, we should take away one key point:
There is a relation between the width and the height of these coordinates.
Based on the work by Soukupová and Čech in their 2016 paper, Real-Time Eye Blink
Detection using Facial Landmarks, we can then derive an equation that reflects this
relation, called the eye aspect ratio (EAR):

Figure 4.4.2- Formula to calculate the EAR

Where p1, …, p6 are 2D facial landmark locations.


The numerator of this equation computes the distance between the vertical eye landmarks
while the denominator computes the distance between horizontal eye landmarks, weighting
the denominator appropriately since there is only one set of horizontal points but two sets of
vertical points.
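Written out in symbols, the eye aspect ratio shown in Figure 4.4.2 is:

```latex
\mathrm{EAR} = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2\,\lVert p_1 - p_4 \rVert}
```

where the norm denotes Euclidean distance between landmark points.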
Why is this equation so interesting?
Well, as we’ll find out, the eye aspect ratio is approximately constant while the eye is open,
but will rapidly fall to zero when a blink is taking place.
Using this simple equation, we can avoid image processing techniques and simply rely on the
ratio of eye landmark distances to determine if a person is blinking.
To make this clearer, consider the following figure from Soukupová and Čech:

27
Figure 4.4.3- Top-left: A visualization of eye landmarks when the eye is open.
Figure 4.4.4- Top-right: Eye landmarks when the eye is closed.
Figure 4.4.5- Bottom: Plotting the eye aspect ratio over time. The dip in the eye aspect ratio
indicates a blink.

 On the top-left we have an eye that is fully open — the eye aspect ratio here would be
large(r) and relatively constant over time.
 However, once the person blinks (top-right) the eye aspect ratio decreases
dramatically, approaching zero.
 The bottom figure plots a graph of the eye aspect ratio over time for a video clip. As
we can see, the eye aspect ratio is constant, then rapidly drops close to zero, then
increases again, indicating a single blink has taken place.
 To implement the eye blink module, we need to import the necessary packages in
detect_blinks.py; the code snippet in Section 4.2 shows these imports.

4.5 Conclusion
In this chapter we described how, from the facial landmarks of both eyes, we compute the
eye aspect ratio for each eye, which gives us a single value relating the distances between
the vertical eye landmark points to the distances between the horizontal landmark points.
Once we have the eye aspect ratio, we can threshold it to determine whether a person is
blinking — the eye aspect ratio remains approximately constant when the eyes are open,
rapidly approaches zero during a blink, and then increases again as the eye opens.

28
Chapter 5
TESTING & VALIDATION

29
5.1 Introduction
We test the eye blink detection module by using the number of milliseconds the eyes stay
closed as the test input that should (or should not) generate a blink; the eye colour of
the subject is also considered, as shown in the table below.

5.2 Design of test cases and scenarios

S.N  Test Case Description                Input       Expected Output     Actual Output       Result

1    Number of milliseconds eyes closed   20          No blink detected   No blink detected   Pass
2    Number of milliseconds eyes closed   50          No blink detected   No blink detected   Pass
3    Number of milliseconds eyes closed   100         Blink detected      Blink detected      Pass
4    Eye Detection                        Brown Eyes  Eye Detected        Eye Detected        Pass
5    Eye Detection                        Black Eyes  Eye Detected        Eye Detected        Pass

Table 5.2.1- Test cases and scenarios

5.3 Validation
The output of the blink detection algorithm was evaluated by two separate individuals. Both
watched the recording of each trial and noted each time the participant blinked. If a participant
blinked and the algorithm did not detect the blink, a miss was counted. If the algorithm detected
a blink when none was observed, it was classified as a "false positive". Only 6.5 percent of
blinks were missed, whereas 2.0 percent of blinks were falsely detected. Overall, the blink
detection algorithm had an accuracy rating of 93.7 percent.

30
5.4 Conclusion
During detection there were no errors in detecting the eye, but there were some false blink
counts due to the person's head movement. In over 90 blinks there were 13 to 15 false blinks.
Because the webcam on a laptop is usually positioned at the top of the screen, some blinks
were registered when the person looked downward — not an actual eye blink, but a defect of
the algorithm and the camera position. The accuracy on true blinks was 93 percent, and for
more accuracy the detector can be tuned by changing the threshold and frame count values.

31
Chapter 6

Conclusion

32
Human-computer interaction is important for every person of this century. In
today's era most work is done through computers. Technology has its own advantages in our
life, but overuse of tech and gadgets also brings problems — especially gadgets with screen
displays, which affect our valuable organ of vision, the eye. This project was an approach
to help individuals realize the threat they bring upon themselves and to avoid the syndromes
caused by modern technology.

6.1 Future Enhancement


This project post focused solely on using the eye aspect ratio as a quantitative metric to
determine if a person has blinked in a video stream.
However, due to noise in a video stream, subpar facial landmark detection, or fast changes in
viewing angle, a simple threshold on the eye aspect ratio could produce a false-positive
detection, reporting that a blink had taken place when in reality the person had not blinked.

To make our blink detector more robust to these challenges, we can recommend:
1. Computing the eye aspect ratio for the N-th frame, along with the eye aspect ratios for
the N − 6 and N + 6 frames, then concatenating these eye aspect ratios to form a
13-dimensional feature vector.
2. Training a Support Vector Machine (SVM) on these feature vectors.
The combination of the temporal feature vector and an SVM classifier helps reduce
false-positive blink detections and improves the overall accuracy of the blink detector.
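As a sketch of step 1 (ear_feature_vector is a hypothetical helper; clipping the window at the sequence boundaries is our own convention, not part of the cited recommendation):

```python
def ear_feature_vector(ears, n, w=6):
    """Collect the EAR values of frames n-w .. n+w — a 13-dimensional
    feature when w = 6 and the full window fits inside the sequence.
    At the sequence boundaries the window is simply clipped."""
    lo = max(0, n - w)
    hi = min(len(ears), n + w + 1)
    return ears[lo:hi]

ears = [0.3] * 20          # dummy per-frame EAR values
vec = ear_feature_vector(ears, 10)
print(len(vec))  # 13
```

Such vectors, labelled blink / no-blink, could then be fed to an off-the-shelf SVM implementation such as scikit-learn's SVC.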
Our project also does not have a polished UI and cannot be used on all types of devices. We
can improve it by implementing the project on Android and iOS, which are used by the majority
of users. Our project also uses the server-side camera rather than the client side, which
aids faster image processing; we could also implement it with a client-side camera, but that
has a performance bottleneck.

33
REFERENCES

1. Adrian Rosebrock. "Eye blink detection with OpenCV, Python, and dlib."
https://www.pyimagesearch.com, April 24, 2017.

2. Soukupová, T. and Čech, J. "Real-Time Eye Blink Detection using Facial Landmarks."
Computer Vision Winter Workshop, 2016.

3. PyCharm, version 2020.3 (Build 203.5981.165), released 2 December 2020.
https://en.wikipedia.org/wiki/PyCharm.

4. PyQt5 5.15.2. https://pypi.org/project/PyQt5, November 24, 2020.

34
