


Driver Dangerous State Detection Based on OpenCV & Dlib Libraries Using Mobile Video Processing
Igor Lashkov1, Alexey Kashevnik1, Nikolay Shilov1, Vladimir Parfenov2, Anton Shabaev3
1SPIIRAS, St.Petersburg, Russia
2ITMO University, St.Petersburg, Russia
3Petrozavodsk State University (PetrSU), Petrozavodsk, Russia

{igla, alexey, nick}@iias.spb.su, [email protected], [email protected]

Abstract—Real-time driving behavior monitoring plays a significant role in intelligent transportation systems. Such monitoring increases traffic safety by reducing and eliminating the risk of traffic accidents. A vision-based approach that uses video cameras for dangerous situation detection is undoubtedly one of the most promising and commonly used ways of sensing the driver environment. In this case, the images of a driver captured with video cameras can describe facial features, such as head movements, eye state and mouth state, and thereby identify the current level of fatigue. In this paper, we leverage the built-in front-facing camera of the smartphone to continuously track the driver's facial features and recognize the dangerous states of drowsiness and distraction early. Dangerous state recognition is classified into online and offline modes. Owing to the efficiency and performance of smartphones, in the online mode the dangerous driving states are determined in real time on the mobile device with the aid of the computer vision libraries OpenCV and Dlib while driving. In contrast, the offline mode is based on the results of statistical analysis provided by a cloud service, utilizing not only the statistics accumulated in real time, but also data previously collected and stored, and results produced by machine learning tools.

Keywords—driver, driving behavior, smartphone, dangerous situation, fatigue, facial features, front-facing camera, vehicle.

I. INTRODUCTION

Traffic accidents remain among the most fatal in almost every country in the world. According to road traffic accident statistics reported by the World Health Organization in 2018, 1.35 million people die annually [1]. This global report on road safety also indicates that road traffic injuries are the leading cause of death of children and young people aged 5-29 years. The risk of road traffic death remains highest in Africa, at 26.6 per 100 000 population, and lowest in Europe, at 9.3 per 100 000. A significant number of motor vehicle crashes involve driver drowsiness and distraction. For instance, the research brief by the AAA Foundation for Traffic Safety [2] shows that in real driving experiments the drowsiness state was recognized in 10.6%–10.8% of crashes that resulted in significant property damage, airbag deployment, or injury. Drowsiness was evaluated using the reliable PERCLOS measurement [3] (PERcentage of eyelid CLOSure), indicating the percentage of time that a driver's eyes are closed within a certain period of time. Drowsiness and distraction driving states of the driver are real hazards that can lead to a traffic road accident.
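As a minimal illustrative sketch, PERCLOS can be computed over a sliding window of per-frame eye states. The window length and the alert threshold below are illustrative choices, and the per-frame eye_closed flag is assumed to come from any eye-state classifier:

from collections import deque

class PerclosMeter:
    """PERCLOS: fraction of frames with eyes closed within a time window."""

    def __init__(self, fps: int, window_seconds: float = 60.0):
        # Ring buffer of per-frame eye states over the measurement window.
        self.window = deque(maxlen=int(fps * window_seconds))

    def update(self, eye_closed: bool) -> float:
        # eye_closed is assumed to be produced by an eye-state classifier.
        self.window.append(eye_closed)
        return sum(self.window) / len(self.window)

# Usage sketch: flag drowsiness when PERCLOS exceeds a chosen threshold.
meter = PerclosMeter(fps=30)
# for each camera frame: perclos = meter.update(eye_closed)
# if perclos > 0.15: warn the driver (0.15 is an illustrative threshold)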
There are a number of ways to enhance road safety for a vehicle driver. It should be noted that one of the most popular approaches presented in previous scientific research lies in the development of advanced driver assistance systems. These safety systems reduce road accidents and provide better interaction and engagement with a driver. Common examples of driver safety technologies of this type are collision avoidance systems, lane keeping assistants, and driver drowsiness and distraction monitoring and alerting. The general operation of such systems can be described as a set of consecutive steps: monitoring driver behavior, the state of the vehicle or the road situation using different built-in auxiliary devices, including short- and long-range radars, lasers, lidars and video cameras that perceive the surroundings; continuously analyzing sensor readings and determining dangerous situations while driving; alerting the driver about recognized unsafe in-cabin and road situations; and taking control of the vehicle if the driver's reaction is insufficient or missing. At the moment, driver safety systems rely heavily on data collected from different in-vehicle sensors.

Although advanced driver assistance systems equipped with different high-precision sensors demonstrate high accuracy and performance in recognizing dangerous driving situations in different circumstances, existing smartphones come at a much lower price, are undeniably more popular among people in almost every country, and are easy to use in every vehicle1. Moreover, these devices are already equipped with a set of motion, rotation and image sensors [4] that can describe changes in the surrounding environment and can be efficiently utilized by software developers in third-party applications. It is safe to say that every smartphone already has a built-in front-facing camera that can be utilized for continuously monitoring driving behavior, detecting facial features and patterns, and recognizing drowsy and distracted driving. This paper considers the advantages of facial feature utilization for dangerous behavior determination and extends our prior works [5], [6], [7], including the mobile application for Android smartphones2.

1 http://www.pewinternet.org/fact-sheet/mobile/
2 https://play.google.com/store/apps/details?id=ru.igla.drivesafely



II. RELATED WORK

Owing to smartphones' affordable price, set of embedded sensors and small size, these devices are gaining popularity for building intelligent driver assistance systems at a large scale. Firstly, it should be highlighted that there is a certain scientific and technical groundwork in the area of smartphone camera-based solutions focused on recognizing dangerous driving behavior in real-time scenarios. There is a set of mobile applications that utilize the smartphone's front-facing camera and apply image processing algorithms to provide driving safety. The increasing popularity of smartphone camera-based approaches is due to their ability to measure contact-free.

On the other hand, the study [8] demonstrates that the detection of blinks can be affected by the driver state, the level of automation, the measurement frequency, and the algorithms used. It proposes the evaluation of the performance of electrooculogram- and camera-based blink detection algorithms in both manual and conditionally automated driving under various constraints. During the experiment, the participants were requested to rate their subjective drowsiness level with the Karolinska Sleepiness Scale every 15 minutes.

The main goal of the existing smartphone-based research studies and solutions is to warn the driver early about a recognized dangerous state and eliminate the risk of drowsy or distracted driving. Let us first consider studies related to driver drowsiness determination. The study [9] demonstrates a monitoring system developed to detect and alert the vehicle driver about the presence of the drowsiness state. To recognize whether the driver is drowsy, the visual indicators that reflect the driver's condition, comprising the state of the eyes, the head pose and yawning, were assessed. A number of tests were proposed to assess the driver's state, including yawning, front nodding, blink detection, etc. Although the proposed recognition method achieves a 93% total drowsiness detection rate, it is unclear which dataset was utilized to evaluate the system and whether the detection method was tested under different light conditions. In this study an Android-based smartphone was utilized to assess the driver's state.

Another study [10] presents the mobile application "Drowsy Driver Scleral-Area" for driver drowsiness detection. The proposed mobile application includes a Haar cascade classifier, provided by the computer vision framework OpenCV [11], for detecting the driver's face and eyes, and a module written in Java that is responsible for image processing and for alerting the driver about potential hazards while driving. The developed application is configured to detect prolonged eyelid closure exceeding three seconds, indicating a drowsiness state. It was tested on a static photo sequence, on a person in a laboratory, and in a vehicle. The paper highlights that a pixel density analysis method was used, which eliminates the need to manually count pixels and determine a threshold for drowsiness. It involves calculating the ratio of white pixels to the maximum number of white pixels (corresponding to full eye opening) in the region of detection. The authors of the study consider that additional tests need to be conducted under more dynamic motion and reduced light conditions.

One more paper proposes a three-stage drowsiness detection framework for vehicle drivers, developed for Android-based smartphones [12]. The first stage uses PERCLOS, obtained through images captured by the front-facing camera with an eye state classification method. The system uses near-infrared lighting to illuminate the face of the driver during night driving. The next stage uses the voiced-to-unvoiced ratio, calculated from speech data taken from the built-in smartphone microphone, in the event PERCLOS crosses the threshold. The final stage uses a touch response within a specified time to declare the driver drowsy and subsequently alert him with an audible alarm. According to the results of the study, the developed framework demonstrates 93% drowsiness state classification accuracy. The final measurement indicators used in this study include PERCLOS, the voiced-unvoiced ratio and a reaction test response of the driver on the smartphone screen.

Another, more sophisticated approach includes the detection of sleep deprivation by evaluating a short video sequence of a driver [13]. It utilizes OpenCV Haar cascades to extract the driver's face from every frame and classify it within a deep learning framework into two classes: "sleep deprived" and "rested". In detail, this approach is based on the use of a trained model formed by the non-linear MobileNets models, adapted specifically for mobile applications on smartphones. The output of MobileNet for a camera frame is the estimated probability that the frame belongs to the "sleep deprived" class. If the probability of this class is more than 0.5, the driver in the frame is classified as "sleep deprived". Real experiments have been conducted with the aid of a prototype implemented as an Android-based mobile application. The TensorFlow Lite framework was utilized to compile the MobileNet model previously trained on a standalone laptop.

Another major cause of road accidents is driver distraction. The paper [14] proposes a smartphone camera-based driver fatigue and distraction monitoring system. This study relies heavily on monitoring the driver's eyes and mouth, and on detecting eye rubbing due to eye irritation and yawning through the intensity sum of the facial region. The evaluation of the proposed approach was done using the developed mobile application for the Android platform with a Xiaomi Redmi 1s smartphone. The authors of the study conducted the experiments and evaluated only the CPU load and the battery consumption of the developed system. They concluded that their system consumed 12% of the battery for one hour of continuous use. The paper highlights that the proposed approach is not suitable for work under low/no light conditions.

Another study [15] is focused on developing a Driver Fatigue Detection System aimed at monitoring driver behavior and alerting the driver to prevent him from falling asleep while driving. The proposed solution is adapted for working on the smartphone, utilizing the built-in camera for recording video and processing it for real-time eye tracking. The authors of the study admit that their solution is limited by external illumination conditions and by the wearing of sunglasses by a driver.

Another paper [16] evaluates the pertinence of using driver head rotation movements to automatically predict smartphone usage while driving. The duration a driver spends looking down from a reference neutral direction is used as a parameter to predict smartphone usage. A smartphone usage detection system based on real-time video analysis of head movements is implemented in this study. It performs real-time video analysis of the driver's face, evaluates the head rotation deviation from the neutral orientation when the driver is looking at the road, and detects whether the percentage of these deviations exceeds a threshold.

To monitor the driver's vigilance level and recognize the fatigue state, the study [17] relies on multiple visual indicators, including eye blinking, head nodding and yawning. Real-time detection is based on face and eye blink detection with a Haar-like technique, and on mouth detection for the yawning state with a Canny active contour finding method. The proposed approach was implemented using the Java programming language and the OpenCV framework, which is responsible for image processing and is supported by the Android platform. According to the conducted experiments, the performance of the proposed method for face and eye tracking was tested under variable light conditions.

In the paper [18], a strategy and system to detect driving fatigue based on machine vision and the machine learning AdaBoost algorithm is proposed. The entire detection strategy consists of the following operations: detection of the face using classifiers for the frontal and deflected face; extraction of the eye region according to the geometric distribution of facial organs; and, finally, application of trained classifiers for open and closed eyes to detect eyes in the selected regions. As a result, the PERCLOS measure is calculated and used as a measure of the fatigue rate, along with the duration of the eye-closed state. Underneath, the OpenCV library was utilized to analyze frames for face recognition. In case the driver's fatigue state is recognized, the system makes an audible alert for the driver or dials the emergency center or police. The performance of the proposed system may decrease to 10 frames per second. The developed Driving Fatigue Detection System is compatible with Android smartphones. It should be highlighted that the study misses experiments under poor illumination conditions.

A recent research study and smartphone camera-based solution [19] is oriented at driver monitoring and at recognizing drowsiness, distraction and gaze direction. The presented approach uses facial landmarks to detect eye closure via the eye aspect ratio and PERCLOS metrics, and yawns via the ratio of the height of the mouth to its width. The distraction state is recognized as whether the driver is talking on the phone while driving. This recognition is based on the use of the deep neural network YOLOv2 [20] pre-trained on the COCO dataset.

It should be highlighted that the observed research studies consider visual indicators mostly in real time. The use of data mining and machine learning techniques may improve the performance and accuracy of the recognition method and yield more reliable results. The two main dangerous in-cabin driver states the existing smartphone-based solutions focus on are drowsiness and distraction. One of the positive aspects of the listed research studies is that they simultaneously consider multiple parameters signaling a dangerous state, because a single indicator may not be sufficient and representative to describe the surroundings and the current road situation. It is worth noting that currently existing research projects and studies lack offline big data processing and analysis across all users of the system. The proposed approach differs from existing ones in utilizing a set of different on-smartphone and cloud-based technologies able to mutually complement one another. This may be realized by processing the statistics of all drivers of the system; grouping and clustering drivers based on similar preferences, driving styles or vehicle in-cabin behavior; and coordinating the strategy for a particular driver, thereby adapting the system for him/her and achieving feasible and viable results in driver behavior evaluation and dangerous state recognition.

III. DROWSINESS AND DISTRACTION STATE RECOGNITION

Driver behavior is a result of complex interactions between the driver, the vehicle and the environment. There is a certain research and technological gap in recognizing in-cabin situations where the driver is drowsy or distracted while driving, based on processing image frames taken from the smartphone's front-facing camera. In this case, each dangerous fatigue situation can be described on the basis of monitoring the driver's physiological behavior. Therefore, it is proposed to classify dangerous driving behavior essentially into drowsiness and distraction states, comprising a number of different visual cues with different characteristic durations and frequencies that result in distinct driver behavior patterns. At first, let us consider the drowsiness state. The general scheme for drowsiness state recognition is presented in Fig. 1.

Each driver fatigue state can be presented in the form of time series data, where the parameters of driver behavior are continuously collected based on the information from the front-facing camera of the smartphone. Drowsiness state recognition is possible due to monitoring driver events and detecting facial features and their characteristics, including eye and mouth state, head movements and pose.

In particular, the face-related information can be presented in the form of a two-dimensional array involving the locations and sizes of the driver's head, eyes, nose, and mouth. A set of different points describing the extracted facial features allows estimating additional characteristics and parameters and, therefore, identifying the current driver state and recognizing a dangerous situation. The head movements, describing the orientation of the head at a certain moment, can be acquired from the 2D image taken from the smartphone camera. The head posture is based on the estimation of the position of specific face points characterizing the eye state, mouth state and nose.
Fig. 1. General scheme for drowsiness state recognition
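A minimal sketch of obtaining such a landmark array with Dlib's HOG face detector and its 68-point shape predictor trained on iBUG 300-W (the model file name is the one Dlib distributes; error handling is omitted):

import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()  # HOG-based face detector
# Pretrained 68-point model trained on iBUG 300-W, distributed by Dlib.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(gray_frame):
    """Return a 68x2 array of (x, y) facial landmarks, or None if no face."""
    faces = detector(gray_frame, 0)
    if not faces:
        return None
    shape = predictor(gray_frame, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()])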
The output of the algorithm for the estimation of the current head orientation in space can be presented as a vector defined by the Euler rotation degrees of freedom: the pitch, yaw and roll angles. The yaw and pitch angles can be efficiently utilized to recognize whether the driver is distracted, as well as to recognize the situations when he/she is nodding off in the drowsiness state based on the yaw angle. The distraction state recognition can be described by the head orientation as well as by the gaze direction, related to the state of the driver's eyes.

Eye state can essentially be presented by a range of indicators, including the reliable PERCLOS measurement, the duration and frequency of eyelid blinks, the eyelid distance [21] and the eye gaze direction [22]. Generally, the duration and frequency of the eye-closed state increase and those of the eye-open state decrease when the driver is in a drowsiness state. The duration of an eyelid blink is defined as the time spent while the upper and lower eyelids are connected. Also, the eye aspect ratio, defined as the ratio between the height and the width of the eye contour, is frequently used to recognize the opened or closed state of the eye.
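A minimal sketch of the eye aspect ratio computation, assuming the 68-point landmark array from the previous sketch; the 0.2 closed-eye threshold is an illustrative value, not one prescribed here:

import numpy as np

# Dlib 68-landmark indices: left eye 36-41, right eye 42-47.
LEFT_EYE = slice(36, 42)
RIGHT_EYE = slice(42, 48)

def eye_aspect_ratio(eye):
    """eye: 6x2 array of landmarks, ordered as in the Dlib 68-point model."""
    # Two vertical distances between upper and lower eyelid landmarks...
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    # ...normalized by the horizontal eye width.
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def eyes_closed(landmarks, threshold=0.2):
    """Average EAR of both eyes below the threshold marks the eyes as closed."""
    ear = (eye_aspect_ratio(landmarks[LEFT_EYE]) +
           eye_aspect_ratio(landmarks[RIGHT_EYE])) / 2.0
    return ear < threshold

The per-frame flag produced this way can feed both a blink-duration timer and the PERCLOS window sketched in the Introduction.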
Yawning is an important and frequently used measure of the fatigue state. The size of the mouth opening, computed from its upper and lower bounds, can be utilized to easily indicate whether a yawning state is present at the moment. If the height of the mouth exceeds a defined threshold, the person is considered to be yawning. In addition, similarly to the eye aspect ratio, the mouth aspect ratio, which is the ratio between the height and the width of the mouth contour, can be utilized to recognize the yawning state.
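A matching sketch for the mouth aspect ratio under the same assumptions (Dlib 68-point indexing; the 0.6 yawn threshold is illustrative):

import numpy as np

def mouth_aspect_ratio(landmarks):
    """Inner-lip height over mouth width, using Dlib 68-point indices."""
    height = np.linalg.norm(landmarks[62] - landmarks[66])  # inner lips
    width = np.linalg.norm(landmarks[60] - landmarks[64])   # mouth corners
    return height / width

def is_yawning(landmarks, threshold=0.6):
    return mouth_aspect_ratio(landmarks) > threshold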
Speaking with a passenger, talking on the phone or other distracting activities while driving can lead to a traffic road accident caused by the driver's carelessness. The distraction state (Fig. 2) may potentially affect general road users as well as professional drivers. This dangerous driving behavior can be recognized with the head pose obtained from a general 2D camera image. The head pose can provide head rotation and head pitch angles to identify whether the driver is looking at the road while driving. Nevertheless, analyzing the head pose may not be sufficient to detect where the driver is looking.

Fig. 2. General scheme for distraction state recognition

A driver may scan the surrounding environment and check the mirrors with quick glances without significantly turning his/her head. Hence, the gaze location information can be very important for identifying such situations.
IV. REFERENCE MODEL FOR FACIAL FEATURES RECOGNITION

Computer vision algorithms running on the smartphone essentially do not require additional hardware equipment beyond the already integrated front-facing camera. Although the smartphone has limited performance resources in comparison with standalone computers, these mobile devices are able to carry out computer vision algorithms focused on object recognition in real time.

There is a wide range of driver facial features characterizing the drowsiness and distraction states, respectively. A multitude of driver physiological parameters can make the recognition system more reliable and accurate than a single parameter. The reference model of facial features recognition is presented in Fig. 3.

Dangerous state recognition can be classified into online and offline modes. In the online mode, dangerous states have to be determined in real time on the smartphone while driving. The determination time depends on the context (driver reaction time, vehicle speed, etc.), but it is approximately two seconds. These dangerous states are drowsiness and distraction. If the driver is drowsy or distracted, the system should alert him/her in real time, or an accident can occur. This mode is characterized by the use of third-party libraries (e.g., OpenCV, Dlib [23]) integrated in the application, APIs of the operating system (e.g., the Android face recognition API) and pretrained models (e.g., for the TensorFlow Lite framework [24]) provided as on-device files.

Dlib is one of the most popular and frequently used libraries for facial landmark extraction. The face detector created and used by this library is essentially built using the histogram of oriented gradients feature, and its landmark model is trained on the iBUG 300-W [25] face landmark dataset. Using the trained model, the library allows obtaining the 68 facial landmarks, including information about the eyes, nose, lips and mouth, together with their sizes and locations. The most well-known computer vision library is OpenCV, which is built upon real-time algorithms, such as the Viola-Jones technique based on Haar feature-based cascade classifiers for object detection. The implementation of the head pose estimation method is based on the use of OpenCV methods and the facial features extracted by Dlib to calculate the head orientation in space. The use of the external computer vision libraries OpenCV and Dlib, applied for face recognition and for providing the driver's face-related information, is shown in Fig. 4.
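A minimal sketch of this head pose pipeline, assuming the landmark array from the earlier Dlib sketch; the generic 3D model points and the distraction threshold in the final comment are illustrative assumptions:

import cv2
import numpy as np

# Generic 3D face model points (an often-used approximation, in millimeters):
# nose tip, chin, left/right eye corners, left/right mouth corners.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0)], dtype=np.float64)
LANDMARK_IDS = [30, 8, 36, 45, 48, 54]  # matching Dlib 68-point indices

def head_pose_angles(landmarks, frame_w, frame_h):
    """Return (pitch, yaw, roll) in degrees from the 68 Dlib landmarks."""
    image_points = landmarks[LANDMARK_IDS].astype(np.float64)
    # Approximate pinhole camera: focal length ~ frame width, center at middle.
    cam = np.array([[frame_w, 0, frame_w / 2],
                    [0, frame_w, frame_h / 2],
                    [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, image_points, cam, None,
                               flags=cv2.SOLVEPNP_ITERATIVE)
    rmat, _ = cv2.Rodrigues(rvec)
    # Euler angles (degrees) from the rotation matrix via RQ decomposition.
    angles, *_ = cv2.RQDecomp3x3(rmat)
    return angles  # e.g., sustained abs(yaw) > 30 deg may signal distraction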
The offline mode is characterized by calculations and estimations provided by a cloud service (not by the smartphone). The determination time can be much more than one minute, since heavy calculations have to be performed and a lot of data has to be analyzed. These situations are determined on the basis of not only the data collected in real time, but also data previously collected and stored. In these situations, machine learning techniques, such as deep neural networks (e.g., Keras [26], MobileNet, YOLO), and pre-trained models may be utilized to increase the accuracy and correctness of the drowsiness state recognition, as well as of the distraction one.

The state recognition method, running in the online and offline modes, can potentially improve the driver's visual behavior estimation, increase the overall application performance and allow accurate recognition of the driver's facial characteristics and patterns. The general facial patterns are head turn left/right, head tilt forward/back, and mouth and eye openness/closeness states. The resulting outcome of the presented reference model is the recognized fatigue state, that is, drowsiness or distraction. The proposed reference model can aid the process of dangerous behavior recognition by providing efficient and robust estimation to mitigate early or prevent a road accident.
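As an illustrative sketch only (the architecture, input size and class definitions are assumptions, not the system's specification), such a cloud-side classifier could be built in Keras on a MobileNet backbone and later converted for on-device use:

import tensorflow as tf

def build_classifier(input_shape=(224, 224, 3)):
    """MobileNet backbone with a small binary head (drowsy vs. alert)."""
    base = tf.keras.applications.MobileNet(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False  # fine-tune only the head on accumulated data
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# The trained model can be converted for on-device inference, as in [13]:
# tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()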

Fig. 3. Reference model of facial features recognition

Fig. 4. Face-related information extracted by the OpenCV and Dlib libraries

The core advantages of the proposed approach for recognizing unsafe driving behavior are essentially the following: it uses machine learning techniques through deferred calculations in the cloud service; it operates in situations when the knowledge of the system is not sufficient to identify the road situation for some driver by considering already stored linked data of other road users; and it identifies abnormal driving behavior and recognizes drowsiness or distraction states based on the accumulated driving statistics for each driver.

V. EVALUATION

The evaluation of the study is conducted by comparing the Drive Safely application with the existing technical approaches and solutions implemented as mobile applications for smartphones. On the one hand, a great number of studies based on the utilization of the smartphone front-facing camera are related to only on-device (smartphone) processing of real-time driving scenarios, continuously monitoring the driver's visual behavior and determining the dangerous road situation. It should be highlighted that, in this case, no additional pre-downloaded training datasets or machine learning algorithms are utilized for dangerous state recognition. However, there is a set of computer vision frameworks, including machine learning techniques, optimized for real-time processing and actively used by mobile applications. In general, these solutions are limited to detecting regions containing parts of the driver's face, recognizing the facial features and estimating the probability of a traffic accident occurring. Thereby, the scope of these solutions is limited only by the current driving situation and the smartphone they are working on. The general example of such solutions is the Drive Safely mobile application, intended for Android-based smartphones and focused on real-time monitoring of driving behavior, extracting typical facial parameters, and recognizing drowsiness (Fig. 5) and distraction (Fig. 6) states using computer vision algorithms running directly on the driver's smartphone. The determination of the drowsiness state relies on the use of multiple indicators, including PERCLOS, head tilting forward to classify the sleepiness level and recognize microsleeps, and mouth openness signaling yawning. In contrast, the head rotation and tilt angles and the eye gaze are required to decide whether the driver is distracted.

Fig. 5. The mobile application recognizes the driver's drowsiness state using the smartphone's front-facing camera and alerts the driver to pay attention.

Fig. 6. The mobile application recognizes the driver's distraction state using the smartphone's front-facing camera and alerts the driver to pay attention.

On the other hand, the accumulated driving statistics, based on the information about the trips of different drivers, have certain benefits for the successful recognition of dangerous driving behavior. The combination of different strategies based on the use of machine learning algorithms can be efficiently utilized in processing the previous driving experience, based on monitoring the in-cabin driver visual behavior, to influence future recognition of dangerous driving situations and, therefore, adapt this solution for a particular driver in the best way possible. For instance, there are a number of studies working on the analysis of data gathered in naturalistic or driving simulator experiments and on driver behavior estimation for a better understanding of dangerous driving situation prediction and detection. The results of these works and experiments may be provided along with the mobile application packages and can potentially improve the accuracy and reliability of situation detection in driver behavior monitoring systems.

VI. FUTURE WORK
In future work, we expect to collect a dataset of driving statistics in real scenarios involving general and professional drivers of different ages with different types of vehicles to improve the accuracy and reliability of dangerous driving recognition. We plan to estimate the influence and performance of driving behavior parameters on the overall driving safety by utilizing machine learning algorithms. In these experiments we intend to use the developed Android-based mobile application Drive Safely, aimed at recognizing dangerous behavior and alerting the driver to avoid the occurrence of a traffic accident.

VII. CONCLUSION

In order to accumulate knowledge about drowsiness and distraction state recognition, the paper presents the general schemes for each dangerous situation, respectively. The monitoring of dangerous driving behavior is based on the front-facing camera readings of a smartphone in the vehicle cabin. It includes the detection of facial features and their characteristics, including eye and mouth state, head movements and pose. Also, the reference model for facial features recognition is proposed. Dangerous state recognition has been classified into online and offline modes. On one hand, in the online mode, the dangerous driving states have to be determined in real time on the smartphone while driving. In this case, the mobile application is focused on extracting and analyzing facial features and making judgments based on the recognized visual behavior state. This is possible due to the good efficiency and performance of smartphones, which are used for a variety of high-load tasks. On the other hand, the offline mode is based on the results of numerous calculations provided by a cloud service (some remote service, not the smartphone). These dangerous driver situations are recognized utilizing not only the statistics collected in real time, but also those previously collected and stored. The further development of smartphone camera-based solutions can bring significant benefits and improvements in early traffic emergency prevention and reduce the road accident probability.

ACKNOWLEDGMENT

The reported study was funded by RFBR according to the research projects № 17-29-03284 and № 19-07-00670 of the Russian Foundation for Basic Research. The work was partially supported by the Government of the Russian Federation, Grant 08-08.

REFERENCES

[1] Global status report on road safety 2018. Geneva: World Health Organization; 2018. Licence: CC BY-NC-SA 3.0 IGO.
[2] J.M. Owens, T.A. Dingus, F. Guo, Y. Fang, M. Perez, J. McClafferty, B. Tefft, "Prevalence of Drowsy Driving Crashes: Estimates from a Large-Scale Naturalistic Driving Study," (Research Brief.) Washington, D.C.: AAA Foundation for Traffic Safety, 2018.
[3] D. Dinges, R. Grace, "PERCLOS: A valid psychophysiological measure of alertness as assessed by psychomotor vigilance," TechBrief NHTSA, Publication No. FHWA-MCRT-98-006, 1998.
[4] M. Fazeen, B. Gozick, R. Dantu, M. Bhukhiya, M.C. Gonzalez, "Safe Driving Using Mobile Phones," IEEE Transactions on Intelligent Transportation Systems, vol. 13, issue 3, pp. 1462-1468, 2012.
[5] A. Smirnov, A. Kashevnik, I. Lashkov, N. Hashimoto and A. Boyali, "Smartphone-Based Two-Wheeled Self-Balancing Vehicles Rider Assistant," Proceedings of the 17th IEEE Conference of the Open Innovations Association FRUCT, Yaroslavl, Russia, pp. 201-209, 2015.
[6] A. Smirnov, A. Kashevnik, I. Lashkov, O. Baraniuc and V. Parfenov, "Smartphone-Based Dangerous Situation Identification While Driving: Algorithms and Implementation," Proceedings of the 18th IEEE Conference of the Open Innovations Association FRUCT, Finland, pp. 306-313, 2016.
[7] I. Lashkov, A. Smirnov, A. Kashevnik and V. Parfenov, "Ontology-Based Approach and Implementation of ADAS System for Mobile Device Use While Driving," Proceedings of the 6th International Conference on Knowledge Engineering and Semantic Web, Moscow, CCIS 518, pp. 117-131, 2015.
[8] J. Schmidt, R. Laarousi, W. Stolzmann and K. Karrer, "Eye blink detection for different driver states in conditionally automated driving and manual driving using EOG and a driver camera," Behavior Research Methods, vol. 50, iss. 3, pp. 1088-1101, 2018.
[9] E. E. Galarza, F. D. Egas, F. Silva, P. M. Velasco, E. Galarza, "Real Time Driver Drowsiness Detection Based on Driver's Face Image Behavior Using a System of Human Computer Interaction Implemented in a Smartphone," Proceedings of the International Conference on Information Technology & Systems, pp. 563-572, 2018.
[10] F. Mohammad, K. Mahadas and G. K. Hung, "Drowsy driver mobile application: Development of a novel scleral-area detection method," Computers in Biology and Medicine, vol. 89, pp. 76-83, 2017.
[11] G. Bradski, A. Kaehler, "Learning OpenCV: Computer Vision in C++ with the OpenCV Library," O'Reilly Media, Inc., 2nd edition, 2013.
[12] A. Dasgupta, D. Rahman, A. Routray, "A Smartphone-Based Drowsiness Detection and Warning System for Automotive Drivers," IEEE Transactions on Intelligent Transportation Systems, pp. 1-10, 2018.
[13] M. García-García, A. Caplier, M. Rombaut, "Sleep Deprivation Detection for Real-Time Driver Monitoring Using Deep Learning," Image Analysis and Recognition, pp. 435-442, 2018.
[14] M. Ramachandran, S. Chandrakala, "Android OpenCV based effective driver fatigue and distraction monitoring system," 2015 International Conference on Computing and Communications Technologies (ICCCT), pp. 262-266, 2015.
[15] M. Abulkhair, A.H. Alsahli, K.M. Taleb, A.M. Bahran, F.M. Alzahrani, H.A. Alzahrani and L.F. Ibrahim, "Mobile Platform Detect and Alerts System for Driver Fatigue," Procedia Computer Science, vol. 62, pp. 555-564, 2015.
[16] M. García-García, A. Caplier and M. Rombaut, "Driver Head Movements While Using a Smartphone in a Naturalistic Context," 6th International Symposium on Naturalistic Driving Research, Jun 2017, The Hague, Netherlands, pp. 1-5, 2017.
[17] Y. Qiao, K. Zeng, L. Xu, X. Yin, "A smartphone-based driver fatigue detection using fusion of multiple real-time facial features," 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, pp. 230-235, 2016.
[18] W. Kong, L. Zhou, Y. Wang, J. Zhang, J. Liu, S. Gao, "A System of Driving Fatigue Detection Based on Machine Vision and Its Application on Smart Device," Journal of Sensors, pp. 1-11, 2015.
[19] A. U. Nambi, S. Bannur, I. Mehta, H. Kalra, A. Virmani, V. N. Padmanabhan, R. Bhandari and B. Raman, "HAMS: Driver and Driving Monitoring using a Smartphone," Proceedings of the 24th Annual International Conference on Mobile Computing and Networking (MobiCom '18), ACM, New York, NY, USA, pp. 840-842, 2018.
[20] J. Redmon and A. Farhadi, "YOLO9000: Better, Faster, Stronger," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517-6525, 2017.
[21] D. Wenhui, Q. Peishu and H. Jing, "Driver fatigue detection based on fuzzy fusion," Proceedings of the Chinese Control and Decision Conference (CCDC '08), pp. 2640-2643, Shandong, China, July 2008.
[22] L. Bergasa, J. Nuevo, M. Sotelo, R. Barea and M. Lopez, "Real-Time System for Monitoring Driver Vigilance," IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 1, pp. 63-77, 2006.
[23] D. E. King, "Dlib-ml: A machine learning toolkit," Journal of Machine Learning Research, vol. 10, pp. 1755-1758, 2009.
[24] A. Ignatov, R. Timofte, P. Szczepaniak, W. Chou, K. Wang, M. Wu, T. Hartley and L. V. Gool, "AI Benchmark: Running Deep Neural Networks on Android Smartphones," ECCV Workshops 2018, pp. 288-314, 2018.
[25] C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, M. Pantic, "300 Faces In-the-Wild Challenge: Database and results," Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation "In-The-Wild", issue 47, pp. 3-18, 2016.
[26] A. Gulli and S. Pal, "Deep Learning with Keras," Packt Publishing, p. 296, 2017.
