Advances in Intelligent Systems and Computing 1062

Michał Choraś
Ryszard S. Choraś
Editors

Image Processing and Communications
Techniques, Algorithms and Applications
Advances in Intelligent Systems and Computing

Volume 1062

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering,
University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro,
Rio de Janeiro, Brazil
Ngoc Thanh Nguyen , Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications
on theory, applications, and design methods of Intelligent Systems and Intelligent
Computing. Virtually all disciplines such as engineering, natural sciences, computer
and information science, ICT, economics, business, e-commerce, environment,
healthcare, life science are covered. The list of topics spans all the areas of modern
intelligent systems and computing such as: computational intelligence, soft comput-
ing including neural networks, fuzzy systems, evolutionary computing and the fusion
of these paradigms, social intelligence, ambient intelligence, computational neuro-
science, artificial life, virtual worlds and society, cognitive science and systems,
Perception and Vision, DNA and immune based systems, self-organizing and
adaptive systems, e-Learning and teaching, human-centered and human-centric
computing, recommender systems, intelligent control, robotics and mechatronics
including human-machine teaming, knowledge-based paradigms, learning para-
digms, machine ethics, intelligent data analysis, knowledge management, intelligent
agents, intelligent decision making and support, intelligent network security, trust
management, interactive entertainment, Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are
primarily proceedings of important conferences, symposia and congresses. They
cover significant recent developments in the field, both of a foundational and
applicable character. An important characteristic feature of the series is the short
publication time and world-wide distribution. This permits a rapid and broad
dissemination of research results.

** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and SpringerLink **

More information about this series at http://www.springer.com/series/11156


Michał Choraś Ryszard S. Choraś

Editors

Image Processing
and Communications
Techniques, Algorithms and Applications

Editors

Michał Choraś
Institute of Telecommunications and Computer Science
University of Science and Technology (UTP)
Bydgoszcz, Poland

Ryszard S. Choraś
Department of Telecommunications, Computer Sciences and Electrical Engineering
University of Science and Technology (UTP)
Bydgoszcz, Poland

Advances in Intelligent Systems and Computing
ISSN 2194-5357 (print)   ISSN 2194-5365 (electronic)
ISBN 978-3-030-31253-4 (print)   ISBN 978-3-030-31254-1 (eBook)
https://doi.org/10.1007/978-3-030-31254-1
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

The monograph contains high-quality papers which address all aspects of image processing (from low-level to high-level image processing), pattern recognition, and novel methods and algorithms, as well as modern communications.
We would like to thank all the authors and also the reviewers for the effort they put into their submissions and evaluations.
We are grateful to Agata Giełczyk and Dr Karolina Skowron for their management work, to Dr Adam Marchewka for his hard work as Publication Chair, and also to Springer for publishing this book in their Advances in Intelligent Systems and Computing series.
These papers have also been presented at the IP&C 2019 Conference in Bydgoszcz.

Michał Choraś
Conference Chair

Organization

Organization Committee
Conference Chair
Michał Choraś, Poland

Honorary Chairs
Ryszard Tadeusiewicz, Poland
Ryszard S. Choraś, Poland

International Program Committee

Kevin W. Bowyer, USA


Dumitru Dan Burdescu, Romania
Christophe Charrier, France
Leszek Chmielewski, Poland
Michał Choraś, Poland
Andrzej Dobrogowski, Poland
Marek Domański, Poland
Kalman Fazekas, Hungary
Ewa Grabska, Poland
Andrzej Kasiński, Poland
Andrzej Kasprzak, Poland
Marek Kurzyński, Poland
Witold Malina, Poland
Andrzej Materka, Poland
Wojciech Mokrzycki, Poland
Sławomir Nikiel, Poland
Zdzisław Papir, Poland
Jens M. Pedersen, Denmark
Jerzy Pejaś, Poland


Leszek Rutkowski, Poland


Khalid Saeed, Poland
Abdel-Badeeh M. Salem, Egypt

Organizing Committee

Łukasz Apiecionek
Sławomir Bujnowski
Piotr Kiedrowski
Rafał Kozik
Damian Ledziński
Zbigniew Lutowski
Adam Marchewka (Publication Chair)
Beata Marciniak
Tomasz Marciniak
Ireneusz Olszewski
Karolina Skowron (Conference Secretary)
Mścisław Śrutek
Łukasz Zabłudowski
Contents

Image Processing and Communications


Overview of Tensor Methods for Multi-dimensional Signals
Change Detection and Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Bogusław Cyganek
Head Motion – Based Robot’s Controlling System Using Virtual
Reality Glasses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Tomasz Hachaj
Robustness of Haar Feature-Based Cascade Classifier for Face
Detection Under Presence of Image Distortions . . . . . . . . . . . . . . . . . . . 14
Patryk Mazurek and Tomasz Hachaj
Eyes State Detection in Thermal Imaging . . . . . . . . . . . . . . . . . . . . . . . 22
Paweł Forczmański and Anton Smoliński
Presentation Attack Detection for Mobile Device-Based
Iris Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Ewelina Bartuzi and Mateusz Trokielewicz
Gaze-Based Interaction for VR Environments . . . . . . . . . . . . . . . . . . . . 41
Patryk Piotrowski and Adam Nowosielski
Modified Score Function and Linear Weak Classifiers
in LogitBoost Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Robert Burduk and Wojciech Bozejko
Color Normalization-Based Nuclei Detection in Images
of Hematoxylin and Eosin-Stained Multi Organ Tissues . . . . . . . . . . . . 57
Adam Piórkowski


Algorithm for Finding Minimal and Quaziminimal


st-Cuts in Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Andrey Grishkevich
The Influence of the Number of Uses of the Edges of a Reference
Graph on the Transmission Properties of the Network Described
by the Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Beata Marciniak, Sławomir Bujnowski, Tomasz Marciniak,
and Zbigniew Lutowski
Imbalanced Data Classification Using Weighted Voting Ensemble . . . . . 82
Lin Lu and Michał Woźniak
Evaluation of the MRI Images Matching Using Normalized Mutual
Information Method and Preprocessing Techniques . . . . . . . . . . . . . . . . 92
Paweł Bzowski, Damian Borys, Wiesław Guz, Rafał Obuchowicz,
and Adam Piórkowski
Remote Heart Rate Monitoring Using a Multi-band Camera . . . . . . . . . 101
Piotr Garbat and Agata Olszewska
Initial Research on Fruit Classification Methods Using Deep
Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Zbigniew Nasarzewski and Piotr Garbat
3D Optical Reconstruction of Building Interiors
for Game Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Mariusz Szwoch and Dariusz Bartoszewski
A Simplified Classification of Electronic Integrated Circuits
Packages Based on Shape Descriptors . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Kamil Maliński and Krzysztof Okarma
Impact of ICT Infrastructure on the Processing
of Large Raster Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Paweł Kosydor, Ewa Warchala, and Adam Piórkowski
Gated Recurrent Units for Intrusion Detection . . . . . . . . . . . . . . . . . . . 142
Marek Pawlicki, Adam Marchewka, Michał Choraś, and Rafał Kozik
Towards Mobile Palmprint Biometric System with the New
Palmprint Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Agata Giełczyk, Karolina Dembińska, Michał Choraś, and Rafał Kozik
Vision System for Pit Detection in Cherries . . . . . . . . . . . . . . . . . . . . . . 158
Piotr Garbat, Piotr Sadura, Agata Olszewska, and Piotr Maciejewski

The Impact of Distortions on the Image Recognition


with Histograms of Oriented Gradients . . . . . . . . . . . . . . . . . . . . . . . . . 166
Andrzej Bukała, Michał Koziarski, Bogusław Cyganek, Osman Nuri Koc,
and Alperen Kara

Information and Communication Technology Forum 2019


Traffic Feature-Based Botnet Detection Scheme Emphasizing
the Importance of Long Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Yichen An, Shuichiro Haruta, Sanghun Choi, and Iwao Sasase
Performance Evaluation of the WSW1 Switching Fabric
Architecture with Limited Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Mustafa Abdulsahib, Wojciech Kabaciński, and Marek Michalski
AI-Based Analysis of Selected Gait Parameters
in Post-stroke Patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Prokopowicz Piotr, Mikołajewski Dariusz, Tyburek Krzysztof,
Mikołajewska Emilia, and Kotlarz Piotr
Classification of Multibeam Sonar Image Using
the Weyl Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Ting Zhao, Srđan Lazendić, Yuxin Zhao, Giacomo Montereale-Gavazzi,
and Aleksandra Pižurica
Learning Local Image Descriptors with Autoencoders . . . . . . . . . . . . . . 214
Nina Žižakić, Izumi Ito, and Aleksandra Pižurica
The Performance of Three-Hop Wireless Relay Channel
in the Presence of Rayleigh Fading . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Dragana Krstic, Petar Nikolic, and Mihajlo Stefanovic
Simulation Study of Routing Protocols for Wireless Mesh Networks . . . 231
Maciej Piechowiak, Piotr Owczarek, and Piotr Zwierzykowski
Call-Level Analysis of a Two-Link Multirate Loss Model
with Restricted Accessibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
I. P. Keramidi, I. D. Moscholios, P. G. Sarigiannidis,
and M. D. Logothetis
Performance Metrics in OFDM Wireless Networks Under
the Bandwidth Reservation Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
P. I. Panagoulias, I. D. Moscholios, and M. D. Logothetis
Traffic Modeling for Industrial Internet of Things (IIoT) Networks . . . 264
Mariusz Głabowski, Sławomir Hanczewski, Maciej Stasiak,
Michał Weissenberg, Piotr Zwierzykowski, and Vito Bai

Modelling of Switching Networks with Multi-service Sources


and Multicast Connections in the Last Stage . . . . . . . . . . . . . . . . . . . . . 272
Maciej Sobieraj, Maciej Stasiak, and Piotr Zwierzykowski
The Analytical Model of Complex Non-Full-Availability System . . . . . . 279
Sławomir Hanczewski, Maciej Stasiak, and Michał Weissenberg
Model of a Multiservice Server with Stream and Elastic Traffic . . . . . . 287
Sławomir Hanczewski, Maciej Stasiak, and Joanna Weissenberg
The Analytical Model of 5G Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Sławomir Hanczewski, Alla Horiushkina, Maciej Stasiak,
and Joanna Weissenberg
Simulation Studies of a Complex Non-Full-Availability Systems . . . . . . 303
Sławomir Hanczewski and Michał Weissenberg
Simulation Studies of Multicast Connections in Ad-Hoc Networks . . . . . 311
Maciej Piechowiak
V2X Communications for Platooning: Impact of Sensor Inaccuracy . . . 318
Michał Sybis, Paweł Sroka, Adrian Kliks, and Paweł Kryszkiewicz

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327


Image Processing and Communications
Overview of Tensor Methods for
Multi-dimensional Signals Change
Detection and Compression

Boguslaw Cyganek(B)

AGH University of Science and Technology,


Al. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]
http://www.agh.edu.pl

Abstract. An overview of modern tensor-based methods for multi-dimensional signal processing is presented. Special focus is laid on recent achievements in signal change detection, as well as on efficient methods of signal compression based on various tensor decompositions. Apart from the theory, applications and implementation issues are also presented.

Keywords: Tensor change detection · Video shot detection · Orthogonal tensor space · Tensor decomposition · Tensor compression · HOSVD · Tucker decomposition · Artificial intelligence · Deep learning

1 Introduction
Contemporary sensors produce huge amounts of multi-dimensional signals. The most popular are the ubiquitous video recordings produced on mobile devices, but such signals also arise in various branches of industry, such as surveillance cameras, process control and finance, as well as in science, in domains such as particle physics, astronomy, seismology, biology and a variety of experimental simulations, to name a few. Without much exaggeration we can say that we live in times of big data. Processing of big data has recently been underpinned by artificial intelligence methods, such as deep learning and the widely applied convolutional neural networks (CNNs). These also entail processing of huge amounts of data, for which high-performance computers and GPUs are employed [10,11,13,14]. All of these can be characterized as high-rate streams of multi-dimensional data [3]. Hence, the development of methods for their efficient processing is one of the important research topics. In this context, methods of signal change detection, as well as signal compression, seem to be very useful. Tensor-based methods offer a natural tool for multi-dimensional signal processing [3–7]. Originally proposed by Tucker [21] and then adopted in the signal processing domain [6,12,15,16], tensor decomposition methods play the key role here. Special stress is put upon methods of signal clustering based on abrupt change detection of various
forms and duration [4,7,20]. The second actively investigated area is methods for multi-dimensional signal compression. Connecting the two domains offers new possibilities as well [5]. Such hybrid methods can be used for compression of signal chunks and can find broad applications in compression of video or CNN weights [1,2,18,19], to name a few. However, tensor processing is not free from computational problems, such as the curse of dimensionality, missing data, storage limitations and computational complexity. In this keynote, an overview of the above methods is presented, with special focus upon applications and further developments. The basic theory behind tensor analysis is presented and illustrated with a few examples. Then, an overview of the basic tensor decompositions is discussed, highlighting those especially suited for signal change detection and compression. The talk concludes with examples as well as ideas for future research in these areas.
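As an illustration of the compression idea sketched above, the following is a minimal, hypothetical Python/NumPy example of truncated HOSVD (Tucker) compression of a three-dimensional signal tensor. It is not the author's implementation; the tensor shape, the target ranks and all function names are illustrative assumptions.

```python
# Minimal sketch (not the author's code): truncated HOSVD (Tucker) compression
# of a 3-D signal tensor with plain NumPy. Shapes and ranks are illustrative.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: per-mode SVD factor matrices, then the projected core."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])              # keep the r leading left singular vectors
    core = T
    for mode, U in enumerate(factors):        # project onto each factor: core = T x_n U^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    """Multiply the core back by the factor matrices along each mode."""
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# Example: compress a random 64 x 64 x 30 "video" tensor to ranks (16, 16, 8).
video = np.random.rand(64, 64, 30)
core, factors = hosvd(video, (16, 16, 8))
approx = reconstruct(core, factors)
print("relative reconstruction error:",
      np.linalg.norm(video - approx) / np.linalg.norm(video))
```

Storing only the small core tensor and the few factor matrices instead of the full signal is the basic mechanism that such tensor-based compression of signal chunks exploits.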

Acknowledgments. This work was supported by the National Science Centre, Poland, under the grant NCN no. 2016/21/B/ST6/01461.

References
1. Asghar, M.N., Hussain, F., Manton, R.: Video indexing: a survey. Int. J. Comput.
Inf. Technol. 03(01), 148–169 (2014)
2. de Avila, S.E.F., Lopes, A.P.B., da Luz Jr., A., Araújo, A.A.: VSUMM: a mecha-
nism designed to produce static video summaries and a novel evaluation method.
Pattern Recogn. Lett. 32, 56–68 (2011)
3. Cyganek, B.: Recognition of road signs with mixture of neural networks and arbi-
tration modules. In: Advances in Neural Networks, ISNN 2006. Lecture Notes in
Computer Science, vol. 3973, pp. 52–57. Springer (2006)
4. Cyganek, B., Woźniak, M.: Tensor-based shot boundary detection in video streams.
New Gener. Comput. 35(4), 311–340 (2017)
5. Cyganek, B., Woźniak, M.: A tensor framework for data stream clustering and com-
pression. In: International Conference on Image Analysis and Processing, ICIAP
2017, Part I. LNCS, vol. 10484, pp. 1–11 (2017)
6. Cyganek, B., Krawczyk, B., Woźniak, M.: Multidimensional data classification with
chordal distance based kernel and support vector machines. J. Eng. Appl. Artif.
Intell. 46, 10–22 (2015). Part A
7. Cyganek, B.: Change detection in multidimensional data streams with efficient
tensor subspace model. In: Hybrid Artificial Intelligent Systems: 13th International
Conference, HAIS 2018, Lecture Notes in Artificial Intelligence, LNAI, Oviedo,
Spain, 20–22 June, vol. 10870, pp. 694–705. Springer (2018)
8. Del Fabro, M., Böszörmenyi, L.: State-of-the-art and future challenges in video
scene detection: a survey. Multimedia Syst. 19(5), 427–454 (2013)
9. Fu, Y., Guo, Y., Zhu, Y., Liu, F., Song, C., Zhou, Z.-H.: Multi-view video summa-
rization. IEEE Trans. Multimedia 12(7), 717–729 (2010)
10. Gama, J.: Knowledge Discovery from Data Streams. CRC Press, Boca Raton
(2010)
11. Gama, J., Žliobaitė, I., Bifet, A., Pechenizkiy, M., Bouchachia, A.: A survey on
concept drift adaptation. ACM Comput. Surv. (CSUR) 46(4), 44:1–44:37 (2014)
12. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev.
51, 455–500 (2008)
13. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep con-
volutional neural networks. In: Proceedings of the 25th International Conference
on Neural Information Processing Systems - Volume 1, NIPS 2012, pp. 1097–1105
(2012)
14. Ksieniewicz, P., Woźniak, M., Cyganek, B., Kasprzak, A., Walkowiak, K.: Data
stream classification using active learned neural networks. Neurocomputing 353,
74–82 (2019)
15. de Lathauwer, L.: Signal processing based on multilinear algebra. Ph.D. disserta-
tion. Katholieke Universiteit Leuven (1997)
16. de Lathauwer, L., de Moor, B., Vandewalle, J.: A multilinear singular value decom-
position. SIAM J. Matrix Anal. Appl. 21(4), 1253–1278 (2000)
17. Lee, H., Yu, J., Im, Y., Gil, J.-M., Park, D.: A unified scheme of shot boundary
detection and anchor shot detection in news video story parsing. Multimedia Tools
Appl. 51, 1127–1145 (2011)
18. Mahmoud, K.A., Ismail, M.A., Ghanem, N.M.: VSCAN: an enhanced video sum-
marization using density-based spatial clustering. In: Image Analysis and Process-
ing, ICIAP 2013. LNCS, vol. 1, pp. 733–742. Springer (2013)
19. Medentzidou, P., Kotropoulos, C.: Video summarization based on shot bound-
ary detection with penalized contrasts. In: IEEE 9th International Symposium on
Image and Signal Processing and Analysis (ISPA), pp. 199–203 (2015)
20. Sun, J., Tao, D., Faloutsos, C.: Incremental tensor analysis: theory and applica-
tions. ACM Trans. Knowl. Discov. Data 2(3), 11 (2008)
21. Tucker, L.R.: Some mathematical notes on three-mode factor analysis. Psychome-
trika 31, 279–311 (1966)
Head Motion – Based Robot’s Controlling
System Using Virtual Reality Glasses

Tomasz Hachaj(B)

Institute of Computer Science, Pedagogical University of Cracow,


ul. Podchorążych 2, 30-084 Cracow, Poland
[email protected]

Abstract. This paper proposes a head motion-based robot controlling system using virtual reality glasses, implemented with various up-to-date software and hardware solutions. The system consists of a robotic platform with DC motors controlled by a microcontroller with a Wi-Fi network interface. A second microcontroller is used as an access point and as the host of an MJPG stereo vision camera stream. Virtual reality glasses are used to control the motors by analyzing the user's head motion and to display the stereo image from a camera mounted at the front of the chassis. This article also introduces a user-centered head rotation system, similar to yaw-pitch-roll, that can be used to intuitively design the functions of the user interface. All source codes that were created for the system implementation can be downloaded and tested.

Keywords: Head motion analysis · Signal processing · Remote controlling · Virtual reality glasses

1 Introduction
Remote wireless controlling is among the basic functionalities of robotic platforms. Handheld controllers are the most popular and reliable type of controlling device. However, there might be a need to operate a robot without using hands, for example when the person controlling it wants to keep the hands free or has a hand disability. Additionally, if an operator cannot follow the robot or observe it all the time, it might be necessary to receive a broadcast from cameras installed on the robot. Those two functionalities, hands-free controlling and camera view displaying, can be implemented with the help of virtual reality (VR) glasses (goggles).
Virtual reality glasses contain an optical system that enables displaying a stereographic image and monitoring the user's head rotation using a gyroscope. Cheap goggles often utilize smartphones, which commonly have accelerometers. The stereographic image is obtained by displaying the image on the smartphone screen in split mode; the goggles contain adjustable lenses which, together with binocular disparity (obtained from a stereo vision camera), are used to generate a three-dimensional image.
In state-of-the-art papers we did not find a solution that has all the functionalities mentioned above; however, separate elements are present in a number of papers. Virtual reality controlling systems have been present in robotics for many years [10,12]. Among possible visualization techniques, it is known that virtual reality glasses give a sense of immersion and help with the remote navigation of mechanical devices [3,7,8,13], in medical robotic applications [9,16], and in rehabilitation and physical training [1,2].
This paper proposes a head motion-based robot controlling system using virtual reality glasses that can be implemented using various up-to-date software and hardware solutions. The article also describes a user-centered head rotation system, similar to yaw-pitch-roll, that can be used to intuitively design the functions of the user interface. All source codes that were created for the system implementation, namely the motor control program, the Android VR Google Cardboard application, and the R language implementation of the method that recalculates quaternion rotation into the proposed rotation coordinate system, can be downloaded and tested [6].

2 Materials and Methods


In this section, the proposed system implementation and the mathematical model of head rotation angle calculation are presented.

2.1 System Implementation

The system consists of a robotic platform on a tank chassis with two DC motors controlled by a microcontroller with a Wi-Fi network interface. The second microcontroller is used as an access point and as the host of an MJPG stereo vision camera stream. VR glasses based on Google Cardboard technology are used to control the motors by analyzing the user's head motion and to display the stereo image from a camera mounted at the front of the chassis.
The system prototype has been implemented using several mechanical and electronic components. The robot uses a T300 tank chassis, whose size with the camera box and wiring is 28 × 27 × 23 cm. The total weight of the system is about 2020 g. The chassis has two 9 V 150 rpm motors powered by two 3.7 V, 3000 mAh accumulators. The motors allow the robot to turn left and right and to move forward or backward. They are controlled by a NodeMCU v3 with an Esp8266 and a Motor Shield (compatible with Arduino sketches). The robot has a USB 2.0 MJPEG dual-lens ELP-960P2CAM-V90 camera connected to a Raspberry Pi 3 B microcontroller. The Raspberry Pi is also the Wi-Fi access point and the MJPEG server for the camera, which operates at a resolution of 2 × 320 × 240. The streaming application is MJPG Streamer installed on the Raspbian operating system. This second controller is powered by a 12 000 mAh power bank. The virtual reality glasses use Google Cardboard technology and require a smartphone with Android v. > 4.1. Head rotation in the Google Cardboard API is returned as quaternions [5]. Also, the cell phone magnetometer inside the VR glasses has to be calibrated (a Samsung Galaxy A3 207 was used). The overall schema of the system is presented in Fig. 1.
Fig. 1. This figure presents the overall schema of the system (camera, Wi-Fi links, motor control, wiring, motors).

2.2 Head Rotation Calculation

For the purpose of creating a head motion controlled system, it might be more convenient to recalculate the head rotation coordinates from quaternions into a three-parameter angle-based system. To make this coordinate system user-centered, we use coordinates similar to the yaw-pitch-roll triplet, however with different definitions of the axes and rotation directions which are more 'intuitive' for a user. This type of description is more straightforward when defining the behavior of the human-computer interface. There are three rotation axes, whose initial positions depend on the user's initial position (see Fig. 2). The vertical y axis, which governs horizontal rotation, is defined as the normal vector of the top of the user's head (in fact it is the normal vector of the side part of the smartphone). The horizontal rotation is defined with left-handed screw order: when a person is looking straight ahead, the horizontal rotation angle equals 0; when he or she is looking left, the rotation angle is negative; when he or she is looking right, the rotation angle is positive. The x axis, which governs vertical rotation, is perpendicular to the y axis and is defined by a vector which more or less links the ears of the user (in fact it is the normal vector of the top part of the smartphone). The vertical rotation is defined with right-handed screw order: when a person is looking straight ahead, the vertical rotation angle equals 0; when he or she is looking up, the rotation angle is positive; when he or she is looking down, the rotation angle is negative. The z axis is the cross product of the unit vectors of x and y, and it governs sideways head bending. The bending rotation is defined with left-handed screw order: when a person is looking straight ahead, the bending rotation angle equals 0; when he or she is bending the head right, the rotation angle is positive; when he or she is bending the head left, the rotation angle is negative. The coordinate system is left-handed. To recalculate the quaternion-based rotation returned by the VR goggles, the following calculations have to be done. Let us assume that the output rotation angles are restricted to [−π, π], which is enough to define the head rotation of a stationary (for example sitting or standing in one place) person.

Fig. 2. This figure presents the definition of the user-centered coordinate system with the 'intuitive' rotation definitions (x – vertical, y – horizontal, z – bending).

The quaternion multiplication equation is defined as follows:

$$Q_1 \cdot Q_2 = \begin{bmatrix}
Q_1.W \cdot Q_2.X + Q_1.X \cdot Q_2.W + Q_1.Y \cdot Q_2.Z - Q_1.Z \cdot Q_2.Y \\
Q_1.W \cdot Q_2.Y + Q_1.Y \cdot Q_2.W + Q_1.Z \cdot Q_2.X - Q_1.X \cdot Q_2.Z \\
Q_1.W \cdot Q_2.Z + Q_1.Z \cdot Q_2.W + Q_1.X \cdot Q_2.Y - Q_1.Y \cdot Q_2.X \\
Q_1.W \cdot Q_2.W - Q_1.X \cdot Q_2.X - Q_1.Y \cdot Q_2.Y - Q_1.Z \cdot Q_2.Z
\end{bmatrix} \quad (1)$$

Quaternion conjugate equals:

$$\bar{Q} = [-Q.X,\ -Q.Y,\ -Q.Z,\ Q.W]. \quad (2)$$

In order to rotate vector V = [x, y, z] by quaternion Q we apply the following calculation T(Q, V):

$$T(Q, V) \Leftarrow \begin{cases}
Q_V \leftarrow [x, y, z, 0]; \\
Q_V \leftarrow Q \cdot (Q_V \cdot \bar{Q}); \\
V \leftarrow [Q_V.X,\ Q_V.Y,\ Q_V.Z].
\end{cases} \quad (3)$$

Let us assume that the initial head rotation is $Q_H$. In order to recalculate a quaternion rotation Q to Vertical – Horizontal – Bending relative to $Q_H$, we apply the following calculation:

$$\begin{aligned}
Vertical &\leftarrow \operatorname{acos}\big(T(Q_H, [1, 0, 0]) \circ T(Q, [0, 0, 1])\big) - \tfrac{\pi}{2} \\
Horizontal &\leftarrow \pi - \operatorname{acos}\big(T(Q_H, [0, 0, 1]) \circ T(Q, [0, 1, 0])\big) - \tfrac{\pi}{2} \\
Bending &\leftarrow \pi - \operatorname{acos}\big(T(Q_H, [1, 0, 0]) \circ T(Q, [0, 1, 0])\big) - \tfrac{\pi}{2}
\end{aligned} \quad (4)$$

where ∘ denotes the dot product of the rotated unit vectors.
Fig. 3. This figure presents the final implementation of the robot (a) and a screenshot from the VR glasses (b).

After those operations, we can recalculate the quaternion-based head rotation Q into the 'intuitive' three-dimensional description in the domain [−π, π], taking into account the initial head rotation $Q_H$.
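The listing below is a small, illustrative Python sketch of Eqs. (1)–(4). The authors' reference implementation is written in R and available in their repository [6]; this re-expression (including the quaternion component order x, y, z, w and the helper names) is an assumption for illustration, not their code.

```python
# Illustrative Python sketch of Eqs. (1)-(4); names and component order are assumptions.
import math

def qmul(q1, q2):
    # Quaternion product, Eq. (1)
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return (w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2,
            w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2,
            w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2)

def qconj(q):
    # Quaternion conjugate, Eq. (2)
    x, y, z, w = q
    return (-x, -y, -z, w)

def rotate(q, v):
    # Rotate vector v by quaternion q, Eq. (3)
    qv = (v[0], v[1], v[2], 0.0)
    x, y, z, _ = qmul(q, qmul(qv, qconj(q)))
    return (x, y, z)

def dot(a, b):
    # Dot product, clamped to [-1, 1] so acos never fails on rounding noise
    return max(-1.0, min(1.0, sum(ai * bi for ai, bi in zip(a, b))))

def head_angles(q, qh):
    # Vertical - Horizontal - Bending angles relative to the initial rotation qh, Eq. (4)
    vertical = math.acos(dot(rotate(qh, (1, 0, 0)), rotate(q, (0, 0, 1)))) - math.pi / 2
    horizontal = math.pi - math.acos(dot(rotate(qh, (0, 0, 1)), rotate(q, (0, 1, 0)))) - math.pi / 2
    bending = math.pi - math.acos(dot(rotate(qh, (1, 0, 0)), rotate(q, (0, 1, 0)))) - math.pi / 2
    return vertical, horizontal, bending

# Example usage: identity initial rotation and a 30 degree rotation about the y axis.
q_h = (0.0, 0.0, 0.0, 1.0)
half = math.radians(30) / 2
q = (0.0, math.sin(half), 0.0, math.cos(half))
print([round(math.degrees(a), 1) for a in head_angles(q, q_h)])
```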

2.3 Interpretation of Head Motions and Remote Navigation

The communication between the robot and the VR glasses is done via Wi-Fi using the TCP/IP protocol. Both the smartphone with Android OS used by the Google Cardboard technology and the NodeMCU microcontroller can communicate via HTTP; however, this transmission protocol is too slow for real-time motor control with this hardware. Because of this, a TCP/IP socket-level communication protocol has been applied. The Android application running on the smartphone monitors head motions. The application determines the initial rotation of the user's head ($Q_H$), and for each following data acquisition the Vertical – Horizontal – Bending head rotation is calculated. A user can reinitialize $Q_H$ by touching the smartphone screen, whereupon the current head rotation Q becomes $Q_H$. The Horizontal rotation angle governs the platform's turning, and the Vertical rotation is responsible for moving forward or backward. If the head position changes above threshold values (those values are defined in the Android application), the application sends a TCP/IP packet to the NodeMCU with an appropriate command, as illustrated in the sketch below. The firmware on the NodeMCU processes those messages and changes the voltages on the output pins of the motor shield. The robot continues performing a particular motion until a message to stop arrives from the Android application. The stop message is sent when the head position of the user is within the 'neutral' threshold range. Messages from the smartphone are sent no more often than a certain time span, which is also defined in the application. This prevents the robot platform from being overflowed by incoming commands. A user wearing the VR glasses sees the view from the stereo vision camera that is mounted on the front of the robot.
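To make the thresholding and rate limiting described above concrete, the sketch below shows a hypothetical controller loop in Python. The threshold values, command strings, IP address, port and message format are all assumptions made for illustration; the actual Android application and NodeMCU firmware are available in the authors' repository [6].

```python
# Hypothetical controller loop illustrating the thresholding and rate limiting.
# Thresholds, command names, address and port are assumptions, not the authors' values.
import math
import socket
import time

ROBOT_ADDR = ("192.168.4.1", 8266)   # assumed NodeMCU socket endpoint
TURN_THRESHOLD = math.radians(15)    # assumed half-width of the 'neutral' zone
MOVE_THRESHOLD = math.radians(15)
MIN_SEND_INTERVAL = 0.2              # do not flood the robot with commands

def command_for(vertical, horizontal):
    """Map head angles to a drive command; inside the neutral zone -> stop."""
    if horizontal < -TURN_THRESHOLD:
        return b"LEFT"
    if horizontal > TURN_THRESHOLD:
        return b"RIGHT"
    if vertical < -MOVE_THRESHOLD:
        return b"FORWARD"    # assumed mapping: looking down -> forward
    if vertical > MOVE_THRESHOLD:
        return b"BACKWARD"   # assumed mapping: looking up -> backward
    return b"STOP"

def control_loop(read_head_angles):
    """read_head_angles() is assumed to return (vertical, horizontal, bending)."""
    last_sent = 0.0
    with socket.create_connection(ROBOT_ADDR) as sock:
        while True:
            vertical, horizontal, _ = read_head_angles()
            now = time.time()
            if now - last_sent >= MIN_SEND_INTERVAL:
                sock.sendall(command_for(vertical, horizontal) + b"\n")
                last_sent = now
```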
Fig. 4. This figure presents plots of example head rotations of a user in VR glasses calculated using (4): vertical, horizontal and bending rotation angles [degrees] over time [s].

3 Results

After implementing the system (see Fig. 3(a)), tests were performed on its remote head-motion based controlling module. As can be seen in Fig. 3(b), the view in the VR glasses is additionally distorted by the Google Cardboard API in order to strengthen the effect of stereo vision and the immersive experience of the user. The remote controlling system worked smoothly, allowing remote control of the robot with a nearly real-time preview from the stereo vision camera. Besides the navigation commands, the mathematical model (4) was tested in order to check whether it can be used to produce intuitive motion descriptions and visualizations. In order to do so, a user operating the VR glasses was asked to move the head down and up, then turn it left and right, and finally to bend it clockwise and return to the initial head position. Plots of the obtained angle descriptions are presented in Fig. 4(a)–(c). Additionally, Fig. 4(d) presents the vertical and horizontal rotation applied to a unit vector and its projection onto the unit sphere. That last plot visualizes in 3D what the head motion trajectory was.

4 Discussion and Conclusions

As could be seen in the previous section, the proposed head motion-based robot controlling system satisfies the needs of remote controlling of a robotic platform. The mathematical model of head rotation calculation enables the generation of an intuitive user-centered rotation description. In this description, the positive directions of the axes are right, up and towards the front of the user's head. The positive directions of rotation are right, up, and clockwise in the case of sideways head bending; the opposite rotations have negative angle coordinates. With the help of this description, it is straightforward to define how head motion should be interpreted and translated into remote commands for the robot. The communication protocol takes into account system latency and prevents the robot from being overflowed by incoming commands.
Thanks to the stereo vision camera, the proposed system can be utilized as a prototype in many scientific research projects. After stereo calibration, the system can be used for measuring distances between the robot and elements of the environment and for three-dimensional point cloud generation. The system can also be applied for developing and testing vision-based robotic odometry [14,15] or simultaneous localization and mapping (SLAM) algorithms [4,11]. There is also a large potential in developing and testing methods for head motion analysis and recognition. Many commands to the system might be coded as head gestures that could be classified by an appropriate algorithm.

References
1. Borowska-Terka, A., Strumillo, P.: Algorithms for head movements’ recognition in
an electronic human computer interface. Przeglad  Elektrotechniczny 93(8), 131–
134 (2017)
2. Brütsch, K., Koenig, A., Zimmerli, L., Mérillat-Koeneke, S., Riener, R., Jäncke, L.,
van Hedel, H.J., Meyer-Heim, A.: Virtual reality for enhancement of robot-assisted
gait training in children with neurological gait disorders. J. Rehabil. Med. 43(6),
493–499 (2011)
3. Dornberger, R., Korkut, S., Lutz, J., Berga, J., Jäger, J.: Prototype-based research
on immersive virtual reality and on self-replicating robots. In: Business Information
Systems and Technology 4.0 Studies in Systems, Decision and Control, pp. 257–274
(2018). https://doi.org/10.1007/978-3-319-74322-6 17
4. Engel, J., Stückler, J., Cremers, D.: Large-scale direct slam with stereo cam-
eras. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Sys-
tems (IROS), pp. 1935–1942, September 2015. https://doi.org/10.1109/IROS.2015.
7353631
5. Grygiel, R., Bieda, R., Wojciechowski, K.: Angles from gyroscope to complemen-
tary filter in IMU. Przeglad Elektrotechniczny 90(9), 217–224 (2014)
6. Hachaj, T.: GitHub repository of the project (2019). https://github.com/
browarsoftware/rpm rotation calculation. Accessed 22 Mar 2019
7. Kato, Y.: A remote navigation system for a simple tele-presence robot with virtual
reality. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), pp. 4524–4529, September 2015. https://doi.org/10.1109/IROS.
2015.7354020
8. Kurup, P., Liu, K.: Telepresence robot with autonomous navigation and virtual
reality: demo abstract. In: SenSys (2016)
9. Lin, L., Shi, Y., Tan, A., Bogari, M., Zhu, M., Xin, Y., Xu, H., Zhang, Y., Xie, L.,
Chai, G.: Mandibular angle split osteotomy based on a novel augmented reality
navigation using specialized robot-assisted arms - a feasibility study. J. Cranio-
Maxillofac. Surg. 44(2), 215–223 (2016). https://doi.org/10.1016/j.jcms.2015.10.
024. http://www.sciencedirect.com/science/article/pii/S1010518215003674
10. Monferrer, A., Bonyuet, D.: Cooperative robot teleoperation through virtual reality
interfaces. In: Proceedings Sixth International Conference on Information Visuali-
sation, pp. 243–248, July 2002. https://doi.org/10.1109/IV.2002.1028783
11. Mur-Artal, R., Tardüs, J.D.: ORB-SLAM2: an open-source SLAM system for
monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 33(5), 1255–1262
(2017). https://doi.org/10.1109/TRO.2017.2705103
12. Nguyen, L., Bualat, M., Edwards, L., Flueckiger, L., Neveu, C., Schwehr, K., Wag-
ner, M., Zbinden, E.: Virtual reality interfaces for visualization and control of
remote vehicles. Auton. Robots 11(1), 59–68 (2001). https://doi.org/10.1023/a:
1011208212722
13. Regenbrecht, J., Tavakkoli, A., Loffredo, D.: A robust and intuitive 3D interface
for teleoperation of autonomous robotic agents through immersive virtual reality
environments. In: 2017 IEEE Symposium on 3D User Interfaces (3DUI), pp. 199–
200, March 2017. https://doi.org/10.1109/3DUI.2017.7893340
14. Usenko, V., Engel, J., Stückler, J., Cremers, D.: Direct visual-inertial odometry
with stereo cameras. In: 2016 IEEE International Conference on Robotics and
Automation (ICRA), p. 1885 (2016). https://doi.org/10.1109/ICRA.2016.7487335
15. Wang, R., Schworer, M., Cremers, D.: Stereo DSO: large-scale direct sparse visual
odometry with stereo cameras. In: IEEE International Conference on Computer
Vision (ICCV), October 2017
16. Zinchenko, K., Komarov, O., Song, K.: Virtual reality control of a robotic cam-
era holder for minimally invasive surgery. In: 2017 11th Asian Control Confer-
ence (ASCC), pp. 970–975, December 2017. https://doi.org/10.1109/ASCC.2017.
8287302
Robustness of Haar Feature-Based
Cascade Classifier for Face Detection
Under Presence of Image Distortions

Patryk Mazurek(B) and Tomasz Hachaj

Institute of Computer Science, Pedagogical University of Cracow,


Podchorazych 2, 30-084 Cracow, Poland
{patryk.mazurek,tomasz.hachaj}@up.krakow.pl

Abstract. This paper examines the effectiveness of the Haar feature-based cascade classifier for face detection in the presence of various image distortions. In the article we have focused on picture distortions that are likely to be met in everyday life, namely blurring, salt and pepper noise, contrast and brightness shifts, and the "fisheye" type distortion typical for wide-angle lenses. The paper presents the mathematical model of the classifier and the distortions, the training procedure and, finally, the results of detection under various levels of distortion. The test dataset is the large, publicly available "Labelled Faces in the Wild" (LFW). The results show that the cascade classifier finds it most difficult to recognize images that contain 70% salt and pepper noise. The least impact on the effectiveness of the method comes from blurred images, even for a high level of blurring. The obtained results also show that the effectiveness of face detection is affected by the contrast and brightness parameters.

Keywords: Cascade classifier · Face detection · Haar features · Low-quality

1 Introduction
Face detection is a highly developed computer technology used in a variety of applications and multiple sciences. Face detection can be used in the process of identification of human faces, for access control on mobile devices or for the purpose of entertainment. Each face detection system, like any other object-class detection method, has to tackle the problem of the quality of the image upon which the system is operating. For the majority of analyzed images, quality is a factor that considerably affects the effectiveness of the system. In order to achieve higher detection effectiveness, systems ought to be resistant to low image quality [5].
This paper focuses on one of the most popular methods of face detection, that is, the cascade classifier [1]. When the method was presented, the basic classifier was
Haar; subsequently, Ojala proposed the LBP classifier [2], which was added to the cascade classifier framework. Herein, we concentrate only on the Haar classifier, which operates by selecting rectangular regions containing dark and bright fragments. In the case of face detection, the division of a rectangle into dark and bright regions serves the purpose of calculating the sum of pixel values under the selected region. A dark region may represent fragments of the eyes or mouth, and a bright region may represent the nose, forehead or cheeks.
Each classifier consists of a few stages in which multiple functions are evaluated. In order for the classifier to consider an examined area to be the face under search, the sum obtained from the tested areas must exceed a certain threshold defined in a file containing the Haar cascade model. Sometimes even this type of testing is not capable of eliminating errors, and the classifier accepts rectangles that do not contain faces (false positive errors).
Regardless of the foregoing errors, scientists use the cascade classifier in various tasks and come up with new solutions based on this method. With a view to improving the results of face detection, classifiers were proposed with features that better reflect the face structure [3,16,17]. Another extension of the cascade classifier, aiming at more accurate results, is to apply a few weaker models and to verify the results obtained after each stage [4]. Another issue where the Haar classifier has been tested is detecting the area around the eyes in order to assess the driver's fatigue [6,7]. The cascade classifier not only assists the process of face or body detection but may also be applied in medicine, e.g. for the recognition of leukocytes on ultrasound scans [11], or for the detection of objects on the sea bottom [12].
The development of new technologies, as well as neural networks, means that recognition of various objects or faces becomes more and more accurate and faster; each of those technologies, however, depends significantly upon the quality of the images supplied for detection [9,10].
In this paper, four test groups have been prepared in which it has been checked how the cascade classifier works for face detection in the presence of image distortions. Each test contains specially prepared images with the appropriate level of distortion. In the first test, images with salt and pepper noise were prepared. This noise can be found in images taken in low light. In the next test, low-quality images were used. To reduce the image quality, the Gaussian filter and the median filter were used. This type of distortion can be obtained when taking low-resolution images. The third test involves modifying images by changing the brightness and contrast values. This type of image can be obtained by manual configuration of the camera or when the lighting changes dynamically and the camera has to adjust the settings. In the last test, images with the "fisheye" effect were used. This effect can be obtained with a specific lens or with an application which adds this effect to images. The "fisheye" effect can be found in art photography and in cameras with a wide field of view.
The research presented in this article can help in creating new solutions for face detection, and it can be helpful when creating an image database for training models for the cascade classifier method.
2 Materials and Methods


2.1 Databases and Research Methods
For this research, the Haar feature-based cascade classifier available with the OpenCV 3.4 library was used. A model was trained on 9832 positive images containing faces (including mirrored and vertical variants) and 1000 negative images without faces. The model was trained on 24 × 24 pixel images with the AdaBoost learning algorithm [13].
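As a point of reference, the listing below is a minimal sketch of running a Haar cascade face detector with OpenCV in Python. The cascade file shown is the pre-trained frontal-face model shipped with recent opencv-python packages, and the image file name and detection parameters are illustrative assumptions; this is not the exact model or settings trained and used by the authors.

```python
# Minimal sketch: face detection with an OpenCV Haar cascade.
# The cascade file, image name and parameters are illustrative assumptions.
import cv2

# Pre-trained frontal-face cascade shipped with opencv-python (the authors trained their own model).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("example.jpg")                  # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(24, 24))
print("detected faces:", len(faces))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
```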
In order to check the effectiveness of the tested model, we have used LFW (Labelled Faces in the Wild) [8], which contains 13233 images of 5479 various people. LFW has pictures of public personalities in their everyday routine, which enables a more thorough verification of the model effectiveness.
The effectiveness of the cascade classifier is affected by the following factors. The first is model accuracy: in the tests we use a pre-trained model, so we have no influence on the model's learning effectiveness; the model that was used achieves an efficiency of about 90%. The second is the quality of the supplied images. The common types of image distortions can be divided into the following groups:

(a) Salt and pepper - during the "salt and pepper" noise simulation, 11 tests were carried out with noise inserted into the image within the range of 0–100% at a constant step of 10. The noise level is the percentage of noisy pixels among all pixels in the image. Noise pixels were inserted at random positions in the tested images.
(b) Blurring - blurring simulation has been conducted with the use of Gaussian
blur filter and median filter.
$$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \quad (1)$$
For each filter, kernel sizes between 1 and 17 were applied in steps of 2, which gives 9 tests for each filter. The use of Gaussian and median filters of large size causes image blurring but not the loss of edges or other important data in the image.
(c) Contrast and brightness - for brightness and contrast, 11 tests have been prepared with the pixel values changed according to

$$f(i, j) = a \cdot x(i, j) + b \quad (2)$$

The value x(i, j) corresponds to a pixel of the original image, and the value b is responsible for brightness; the brightness change was carried out by adding a value from the range [−127:127] to each pixel. In the case of contrast (variable a), the value of each pixel was multiplied by a value from the range [0.0:2.0] with a constant step of 0.2. In each test, after the pixel value change, it was necessary to clip it to the range [0:255].
(d) Lenses - the "fisheye" effect can be obtained by applying special lenses or adequate algorithms which modify and transform the traditional image into one with the "fisheye" effect. During the test, an image was modified by means of setting up a new coordinate system Q with its centre located halfway along the height and width of the modified image. Subsequently, the Cartesian coordinates are replaced with polar coordinates using

$$r = \sqrt{x^2 + y^2} \quad (3)$$
$$\theta = \operatorname{atan2}(x, y) \quad (4)$$
Next, the image is mapped onto a spherical or elliptical object. The value of the distortion of the image is determined by the angle between a vector from a "virtual camera" to the origin of the coordinate system and the polar coordinate. Mathematically, the above phenomenon can be presented in the following way:

$$x = \sin(\theta) \cdot \cos(\theta) \quad (5)$$
$$y = \sin(\theta) \cdot \sin(\theta) \quad (6)$$
$$z = \sin(k) \quad (7)$$
The parameter k specifies the angle of distortion, which in the test ranged between 0.002 and 0.02, with a constant step of 0.002. The final stage requires transferring the appropriately distorted image back onto a normal 2-D image.
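For illustration, the sketch below generates the first three distortion families in Python with OpenCV and NumPy, following the parameter ranges given in (a)–(c). The file name is hypothetical and the "fisheye" remapping of Eqs. (3)–(7) is omitted for brevity; this is an assumed re-implementation, not the authors' test code.

```python
# Sketch of the first three distortion families (salt-and-pepper noise,
# Gaussian/median blur, contrast/brightness per Eq. (2)). Assumed re-implementation.
import cv2
import numpy as np

def salt_and_pepper(img, noise_percent):
    """Set a random noise_percent of pixels to black or white."""
    out = img.copy()
    h, w = out.shape[:2]
    n = int(h * w * noise_percent / 100.0)
    ys = np.random.randint(0, h, n)
    xs = np.random.randint(0, w, n)
    vals = np.random.choice([0, 255], n)
    out[ys, xs] = vals[:, None] if out.ndim == 3 else vals
    return out

def blur(img, ksize, kind="gaussian"):
    """Blur with an odd kernel size (1, 3, ..., 17 in the tests)."""
    if kind == "median":
        return cv2.medianBlur(img, ksize)
    return cv2.GaussianBlur(img, (ksize, ksize), 0)

def contrast_brightness(img, a, b):
    """f(i, j) = a * x(i, j) + b, saturated to [0, 255] as described in the text."""
    return cv2.convertScaleAbs(img, alpha=a, beta=b)

img = cv2.imread("lfw_example.jpg")          # hypothetical LFW image
noisy = salt_and_pepper(img, 30)
blurred = blur(img, 9, "gaussian")
darker = contrast_brightness(img, 0.6, -35)
```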

3 Results

The present section is devoted to the presentation of the results obtained in the course of the tests. Each test checked the impact of the deterioration of image quality on the effectiveness of the tested model.

Table 1. This table presents results of the "salt and pepper" test.

Noise level 0 10 20 30 40 50 60 70 80 90 100


Positive results 95.6% 89.1% 76.9% 60.5% 41.7% 24.8% 13.0% 5.8% 2.1% 0.6% 0.2%

In test 1, measurements were made to check how various noise levels in an image affect the detection rate. Based on the presented table (Table 1), it can be concluded that images with a noise level of up to 20% still allow a high detection effectiveness. An increase of the noise level leads to a decrease in the effectiveness of detection. In the case of the Haar classifier used in the test, faces in images containing more than 60% noise are extremely difficult or simply impossible to detect. As it appears from the test, the Haar classifier is very sensitive to this noise, because salt and pepper noise adds black and white pixels to the image, and this disrupts the Haar features: their computation involves calculating differences between sums of pixels in rectangular regions, and a high content of white and black pixels leads to incorrect results. The most frequent circumstance in which such noise is generated is taking photos in poor illumination.
Table 2. This table compares results of the tests with Median filter and Gaussian
filter.

Filter size 0 1 3 5 7 9 11 13 15 17
Median filter 95.6% 96.8% 96.0% 96.1% 95.9% 95.7% 95.3% 94.9% 94.4% 93.9%
Gaussian filter 95.6% 95.8% 95.8% 95.8% 95.0% 93.6% 92.9% 90.9% 88.7% 86.1%

Another task was to apply the blurring effect to images and to check the effectiveness of the tested model. The effect in question can be seen in photos or films of low resolution. For that purpose, the median filter and the Gaussian filter were used. From the presented table (Table 2) it can be observed that the use of filters of small size does not cause a deterioration of effectiveness and may even be responsible for a slight increase of the detection rate. The use of filters of large size results in a decrease of the detection rate, but each filter responds to an increase of the kernel size in a different way. For the median filter (Table 2), a kernel size of 17 caused the detection effectiveness to drop to about 94%, whereas for the Gaussian filter the effectiveness decreased to about 86%. Each filter removes certain minor information from images, causing them to blur, but leaves the edges and the data vital for the Haar classifier, which enables face detection.

Table 3. This table presents results of the contrast test for values from 0.0 to 2.0.

Contrast value (a) 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0
Positive results 0% 44.5% 94.7% 94.8% 94.9% 95.6% 96.0% 95.7% 94.5% 92.0% 87.7%

Table 4. This table presents results of the brightness test for values from −127 to 127.

Value −127 −104 −81 −58 −35 0 35 58 81 104 127


Positive results 48.9% 77.6% 87.3% 94.1% 95.7% 95.6% 96.0% 95.6% 94.8% 89.9% 70.8%

The next task was to check detection effectiveness on images modified in terms of contrast and brightness. As shown in Tables 3 and 4, the model achieves low effectiveness on images with a very low level of contrast or brightness. Increasing the contrast to 0.4 (Table 3) improves the detection rate dramatically. The highest effectiveness of 96% was achieved at a contrast value of 1.2; a further increase of contrast, however, caused the detection rate to decrease. Table 4 demonstrates a similar tendency for brightness: increasing brightness in the range from −127 to 35 results in an increased detection rate, while changes in the range from −35 to 58 have not proved to significantly affect the model's detection rate.
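A possible way to reproduce such a sweep is sketched below using OpenCV's convertScaleAbs, which computes dst = alpha * src + beta, so the contrast factor maps to alpha and the brightness offset to beta. The cascade, the image file and the tested value grids are again assumptions based on the tables above.

```python
# Hedged sketch of the contrast and brightness tests with a Haar cascade.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)

# Contrast sweep: dst = alpha * src, alpha from 0.0 to 2.0 in steps of 0.2.
for alpha in np.arange(0.0, 2.01, 0.2):
    adjusted = cv2.convertScaleAbs(gray, alpha=alpha, beta=0)
    found = len(cascade.detectMultiScale(adjusted, 1.1, 5)) > 0
    print(f"contrast {alpha:.1f}: {'detected' if found else 'missed'}")

# Brightness sweep: dst = src + beta, beta from -127 to 127.
for beta in (-127, -104, -81, -58, -35, 0, 35, 58, 81, 104, 127):
    adjusted = cv2.convertScaleAbs(gray, alpha=1.0, beta=beta)
    found = len(cascade.detectMultiScale(adjusted, 1.1, 5)) > 0
    print(f"brightness {beta:+4d}: {'detected' if found else 'missed'}")
```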