Behavior Analysis with Machine Learning and R
A Sensors and Data Driven Approach
Enrique Garcia Ceja

This book is for sale at: http://leanpub.com/

Copyright © 2020-2021 Enrique Garcia Ceja


All rights reserved. No part of this book may be reproduced or used in any manner without
the prior written permission of the copyright owner, except for the use of brief quotations.
To request permissions, contact the author.
To My Family, who have put up with me despite my bad behavior.

To Darlene.
Contents

Welcome

Preface
    Supplemental Material
    Conventions
    Acknowledgments

1 Introduction
    1.1 What is Machine Learning?
    1.2 Types of Machine Learning
    1.3 Terminology
        1.3.1 Tables
        1.3.2 Variable Types
        1.3.3 Predictive Models
    1.4 Data Analysis Pipeline
    1.5 Evaluating Predictive Models
    1.6 Simple Classification Example
        1.6.1 K-fold Cross-Validation Example
    1.7 Simple Regression Example
    1.8 Underfitting and Overfitting
    1.9 Bias and Variance
    1.10 Summary

2 Predicting Behavior with Classification Models
    2.1 k-nearest Neighbors
        2.1.1 Indoor Location with Wi-Fi Signals
    2.2 Performance Metrics
        2.2.1 Confusion Matrix
    2.3 Decision Trees
        2.3.1 Activity Recognition with Smartphones
    2.4 Naive Bayes
        2.4.1 Activity Recognition with Naive Bayes
    2.5 Dynamic Time Warping
        2.5.1 Hand Gesture Recognition
    2.6 Dummy Models
        2.6.1 Most-frequent-class Classifier
        2.6.2 Uniform Classifier
        2.6.3 Frequency-based Classifier
        2.6.4 Other Dummy Classifiers
    2.7 Summary

3 Predicting Behavior with Ensemble Learning
    3.1 Bagging
        3.1.1 Activity Recognition with Bagging
    3.2 Random Forest
    3.3 Stacked Generalization
    3.4 Multi-view Stacking for Home Tasks Recognition
    3.5 Summary

4 Exploring and Visualizing Behavioral Data
    4.1 Talking with Field Experts
    4.2 Summary Statistics
    4.3 Class Distributions
    4.4 User-Class Sparsity Matrix
    4.5 Boxplots
    4.6 Correlation Plots
        4.6.1 Interactive Correlation Plots
    4.7 Timeseries
        4.7.1 Interactive Timeseries
    4.8 Multidimensional Scaling (MDS)
    4.9 Heatmaps
    4.10 Automated EDA
    4.11 Summary

5 Preprocessing Behavioral Data
    5.1 Missing Values
        5.1.1 Imputation
    5.2 Smoothing
    5.3 Normalization
    5.4 Imbalanced Classes
        5.4.1 Random Oversampling
        5.4.2 SMOTE
    5.5 Information Injection
    5.6 One-hot Encoding
    5.7 Summary

6 Discovering Behaviors with Unsupervised Learning
    6.1 K-means Clustering
        6.1.1 Grouping Student Responses
    6.2 The Silhouette Index
    6.3 Mining Association Rules
        6.3.1 Finding Rules for Criminal Behavior
    6.4 Summary

7 Encoding Behavioral Data
    7.1 Feature Vectors
    7.2 Timeseries
    7.3 Transactions
    7.4 Images
    7.5 Recurrence Plots
        7.5.1 Computing Recurrence Plots
        7.5.2 Recurrence Plots of Hand Gestures
    7.6 Bag-of-Words
        7.6.1 BoW for Complex Activities
    7.7 Graphs
        7.7.1 Complex Activities as Graphs
    7.8 Summary

8 Predicting Behavior with Deep Learning
    8.1 Introduction to Artificial Neural Networks
        8.1.1 Sigmoid and ReLU Units
        8.1.2 Assembling Units into Layers
        8.1.3 Deep Neural Networks
        8.1.4 Learning the Parameters
        8.1.5 Parameter Learning Example in R
        8.1.6 Stochastic Gradient Descent
    8.2 Keras and TensorFlow with R
        8.2.1 Keras Example
    8.3 Classification with Neural Networks
        8.3.1 Classification of Electromyography Signals
    8.4 Overfitting
        8.4.1 Early Stopping
        8.4.2 Dropout
    8.5 Fine-Tuning a Neural Network
    8.6 Convolutional Neural Networks
        8.6.1 Convolutions
        8.6.2 Pooling Operations
    8.7 CNNs with Keras
        8.7.1 Example 1
        8.7.2 Example 2
    8.8 Smiles Detection with a CNN
    8.9 Summary

9 Multi-User Validation
    9.1 Mixed Models
        9.1.1 Skeleton Action Recognition with Mixed Models
    9.2 User-Independent Models
    9.3 User-Dependent Models
    9.4 User-Adaptive Models
        9.4.1 Transfer Learning
        9.4.2 A User-Adaptive Model for Activity Recognition
    9.5 Summary

10 Detecting Abnormal Behaviors
    10.1 Isolation Forests
    10.2 Detecting Abnormal Fish Behaviors
        10.2.1 Explore and Visualize Trajectories
        10.2.2 Preprocessing and Feature Extraction
        10.2.3 Training the Model
        10.2.4 ROC Curve and AUC
    10.3 Autoencoders
        10.3.1 Autoencoders for Anomaly Detection
    10.4 Summary

A Setup Your Environment
    A.1 Installing the Datasets
    A.2 Installing the Examples Source Code
    A.3 Running Shiny Apps
    A.4 Installing Keras and TensorFlow

B Datasets
    B.1 COMPLEX ACTIVITIES
    B.2 DEPRESJON
    B.3 ELECTROMYOGRAPHY
    B.4 FISH TRAJECTORIES
    B.5 HAND GESTURES
    B.6 HOME TASKS
    B.7 HOMICIDE REPORTS
    B.8 INDOOR LOCATION
    B.9 SHEEP GOATS
    B.10 SKELETON ACTIONS
    B.11 SMARTPHONE ACTIVITIES
    B.12 SMILES
    B.13 STUDENTS’ MENTAL HEALTH

Citing this Book


Welcome

This book aims to provide an introduction to machine learning concepts and algorithms applied to a diverse set of behavior analysis problems. It focuses on the practical aspects of solving such problems based on data collected from sensors or stored in electronic records. The included examples demonstrate how to perform several of the tasks involved in a data analysis pipeline, such as data exploration, visualization, preprocessing, representation, and model training/validation, all using the R programming language and real-life datasets.
Some of the things you will learn how to do include:

• Build supervised machine learning models to predict indoor locations based on Wi-Fi signals, recognize physical activities from smartphone sensors and 3D skeleton data, detect hand gestures from accelerometer signals, and so on.
• Use unsupervised learning algorithms to discover criminal behavioral patterns.
• Program your own ensemble learning methods and use multi-view stacking to fuse signals from heterogeneous data sources.
• Train deep learning models such as neural networks to classify muscle activity from electromyography signals and CNNs to detect smiles in images.
• Evaluate the performance of your models in traditional and multi-user settings.
• Train anomaly detection models such as Isolation Forests and autoencoders to detect abnormal fish trajectories.
• And much more!

The accompanying source code for all examples is available at https://github.com/enriquegit/behavior-code. The book itself was written in R with the bookdown package (https://CRAN.R-project.org/package=bookdown) developed by Yihui Xie (https://twitter.com/xieyihui). The front cover and summary-comics were illustrated by Vance Capley (http://www.vancecapleyart.com/).

The front cover depicts two brothers (Biås and Biranz) in what seems to be a typical weekend. They are exploring and enjoying nature as usual. What they don’t know is that their lives are about to change and there is no turning back. Suddenly, Biranz spots a strange object approaching them. As it makes its way out of the rocks, its entire figure is revealed. The brothers have never seen anything like it before. The closest image they have in their minds is a hand-sized carnivorous plant they saw at the botanical garden during a school visit several years ago. Without any warning, the creature releases a load of spores into the air. Today, the breeze is not on the brothers’ side, and the spores quickly enclose them. Within seconds of exposure, their bodies start to become paralyzed. Moments later, they can barely move their pupils. The creature’s stems begin to inflate and, all of a sudden, a volley of thorns is shot. Horrified and helpless, the brothers can only watch the thorns approach their bodies; they can even hear the air being cut by the sharp projectiles. At this point, they are only waiting for the worst. …they haven’t felt any impact. Has time just stopped? No: the thorns were repelled by what appears to be a bionic dolphin emitting some type of ultrasonic waves. One of the projectiles manages to dodge the sound defense and heads directly toward Biås. While flying almost one mile above sea level, an eagle aims for the elusive thorn and destroys it with surgical precision. However, the creature is persistent in its attacks. Will the brothers escape from this crossfire battle?
About me. My name is Enrique and I am a researcher at SINTEF (https://www.sintef.no/en/). Please feel free to e-mail me with any questions/comments/feedback, etc.

e-mail:
twitter: e_g_mx
website: http://www.enriquegc.com
Preface

Automatic behavior monitoring technologies are becoming part of our everyday lives thanks to advances in sensors and machine learning. The automatic analysis and understanding of behavior are being applied to solve problems in several fields, including health care, sports, marketing, ecology, security, and psychology, to name a few. This book provides a practical introduction to machine learning methods applied to behavior analysis with the R programming language. It does not assume any previous knowledge of machine learning. You should be familiar with the basics of R, and some knowledge of basic statistics and high-school-level mathematics will be beneficial.

Supplemental Material

Supplemental material consists of the examples’ code and datasets. The source code for the
examples can be downloaded from https://github.com/enriquegit/behavior-code. Instruc-
tions on how to set up the code and get the datasets are in Appendix A. A reference for all
the utilized datasets is in Appendix B.

Conventions

DATASET names are written in uppercase italics. Functions are referred to by their name followed by parentheses, omitting their arguments, for example: myFunction(). Class labels are written in italics and between single quotes: ‘label1’. The following icons are used to provide additional contextual information:

Provides additional information and notes.


Important information to consider.

Provides tips and good practice recommendations.

Lists the R scripts and files used in the corresponding section.

Interactive shiny app available. Please see Appendix A for instructions on how to run
shiny apps.

The folder icon will appear at the beginning of a section (if applicable) to indicate which
scripts were used for the corresponding examples.

Acknowledgments

I want to thank Michael Riegler, Jaime Mondragon and Ariana, Viviana M., Linda Sicilia, Anton Aguilar, Aleksander Karlsen, my former master’s and Ph.D. advisor Ramon F. Brena, and my colleagues at SINTEF.
The examples in this book rely heavily on datasets, and I want to thank all the people who made the datasets used here publicly available. I also want to thank Vance Capley, who brought the front cover and comic illustrations to life.
Chapter 1

Introduction

Living organisms are constantly sensing and analyzing their surrounding environment. This
includes inanimate objects but also other living entities. All of this is with the objective of
making decisions and taking actions, either consciously or unconsciously. If we see someone
running, we will react differently depending on whether we are at a stadium or in a bank.
At the same time, we may also analyze other cues such as the runner’s facial expressions,
clothes, items, and the reactions of the other people around us. Based on this aggregated
information, we can decide how to react and behave. All this is supported by the organisms’
sensing capabilities and decision-making processes (the brain and/or chemical reactions).
Understanding our environment and how others behave is crucial for conducting our everyday
life activities and provides support for other tasks. But what is behavior? The Cambridge dictionary defines behavior as:

“the way that a person, an animal, a substance, etc. behaves in a particular situation or under particular conditions.”

Another definition, from dictionary.com, is:

“observable activity in a human or animal.”

Both definitions are very similar and include humans and animals. Following those definitions, this book will focus on the automatic analysis of human and animal behaviors. There are three main reasons why one may want to analyze behaviors in an automatic manner:

1. React. A biological or an artificial agent (or a combination of both) can take actions based on what is happening in the surrounding environment. For example, if suspicious behavior is detected in an airport, preventive actions can be triggered by security systems and the corresponding authorities. Without the possibility to automate such a detection system, it would be infeasible to implement it in practice. Just imagine trying to analyze airport traffic by hand.

2. Understand. Analyzing the behavior of an organism can help us to understand other associated behaviors and processes and to answer research questions. For example, Williams et al. (2020) found that Andean condors (the heaviest soaring birds) flap their wings for only about 1% of their total flight time. In one of the cases, a condor flew ≈ 172 km without flapping. Those findings were the result of analyzing the birds’ behavior from data recorded by bio-logging devices. In this book, several examples that make use of inertial devices will be studied.

Figure 1.1: Andean condor. Source: Wikipedia.

3. Document/Archive. Finally, we may want to document certain behaviors for future use. It could be for evidence purposes, or maybe it is not clear how the information can be used now but it may come in handy later. Based on the archived information, one could gain new knowledge in the future and use it to react (make decisions and take actions). For example, we could document our nutritional habits (what we eat, how often, etc.). If there is a health issue, a specialist could use this historical information to make a more precise diagnosis and propose actions.

Some behaviors can be used as a proxy to understand other behaviors, states, and/or processes. For example, detecting body movement behaviors during a job interview could serve as the basis to understand stress levels. Behaviors can also be modeled as a composition of lower-level behaviors. In chapter 7, a method called Bag-of-Words, which can be used to decompose complex behaviors into a set of simpler ones, will be presented.
In order to analyze and monitor behaviors, we need a way to observe them. Living organisms use their available senses such as eyesight, hearing, smell, echolocation (bats, dolphins), thermal senses (snakes, mosquitoes), etc. Machines, in turn, need sensors to accomplish or approximate those tasks: for example, RGB and thermal cameras, microphones, temperature sensors, and so on.
The reduction in the size of sensors has allowed the development of more powerful wearable devices. Wearable devices are electronic devices that are worn by a user, usually as accessories or embedded in clothes. Examples of wearable devices are smartphones, smartwatches, fitness bracelets, actigraphy watches, etc. These devices have embedded sensors that allow them to monitor different aspects of a user such as activity levels, blood pressure, temperature, and location, to name a few. Examples of sensors that can be found in those devices are accelerometers, magnetometers, gyroscopes, heart rate sensors, microphones, Wi-Fi, Bluetooth, Galvanic Skin Response (GSR), etc.
Several of those sensors were initially used for some specific purposes. For example, ac-
celerometers in smartphones were intended to be used for gaming or detecting the device’s
orientation. Later, some people started to propose and implement new use cases such as
activity recognition (Shoaib et al., 2015) and fall detection. The magnetometer, which mea-
sures the earth’s magnetic field, was mainly used with map applications to determine the
orientation of the device, but later, it was found that it can also be used for indoor location
purposes (Brena et al., 2017).
In general, wearable devices have proven successful in tracking different types of behaviors such as physical activity, sports activities, location, and even mental health states (Garcia-Ceja et al., 2018c). These sensors generate a lot of raw data, and it is our task to process and analyze it. Doing so by hand becomes impossible given the large volumes and varied formats in which the data is available. Thus, in this book, several machine learning methods will be introduced to extract and analyze different types of behaviors from data. The next section begins with an introduction to machine learning, and the rest of this chapter will introduce the required machine learning concepts before we start analyzing behaviors in chapter 2.

Figure 1.2: Overall Machine Learning phases.

1.1 What is Machine Learning?


You can think of Machine Learning as a set of computational algorithms that automatically
find useful patterns and relationships from data. Here, the keyword is automatic. When
trying to solve a problem, one can hard-code a predefined set of rules, for example, chained
if-else conditions. For instance, if we want to detect if the object in a picture is an orange
or a pear, we can do something like:

# number_green_pixels: proportion of green pixels in the image (0 to 1).
if (number_green_pixels > 0.90) {
    print("pear")
} else {
    print("orange")
}

This simple rule should work well and will do the job. Imagine that now your boss tells you
that the system needs to recognize green apples as well. Our previous rule will no longer
work, and we will need to include additional rules and thresholds. On the other hand, a
machine learning algorithm will automatically learn such rules based on the updated data.
So, you only need to update your data with examples of green apples and “click” the re-train
button!
The result of learning is knowledge that the system can use to solve new instances of a
problem. In this case, when you show a new image to the system it should be able to
recognize the type of fruit. Figure 1.2 shows this general idea.
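Since the book’s examples are in R, here is a minimal sketch of the same idea with a decision tree. The toy data frame, the variable names, and the use of the `rpart` package are illustrative assumptions (not taken from the text); the point is only that the threshold rule is learned from the data rather than hard-coded.

```r
# Assumption: the rpart package is installed (install.packages("rpart")).
library(rpart)

# Hypothetical toy data: proportion of green pixels per image and its label.
fruits <- data.frame(
  percent_green = c(0.95, 0.92, 0.98, 0.10, 0.05, 0.15),
  label = factor(c("pear", "pear", "pear", "orange", "orange", "orange"))
)

# The algorithm learns the decision rule (the threshold) from the data.
model <- rpart(label ~ percent_green, data = fruits, method = "class",
               control = rpart.control(minsplit = 2))

# To support green apples, we would only add labeled examples and re-train;
# no rule in the code itself needs to change.
predict(model, data.frame(percent_green = 0.93), type = "class")
```

Note that with real images the input would be a larger set of features extracted from the pixels, not a single hand-picked proportion.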

For more formal definitions of machine learning, I recommend you check (Kononenko
and Kukar, 2007).

Machine learning methods rely on three main building blocks:

• Data
• Algorithms
• Models

Every machine learning method needs data to learn from. For the example of the fruits, we
need examples of images for each type of fruit we want to recognize. Additionally, we need
their corresponding output types (labels) so the algorithm can learn how to associate each
image with its corresponding label.

Not every machine learning method needs the expected output or labels (more on this in section 1.2).

Typically, an algorithm will use the data to learn a model. This is called the learning or
training phase. The learned model can then be used to generate predictions when presented
with new data. The data used to train the models is called the train set. Since we need a
way to test how the model will perform once it is deployed in a real setting (in production),
we also need what is known as the test set. The test set is used to estimate the model’s
performance on data it has never seen before (more on this will be presented in section 1.5).

1.2 Types of Machine Learning

Machine learning methods can be grouped into different types. Figure 1.3 depicts a categorization of machine learning ‘types’. This figure is based on (Biecek et al., 2012). The x axis represents the certainty of the labels and the y axis the percentage of training data that is labeled. In our previous example, the labels are the fruits’ names associated with each image.
From the figure, four main types of machine learning methods can be observed:

• Supervised learning. In this case, 100% of the training data is labeled and the
certainty of those labels is 100%. This is like the fruits example. For every image
used to train the system, the respective label is also known and there is no uncertainty
about the label. When the expected output is a category (the type of fruit), this is
called classification. Examples of classification models (a.k.a classifiers) are decision
trees, k-nearest neighbors, random forest, neural networks, etc. When the output is
a real number (e.g., predicting temperature) it is called regression. Examples of regression models are linear regression, regression trees, neural networks, random forest, k-nearest neighbors, etc. Note that some models can be used for both classification and regression.

Figure 1.3: Machine learning taxonomy.

A supervised learning problem can be formalized as follows:

f (x) = y (1.1)

where f is a function that maps some input data x (for example images) to an output y
(types of fruits). Usually, an algorithm will try to learn the best model f given some data
consisting of n pairs (x, y) of examples. During learning, the algorithm has access to the
expected output/label y for each input x. At inference time, that is, when we want to make
predictions for new examples, we can use the learned model f and feed it with a new input
x to obtain the corresponding predicted value y.

• Semi-supervised learning. This is the case when the certainty of the labels is 100%
but not all training data are labeled. Usually, this scenario considers the case when
only a very small proportion of the data is labeled. That is, the dataset contains pairs
of examples of the form (x, y) but also examples where y is missing (x, ?). In supervised
learning, both x and y must be present. On the other hand, semi-supervised algorithms
can learn even if some examples are missing the expected output y. This is a common
situation in real life since labeling data can be expensive and time-consuming. In the
fruits example, someone needs to tag every training image manually before training
a model. Semi-supervised learning methods try to extract information also from the unlabeled data to improve the models. Examples of semi-supervised learning methods are self-learning, co-training, tri-training, etc. (Triguero et al., 2013).

• Partially-supervised learning. This is a generalization that encompasses supervised and semi-supervised learning. Here, the examples have uncertain (soft) labels.
For example, the label of a fruit image instead of being an “orange” or “pear” could be a
vector [0.7, 0.3] where the first element is the probability that the image corresponds to
an orange and the second element is the probability that it’s a pear. Maybe the image
was not very clear, and these are the beliefs of the person tagging the images encoded
as probabilities. Examples of models that can be used for partially-supervised learning
are mixture models with belief functions (Côme et al., 2009) and neural networks.

• Unsupervised learning. This is the extreme case when none of the training examples
have a label. That is, the dataset only has pairs (x, ?). Now, you may be wondering: If
there are no labels, is it possible to extract information from these data? The answer
is yes. Imagine you have fruit images with no labels. What you could try to do is to
automatically group them into meaningful categories/groups. The categories could be
the types of fruits themselves, i.e., trying to form groups in which images within the
same category belong to the same type. In the fruits example, we could infer the true
types by visually inspecting the images, but in many cases, visual inspection is difficult
and the formed groups may not have an easy interpretation, but still, they can be very
useful and can be used as a preprocessing step (like in vector quantization). These
types of algorithms that find groups (hierarchical groups in some cases) are called
clustering methods. Examples of clustering methods are k-means, k-medoids, and
hierarchical clustering. Clustering algorithms are not the only unsupervised learning
methods. Association rules, word embeddings, and autoencoders are examples of other
unsupervised learning methods. Some people may disagree that word embeddings
and autoencoders are not fully unsupervised methods but for practical purposes, this
categorization is not relevant.
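As a quick illustration of the clustering idea (a sketch using R's built-in kmeans() and the iris data, not an example from this chapter), we can group instances using only their feature values, without ever showing the labels to the algorithm:

```r
# Cluster the iris flowers using only the four numeric measurements.
# The Species labels are NOT given to the algorithm.
set.seed(123)
features <- iris[, 1:4]
result <- kmeans(features, centers = 3)

# Each of the 150 instances is assigned to one of 3 discovered groups.
table(result$cluster)

# Comparing the groups against the held-back labels shows that the
# clusters roughly recover the three species.
table(result$cluster, iris$Species)
```

Note that the algorithm only discovers groups; interpreting what each group means is left to us.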

Additionally, there is another type of machine learning called Reinforcement Learning (RL), which differs substantially from the previous ones. Instead of learning from example data, an RL agent learns from stimuli received from its environment. At any given point in time, the agent can perform an action which leads it to a new state where a reward is collected. The aim is to learn the sequence of actions that maximizes the reward. This type of learning is not covered in this book. A good introduction to the topic can be consulted here1 .
This book will mainly cover supervised learning problems and, more specifically, classification problems. For example, given a set of wearable sensor readings, we want to predict
1 http://www.scholarpedia.org/article/Reinforcement_learning

contextual information about a given user such as location, current activity, mood, and so
on. Unsupervised learning methods (clustering and association rules) will be covered as well
in chapter 6.
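Before moving on, the supervised setting f(x) = y from equation (1.1) can be made concrete with a tiny sketch (synthetic data and base R's lm(), not one of the book's examples): the algorithm learns a model f from labeled pairs, and at inference time f is fed a new input to produce a prediction.

```r
# Synthetic training pairs (x, y): y is roughly 2*x + 1 plus noise.
set.seed(1)
x <- 1:20
y <- 2 * x + 1 + rnorm(20, sd = 0.1)

# Training phase: the algorithm (least squares) learns the model f.
f <- lm(y ~ x)

# Inference phase: predict the output for a new, unseen input x = 25.
predict(f, newdata = data.frame(x = 25))  # approximately 51
```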

1.3 Terminology
This section introduces some basic terminology that will be helpful for the rest of the book.

1.3.1 Tables

Since data is the most important ingredient in machine learning, let’s start with some related
terms. First, data needs to be stored/structured so it can be easily manipulated and pro-
cessed. Most of the time, datasets will be stored as tables or in R terminology, data frames.
Figure 1.4 shows the mtcars dataset stored in a data frame.
Columns represent variables and rows represent examples, also known as instances or data points. In this table, there are 5 variables: mpg, cyl, disp, hp, and the car model (the first column). In this example, the first column does not have a name, but it is still a variable. Each row

Figure 1.4: Table/Data frame components.



Figure 1.5: Table/Data frame components (cont.).

represents a specific car model with its values per variable. In machine learning terminology,
rows are more commonly called instances whereas in statistics they are often called data
points or observations. Here, those terms will be used interchangeably.
Figure 1.5 shows a data frame for the iris dataset which consists of different kinds of plants.
Suppose that we are interested in predicting the Species based on the other variables. In
machine learning terminology, the variable of interest (the one that depends on the others)
is called the class or label for classification problems. For regression, it is often referred to
as y. In statistics, it is more commonly known as the response, dependent, or y variable, for
both classification and regression.
In machine learning terminology, the rest of the variables are called features or attributes. In
statistics, they are called predictors, independent variables, or just X. From the context, most
of the time it should be easy to distinguish dependent from independent variables regardless of the terminology used.

1.3.2 Variable Types

When working with machine learning algorithms, the following are the most commonly used
variable types. Here, when I talk about variable types, I do not refer to programming-
language-specific data types (int, boolean, string, etc.) but to more general types regardless
of the underlying implementation for each specific programming language.

• Categorical/Nominal: These variables take values from a discrete set of possible values (categories). For example, the categorical variable color can take the values
“red”, “green”, “black”, and so on. Categorical variables do not have an ordering.

• Numeric: Real values such as height, weight, price, etc.

• Integer: Integer values such as number of rooms, age, number of floors, etc.

• Ordinal: Similar to categorical variables, these take their values from a set of discrete
values, but they do have an ordering. For example, low < medium < high.
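In R, these variable types map naturally onto built-in representations. The following sketch (made-up values, for illustration only) shows categorical and ordinal variables encoded as factors, ordinary and ordered respectively:

```r
# Categorical/Nominal: a factor with no ordering among its levels.
color <- factor(c("red", "green", "red", "black"))
levels(color)
#> [1] "black" "green" "red"

# Ordinal: an ordered factor, so comparisons between values are meaningful.
size <- factor(c("low", "high", "medium"),
               levels = c("low", "medium", "high"),
               ordered = TRUE)
size[1] < size[2]
#> [1] TRUE
```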

1.3.3 Predictive Models

In machine learning terminology, a predictive model is a model that takes some input and
produces an output. Classifiers and Regressors are predictive models. I will use the terms
classifier/model and regressor/model interchangeably.

1.4 Data Analysis Pipeline


Usually, the data analysis pipeline consists of several steps which are depicted in Figure 1.6.
This is not a complete list but includes the most common steps. It all starts with the data
collection. Then the data exploration and so on, until the results are presented. These steps
can be followed in sequence, but you can always jump from one step to another one. In fact,
most of the time you will end up using an iterative approach by going from one step to the
other (forward or backward) as needed.
The big gray box at the bottom means that machine learning methods can be used in all those
steps and not just during training or evaluation. For example, one may use dimensionality
reduction methods in the data exploration phase to plot the data or classification or regression
methods in the cleaning phase to impute missing values. Now, let’s give a brief description
of each of those phases:

Figure 1.6: Data analysis pipeline.



• Data exploration. This step aims to familiarize yourself with the data and understand it so you can make informed decisions during the following steps. Some of the tasks involved
in this phase include summarizing your data, generating plots, validating assumptions,
and so on. During this phase you can, for example, identify outliers, missing values,
or noisy data points that can be cleaned in the next phase. Chapter 4 will introduce
some data exploration techniques. Throughout the book, we will also use some other
data exploratory methods but if you are interested in diving deeper into this topic,
I recommend you check out the “Exploratory Data Analysis with R” book by Peng
(2016).

• Data cleaning. After the data exploration phase, we can remove the identified outliers, remove noisy data points, remove variables that are not needed for further computation, and so on.

• Preprocessing. Predictive models expect the data to be in some structured format and to satisfy some constraints. For example, several models are sensitive to class
imbalances, i.e., the presence of many instances with a given class but a small number
of instances with other classes. In fraud detection scenarios, most of the instances
will belong to the normal class but just a small proportion will be of type ‘illegal
transaction’. In this case, we may want to do some preprocessing to try to balance the
dataset. Some models are also sensitive to feature scale differences. For example, a
variable weight could be in kilograms but another variable height in centimeters. Before
training a predictive model, the data needs to be prepared in such a way that the models
can get the most out of it. Chapter 5 will present some common preprocessing steps.

• Training and evaluation. Once the data is preprocessed, we can then proceed to train the models. Furthermore, we also need ways to evaluate their generalization performance on new unseen instances. The purpose of this phase is to try and fine-tune different models to find the one that performs the best. Later in this chapter, some model evaluation techniques will be introduced.

• Interpretation and presentation of results. The purpose of this phase is to analyze and interpret the models’ results. We can use performance metrics derived from the evaluation phase to make informed decisions. We may also want to understand how the models work internally and how the predictions are derived.
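As an example of the kind of preprocessing mentioned above, feature scale differences can be removed by standardizing each numeric variable. Here is a minimal sketch with base R's scale() (the data and variable names are made up for illustration):

```r
# Hypothetical data: weight in kilograms, height in centimeters.
df <- data.frame(weight = c(60, 72, 85, 90),
                 height = c(160, 175, 180, 190))

# Standardize each column to zero mean and unit standard deviation
# so that scale differences do not dominate distance-based models.
scaled <- as.data.frame(scale(df))

colMeans(scaled)      # approximately 0 for both columns
apply(scaled, 2, sd)  # exactly 1 for both columns
```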

1.5 Evaluating Predictive Models


Before showing how to train a machine learning model, in this section, I would like to
introduce the process of evaluating a predictive model, which is part of the data analysis
pipeline.

Figure 1.7: Hold-out validation.

This applies to both classification and regression problems. I’m starting with this
topic because it will be a recurring one every time you work with machine learning. You will
also be training a lot of models, but you will need ways to validate them as well.
Once you have trained a model (with a training set), that is, found the best function f that maps inputs to their corresponding outputs, you may want to estimate how good the model is at solving a particular problem when presented with examples it has never seen before (that were not part of the training set). This estimate of how good the model is at predicting the output of new examples is called the generalization performance.
To estimate the generalization performance of a model, a dataset is usually divided into a
train set and a test set. As the name implies, the train set is used to train the model (learn its
parameters) and the test set is used to evaluate/test its generalization performance. We need
independent sets because when deploying models in the wild, they will be presented with
new instances never seen before. By dividing the dataset into two subsets, we are simulating
this scenario where the test set instances were never seen by the model at training time so
the performance estimate will be more accurate rather than if we used the same set to train
and evaluate the performance. There are two main validation methods that differ in the way
the dataset is divided: hold-out validation and k-fold cross validation.
1) Hold-out validation. This method randomly splits the dataset into train and test sets
based on some predefined percentages. For example, randomly select 70% of the instances
and use them as the train set and use the remaining 30% of the examples for the test set. This
will vary depending on the application and the amount of data, but typical splits are 50/50
and 70/30 percent for the train and test sets, respectively. Figure 1.7 shows an example of
a dataset divided into 70/30.
Then, the train set is used to train (fit) a model, and the test set to evaluate how well that

model performs on new data. The performance can be measured using performance metrics
such as the accuracy for classification problems. The accuracy is the percent of correctly
classified instances.

It is good practice to estimate the performance on both the train and test sets. Usually, the performance on the train set will be higher since the model was trained with that very same data. It is also common to measure performance by computing the error instead of the accuracy, for example, the percent of misclassified instances. These are called the train error and the test error (also known as the generalization error), respectively. Estimating these two errors will allow you to ‘debug’ your models and understand whether they are underfitting or overfitting (more on this in the following sections).
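The relationship between the two metrics is simply error = 1 − accuracy. A tiny sketch (the accuracy values are made up):

```r
# Hypothetical accuracies on the train and test sets.
train.accuracy <- 0.857
test.accuracy <- 0.833

# The corresponding errors are their complements.
train.error <- 1 - train.accuracy
test.error <- 1 - test.accuracy  # a.k.a. the generalization error

# A train error much lower than the test error can signal overfitting.
c(train.error, test.error)
```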

2) K-fold cross-validation. Hold-out validation is a good way to evaluate your models when you have a lot of data. However, in many cases, your data will be limited. In those cases, you want to make efficient use of it. With hold-out validation, each instance is included either in the train or the test set. K-fold cross-validation provides a way in which instances take part in both the test and train sets, thus making more efficient use of the data.
This method consists of randomly assigning each instance into one of k folds (subsets) with
approximately the same size. Then, k iterations are performed. In each iteration, one of the
folds is used to test the model while the remaining ones are used to train it. Each fold is used once as the test set and k − 1 times as part of the train set. Typical values for k are 3, 5, and 10. In the extreme case where k is equal to the total number of instances in the dataset, the method is called leave-one-out cross-validation (LOOCV). Figure 1.8 shows an example of cross-validation with k = 5.
The generalization performance is then computed by taking the average accuracy/error from
each iteration.
Hold-out validation is typically used when there is a lot of available data and models take significant time to be trained. On the other hand, k-fold cross-validation is used when data is limited. However, it is more computationally intensive since it requires training k models.
Validation set.

Most predictive models require some hyperparameter tuning. For example, a k-NN model requires setting k, the number of neighbors. For decision trees, one can specify the maximum allowed tree depth, among other hyperparameters. Neural networks require even more hyperparameter tuning to work properly. Also, one may try different preprocessing techniques and features. All those changes affect the final performance.

Figure 1.8: k-fold cross validation with k=5 and 5 iterations.

If all those hyperparameter changes are evaluated using the test set, there is a risk of overfitting the model, that is, making the model very specific to this particular data. Instead of using the test set to fine-tune hyperparameters, a separate validation set should be used. Thus, the dataset is randomly partitioned into three subsets: train/validation/test sets. The train set is used to train the model. The validation set is used to estimate the model’s performance while trying different hyperparameters and preprocessing methods. Once you are happy with your final model, you use the test set to assess the final generalization performance, and this is what you report. The test set is used only once. Remember that we want to assess performance on unseen instances. When using k-fold cross-validation, an independent test set needs to be put aside first. Hyperparameters are tuned using cross-validation, and the test set is used at the very end, just once, to estimate the final performance.
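A minimal sketch of such a three-way random split in R (the 60/20/20 percentages are just an example, and the built-in iris data stands in for an arbitrary dataset):

```r
set.seed(123)
n <- nrow(iris)

# Randomly shuffle the row indices.
idxs <- sample(n)

# Example split: 60% train, 20% validation, 20% test.
train.idxs <- idxs[1:round(0.6 * n)]
val.idxs <- idxs[(round(0.6 * n) + 1):round(0.8 * n)]
test.idxs <- idxs[(round(0.8 * n) + 1):n]

trainset <- iris[train.idxs, ]
validationset <- iris[val.idxs, ]
testset <- iris[test.idxs, ]  # used only once, at the very end
```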

When working with multi-user systems, we need to additionally take into account
between-user differences. In those situations, it is advised to perform extra validations.
Those multi-user validation techniques will be covered in chapter 9.

Figure 1.9: First 10 instances of felines dataset.

1.6 Simple Classification Example

simple_model.R

So far, a lot of terminology and concepts have been introduced. In this section, we will
work through a practical example that will demonstrate how most of those concepts fit
together. Here you will build (from scratch) your first classification and regression models!
Furthermore, you will learn how to evaluate their generalization performance.
Suppose you have a dataset that contains information about felines including their maximum
speed in km/hr and their specific type. For the sake of the example, suppose that these two
variables are the only ones that we can observe. As for the types, consider that there are
two possibilities: ‘tiger’ and ‘leopard’. Figure 1.9 shows the first 10 instances (rows) of the
dataset.
This table has 2 variables: speed and class. The first one is a numeric variable. The second
one is a categorical variable. In this case, it can take two possible values: ‘tiger’ or ‘leopard’.
This dataset was synthetically created for illustration purposes, but I promise that after this,
we will mostly use real datasets.
The code to reproduce this example is contained in the Introduction folder in the script file
simple_model.R. The script contains the code used to generate the dataset. The dataset is
stored in a data frame named dataset. Let’s start by doing a simple exploratory analysis
of the dataset. More detailed exploratory analysis methods will be presented in chapter 4.
First, we can print the data frame dimensions with the dim() function.

# Print number of rows and columns.
dim(dataset)
#> [1] 100 2

The output tells us that the data frame has 100 rows and 2 columns. Now we may be
interested to know from those 100 instances, how many correspond to tigers. We can use
the table() function to get that information.

# Count instances in each class.
table(dataset$class)
#> leopard   tiger
#>      50      50

Here we can see that 50 instances are of type ‘leopard’ and also 50 instances are of type
‘tiger’. In fact, this is how the dataset was intentionally generated. The next thing we
can do is compute some summary statistics for each column. R already provides a very
convenient function for that purpose. Yes, it is the summary() function.

# Compute some summary statistics.
summary(dataset)
#> speed class
#> Min. :42.96 leopard:50
#> 1st Qu.:48.41 tiger :50
#> Median :51.12
#> Mean :51.53
#> 3rd Qu.:53.99
#> Max. :61.65

Since speed is a numeric variable, summary() computes statistics like the mean, min, max, etc. The class variable is a factor, so summary() returns category counts instead. In R, categorical variables are usually encoded as factors. A factor is similar to a string, but R treats factors in a special way, as we saw when the summary function returned class counts.
There are many other ways in which you can explore a dataset, but for now, let’s assume
we already feel comfortable and that we have a good understanding of the data. Since this
dataset is very simple, we won’t need to do any further data cleaning or preprocessing.

Figure 1.10: Feline speeds with vertical dashed lines at the means.

Now, imagine that you are asked to build a model that is able to predict the type of feline
based on observed attributes. In this case, the only thing we can observe is the speed. Our
task is to build a function that maps speed measurements to classes. That is, we want to be
able to predict what type of feline it is based on how fast it runs. Based on the terminology
presented in section 1.3, speed would be a feature variable and class would be the class
variable.
Based on the types of machine learning presented in section 1.2, this one corresponds to
a supervised learning problem because, for each instance, we have its respective label
or class which we can use to train a model. And, specifically, since we want to predict a
category, this is a classification problem.
Before building our classification model, it would be worth plotting the data. Let’s plot the
speeds for both tigers and leopards.
Here, I omitted the code for building the plot, but it is included in the script. I have also
added vertical dashed lines at the mean speeds for the two classes. From this plot, it seems
that leopards are faster than tigers (with some exceptions). One thing we can note is that
the data points are grouped around the mean values of their corresponding classes. That is,
most tiger data points are closer to the mean speed for tigers and the same can be observed
for leopards. Of course, there are some exceptions in which an instance is closer to the mean
of the opposite class. This could be because some tigers may be as fast as leopards. Some
leopards may also be slower than the average, maybe because they are newborns or they are
old. Unfortunately, we do not have more information so the best we can do is use our single
feature speed. We can use these insights to come up with a simple model that discriminates
between the two classes based on this single feature variable.

One thing we can do for any new instance we want to classify is to compute its distance to
the ‘center’ of each class and predict the class that is the closest one. In this case, the center
is the mean value. We can formally define our model as the set of n centrality measures
where n is the number of classes (2 in our example).

M = {µ1 , . . . , µn } (1.2)

Those centrality measures (the class means in this particular case) are called the parameters
of the model. Training a model consists of finding those optimal parameters that will allow
us to achieve the best performance on new instances that were not part of the training data.
In most cases, we will need an algorithm to find those parameters. In our example, the
algorithm consists of simply computing the mean speed for each class. That is, for each
class, sum all the speeds belonging to that class and divide them by the number of data
points in that class.
Once those parameters are found, we can start making predictions on new data points. This
is called inference or prediction time. In this case, when a new data point arrives, we can
predict its class by computing its distance to each of the n centrality measures in M and
returning the class of the closest one.
The following function implements the training part of our model.

# Define a simple classifier that learns
# a centrality measure for each class.
simple.model.train <- function(data, centrality=mean){

  # Store unique classes.
  classes <- unique(data$class)

  # Define an array to store the learned parameters.
  params <- numeric(length(classes))

  # Make this a named array.
  names(params) <- classes

  # Iterate through each class and compute its centrality measure.
  for(c in classes){

    # Filter instances by class.
    tmp <- data[which(data$class == c),]

    # Compute the centrality measure.
    centrality.measure <- centrality(tmp$speed)

    # Store the centrality measure for this class.
    params[c] <- centrality.measure
  }

  return(params)
}

The first argument is the training data and the second argument is the centrality function we want to use (the mean, by default). This function iterates through each class, computes the centrality measure based on the speed, and stores the results in a named array called params which is then returned at the end.
Most of the time, training a model involves passing the training data and any additional hyperparameters specific to each model. In this case, the centrality measure is a hyperparameter and here we want to use the mean.

The difference between parameters and hyperparameters is that the former are
learned during training. The hyperparameters are settings specific to each model
that we can define before the actual training starts.

Now that we have a function that performs the training, we need another one that
performs the actual inference or prediction on new data points. Let’s call this one
simple.classifier.predict(). Its first argument is a data frame with the instances
we want to get predictions for. The second argument is the named vector of parameters
learned during training. This function will return an array with the predicted class for each
instance in newdata.

# Define a function that predicts a class
# based on the learned parameters.
simple.classifier.predict <- function(newdata, params){

  # Variable to store the predictions of
  # each instance in newdata.
  predictions <- NULL

  # Iterate instances in newdata.
  for(i in 1:nrow(newdata)){

    instance <- newdata[i,]

    # Predict the name of the class whose
    # centrality measure is closest.
    pred <- names(which.min(abs(instance$speed - params)))

    predictions <- c(predictions, pred)
  }

  return(predictions)
}

This function iterates through each row, computes the distance to each centrality measure, and returns the name of the class that is closest. The distance computation is done with the following line of code:

pred <- names(which.min(abs(instance$speed - params)))

First, it computes the absolute difference between the speed and each centrality measure stored in params, and then it returns the name of the one with the minimum difference. Now that we have defined the training and prediction procedures, we are ready to test our classifier!
In section 1.5, two evaluation methods were presented: hold-out and k-fold cross-validation. These methods allow you to estimate how your model will perform on new data. Let’s first start with hold-out validation.
First, we need to split the data into two independent sets. We will use 70% of the data to
train our classifier and the remaining 30% to test it. The following code splits dataset into
a trainset and testset.

# Percent to be used as training data.
pctTrain <- 0.7

# Set seed for reproducibility.
set.seed(123)

idxs <- sample(nrow(dataset),
               size = nrow(dataset) * pctTrain,
               replace = FALSE)

trainset <- dataset[idxs,]
testset <- dataset[-idxs,]

The sample() function was used to select integer numbers at random from 1 to n, where
n is the total number of data points in dataset. These randomly selected data points are
the ones that will go to the train set. Thus, with the size argument we tell the function to
return 70 numbers which correspond to 70% of the total since dataset has 100 instances.

The last argument, replace, is set to FALSE because we do not want repeated numbers. This ensures that any instance belongs only to either the train or the test set. We don’t want an instance to be copied into both sets.

Now it’s time to test our functions. We can train our model using the trainset by calling our
previously defined function simple.model.train().

# Train the model using the trainset.
params <- simple.model.train(trainset, mean)

# Print the learned parameters.
print(params)
#>    tiger  leopard
#> 48.88246 54.58369

After training the model, we print the learned parameters. In this case, the mean speed for tiger is 48.88 and for leopard it is 54.58. With these parameters, we can start making predictions on our test set! We pass the test set and the newly learned parameters to our function simple.classifier.predict().

# Predict classes on the test set.
test.predictions <- simple.classifier.predict(testset, params)

# Display first predictions.
head(test.predictions)
#> [1] "tiger" "tiger" "leopard" "tiger" "tiger" "leopard"

Our predict function returns a prediction for each instance in the test set. We can use the
head() function to print the first predictions. The first two instances were classified as tigers,
the third one as a leopard, and so on.
But how good are those predictions? Since we know the true classes (also known
as the ground truth) in our test set, we can compute the performance. In this case, we will
compute the accuracy, which is the percentage of correct classifications. Note that we did not
use the class information when making predictions; we only used the speed. We pretended
that we didn't have the true class. We use the true class only to evaluate the model's
performance.

# Compute test accuracy.
sum(test.predictions == as.character(testset$class)) /
  nrow(testset)
#> [1] 0.8333333

We can compute the accuracy by counting how many predictions were equal to the true
classes and dividing that count by the total number of points in the test set. In this case, the
test accuracy was 83.3%. Congratulations! You have trained and evaluated your first
classifier.
It is also a good idea to compute the performance on the same train set that was used to
train the model.

# Compute train accuracy.
train.predictions <- simple.classifier.predict(trainset, params)
sum(train.predictions == as.character(trainset$class)) /
  nrow(trainset)
#> [1] 0.8571429

The train accuracy was 85.7%. As expected, this was higher than the test accuracy. Typically,
what you report is the performance on the test set, but we can use the performance on the
train set to look for signs of over/under-fitting, which will be covered in the following sections.

1.6.1 K-fold Cross-Validation Example

Now, let's see how k-fold cross-validation can be implemented to test our classifier. I will
choose k = 5. This means that 5 independent sets will be generated and 5 iterations
will be run.

# Number of folds.
k <- 5

set.seed(123)

# Generate random folds.
folds <- sample(k, size = nrow(dataset), replace = TRUE)

# Print how many instances ended up in each fold.
table(folds)
#> folds
#> 1 2 3 4 5
#> 21 20 23 17 19

Again, we can use the sample() function. This time we want to select random integers
between 1 and k. The total number of integers will be equal to the total number of instances
n in the entire dataset. Note that this time we set replace = TRUE; since k < n, we need to
allow repeated numbers. Each number represents the fold to which each instance belongs.
As before, we need to make sure that each instance belongs to only one of the sets. Here,
we guarantee that by assigning each instance a single fold number. We can use the
table() function to print how many instances ended up in each fold. Here, we see that the
folds will contain between 17 and 23 instances.

K-fold cross-validation consists of iterating k times. In each iteration, we select one of the
folds to function as the test set and the remaining folds are used as the train set. We can
then train a model with the train set and evaluate it with the test set. In the end, we report
the average accuracy across folds.

# Variables to store the accuracies on each fold.
test.accuracies <- NULL
train.accuracies <- NULL

for(i in 1:k){
  testset <- dataset[which(folds == i),]
  trainset <- dataset[which(folds != i),]

  params <- simple.model.train(trainset, mean)

  test.predictions <- simple.classifier.predict(testset, params)
  train.predictions <- simple.classifier.predict(trainset, params)

  # Accuracy on test set.
  acc <- sum(test.predictions == as.character(testset$class)) /
    nrow(testset)

  test.accuracies <- c(test.accuracies, acc)

  # Accuracy on train set.
  acc <- sum(train.predictions == as.character(trainset$class)) /
    nrow(trainset)

  train.accuracies <- c(train.accuracies, acc)
}

# Print mean accuracy across folds on the test set.
mean(test.accuracies)
#> [1] 0.829823

# Print mean accuracy across folds on the train set.
mean(train.accuracies)

#> [1] 0.8422414

The mean test accuracy across the 5 folds was ≈ 83%, which is very similar to the accuracy
estimated by hold-out validation.

Note that in section 1.5 a validation set was also mentioned. This one is useful when
you want to fine-tune a model and/or try different preprocessing methods on your
data. If you are using hold-out validation, you may want to split your data into
three sets: train, validation, and test. You train your model using the train set and
estimate its performance using the validation set. Then you can fine-tune your model.
For example, here, instead of the mean as the centrality measure, you could try the
median and measure the performance again with the validation set. When you are
pleased with your settings, you estimate the final performance of the model with the
test set, only once.
In the case of k-fold cross-validation, you can set aside a test set at the beginning.
Then you use the remaining data to perform cross-validation and fine-tune your model.
Within each iteration, you test the performance with the validation data. Once you
are sure you are not going to do any more parameter tuning, you can train a model with
the train and validation sets together and test the generalization performance using the test set.
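A minimal sketch of such a three-way split (the 60/20/20 proportions and the stand-in data frame below are my choices for illustration; the book does not prescribe them):

```r
# Stand-in for a dataset with 100 instances.
set.seed(1234)
dataset <- data.frame(speed = rnorm(100, mean = 51, sd = 6),
                      class = sample(c("tiger", "leopard"), 100, replace = TRUE))

# Shuffle all row indices once, then cut them into three chunks.
set.seed(123)
idxs <- sample(nrow(dataset))
trainset <- dataset[idxs[1:60], ]    # 60% for training.
valset   <- dataset[idxs[61:80], ]   # 20% for fine-tuning decisions.
testset  <- dataset[idxs[81:100], ]  # 20% touched only once, at the end.
```

You would compare, say, mean vs. median on valset, pick the winner, and only then report the final performance on testset.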

One of the benefits of machine learning is that it allows us to find patterns based on
data, freeing us from having to program hard-coded rules. This results in more scalable
and flexible code. If, for some reason, instead of 2 classes we now needed to add another
class, for example, a jaguar, the only thing we would need to do is update our database and
retrain our model. We don't need to modify the internals of the algorithms; they will
update themselves based on the data.
We can try this by adding a third class to the dataset. The simple_model.R script
shows how to add a new class, 'jaguar', to the dataset. It then trains the model as
usual and performs predictions.
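As a sketch of that idea (the jaguar speeds below are invented for illustration, and simple.model.train() is re-defined here in a minimal form consistent with its described behavior: it computes the chosen centrality measure of speed for each class):

```r
# Minimal stand-in for the book's training function.
simple.model.train <- function(data, centrality = mean){
  tapply(data$speed, data$class, centrality)
}

# Two-class stand-in dataset plus a hypothetical third class.
set.seed(1234)
dataset <- data.frame(speed = c(rnorm(50, 49, 5), rnorm(50, 55, 5)),
                      class = rep(c("tiger", "leopard"), each = 50))
jaguars <- data.frame(speed = rnorm(25, mean = 60, sd = 6),
                      class = "jaguar")
dataset <- rbind(dataset, jaguars)

# Retrain: the algorithm is untouched; only the data changed.
params <- simple.model.train(dataset, mean)
print(params)  # now holds three learned means: jaguar, leopard, tiger.
```

The prediction functions keep working unchanged because they look classes up by name in params.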

1.7 Simple Regression Example

simple_model.R

As opposed to classification models, where the aim is to predict a category, regression
models predict numeric values. To exemplify this, we can use our felines dataset, but
this time we will try to predict speed based on the type of feline. The class column will be
treated as a feature variable and speed will be the response variable. Since there is only
one predictor, and it is categorical, the best thing we can do to implement our regression
model is to predict the mean speed depending on the class.
Recall that for classification, our learned parameters consisted of the means for each class.
Thus, we can reuse our training function simple.model.train(). All we need to do is define
a new predict function that returns the speed based on the class. This is the opposite of
what we did in classification (return the class based on the speed).

# Define a function that predicts speed
# based on the type of feline.
simple.regression.predict <- function(newdata, params){

  # Variable to store the predictions of
  # each instance in newdata.
  predictions <- NULL

  # Iterate instances in newdata.
  for(i in 1:nrow(newdata)){

    instance <- newdata[i,]

    # Return the mean value of the corresponding class stored in params.
    pred <- params[which(names(params) == instance$class)]

    predictions <- c(predictions, pred)
  }

  return(predictions)
}

The simple.regression.predict() function iterates through each instance in newdata and
returns the mean speed from params for the corresponding class.
Again, we can validate our model using hold-out validation. The train set will have 70%
of the instances and the remaining 30% will be used as the test set.

pctTrain <- 0.7

set.seed(123)
idxs <- sample(nrow(dataset),
               size = nrow(dataset) * pctTrain,
               replace = FALSE)

trainset <- dataset[idxs,]
testset <- dataset[-idxs,]

# Reuse our train function.
params <- simple.model.train(trainset, mean)

print(params)
#> tiger leopard
#> 48.88246 54.58369

Here, we reused our previous function simple.model.train() to learn the parameters and
then printed them. We can then use those parameters to infer the speed: if a test instance
belongs to the class 'tiger', return 48.88; if it is of class 'leopard', return 54.58.

# Predict speeds on the test set.
test.predictions <-
  simple.regression.predict(testset, params)

# Print first predictions.
head(test.predictions)
#> 48.88246 54.58369 54.58369 48.88246 48.88246 54.58369

Since these are numeric predictions, we cannot use accuracy, as with classification, to evaluate
the performance. One way to evaluate how good these predictions are is by computing
the mean absolute error (MAE). This measure tells you, on average, how much each
prediction deviates from its true value. It is computed by subtracting each prediction from
its real value and taking the absolute value: |predicted − realValue|. This can be visualized
in Figure 1.11. The distances between the true and predicted values are the errors, and the
MAE is the average of all those errors.

Figure 1.11: Prediction errors.

We can use the following code to compute the MAE:

# Compute mean absolute error (MAE) on the test set.
mean(abs(test.predictions - testset$speed))
#> [1] 2.562598

The MAE on the test set was 2.56. That is, on average, our simple model deviated by
2.56 km/hr from the true values, which is not bad. We can also compute the
MAE on the train set.

# Predict speeds on the train set.
train.predictions <-
  simple.regression.predict(trainset, params)

# Compute mean absolute error (MAE) on the train set.
mean(abs(train.predictions - trainset$speed))
#> [1] 2.16097

The MAE on the train set was 2.16, which is better than the test set MAE (smaller MAE
values are preferred). Now you have built, trained, and evaluated a regression
model!

This was a simple example, but it illustrates the basic idea of regression and how it differs
from classification. It also shows how the performance of regression models is typically
evaluated with the MAE, as opposed to the accuracy used in classification. In chapter 8,
more advanced methods, such as neural networks, will be introduced, which can also be used
to solve regression problems.
In this section, we have gone through several phases of the data analysis pipeline. We did a
simple exploratory analysis of the data and then built, trained, and validated models
to perform both classification and regression. Finally, we estimated the overall performance
of the models and presented the results. Here, we coded our models from scratch, but in
practice you typically use models that have already been implemented and tested. All in all,
I hope these examples have given you a feeling of what it is like to work with machine learning.

1.8 Underfitting and Overfitting

From the felines classification example, we saw how we can separate two classes by computing
the mean for each class. For the two-class problem, this is equivalent to having a decision
line between the two means (Figure 1.12). Everything to the right of this decision line is
closer to the mean that corresponds to 'leopard', and everything to the left is closer to the
mean of 'tiger'. In this case, the classification function is a vertical line, and during learning,
the position of the line that reduces the classification error is searched for. We implicitly
estimated the position of that line by finding the mean values for each of the classes.
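Concretely, since everything right of the line is closer to the leopard mean and everything left of it is closer to the tiger mean, the decision line sits at the midpoint between the two means. A small sketch using the parameters learned earlier:

```r
# Means learned earlier in the chapter.
params <- c(tiger = 48.88246, leopard = 54.58369)

# The vertical decision line is halfway between the two means (about 51.73).
boundary <- unname((params["tiger"] + params["leopard"]) / 2)

# Nearest-mean classification is equivalent to thresholding at the boundary.
classify <- function(speed) ifelse(speed < boundary, "tiger", "leopard")
classify(c(45, 60))  # predicts "tiger", then "leopard".
```

Any speed below the boundary is assigned to 'tiger' and anything above it to 'leopard'.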
Figure 1.12: Decision line between the two classes.

Now, imagine that we not only have access to the speed but also to the felines' age. This
extra information could help us reduce the prediction error, since age plays an important role
in how fast a feline is. Figure 1.13 (left) shows how it looks if we plot age on the
x-axis and speed on the y-axis. Here, we can see that for both tigers and leopards, the speed
seems to increase as age increases. Then, at some point, as age increases, the speed begins to
decrease.

Figure 1.13: Underfitting and overfitting.
Constructing a classifier with a single vertical line as we did before will not work in this
2-dimensional case, where we have 2 predictors. We will need a more complex decision boundary
(function) to separate the two classes. One approach would be to use a line as before, but
this time we allow the line to have an angle. Everything below the line is classified as 'tiger'
and everything else as 'leopard'. Thus, the learning phase involves finding the position and
angle of the line that achieve the smallest error.
Figure 1.13 (left) shows a possible decision line. Even though this function is more complex
than a vertical line, it still produces a lot of misclassifications (it does not clearly separate
both classes). This is called underfitting; that is, the model is so simple that it is not able
to capture the underlying data patterns.
Let's try a more complex function, for example, a curve. Figure 1.13 (middle) shows that a
curve does a better job at separating the two classes with fewer misclassifications, but still,
3 leopards were misclassified as tigers and 1 tiger was misclassified as a leopard. Can we do
better than that? Yes, just keep increasing the complexity of the decision function.
Figure 1.13 (right) shows a more complex function that was able to separate the two classes
with 100% accuracy or, equivalently, with a 0% error. However, there is a problem. This
function learned how to accurately separate the training data, but it is likely that it will
not do as well on a new test set. The function became so specialized in this data that
it failed to capture the overall pattern. This is called overfitting. In this case, the model
starts to 'memorize' the train set instead of finding general patterns applicable to new unseen
instances. If we were to choose a model, the best one would be the one in the middle. Even
though it was not perfect on the train data, it will do better than the other models when
evaluated on new test data.

Figure 1.14: Model complexity vs. train and validation error.
Overfitting is a common problem in machine learning. One way to know if a model is
overfitting is if the error on the train set is low but high on a new set (it can be a test or
validation set). Figure 1.14 illustrates this idea. Too-simple models will have a high error on
both the train and validation sets. As the complexity of the model increases, the error on
both sets is reduced. Then, at some point, the complexity of the model becomes so high that
it gets too specific to the train set and fails to perform well on a new independent set.
In this example, we saw how underfitting and overfitting can affect the generalization
performance of a model in a classification setting, but the same can occur in regression problems.
There are several methods that aim to reduce overfitting, but many of them are specific to
each type of model. For example, with decision trees (covered in chapter 2), one way to
reduce overfitting is to limit their depth or to build ensembles of trees (chapter 3). Neural
networks are also highly prone to overfitting, since they can be very complex with millions of
parameters, and there are several techniques to reduce the effect of overfitting as well (chapter
8).

1.9 Bias and Variance

So far, we have seen how we can train predictive models and evaluate how well they do
on new data (test/validation sets). The main goal is to have predictive models with
a low error rate on new data. Understanding the source of the error can help us make
more informed decisions when building predictive models. The test error, also known as the
generalization error of a predictive model, can be decomposed into three components: a bias,
a variance, and noise.

Figure 1.15: Error due to bias and variance.
Noise. This component is related to the data itself, and there is nothing we can do about
it. For example, two instances having the same values in their features but with different
labels.
Bias. The bias is how much the average prediction differs from the true value. Note the
average keyword. This means we make the assumption that an infinite (or very large)
number of train sets can be generated, and for each of them a predictive model is trained. Then,
we average the predictions of all those models and see how much that average differs from the
true value.
Variance. The variance relates to how much the predictions for a given data point change
when training a model using a different train set each time.
Figure 1.15 graphically illustrates different scenarios for high/low bias and variance.
The center of the target represents the true value, and the small red dots represent the
predictions on a hypothetical test set.
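These definitions can be made concrete with a small simulation (my own toy setup, not from the book): we repeatedly draw train sets from a known distribution, let each trained "model" predict its sample mean, and measure how the predictions behave across train sets:

```r
set.seed(123)
true.value <- 50    # the value the model is trying to predict.

# Simulate 1000 train sets of 30 points each; each trained 'model'
# predicts the mean of its own train set.
preds <- replicate(1000, mean(rnorm(30, mean = true.value, sd = 5)))

bias <- mean(preds) - true.value  # average prediction vs. true value.
variance <- var(preds)            # spread of predictions across train sets.
c(bias = bias, variance = variance)
```

Here the bias is close to zero (the sample mean is an unbiased estimator) and the variance is roughly 5^2/30 ≈ 0.83, shrinking as the train sets grow.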
Bias and variance are closely related to underfitting and overfitting. High variance is a sign
of overfitting. That is, a model is so complex that it will fit a particular train set very well.
Every time it is trained with a different train set, the train error will be low, but it will likely
generate very different predictions for the same test points and a much higher test error.

Figure 1.16: High variance and overfitting.

Figure 1.16 illustrates the relation between overfitting and high variance with a regression
problem. Given a feature x, two models are trained to predict y: i) a complex model (top
row), and ii) a simpler model (bottom row). Both models are fitted with two train sets (a
and b) sampled from the same distribution. The complex model fits the train data perfectly
but makes very different predictions (big ∆) for the same test point when using a different
train set. The simpler model does not fit the train data as well, but it has a smaller ∆ and
a lower error on the test point as well. Visually, the function (red curve) of the complex
model also varies a lot across train sets, whereas the shapes of the simpler model's functions
look very similar.
On the other hand, if a model is too simple, it will underfit, causing highly biased results and
failing to capture the input-output relationships. This results in a high train error
and, in consequence, a high test error as well.

A formal formulation of the error decomposition can be consulted in the book “The
Elements of Statistical Learning: Data Mining, Inference, and Prediction” (Hastie et al.,
2009).

1.10 Summary

In this chapter, several introductory machine learning concepts and terminology were intro-
duced. These concepts are the basis for the methods that will be covered in the following
chapters.

• Behavior can be defined as “an observable activity in a human or animal”.
• Three main reasons why we may want to analyze behavior automatically were discussed: react, understand, and document/archive.
• One way to observe behavior automatically is through the use of sensors and/or data.
• Machine Learning consists of a set of computational algorithms that automatically find useful patterns and relationships from data.
• The three main building blocks of machine learning are: data, algorithms, and models.
• The main types of machine learning are supervised learning, semi-supervised learning, partially-supervised learning, and unsupervised learning.
• In R, data is usually stored in data frames. Data frames have variables (columns) and instances (rows). Depending on the task, variables can be independent or dependent.
• A predictive model is a model that takes some input and produces an output. Classifiers and regressors are predictive models.
• A data analysis pipeline consists of several tasks, including data collection, cleaning, preprocessing, training/evaluation, and presentation of results.
• Model evaluation can be performed with hold-out validation or k-fold cross-validation.
• Overfitting occurs when a model ‘memorizes’ the training data instead of finding useful underlying patterns.
• The test error can be decomposed into noise, bias, and variance.
Chapter 2

Predicting Behavior with Classification Models

In the previous chapter, the concept of classification was introduced along with a simple
example (feline type classification). This chapter covers classification methods in more
depth and their application to behavior analysis tasks. Moreover, additional
performance metrics will be introduced. The chapter begins with an introduction to
k-nearest neighbors (k-NN), which is one of the simplest classification algorithms. Then, an
example of k-NN applied to indoor location using Wi-Fi signals is presented. This chapter
also covers Decision Trees and Naive Bayes classifiers and how they can be used for
activity recognition based on smartphone accelerometer data. After that, Dynamic Time
Warping (DTW), a method for aligning time series, is introduced, and an example of how
it can be used for hand gesture recognition is presented.

2.1 k-nearest Neighbors

k-nearest neighbors (k-NN) is one of the simplest classification algorithms. The predicted
class for a given query instance is the most common class among its k nearest neighbors. A query
instance is just the instance we want to make predictions on. In its most basic form, the
algorithm consists of two steps:

1. Compute the distance between the query instance and all training instances.
2. Return the most common class among the k nearest training instances.

This is a type of lazy-learning algorithm because all the computations take place at prediction
time. There are no parameters to learn at training time! The training phase consists
only of storing the training instances so they can be accessed at prediction time.
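The two steps can be sketched in R as follows (a minimal illustration of the algorithm just described, not the book's implementation; it assumes numeric features stored in a matrix and a vector of class labels):

```r
# Predict the class of a single query instance with k-NN.
knn.predict <- function(query, train.features, train.labels, k = 3){
  # Step 1: Euclidean distance from the query to every training instance.
  dists <- apply(train.features, 1,
                 function(row) sqrt(sum((row - query)^2)))
  # Step 2: most common class among the k nearest training instances.
  nearest <- train.labels[order(dists)[1:k]]
  names(which.max(table(nearest)))
}

# Tiny example with two 2-dimensional classes.
train.x <- matrix(c(1, 1,  1, 2,  5, 5,  6, 5), ncol = 2, byrow = TRUE)
train.y <- c("a", "a", "b", "b")
knn.predict(c(1.5, 1.5), train.x, train.y, k = 3)
#> [1] "a"
```

Note that ties in the majority vote are broken here by whichever class which.max() finds first; real implementations handle ties and distance weighting more carefully.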
which drew aside, and the mass rushed through.
Along with them came the cannon, which crossed the yard with
them, mounted the steps, and reached the head of the stairs in their
company. Here stood the city officials in their scarfs of office.
"What do you intend doing with a piece of artillery?" they
challenged. "Great guns in the royal apartments! Do you believe
anything is to be gained by such violence?"
"Quite right," said the ringleaders, astonished themselves to see the
gun there; and they turned it round to get it down-stairs. The hub
caught on the jamb, and the muzzle gaped on the crowd.
"Why, hang them all, they have got cannon all over the palace!"
commented the new-comers, not knowing their own artillery.
Police-Magistrate Mouchet, a deformed dwarf, ordered the men to
chop the wheel clear, and they managed to hack the door-jamb
away so as to free the piece, which was taken down to the yard.
This led to the report that the mob were smashing all the doors in.
Some two hundred noblemen ran to the palace, not with the hope of
defending it, but to die with the king, whose life they deemed
menaced. Prominent among these was a man in black, who had
previously offered his breast to the assassin's bullet, and who always
leaped like a last Life-Guard between danger and the king, from
whom he had tried to conjure it. This was Gilbert.
After being excited by the frightful tumult, the king and queen
became used to it.
It was half past three, and it was hoped that the day would close
with no more harm done.
Suddenly, the sound of the ax blows was heard above the noise of
clamor, like the howling of a coming tempest. A man darted into the
king's sleeping-room and called out:
"Sire, let me stand by you, and I will answer for all."
It was Dr. Gilbert, seen at almost periodical intervals, and in all the
"striking situations" of the tragedy in play.
"Oh, doctor, is this you? What is it?" King and queen spoke together.
"The palace is surrounded, and the people are making this uproar in
wanting to see you."
"We shall not leave you, sire," said the queen and Princess Elizabeth.
"Will the king kindly allow me for an hour such power as a captain
has over his ship?" asked Gilbert.
"I grant it," replied the monarch. "Madame, hearken to Doctor
Gilbert's advice, and obey his orders, if needs must." He turned to
the doctor: "Will you answer to me for the queen and the dauphin?"
"I do, or I shall die with them; it is all a pilot can say in the
tempest!"
The queen wished to make a last effort, but Gilbert barred the way
with his arms.
"Madame," he said, "it is you and not the king who run the real
danger. Rightly or wrongly, they accuse you of the king's resistance,
so that your presence will expose him without defending him. Be the
lightning-conductor—divert the bolt, if you can!"
"Then let it fall on me, but save my children!"
"I have answered for you and them to the king. Follow me."
He said the same to Princess Lamballe, who had returned lately from
London, and the other ladies, and guided them to the Council Hall,
where he placed them in a window recess, with the heavy table
before them.
The queen stood behind her children—Innocence protecting
Unpopularity, although she wished it to be the other way.
"All is well thus," said Gilbert, in the tone of a general commanding a
decisive operation; "do not stir."
There came a pounding at the door, which he threw open with both
folds, and as he knew there were many women in the crowd, he
cried:
"Walk in, citizenesses; the queen and her children await you."
The crowd burst in as through a broken dam.
"Where is the Austrian? where is the Lady Veto?" demanded five
hundred voices.
It was the critical moment.
"Be calm," said Gilbert to the queen, knowing that all was in
Heaven's hand, and man was as nothing. "I need not recommend
you to be kind."
Preceding the others was a woman with her hair down, who
brandished a saber; she was flushed with rage—perhaps from
hunger.
"Where is the Austrian cat? She shall die by no hand but mine!" she
screamed.
"This is she," said Gilbert, taking her by the hand and leading her up
to the queen.
"Have I ever done you a personal wrong?" demanded the latter, in
her sweetest voice.
"I can not say you have," faltered the woman of the people, amazed
at the majesty and gentleness of Marie Antoinette.
"Then why should you wish to kill me?"
"Folks told me that you were the ruin of the nation," faltered the
abashed young woman, lowering the point of her saber to the floor.
"Then you were told wrong. I married your King of France, and am
mother of the prince whom you see here. I am a French woman,
one who will nevermore see the land where she was born; in France
alone I must dwell, happy or unhappy. Alas! I was happy when you
loved me." And she sighed.
The girl dropped the sword, and wept.
"Beg your pardon, madame, but I did not know what you were like.
I see you are a good sort, after all."
"Keep on like that," prompted Gilbert, "and not only will you be
saved, but all these people will be at your feet in an hour."
Intrusting her to some National Guardsmen and the War Minister,
who came in with the mob, he ran to the king.
Louis had gone through a similar experience. On hastening toward
the crowd, as he opened the Bull's-eye Room, the door panels were
dashed in, and pikes, bayonets, and axes showed their points and
edges.
"Open the doors!" cried the king.
Servants heaped up chairs before him, and four grenadiers stood in
front, but he made them put up their swords, as the flash of steel
might seem a provocation.
A ragged fellow, with a knife-blade set in a pole, darted at the king,
yelling:
"Take that for your veto!"
One grenadier, who had not yet sheathed his sword, struck down the
stick with the blade. But it was the king who, entirely recovering self-
command, put the soldier aside with his hand, and said:
"Let me stand forward, sir. What have I to fear amid my people?"
Taking a forward step, Louis XVI., with a majesty not expected in
him, and a courage strange heretofore in him, offered his breast to
the weapons of all sorts directed against him.
"Hold your noise!" thundered a stentorian voice in the midst of the
awful din. "I want a word in here."
A cannon might have vainly sought to be heard in this clamor, but at
this voice all the vociferation ceased. This was the butcher Legendre.
He went up almost to touching the king, while they formed a ring
round the two.
Just then, on the outer edge of the circle, a man made his
appearance, and behind the dread double of Danton, the king
recognized Gilbert, pale and serene of face. The questioning glance
implying: "What have you done with the queen?" was answered by
the doctor's smile to the effect that she was in safety. He thanked
him with a nod.
"Sirrah," began Legendre.
This expression, which seemed to indicate that the sovereign was
already deposed, made the latter turn as if a snake had stung him.
"Yes, sir, I am talking to you, Veto," went on Legendre. "Just listen
to us, for it is our turn to have you hear us. You are a double-dealer,
who have always cheated us, and would try it again, so look out for
yourself. The measure is full, and the people are tired of being your
plaything and victim."
"Well, I am listening to you, sir," rejoined the king.
"And a good thing, too. Do you know what we have come here for?
To ask the sanction of the decrees and the recall of the ministers.
Here is our petition—see!"
Taking a paper from his pocket, he unfolded it, and read the same
menacing lines which had been heard in the House. With his eyes
fixed on the speaker, the king listened, and said, when it was ended,
without the least apparent emotion:
"Sir, I shall do what the laws and the Constitution order me to do!"
"Gammon!" broke in a voice; "the Constitution is your high horse,
which lets you block the road of the whole country, to keep France
in-doors, for fear of being trampled on, and wait till the Austrians
come up to cut her throat."
The king turned toward this fresh voice, comprehending that it was
a worse danger. Gilbert also made a movement and laid his hand on
the speaker's shoulder.
"I have seen you somewhere before, friend," remarked the king.
"Who are you?"
He looked with more curiosity than fear, though this man wore a
front of terrible resolution.
"Ay, you have seen me before, sire. Three times: once, when you
were brought back from Versailles; next at Varennes; and the last
time, here. Sire, bear my name in mind, for it is of ill omen. It is
Billet."
At this the shouting was renewed, and a man with a lance tried to
stab the king; but Billet seized the weapon, tore it from the wielder's
grip, and snapped it across his knee.
"No foul play," he said; "only one kind of steel has the right to touch
this man: the ax of the executioner! I hear that a King of England
had his head cut off by the people whom he betrayed—you ought to
know his name, Louis. Don't you forget it."
"'Sh, Billet!" muttered Gilbert.
"Oh, you may say what you like," returned Billet, shaking his head;
"this man is going to be tried and doomed as a traitor."
"Yes, a traitor!" yelled a hundred voices; "traitor, traitor!"
Gilbert threw himself in between.
"Fear nothing, sire, and try by some material token to give
satisfaction to these mad men."
Taking the physician's hand, the king laid it on his heart.
"You see that I fear nothing," he said; "I received the sacraments
this morning. Let them do what they like with me. As for the
material sign which you suggest I should display—are you satisfied?"
Taking the red cap from a by-stander, he set it on his own head. The
multitude burst into applause.
"Hurrah for the king!" shouted all the voices.
A fellow broke through the crowd and held up a bottle.
"If fat old Veto loves the people as much as he says, prove it by
drinking our health."
"Do not drink," whispered a voice. "It may be poisoned."
"Drink, sire, I answer for the honesty," said Gilbert.
The king took the bottle, and saying, "To the health of the people,"
he drank. Fresh cheers for the king resounded.
"Sire, you have nothing to fear," said Gilbert; "allow me to return to
the queen."
"Go," said the other, gripping his hand.
More tranquil, the doctor hastened to the Council Hall, where he
breathed still easier after one glance. The queen stood in the same
spot; the little prince, like his father, was wearing the red cap.
In the next room was a great hubbub; it was the reception of
Santerre, who rolled into the hall.
"Where is this Austrian wench?" demanded he.
Gilbert cut slanting across the hall to intercept him.
"Halloo, Doctor Gilbert!" said he, quite joyfully.
"Who has not forgotten that you were one of those who opened the
Bastile doors to me," replied the doctor. "Let me present you to the
queen."
"Present me to the queen?" growled the brewer.
"You will not refuse, will you?"
"Faith, I'll not. I was going to introduce myself; but as you are in the
way—"
"Monsieur Santerre needs no introduction," interposed the queen. "I
know how at the famine time he fed at his sole expense half the St.
Antoine suburb."
Santerre stopped, astonished; then, his glance happening to fall,
embarrassed, on the dauphin, whose perspiration was running down
his cheeks, he roared:
"Here, take that sweater off the boy—don't you see he is
smothering?"
The queen thanked him with a look. He leaned on the table, and
bending toward her, he said in an under-tone:
"You have a lot of clumsy friends, madame. I could tell you of some
who would serve you better."
An hour afterward all the mob had flowed away, and the king,
accompanied by his sister, entered the room where the queen and
his children awaited him.
She ran to him and threw herself at his feet, while the children
seized his hands, and all acted as though they had been saved from
a shipwreck. It was only then that the king noticed that he was
wearing the red cap.
"Faugh!" he said; "I had forgotten!"
Snatching it off with both hands, he flung it far from him with
disgust.
The evacuation of the palace was as dull and dumb as the taking
had been gleeful and noisy. Astonished at the little result, the mob
said:
"We have not made anything; we shall have to come again."
In fact, it was too much for a threat, and not enough for an attempt
on the king's life.
Louis had been judged on his reputation, and recalling his flight to
Varennes, disguised as a serving-man, they had thought that he
would hide under a table at the first noise, and might be done to
death in the scuffle, like Polonius behind the arras.
Things had happened otherwise; never had the monarch been
calmer, never so grand. In the height of the threats and the insults
he had not ceased to say: "Behold your king!"
The Royalists were delighted, for, to tell the truth, they had carried
the day.
CHAPTER VI.
"THE COUNTRY IS IN DANGER!"

The king wrote to the Assembly to complain of the violation of his
residence, and he issued a proclamation to "his people." So it
appeared there were two peoples—the king's, and those he
complained of.
On the twenty-fourth, the king and queen were cheered by the
National Guards, whom they were reviewing, and on this same day,
the Paris Directory suspended Mayor Petion, who had told the king
to his face that the city was not riotous.
Whence sprung such audacity?
Three days after, the murder was out.
Lafayette came to beard the Assembly in its House, taunted by a
member, who had said, when he wrote to encourage the king in his
opposition and to daunt the representatives:
"He is very saucy in the midst of his army; let us see if he would talk
as big if he stood among us."
He escaped censure by a nominal majority—a victory worse than a
defeat.
Lafayette had again sacrificed his popularity for the Royalists.
He cherished a last hope. With the enthusiasm to be kindled among
the National Guards by the king and their old commander, he
proposed to march on the Assembly and put down the Opposition,
while in the confusion the king should gain the camp at Maubeuge.
It was a bold scheme, and almost sure of success in the state of
men's minds.
Unfortunately, Danton ran to Petion at three in the morning with the
news, and the review was countermanded.
Who had betrayed the king and the general? The queen, who had
said she would rather be lost than owe safety to Lafayette.
She was helping fate, for she was doomed to be slain by Danton.
Had she shown less spite, the Girondists might have been crushed.
They were determined not to be caught napping another time.
It was necessary to restore the revolutionary current to its old
course, for it had been checked and was running up-stream.
The soul of the party, Mme. Roland, hoped to do this by rousing the
Assembly. She chose the orator Vergniaud to make the appeal, and
in a splendid speech, he shouted from the rostrum what was already
circulating in an under-tone:
"The country is in danger!"
The effect was like a waterspout; the whole House, even to the
Royalists, spectators, officials, all were enveloped and carried away
by this mighty cyclone; all roared with enthusiasm.
That same evening Barbaroux wrote to his friend Rebecqui, at
Marseilles:
"Send me five hundred men eager to die."
On the eleventh of July, the Assembly declared the country to be in
danger, but the king withheld his authorization until the twenty-first,
late at night. Indeed, this call to arms was an admission that the
ruler was impotent, for the nation would not be asked to help herself
unless the king either could not or would not act.
Great terror made the palace quiver in the interval, as a plot was
expected to break out on the fourteenth, the anniversary of the
taking of the Bastile—a holiday.
Robespierre had sent an address out from the Jacobin Club which
suggested regicide.
So persuaded was the Court party, that the king was induced to
wear a shirt of mail to protect him against the assassin's knife, and
Mme. Campan had another for the queen, who refused to don it.
"I should be only too happy if they would slay me," she observed, in
a low voice. "Oh, God, they would do me a greater kindness than
Thou didst in giving me life! they would relieve me of a burden!"
Mme. Campan went out, choking. The king, who was in the corridor,
took her by the hand and led her into the lobby between his rooms
and his son's, and stopping, groped for a secret spring; it opened a
press, perfectly hidden in the wall, with the edges guarded by the
moldings. A large portfolio of papers was in the closet, with gold
coin on the shelves.
The case of papers was so heavy that the lady could not lift it, and
the king carried it to her rooms, saying that the queen would tell her
how to dispose of it. She thrust it between the bed and the
mattress, and went to the queen, who said:
"Campan, those are documents fatal to the king if he were placed on
trial, which the Lord forbid. Particularly—which is why, no doubt, he
confides it all to you—there is a report of a council, in which the king
gave his opinion against war; he made all the ministers sign it, and
reckons on this document being as beneficial in event of a trial as
the others may be hurtful."
The July festival arrived. The idea was to celebrate the triumph of
Petion over the king—that of murdering the latter not being probably
entertained.
Suspended in his functions by the Assembly, Petion was restored to
them on the eve of the rejoicings.
At eleven in the morning, the king came down the grand staircase
with the queen and the royal children. Three or four thousand
troops, of unknown tendencies, escorted them. In vain did the
queen seek on their faces some marks of sympathy; the kindest
averted their faces.
There was no mistaking the feeling of the crowd, for cheers for
Petion rose on all sides. As if, too, to give the ovation a more durable
stamp than momentary enthusiasm, the king and the queen could
read on all hats a lettered ribbon: "Petion forever!"
The queen was pale and trembling. Convinced that a plot was aimed
at her husband's life, she started at every instant, fancying she saw
a hand thrust out to bring down a dagger or level a pistol.
On the parade-ground, the monarch alighted, took a place on the
left of the Speaker of the House, and with him walked up to the
Altar of the Country. The queen had to separate from her lord here
to go into the grand stand with her children; she stopped, refusing
to go any further until she saw how he got on, and kept her eyes on
him.
At the foot of the altar, one of those rushes came which is common
to great gatherings. The king disappeared as though submerged.
The queen shrieked, and made as if to rush to him; but he rose into
view anew, climbing the steps of the altar.
Among the ordinary symbols figuring in these feasts, such as justice,
power, liberty, etc., one glittered mysteriously and dreadfully under
black crape, carried by a man clad in black and crowned with
cypress. This weird emblem particularly caught the queen's eyes.
She was riveted to the spot, and, though a little reassured as to the
king's fate, she could not take her gaze from this somber apparition.
Making an effort to speak, she gasped, without addressing any one
specially:
"Who is that man dressed in mourning?"
"The death's-man," replied a voice which made her shudder.
"And what has he under the veil?" continued she.
"The ax which chopped off the head of King Charles I."
The queen turned round, losing color, for she thought she
recognized the voice. She was not mistaken; the speaker was the
magician who had shown her the awful future in a glass at Taverney,
and warned her at Sèvres and on her return from Varennes—
Cagliostro, in fact.
She screamed, and fell fainting into Princess Elizabeth's arms.
One week subsequently, on the twenty-second, at six in the
morning, all Paris was aroused by the first of a series of minute
guns. The terrible booming went on all through the day.
At day-break the six legions of the National Guards were collected at
the City Hall. Two processions were formed throughout the town and
suburbs to spread the proclamation that the country was in danger.
Danton had the idea of this dreadful show, and he had intrusted the
details to Sergent, the engraver, an immense stage-manager.
Each party left the Hall at six o'clock.
First marched a cavalry squadron, with the mounted band playing a
funeral march, specially composed. Next, six field-pieces, abreast
where the road-way was wide enough, or in pairs. Then four heralds
on horseback, bearing ensigns labeled
"Liberty"—"Equality"—"Constitution"—"Our Country." Then came
twelve city officials, with swords by the sides and their scarfs on.
Then, all alone, isolated like France herself, a National Guardsman,
in the saddle of a black horse, holding a large tri-color flag, on which
was lettered:
"CITIZENS, THE COUNTRY IS IN DANGER!"
In the same order as the preceding, rolled six guns with weighty
jolting and heavy rumbling, National Guards and cavalry at the rear.