EEG Signal Processing
in Brain-Computer Interface
By
Mohammad-Mahdi Moazzami
A THESIS
Submitted to
Michigan State University
in partial fulfillment of the requirements
for the degree of
MASTER OF SCIENCE
Computer Science
2012
Abstract
It has been known for a long time that as neurons within the brain fire, some of the resulting electrical activity can be measured at the scalp. Utilizing EEG measurements as inputs to control devices has nevertheless remained an emerging technology over the past 20 years: current BCIs suffer from many problems, including inaccuracies, delays between thought and action, false positive detections, variance between people, high costs, and the constraints of invasive technologies. This thesis describes a platform that uses a thought-based BCI to control cursor movement. The research also assists the analysis of EEG data by introducing the algorithms and techniques useful in processing EEG signals and inferring the desired actions from the thoughts. It also offers a brief look at the potential of future research based on the platform provided.
To my lovely parents,
who have offered me unconditional love and care throughout the course of my life.
Acknowledgements
I would like to thank my advisor, Prof. Matt Mutka, for his constant support
and guidance rendered during my studies and this thesis work. I am grateful to Dr.
Esfahanian and Dr. Xiao for serving on my committee. I would also like to thank
the Computer Science Department at Michigan State University for providing the
facilities for my studies, without which my research would have been impossible.
Furthermore, I extend my gratitude towards the Emotiv Team who have developed
a great cost effective product for anyone looking to get into EEG research and who
have answered many of my questions on their closely monitored forums. Lastly I
TABLE OF CONTENTS
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 ElectroEncephaloGraphy(EEG) . . . . . . . . . . . . . . . . . . . . . 1
2 Background Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Brain-Computer Interface . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Pattern Recognition and Machine Learning in BCI systems . . . . . . 15
2.2.1 Signal acquisition and quality issues . . . . . . . . . . . . . . . 17
2.2.2 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.3 Feature Extraction . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.4 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.1 Training Emotiv engine and Dial a phone by thoughts . . . . . . . . . 49
A Code Snippets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
BIBLIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
LIST OF TABLES
LIST OF FIGURES
2.1 BCI-Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 BCI Components Details . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 ERP Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 ERP vs. EEG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5 Multi-Sensor Feature Space: . . . . . . . . . . . . . . . . . . . . . . . 21
2.6 PCA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.7 SVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.8 ANN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.9 Logistic function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.10 HMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.1 Headset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2 Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3 Emotiv EPOC API . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.4 EmoEngine Sequence Diagram . . . . . . . . . . . . . . . . . . . . . . 43
3.5 Right Wink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.6 Head Movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Chapter 1
Introduction
Who wouldn't love to control a device, a computer or a system with their mind? Interfaces between the brain and computers have been an essential element of science
fiction where they are used in a variety of applications from controlling powered
exoskeletons, robots, and artificial limbs to creating art envisioned by the user to
allowing for machine-assisted telepathy.
This fantasy is not fully realized yet; however, simple BCIs do currently exist, and research and public interest in them continue to grow.
This research explores the process of creating a novel BCI that utilizes the Emotiv EPOC System to measure EEG waves and control the mouse cursor to perform an action such as dialling a phone.
1.1 ElectroEncephaloGraphy (EEG)
EEG waves are created by the firing of neurons in the brain and were first measured by Hans Berger in 1924. His discovery sparked intense research in utilizing these electrical measurements in the fields of neuroscience and psychology [23].
EEG waves are measured using electrodes attached to the scalp which are sen-
sitive to changes in postsynaptic potentials of neurons in the cerebral cortex. Post-
synaptic potentials are created by the combination of inhibitory and excitatory
potentials located in the dendrites. These potentials arise across large, locally synchronized populations of neurons, and the resulting rhythmic activity is dependent upon mental state and can be influenced by level of alertness or various mental diseases. One of the historical downsides of EEG
measurement has been the corruption of EEG data by artifacts, which are electrical
signals that are picked up by the sensors that do not originate from cortical neurons.
One of the most common causes of artifacts is eye movement and blinking; however,
other causes can include the use of scalp, neck, or other muscles or even poor contact
between the scalp and the electrodes [30]. Many EEG systems attempt to reduce
artifacts and general noise by utilizing reference electrodes placed in locations where
there is little cortical activity and attempting to filter out correlated patterns [21].
The human brain is the site of consciousness, allowing humans to think, learn and
create. The brain is broadly divided into the cerebrum, cerebellum, limbic system
and the brain stem. The cerebrum is covered with a symmetric, convoluted cortex divided into left and right hemispheres.
Band Frequency(Hz)
Delta 1 to 4
Theta 4 to 7
Alpha 7 to 13
Beta 13 to 30
Gamma 30+
Table 1.1: EEG Bands and Frequencies.
Figure 1.1: EEG Frequencies - The EEG Frequencies and different wave forms -
Image reprinted from [42]. ”For interpretation of the references to color in this and
all other figures, the reader is referred to the electronic version of this thesis.”
Figure 1.2: International 10-20 Electrode Placement - Odd numbers on the left,
even on the right. Letters correspond to lobes F(rontal), T(emporal), P(arietal),
and O(ccipital). C stands for Central (there is no central lobe)- [26].
The brain stem is located below the cerebrum
and the cerebellum is located beneath the cerebrum and behind the brain stem.
The limbic system, which lies at the core of the brain, contains the thalamus and
hypothalamus among other parts. Anatomists divide the cerebrum into Frontal,
Parietal, Temporal and Occipital lobes. These lobes inherit their names from the
bones of the skull that overlie them.
It is generally agreed that the Frontal lobe is associated with planning, problem
solving, reasoning, parts of speech, bodily movement and coordination. The Pari-
etal lobe is associated with bodily movement, orientation and recognition (such as
touch, taste, pressure, pain, heat, cold, etc). The Temporal lobe is associated with
perception, auditory stimuli, memory and speech. The Occipital lobe is associated
with visual stimuli.
Figure 1.3 is an image of the human brain with the lobes and their associated functions. The brain is made up of approximately 100 billion neurons. Neurons are nerve cells with dendrite and axon projections that carry information to and from the nerve cell, respectively. A nerve impulse (also known as a firing, spike, or action potential) carries information along the axon. Neurons transmit messages electrochemically: a neuron fires only when its membrane potential rises from its resting level to a threshold of about -55 millivolts.
A spiking neuron triggers another neuron to spike, and that one in turn another. The membrane potential fluctuates as a result of impulses arriving from other neurons at contact points (synapses). These impulses result in post-synaptic potentials, which cause electric current to flow along the membrane of the cell body and dendrites.
This is the source of brain electric current. Brain electrical current consists
mostly of Na+, K+, Ca++, and Cl- ions. Only large populations of active neurons
can generate electrical activity strong enough to be detected at the scalp as neurons
tend to line up to fire.
It has been suggested that the oscillatory activity of the EEG might be related in humans to cognitive processes [11]. Electrical activity recorded from the scalp with surface electrodes constitutes a non-invasive approach to gathering EEG data, while semi-invasive or invasive approaches implant electrodes under the skull or on the brain, respectively [17]. The trade-off among these approaches
lies in the EEG source localization, quality of EEG data, surgical process involved
and/or the effect of the electrodes interacting with the tissues.
EEG recorded over a continuous period of time is characterized as spontaneous EEG; these signals are not time-locked to any stimulus. A closely related application of EEG is the event-related potential (ERP), which is time-locked to, and usually triggered by, an auditory or visual cue. Typically ERPs are recorded from a single electrode over the region of activation along with a ground electrode. EEG has applications in clinical diagnosis, neuroscience and the entertainment (gaming) industry.
EEG activity is broadly divided into five frequency bands. The boundaries are flexible but do not vary much from 0.5-4 Hz (delta), 5-8 Hz (theta), 9-12 Hz (alpha), 13-30 Hz (beta) and above 30 Hz (gamma). Refer to Figure 1.1 for the EEG frequency bands and their typical waveforms. Delta activity is associated with deep sleep. Theta activity is associated with hypnagogic imagery, rapid eye movement (REM) sleep, problem solving, attention and hypnosis [11]. Alpha activity is associated with relaxation and non-cognitive processes [3]. Beta activity is associated with active thinking or active attention [3]. Gamma frequencies are associated with attention and recognition [15].
Over the years various BCI models have been developed that categorically fall
into the BCI model spectrum. The primary difference lies in the interaction between
the user and the BCI. At one end of the BCI model spectrum is an architecture where
all emphasis is placed on the subject to generate quality EEG data with little effort
on the BCI to recognize the task. In a way it is training the subject to control their
EEG activity.
On the other end of the spectrum the burden lies on the BCI to recognize the task
with the user putting in little effort to generate quality data. Somewhere in between
this spectrum is an architecture where both the subject and the BCI mutually learn
and evolve together. This is achieved when the BCI gives feedback to the user
regarding the quality of EEG data generated [17].
Some BCI architectures also use error-related negativity (ERN) signals, a type of ERP, alongside the EEG to aid in the identification of erroneous detections.
The remainder of this thesis is organized as follows. In Chapter 2, the pattern recognition and machine learning algorithms and techniques used in BCIs to mine the EEG signals are introduced. Chapter 3 discusses the Emotiv EPOC system and the development interface provided by the Emotiv EPOC. Chapter 4 describes the experiments performed on this platform, and Chapter 5 summarizes the thesis and mentions the potential future work.
Chapter 2
Background Survey
A brain-computer interface is not merely an open-loop system that responds to users' thoughts but a closed-loop system that also gives feedback to the user.
Researchers initially focused on the motor-cortex of the brain, the area which
controls muscle movements, and testing on animals quickly showed that the natural
learning behaviors of the brain could easily adapt to new stimuli as well as control
the firing of specific areas of the brain [11]. This research dealt primarily with
invasive techniques but slowly algorithms emerged which were able to decode the
motor neuron responses in monkeys in real-time and translate them into robotic
activity [17], [40].
Despite these achievements, research is beginning to veer away from invasive BCIs due to the costly and dangerous nature of the surgeries required for such systems. Non-invasive techniques are more susceptible to noise, have worse signal resolution due to the distance from the brain, and have difficulty recording the inner workings of the brain. However, more sophisticated systems are constantly emerging to combat these difficulties, and non-invasive techniques have the advantages of lower cost, greater portability, and the fact that they only require sensors placed on the scalp surface.
Electroencephalography (EEG) is the recording of electrical activity along the
scalp. EEG measures voltage fluctuations resulting from ionic current flows within
10
the neurons of the brain. The brain’s electrical charge is maintained by billions
of neurons. Neurons are electrically charged by membrane transport proteins that
a brain-computer interface is the extensive training required before users can work
the technology.
BCIs equipped with EEG for data collection are easy to setup, can be deployed
in numerous environments, are preferred for their lack of risk and are inexpensive.
Other techniques used to map brain activation are functional magnetic resonance
imaging (fMRI), positron emission tomography (PET) and magnetoencephalogra-
phy (MEG). fMRI measures the changes in blood flow level (blood oxygen level) in
the brain. In PET, a tracer isotope that emits gamma rays when it annihilates is injected into the subject; these gamma rays are then traced by placing the subject under a scanner. These approaches have relatively poor time resolution but excellent spatial resolution when compared to EEG. MEG is an imaging technique that mea-
sures the magnetic field produced by electrical activity; EEG can be simultaneously
recorded along with MEG.
In the training phase the subjects perform different tasks (imagine moving hands,
feet, doing simple math, etc) and the BCI is trained to recognize one task from
another. In the testing (classification) phase, the trained BCI is applied to new data samples to determine the intended class label (intended task).
An accurate recognition system must be able to accommodate variations intro-
duced by the different brain patterns of individuals as well as variations in the brain
patterns caused by different mental states, such as being tired or unable to concentrate. This, coupled with numerous challenges such as the quality of electroencephalography (EEG) signals, their non-stationary nature, common EEG artifacts and the high feature dimensionality, presents a significant challenge to EEG-based BCI systems.
Though the idea of using EEG waves as input to BCIs has existed since the initial
conception of BCIs, actual working BCIs based on EEG input have only recently
appeared [38]. Most EEG-BCI systems follow a similar paradigm of reading in and
analyzing EEG data, translating that data into device output, and giving some
sort of feedback to the user. Figure 2.1 illustrates a high-level block diagram of a general BCI system. However, implementing this model can be extremely challenging.
The primary difficulty in creating an EEG-based BCI is the feature extraction and
classification of EEG data that must be done in real-time if it is to have any use.
Feature extraction deals with separating useful EEG data from noise and sim-
plifying that data so that classification, the problem of trying to decide what the
extracted data represents, can occur. There is no best way of extracting features
from EEG data and modern BCIs often use several types of feature extraction in-
cluding wavelet transforms, Fourier transforms, and various other types of filters.
The major features that EEG-BCI systems rely on are event-related potentials
(ERPs) and event-related changes in specific frequency bands. The P300 wave is one of the most often used ERPs in BCIs and is utilized because it is easily detectable.
Figure 2.1: BCI-Components - A Brain-Computer Interface Basic Components
Other approaches attempt to find natural clusters of EEG segments that are indicative of certain kinds of mental activities, with varying degrees of success [20], [26].
Feedback is essential in BCI systems as it allows users to understand what brain-
waves they just produced and to learn behavior that can be effectively classified and
controlled. Feedback can be in the form of visual or auditory cues and even haptic
sensations, and ongoing research is still attempting to figure out the optimal form
feedback should take [16].
EEG-BCIs can also be classified as either synchronous or asynchronous. The
computer drives synchronous systems by giving the user a cue to perform a certain
mental action and then recording the user’s EEG patterns in a fixed time-window.
Asynchronous systems, on the other hand, are driven by the user and operate by pas-
sively and continuously monitoring the user’s EEG data and attempting to classify
it on the fly. Synchronous protocols are far easier to construct and have historically been the more common choice, although in early systems the character selection process was time consuming and not perfectly accurate [1].
By 2008, researchers collaborating from Switzerland, Belgium, and Spain created
a feasible asynchronous BCI that controlled a motorized wheelchair with a high
degree of accuracy, though again the system was not perfect [12].
More recently, the 2010 DARPA budget allocated $4 million to develop an EEG-based program called Silent Talk, which aims to allow user-to-user communication on the battlefield without vocalized speech through analysis of neural signals [6].
2.2 Pattern Recognition and Machine Learning in BCI systems
Recognition of intended commands from Brain signals presents all the classic chal-
lenges associated with any pattern recognition problem. These challenges include
noise in the input data, variations and ambiguities in the input data, feature extrac-
tion, as well as overall recognizer performance. A Brain-Computer Interface (BCI)
system designed to meet specific customer performance requirements must deal with
all of these challenges.
Figure 2.1 illustrates the basic components found in a typical BCI system. Figure
2.2 shows these components in more details.
Two major problems facing BCI developers are the non-stationarity and nat-
ural variability of EEG brain signals. Data from the same experimental pattern but
recorded on different days (or even in different sessions on the same day) are likely to
exhibit significant differences.
Figure 2.2: BCI Components Details - A Brain-Computer Interface Basic Components
Besides the variability in neuronal activity, the user's current mental state may affect the measured signals as well: stress, excessive workload, boredom, or frustration may cause temporary distortions of EEG activity. User adaptation to the BCI, as well as the changing impedance of the EEG sensors during recording, contribute to making the recorded signal statistically non-stationary.
Pattern recognition and machine learning algorithms play a crucial role in recognizing patterns in noisy EEG signals and translating them into control signals. The non-stationarity of the signals, however, can make parameters learned offline suboptimal. Therefore, there is ongoing research to develop online adaptation methods [34], [39] as well as research on extracting features that remain stable over users and time [33].
The translation of brain activities into control commands for external devices involves a series of signal processing and pattern recognition stages: signal acquisition, preprocessing, feature extraction and classification. These stages generate a control signal, which is followed by feedback to the user from the application being controlled. This section introduces each of these stages.
2.2.1 Signal acquisition and quality issues
EEG signals are acquired from electrodes placed on the scalp. The measured signals are thought to reflect the temporal and spatial summation of post-synaptic potentials from large groups of cortical neurons. Because of their small amplitude, EEG signals are very vulnerable to noise and artifacts.
artifacts. The most frequent artifacts are muscle activities (electromyogram (EMG))
and eye movements (electrooculogram (EOG)) generated by the user and signal
contamination due to nearby electrical devices (e.g., 60-Hz power line interference).
Additional noise sources include changing electrode impedance and the user’s varying
psychological states due to boredom, distraction, stress, or frustration (e.g., caused
by BCI mistranslation). Some of these noise sources and artifacts are reduced in invasive techniques like ECoG, because signals are measured directly from the brain surface; however, their invasive nature carries medical risk and is not always an option.
One of the major types of EEG signals used in BCIs is the Event-Related Potential (ERP). ERPs are electrical potential shifts that are time-locked to perceptual, cognitive, and motor events; thus they represent the temporal signature of macroscopic brain electrical activity. Typical ERPs include the P300, so named because it is characterized by a positive potential shift occurring about 300 ms after the presentation of a stimulus.
Figure 2.3: ERP Components - A waveform showing several ERP components in-
cluding the P300 component - Image from [43]
An ERP is an aggregated waveform derived from multiple EEG measurements over different trials of the same stimulus: time-locking the trials to the stimulus and averaging them improves the signal-to-noise ratio, and this is how the ERP is derived from the EEG. Figure 2.4 shows an ERP wave derived from multiple EEG waves. The picture on the left, (a), illustrates a single-trial EEG signal measured for a visual stimulus, and the picture on the right, (b), shows the ERP signal obtained by averaging over 200 EEG trials of the same stimulus.
2.2.2 Pre-processing
Preprocessing methods are typically applied to EEG signals to detect, reduce, or re-
move noise and artifacts. The goal is to increase the signal-to-noise ratio (SNR) and
isolate the signals of interest. One very common but also very subjective approach
is to visually screen and score the recordings. In this setting, experts manually mark artifacts and exclude this data from further analysis. While this may work well for offline analysis, visual inspection is not practical during online BCI use.
Figure 2.4: ERP vs. EEG - ERP waveform derived from EEG by averaging over 200 trials - Image reprinted from [37].
A number of online preprocessing methods have been utilized in BCIs to date.
These include notch filtering (e.g., to filter out 60-Hz power line interference), re-
gression analysis for the reduction of eye movement (EOG) artifacts, and spatial
filtering (e.g., bipolar, Laplacian, or common average referencing (CAR)). Spatial filtering aims at enhancing local EEG activity over single channels while reducing activity that is common across channels.
2.2.3 Feature Extraction
The goal of feature extraction is to transform the acquired and preprocessed EEG
signal into a set of p features x = [x_1, ..., x_p]^T ∈ X^p that are suitable for subsequent
classification or regression. In some cases, the preprocessed signal may already be
in the form of an appropriate set of features (e.g., the outputs of CSP filters). More
typically, some form of time or frequency domain transform is used.
The most commonly used features for EEG are based on the frequency domain. Overt and imagined movements typically activate premotor and primary sensorimotor areas, resulting in amplitude/power changes in the mu (7-13 Hz), central beta (13-30 Hz) and gamma (30+ Hz) rhythms in EEG. These changes can be
characterized using classical power spectral density estimation methods such as the
digital bandpass filter, the short-term Fourier transform, and wavelet transform. The
resulting feature vectors are usually high dimensional because the features are ex-
tracted from several EEG channels and from several time segments prior to, during,
and after the movement.
The computation of a large number of spectral components can be computa-
tionally demanding. Another common and often faster way to estimate the power
spectrum is to use adaptive auto-regressive (AAR) [31] models, although the model
parameters have to be selected a priori.
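As an illustration of the frequency-domain approach described above, the following sketch (not from the thesis) estimates the power of a single EEG channel in the mu band using a naive DFT. The sampling rate, band edges and synthetic input are assumptions chosen only for illustration; a real system would use an FFT library.

#include <cmath>
#include <complex>
#include <iostream>
#include <vector>

// Power of `signal` (sampled at fs Hz) in the frequency band [fLow, fHigh].
double bandPower(const std::vector<double>& signal, double fs,
                 double fLow, double fHigh) {
    const double PI = 3.14159265358979323846;
    const std::size_t n = signal.size();
    double power = 0.0;
    // Naive DFT: O(n^2); illustrative only.
    for (std::size_t k = 1; k < n / 2; ++k) {
        const double freq = k * fs / n;
        if (freq < fLow || freq > fHigh) continue;
        std::complex<double> bin(0.0, 0.0);
        for (std::size_t t = 0; t < n; ++t) {
            const double angle = -2.0 * PI * k * t / n;
            bin += signal[t] * std::complex<double>(std::cos(angle), std::sin(angle));
        }
        power += std::norm(bin) / static_cast<double>(n * n);  // accumulate band power
    }
    return power;
}

int main() {
    const double fs = 128.0;                  // EPOC-like sampling rate (assumed)
    std::vector<double> channel(256);
    for (std::size_t t = 0; t < channel.size(); ++t)  // synthetic 10 Hz "mu" rhythm
        channel[t] = std::sin(2.0 * 3.14159265358979323846 * 10.0 * t / fs);
    std::cout << "mu-band (8-13 Hz) power: " << bandPower(channel, fs, 8.0, 13.0) << "\n";
    return 0;
}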
Features that are extracted in the time domain include AR parameters [31],
(smoothed) signal amplitude parameters [14], the fractal dimension (FD), and com-
mon spatial patterns (CSP).
Finally, features estimated in the phase domain have not received much attention
in EEG-based BCIs, although coherence and phase-locking values [25] have been
explored as potential features.
One of the simplest ways of creating the feature space for further analysis is illustrated in Figure 2.5: the signal measured by each electrode is vectorized and concatenated with the measurements of the other electrodes over a specific time window. For example, one second of data from 14 sensors sampled at 128 Hz already produces a 14 × 128 = 1792-dimensional feature vector.
The feature space created by this method is therefore considerably high dimensional and faces a problem called the curse of dimensionality [36]. The curse of dimensionality is the phenomenon in high-dimensional data analysis whereby, in practice, increasing dimensionality beyond a certain point in the presence of a finite number of training samples results in worse, rather than better, performance. One of the main reasons for this paradox is that the training sample size is always finite, so the estimates of the class statistics become increasingly unreliable as the dimensionality grows.
Figure 2.5: Multi-Sensor Feature Space: - The sensor time series are vectorized to
generate the feature space [24]
Principal component analysis (PCA) can be developed from many different points of view, but in the context of dimensionality reduction it is most useful to view PCA as an optimization problem.
In general, component analysis methods combine features in order to reduce the dimension of the feature space: they project the high-dimensional data (i.e., the EEG recordings) onto a lower-dimensional space. Linear combinations are simple to compute and tractable. PCA is one of the classical and most popular approaches for finding an optimal transformation; it projects the data onto the feature space that best represents the data.
In other words, PCA finds a linear transformation of a data set that maximizes the variance of the transformed variables subject to orthogonality constraints. The transformation is obtained by solving the eigenvalue problem

S w = \lambda w    (2.1)

in which S is the scatter matrix and \mu is the sample mean of the data:

\mu = \frac{1}{n} \sum_{i=1}^{n} x_i    (2.2)

S = \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T    (2.3)
Once the eigenvalues are obtained, the columns of the transformation matrix are set to the eigenvectors corresponding to the largest eigenvalues; the principal components are these normalized eigenvectors. Figure 2.6 illustrates the linear transformation to the new feature space spanned by the principal components.
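A minimal sketch of the PCA procedure just described, assuming the Eigen linear-algebra library is available (this is not the thesis implementation; the matrix sizes and the use of Eigen are assumptions): the scatter matrix of eqs. (2.2)-(2.3) is formed and its leading eigenvectors from eq. (2.1) define the projection.

#include <Eigen/Dense>
#include <iostream>

// Rows of X are samples (e.g., vectorized EEG epochs), columns are features.
Eigen::MatrixXd pcaProject(const Eigen::MatrixXd& X, int numComponents) {
    // Sample mean (eq. 2.2) and mean-centred data.
    Eigen::RowVectorXd mu = X.colwise().mean();
    Eigen::MatrixXd centred = X.rowwise() - mu;

    // Scatter matrix S (eq. 2.3); S is symmetric, so a self-adjoint solver applies.
    Eigen::MatrixXd S = centred.transpose() * centred;
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> solver(S);

    // Eigen returns eigenvalues in increasing order; take the last columns,
    // i.e. the eigenvectors with the largest eigenvalues (eq. 2.1).
    Eigen::MatrixXd W = solver.eigenvectors().rightCols(numComponents);
    return centred * W;   // data expressed in the principal-component space
}

int main() {
    Eigen::MatrixXd X = Eigen::MatrixXd::Random(100, 14);  // e.g. 100 epochs x 14 sensors
    Eigen::MatrixXd Y = pcaProject(X, 3);
    std::cout << "projected shape: " << Y.rows() << " x " << Y.cols() << "\n";
    return 0;
}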
Linear Discriminant Analysis (LDA) searches for those vectors in the underlying space that best discriminate or separate the classes (rather than those that best describe or represent the data, as is done in PCA). More formally, given a number of independent features relative to which the data is described, LDA creates a linear combination of these that yields the largest mean difference between the desired classes. The within-class and between-class scatter matrices are defined as
Figure 2.6: PCA - Principal Components Analysis - Image reprinted from [2]
S_w = \sum_{j=1}^{c} \sum_{i=1}^{N_j} (x_i - \mu_j)(x_i - \mu_j)^T    (2.4)

where x_i is the i-th sample of class j, \mu_j is the sample mean of class j, N_j is the number of samples in class j, and c is the number of classes, and

S_b = \sum_{j=1}^{c} (\mu_j - \mu)(\mu_j - \mu)^T    (2.5)

where \mu is the overall sample mean.
LDA thus maximizes the between-class variance while minimizing the within-class variance; the transformation is found by solving the eigenvalue decomposition problem

S_w^{-1} S_b w = \lambda w    (2.6)
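One standard way to see where (2.6) comes from (a textbook derivation along the lines of [7], [22], left implicit here) is the Fisher criterion: the projection w is chosen to maximize the ratio of between-class to within-class scatter, and setting the gradient to zero yields exactly the generalized eigenvalue problem above.

J(w) = \frac{w^{T} S_b\, w}{w^{T} S_w\, w},
\qquad
\nabla_w J(w) = 0 \;\Longrightarrow\; S_w^{-1} S_b\, w = \lambda w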
Whereas PCA extracts uncorrelated signals that optimize a variance criterion, independent component analysis (ICA) extracts statistically independent signals from a data set. Each observed signal x_j is modelled as a linear mixture of n independent source signals s_i:
x_j = \sum_{i=1}^{n} a_{ji} s_i    (2.7)

Let A denote the matrix with elements a_{ji}; the mixing model can then be written compactly as x = As.
The starting point for ICA is the very simple assumption that the components s_i are statistically independent. The task is to transform the observed data x,
using a linear static transformation W as:
s = Wx (2.8)
The independence of the recovered signals may be defined in many ways, but it is commonly characterized either as minimization of mutual information or maximization of non-Gaussianity.
The goal is then to estimate the mixing matrix A, or the corresponding transform matrix W, which is done by maximizing the non-Gaussianity of the source components s_i or minimizing their mutual information.
2.2.4 Classification
Classification is the problem of assigning one of N labels to a new input signal, given
labelled training data of inputs and their corresponding output labels. Regression
is the problem of mapping input signals to a continuous output signal. Given the
limited spatial resolution of EEG, most BCIs based on EEG rely on classification rather than regression. The classifiers used in BCIs can be grouped into several partially overlapping categories:
• Generative (model-based) classifiers compute, for classification of an input, the likelihood of the input within each class; the most likely class is then selected (e.g., Hidden Markov Models (HMMs), Bayesian classifiers).
• Nonlinear classifiers attempt to learn a nonlinear decision boundary be-
tween classes (e.g., k-nearest neighbours (k-NN), kernel SVMs, HMMs).
• Dynamic classifiers capture the temporal dynamics of the input classes and
use this information for classifying a new input time series of data (e.g., HMMs,
recurrent neural networks).
• Stable classifiers are those that remain robust to small variations in the training set.

The Support Vector Machine (SVM) is one of the most important supervised learning algorithms used for classification and also regression. The SVM is a binary classifier and therefore assigns the class label -1 or 1 to new data points. Given a set of training samples, each marked as belonging to one of the two classes, training finds the hyperplane with the widest gap between the classes; new examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
More specifically, the SVM constructs a hyperplane in a high-dimensional space which can be used for classification or regression. The hyperplane can be defined by a linear discriminant function

g(x) = a^T x    (2.9)

in which a is the weight vector and x is the pattern vector. The discriminant hyperplane should satisfy the criterion below for all training patterns x_i:

z_i g(x_i) \ge 1    (2.10)

where z_i represents the label, which can only take the values 1 or -1.
Figure 2.7: SVM - The illustration of Support Vector Machine training - Image
reprinted from [2]
In terms of the margin b, the constraint can be written as

\frac{z_i\, g(x_i)}{\lVert a \rVert} \ge b    (2.11)

As shown in Figure 2.7, for each data point x_i the signed distance from the hyperplane is z_i g(x_i)/\lVert a \rVert; assuming that a positive margin b exists, the goal is to find the weight vector a that maximizes the margin b while minimizing \lVert a \rVert^2 [7].
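Combining (2.9)-(2.11), the training problem can be written compactly as the standard hard-margin SVM optimization (a textbook formulation following [7], not specific to this thesis):

\min_{a}\ \tfrac{1}{2}\lVert a \rVert^{2}
\quad \text{subject to} \quad
z_i\, a^{T} x_i \ge 1, \qquad i = 1, \dots, n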
Artificial Neural Networks (ANNs) are non-linear statistical data modelling tools. They are usually used to model complex relationships between inputs and outputs or to find patterns in data.
Figure 2.8: ANN - Artificial Neural Networks consist of different layers of nodes learning the (non-linear) relationship between input training patterns and output class labels - Image reprinted from [2]
Each hidden unit j computes a function of the weighted sum of its inputs:

g_j(x) = f\left( \sum_{i=1}^{n} w_{ij} x_i + b_{ij} \right)    (2.12)

where the subscript i indexes units in the input layer and j indexes units in the hidden layer; w_{ij} denotes the input-to-hidden layer weight at hidden unit j.
The function of the entire neural network is simply the computation of the outputs of all the neurons:

z_k = f\left( \sum_{j=1}^{c} w_{jk}\, g_j(x) \right) = f\left( \sum_{j=1}^{c} w_{jk}\, f\left( \sum_{i=1}^{n} w_{ij} x_i + b_{ij} \right) \right)    (2.13)
Here the outputs z_k are the class labels predicted by the neural network classifier. The function f is called the activation function. The majority of ANNs use sigmoid functions, such as the logistic function (Figure 2.9), as their activation function:

f(x) = \frac{1}{1 + e^{-x}}    (2.14)
Other sigmoid functions like hyperbolic tangent and arctangent are also used.
However the exact nature of the function has little effect on the abilities of the
neural network.
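The following minimal sketch (not from the thesis) makes eqs. (2.12)-(2.14) concrete: a two-layer feed-forward network with logistic activations computes its output layer by layer. Layer sizes and weight values are arbitrary assumptions.

#include <cmath>
#include <iostream>
#include <vector>

double logistic(double x) { return 1.0 / (1.0 + std::exp(-x)); }   // eq. (2.14)

// One layer: out_j = f( sum_i w[j][i] * in[i] + b[j] )
std::vector<double> layer(const std::vector<double>& in,
                          const std::vector<std::vector<double>>& w,
                          const std::vector<double>& b) {
    std::vector<double> out(w.size());
    for (std::size_t j = 0; j < w.size(); ++j) {
        double a = b[j];
        for (std::size_t i = 0; i < in.size(); ++i) a += w[j][i] * in[i];
        out[j] = logistic(a);
    }
    return out;
}

int main() {
    std::vector<double> x = {0.2, -1.0, 0.5};                      // feature vector
    std::vector<std::vector<double>> wHidden = {{0.1, 0.4, -0.3}, {-0.2, 0.3, 0.8}};
    std::vector<double> bHidden = {0.0, 0.1};
    std::vector<std::vector<double>> wOut = {{0.7, -0.5}};         // single output unit
    std::vector<double> bOut = {0.05};
    std::vector<double> z = layer(layer(x, wHidden, bHidden), wOut, bOut);
    std::cout << "network output z_1 = " << z[0] << "\n";          // predicted class score
    return 0;
}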
Training is the act of presenting the network with some sample data and modi-
fying the weights to better approximate the desired function. In order to do so, the
neural network is supplied with inputs and the desired outputs, and the response of the network to the inputs is measured. The weights are then modified to reduce the difference between the desired and actual outputs, measured by the error function

E(w) = \frac{1}{2} \sum_{k=1}^{c} (y_k - z_k)^2    (2.15)
where w represents all the weights in the network, y_k is the desired output, and z_k is the network output. During the learning process the weights are adjusted to reduce this measure of error.
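The usual way to perform this adjustment is gradient descent on E(w), i.e. backpropagation (a standard step, following [7]; the learning rate \eta is a free parameter not specified in this thesis):

w \leftarrow w - \eta\, \frac{\partial E(w)}{\partial w}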
In problems that have an inherent temporality – that is, that consist of a process which unfolds in time – we may have states at time t that are influenced directly by a state at t-1. Hidden Markov models (HMMs) have found greatest use in such problems, for instance speech recognition, gesture recognition or EEG signal processing. While the notation and description are unavoidably more complicated than for the simpler models considered up to this point, we stress that the same underlying ideas are exploited.
exploited. Hidden Markov models have a number of parameters, whose values are
set so as to best explain training patterns for the known category. Later, a test
pattern is classified by the model that has the highest posterior probability, i.e.,
that best explains the test pattern.
More formally, it is assumed that there exists a set of states such as: {S1 , S2 , ..., SN },
in which the process moves from one state to another generating a sequence of states
as: Si1 , Si2 , ..., Sik , ....
To define a Markov model, two probabilities have to be specified: the state transition probability a_{ij} = P(S_i | S_j), and the initial or prior probability of being in a state, \pi_i = P(S_i). Hence, the transition matrix and prior vector are defined as A = (a_{ij}) and \pi = (\pi_i), respectively.
The fundamental assumption in Markov models is that the probability of each subsequent state depends only on the previous state; therefore

P(S_{i_k} \mid S_{i_1}, S_{i_2}, \dots, S_{i_{k-1}}) = P(S_{i_k} \mid S_{i_{k-1}})    (2.16)
In the Hidden Markov Model, it is assumed that states are not visible, but each
state randomly generates one of M observations (or visible states): {Z1 , Z2 , ...Zn , ..., ZN }.
To define a Hidden Markov Model, a matrix B of the observation probabilities should also be defined as

B = (b_i(Z_n))    (2.17)

where

b_i(Z_n) = P(Z_n | S_i)    (2.18)
Figure 2.10: HMM - Hidden Markov Model of latent variables - Image reprinted
from [2]
Therefore, as with other learning algorithms, the learning process takes some training observation sequences Z = {Z_1, Z_2, ..., Z_n, ..., Z_N} (Figure 2.10) and the general structure of the HMM (the numbers of hidden and visible states), and determines the HMM parameters M = (A, B, \pi) that best fit the training data. Once the parameters of the model are determined, a test pattern is classified by the model that has the highest posterior probability.
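As a concrete illustration of how such a likelihood is evaluated, the sketch below (not from the thesis) implements the standard forward algorithm for computing P(Z | M) given a model M = (A, B, \pi). In practice one would rescale or work in log space to avoid underflow; the toy parameters are arbitrary assumptions, and the transition convention used here is A[i][j] = P(next state j | current state i).

#include <iostream>
#include <vector>

double forwardLikelihood(const std::vector<std::vector<double>>& A,   // A[i][j] = P(S_j at t+1 | S_i at t)
                         const std::vector<std::vector<double>>& B,   // B[i][z] = P(observation z | S_i)
                         const std::vector<double>& pi,               // pi[i]   = P(S_i at t = 0)
                         const std::vector<int>& obs) {               // observation indices
    const std::size_t N = pi.size();
    std::vector<double> alpha(N), next(N);
    for (std::size_t i = 0; i < N; ++i)            // initialisation
        alpha[i] = pi[i] * B[i][obs[0]];
    for (std::size_t t = 1; t < obs.size(); ++t) { // induction over time
        for (std::size_t j = 0; j < N; ++j) {
            double s = 0.0;
            for (std::size_t i = 0; i < N; ++i)
                s += alpha[i] * A[i][j];
            next[j] = s * B[j][obs[t]];
        }
        alpha = next;
    }
    double p = 0.0;                                // termination: sum over final states
    for (double a : alpha) p += a;
    return p;
}

int main() {
    // Toy 2-state, 2-observation model; numbers are illustrative only.
    std::vector<std::vector<double>> A = {{0.7, 0.3}, {0.4, 0.6}};
    std::vector<std::vector<double>> B = {{0.9, 0.1}, {0.2, 0.8}};
    std::vector<double> pi = {0.6, 0.4};
    std::cout << "P(Z | M) = " << forwardLikelihood(A, B, pi, {0, 1, 1}) << "\n";
    return 0;
}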
Chapter 3
The Emotiv System, whose tag-line is “you think, therefore you can”, bills itself as a “revolutionary new personal interface for human computer interaction”. It is based on the EPOC headset for recording EEG measurements and a software suite for processing them. The SDKs are layered over the EPOC headset API and detection libraries and come with up to six different licences: Individual, Developer, Research, Enterprise, and Educational Institutes.
This research originally used the Development SDK and later upgraded to the
Research Edition. The Research Edition includes the Emotiv Control Panel, Emo-
Composer (an emulator for simulating EEG signals), EmoKey (a tool for mapping
various events detected by the headset into keystrokes), the TestBench, and an
upgraded API that enables the capture of raw EEG data from each individual sen-
sor [8], [9].
Figure 3.1: Headset - Emotiv EPOC headset- Image reprinted from [9]
The EmoEngine is the logical abstraction that communicates with the Emotiv headset, receives preprocessed EEG and gyroscope data, manages user-specific or application-specific settings, performs post-processing, and translates the Emotiv detection results into an easy-to-use structure called an EmoState.
Every EmoState represents the current input from the headset including “facial,
emotional, and cognitive state” and, with the upgrade to the research edition, con-
tains electrode measurements for each contact. As illustrated in Figure 3.3,
utilizing the Emotiv API consists of connecting to the EmoEngine, detecting and
decoding new EmoStates, and calling code relevant to the new EmoState [9].
Figure 3.2: Sensors - The EPOC headset's 14 contacts. In addition, there is a Common Mode Sense (CMS) electrode in the P3 location and a Driven Right Leg (DRL) electrode in the P4 location, which form a feedback loop for referencing the other measurements - Image reprinted from [9]
Figure 3.3: Emotiv EPOC API - High-level View of the Utilization of the Emotiv
API - Image reprinted from [9]
3.1.1 Detecting Expressive Events
This section demonstrates how an application can use the Expressiv detection suite
to control an animated head model called BlueAvatar.
The model emulates the facial expressions made by the user wearing an Emotiv headset. ExpressivDemo connects to the Emotiv EmoEngine and retrieves EmoStates for the user; each EmoState describes the facial expressions currently detected. The Expressiv state from the EmoEngine can be separated into three groups of mutually exclusive facial expressions:
• Eye related actions: Blink, Wink left, Wink right, Look left, Look right
• Upper face actions: Eyebrow expressions (e.g., raising or furrowing the brow)
• Lower face actions: Smile, Smirk left, Smirk right, Clench, Laugh
The protocol that ExpressivDemo uses to control the BlueAvatar motion is very
simple. Each facial expression result will be translated to plain ASCII text, with the
letter prefix describing the type of expression, optionally followed by the amplitude
value if it is an upper or lower face action.
Multiple expressions can be sent to the head model at the same time in a comma
separated form. However, only one expression per Expressiv grouping is permit-
ted (the effects of sending smile and clench together or blinking while winking are
undefined by the BlueAvatar).
Table 3.1, below, lists the syntax of some of the expressions supported by the protocol. The prepared ASCII text is subsequently sent to the BlueAvatar via a UDP
socket.
Table 3.1: BlueAvatar control protocol (excerpt)
Expressive action type | Corresponding ASCII text (case sensitive) | Amplitude value
Blink                  | B | n/a
Wink left              | l | n/a
Wink right             | r | n/a
Look left              | L | n/a
Look right             | R | n/a
Eyebrow                | b | 0 to 100 integer
Smile                  | S | 0 to 100 integer
Clench                 | G | 0 to 100 integer
For example:
• Eyebrow with amplitude 0.6 and clench with amplitude 0.3: b60, G30
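A small sketch of how such a command string could be assembled (this is not ExpressivDemo's actual source; the helper name and the subset of expressions handled are assumptions, while the letters and the 0-100 amplitude scaling follow Table 3.1):

#include <iostream>
#include <sstream>
#include <string>

std::string buildAvatarCommand(bool blink, float eyebrowAmp, float clenchAmp) {
    std::ostringstream os;
    bool first = true;
    auto append = [&](const std::string& token) {
        if (!first) os << ", ";
        os << token;
        first = false;
    };
    if (blink) append("B");                                    // eye action: no amplitude
    if (eyebrowAmp > 0.0f)                                     // upper-face action
        append("b" + std::to_string(static_cast<int>(eyebrowAmp * 100.0f)));
    if (clenchAmp > 0.0f)                                      // lower-face action
        append("G" + std::to_string(static_cast<int>(clenchAmp * 100.0f)));
    return os.str();   // e.g. "b60, G30", ready to send over the UDP socket
}

int main() {
    // Eyebrow amplitude 0.6 and clench amplitude 0.3 -> "b60, G30" as in the text.
    std::cout << buildAvatarCommand(false, 0.6f, 0.3f) << "\n";
    return 0;
}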
If the wireless USB receiver is removed and then reinserted, ExpressivDemo will consider this a new Emotiv EPOC and will again increase the sending UDP port by one.
In addition to translating Expressiv results into commands to the BlueAvatar,
the ExpressivDemo also implements a very simple command-line interpreter that
can be used to demonstrate the use of personalized, trained signatures with the
Expressiv suite.
Expressiv supports two types of ”signatures” that are used to classify input
from the Emotiv headset as indicating a particular facial expression. The default
signature is known as the universal signature, and it is designed to work well for
a large population of users for the supported facial expressions. If the application
or user requires more accuracy or customization, then you may decide to use a
trained signature. In this mode, Expressiv requires the user to train the system by
performing the desired action before it can be detected. As the user supplies more
training data, the accuracy of the Expressiv detection typically improves. If you
elect to use a trained signature, the system will only detect actions for which the
user has supplied training data. The user must provide training data for a neutral
expression and at least one other supported expression before the trained signature
can be activated.
Important note: not all Expressiv expressions can be trained. In particular, eye
and eyelid-related expressions (i.e. “blink”, “wink”, “look left”, and “look right”)
can not be trained. The API functions that configure the Expressiv detections are prefixed with EE_Expressiv. The trained_sig command corresponds to the function EE_ExpressivGetTrainedSignatureAvailable(), and the training_exp command corresponds to the function EE_ExpressivSetTrainingAction().
It will be useful to first get familiarized with the training procedure on the Ex-
pressiv tab in Emotiv Control Panel before attempting to use the Expressiv training
API functions.
This section demonstrates how the user's conscious mental intention can be recognized by the Cognitiv detection suite and used to control the movement of a 3D virtual object. It also shows the steps required to train the Cognitiv suite to recognize distinct mental actions for an individual user.
The design of the CognitivDemo application is quite similar to the ExpressivDemo covered in the previous section. There, ExpressivDemo retrieves EmoStates from Emotiv EmoEngine and uses the EmoState data describing the user's facial expressions to control an external avatar. In this section, it is explained
how information about the cognitive mental activity of the users is extracted. The
output of the Cognitive detection indicates whether users are mentally engaged in
one of the trained Cognitive actions (pushing, lifting, rotating, etc.) at any given
time. Based on the Cognitive results, corresponding commands are sent to a separate
application called EmoCube to control the movement of a 3D cube.
Commands are communicated to EmoCube via a UDP network connection.
The Cognitiv detection suite requires a training process before it can recognize when a user is consciously performing a trained mental action. It is useful to first become familiar with the operation of the Cognitiv tab in Emotiv Control Panel before attempting to use the Cognitiv API functions.
Cognitiv can be configured to recognize and distinguish between up to 4 distinct
actions at a given time. New users typically require practice in order to reliably
evoke and switch between the mental states used for training each Cognitiv action.
As such, it is imperative that a user first masters a single action before enabling two
concurrent actions, two actions before three, and so forth.
During the training update process, it is important to maintain the quality of the
EEG signal and the consistency of the mental imagery associated with the action
being trained. Users should refrain from moving and should relax their face and
neck in order to limit other potential sources of interference with their EEG signal.
Unlike Expressiv, the Cognitiv algorithm does not include a delay after receiving the COG_START training command before it starts recording new training data.
The sequence diagram in Figure 3.4 describes the process of carrying out Cognitiv training on a particular action. The Cognitiv-specific events are declared as the enumerated type EE_CognitivEvent_t in EDK.h. Note that this type differs from the EE_Event_t type used by top-level EmoEngine events. The code snippet in Listing A.3 illustrates the procedure for extracting Cognitiv-specific event information from the EmoEngine event.
Before the start of a training session, the action type must first be set with the API function EE_CognitivSetTrainingAction(). In EmoStateDLL.h, the enumerated type EE_CognitivAction_t defines all the Cognitiv actions that are currently supported (COG_PUSH, COG_LIFT, etc.). If an action is not set before the start of training, COG_NEUTRAL will be used as the default. EE_CognitivSetTrainingControl() can then be called with the argument COG_START to begin training, during which the user should maintain the intended mental state.
After approximately 8 seconds, one of two possible events will be sent from the EmoEngine:
EE_CognitivTrainingSucceeded: the quality of the EEG signal during the training session was good enough to update the algorithm's trained signature, and EmoEngine enters a waiting state to confirm the training update, as explained below.
EE_CognitivTrainingFailed: the quality of the EEG signal during the training session was not good enough to update the trained signature; the Cognitiv training process is then reset automatically, and the user should be asked to start the training again.
If the training session succeeded (EE_CognitivTrainingSucceeded was received), then the user should be asked whether to accept or reject the session.
The user may wish to reject the training session if he feels that he was unable to evoke or maintain a consistent mental state for the entire duration of the training period. The user's response is then submitted to the EmoEngine through a further call to EE_CognitivSetTrainingControl(), this time indicating acceptance or rejection.
If the training is accepted, EmoEngine will rebuild the user's trained Cognitiv signature, and an EE_CognitivTrainingCompleted event will be sent out once the calibration is done. Note that this signature-building process may take several seconds depending on system resources, the number of actions being trained, and the amount of training data collected.
To try this out, launch EmoComposer as an emulated signal source, configure the UDP port and select Start Server. Start a new instance of CognitivDemo, and observe that when you use the Cognitiv control in EmoComposer the EmoCube responds accordingly.
Next, experiment with the training commands available in CognitivDemo to
better understand the Cognitive training procedure described above. Listing A.4
shows a sample CognitivDemo session that demonstrates how to train.
This section demonstrates how to extract live EEG data using the EmoEngine. Data
is read from the headset and sent to an output file for later analysis. This example only works with the SDK versions that allow raw EEG access (Research, Education and Enterprise Plus).
The process starts in the same manner as the earlier examples: a connection is made to the EmoEngine through a call to EE_EngineConnect(), or to EmoComposer through EE_EngineRemoteConnect(). Updating the data handle then prepares the buffered data for access via the hData handle; all data captured since the last call to DataUpdateHandle will be retrieved. A call to DataGetNumberOfSample() establishes how much buffered data is currently available. The number of samples can be used to set up a buffer for retrieval into your application as shown.
Finally, to transfer the data into a buffer in our application, we call the EE_DataGet function. To retrieve the buffer we need to choose one of the available data channels, one per sensor shown in Figure 3.2. For example, to retrieve the first sample of data held in the sensor AF3, place a call to EE_DataGet as follows:
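The listing that originally followed is missing from this copy; a hedged sketch of what such a call looks like is given below. The channel constant ED_AF3 and the exact EE_DataGet signature are assumptions that should be verified against the SDK headers [9].

double data[1];
// First buffered sample from the AF3 sensor (channel constant assumed):
EE_DataGet(hData, ED_AF3, data, 1);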
You may retrieve all the samples held in the buffer using the bufferSizeInSample parameter. Finally, we need to ensure correct clean-up by disconnecting from the EmoEngine and freeing all associated memory:
EE_EngineDisconnect();
EE_EmoStateFree(eState);
EE_EmoEngineEventFree(eEvent);
Emotiv also provides a useful tool, TestBench, which allows the user to monitor the EEG signals captured by each channel in real time. As shown in Figures 3.5 and 3.6, it can be used as a ground truth for the raw data captured through the SDK, and it makes it easy to see the waveform patterns associated with events as they occur. TestBench can also record the data in the EDF format, which is compatible with several tools for offline analysis of EEG data.
Figure 3.5: Right Wink - Right Wink wave form can be detected easily- Image
Captured from Emotiv TestBench Application
Figure 3.6: Head Movement - Head Movement is the cleanest signal captured by the
headset - Image Captured from Emotiv TestBench Application
Chapter 4
Experiments
4.1 Training Emotiv engine and Dial a phone by thoughts
Blink is one of the expressive events that is exploited: it is translated into a mouse click. The user can therefore dial the phone just by imagining a horizontal (i.e., left/right) or vertical (i.e., up/down) movement to move the cursor over the desired number, and then blinking to click on that number before moving on to the next digit (Figure 4.1).
Figure 4.1: Brain Keypad - A keypad dialled by brain commands
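A hedged sketch of the control loop described above (this is not the thesis's actual implementation): trained Cognitiv actions move the Windows mouse cursor and a detected blink produces a click. The EmoState accessors follow Listing A.2 and the SDK manual [9]; the Win32 calls, the action-to-direction mapping, the power threshold and the step size are all illustrative assumptions.

#include <windows.h>        // GetCursorPos, SetCursorPos, mouse_event
// Emotiv headers (edk.h, EmoStateDLL.h) are assumed to be available.

void handleEmoState(EmoStateHandle eState) {
    POINT pt;
    GetCursorPos(&pt);
    const int step = 10;                                    // pixels per detection (arbitrary)
    EE_CognitivAction_t action = ES_CognitivGetCurrentAction(eState);
    float power = ES_CognitivGetCurrentActionPower(eState);
    if (power > 0.2f) {                                     // ignore weak detections (threshold assumed)
        if (action == COG_PUSH) pt.x += step;               // illustrative mapping only
        if (action == COG_LIFT) pt.y -= step;
        SetCursorPos(pt.x, pt.y);
    }
    if (ES_ExpressivIsBlink(eState)) {                      // blink -> click on the keypad digit
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
    }
}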
It is really important to maintain the quality of the EEG signal and the consis-
tency of the mental imagery associated with the action being trained and performed.
This was one of the main practical issues during the experiments. The user has to refrain from moving
and should relax their face and neck in order to limit other potential sources of
interference and additive artifacts with the cognitive EEG signal.
It is possible to add extra actions (e.g., having two distinct left and right actions instead of a single combined horizontal action); however, with more actions the complexity of training grows, the detection skill of the system drops drastically, and a lot of extra training is required to bring the detection rate back up. The reason for this phenomenon is that, because this is a classification problem, the feature space is divided into different regions containing the training samples for the different actions. Once new actions are added to the cognitive training (push, pull, lift, spin), the software will often see an overlap in detections and as a result may downgrade the training on the existing actions. As the actions are retrained and more clarity is developed in each, the mental patterns for each become more distinct, along with the voltage traces for the EEG detections.
As discussed in chapter 3, in addition to the actions, there is also a background
state trained and called “neutral”. One of the most important things is that with a
well trained Neutral State, everything else benefits. One way to do this is to record a
neutral state while watching TV, playing a game or having a conversation. This will
create a very active neutral state for the detections to differentiate against and make
it simpler for the BCI to see the changes in state that indicate different cognitive
actions. As the system learns and refines the signatures for each of the actions, detections become quicker and more reliable.
Two sources of delay dominate the time needed to dial a digit: first, the time the engine takes to detect a cognitive action in the right direction and move the cursor over the desired digit, and second, the extra time incurred when a detection is wrong and the user needs to try once more. Each correct detection is confirmed by a click that is associated with the blink. It is worth mentioning that the time needed to detect the blink is also a portion of the time reported; however, since blink is an expressive event and its detection is almost instantaneous and quite accurate, the time corresponding to its detection is negligible, so the reported time essentially reflects the average detection time of the cognitive movement-direction events.
The second delay can be improved by more training, which increases the accuracy (the actions' skill rating) of the system. The first delay, however, is an intrinsic property of the Emotiv engine and cannot be improved much by training.
Chapter 5
We explored the area of brain-computer interfaces and related fields such as signal processing, cognitive science and machine learning. Different techniques, tools and algorithms in each of these areas that are useful in the analysis of EEG signals were studied and summarized, including algorithms such as PCA and LDA for feature extraction and classifiers such as SVMs, ANNs and HMMs. We then explored the Emotiv EPOC system as a low-cost brain-computer interface. We learned how to train the engine on the desired tasks in order to build a whole BCI system upon the trained engine. It was also carefully investigated how to program against the APIs of the EPOC's SDK to obtain the raw EEG data for further mining by new learning algorithms.
However, this research direction has much room to be extended further. As one aspect, the analysis of the Emotiv EPOC raw data with the algorithms explained in Chapter 2 has high potential for further study. This would help to produce a robust, genuinely signal-driven brain-machine interface based on the current data acquisition device. Such work requires long-term data recording under a wide range of cognitive and psychological circumstances in order to obtain accurate and clean data before applying any analysis.
From the machine learning and pattern recognition perspective, the mathematical part of this work also has high potential, and demand, for extension in the future. The algorithms introduced in the second part of the background survey open many areas of study by addressing the non-determinism and uncertainty of the information concealed in the raw data. This part can be coupled with more advanced feature extraction techniques, such as ICA [4] and non-linear feature learning [13], to cancel noise and obtain cleaner signals in a smaller and more informative feature space.
In this work we showed how, given a data recording device and statistical tools, a brain-machine interface can be accomplished. Most of this work is not restricted to BCI and can be used in a variety of sensing and sensory systems requiring signal detection and estimation. Further research on information fusion, feature extraction, and supervised and unsupervised learning is nevertheless needed. In addition, non-trivial signal recording as well as data aggregation and experimental studies need to be considered for successful system development of this challenging and exciting technology.
APPENDICES
Appendix A
Code Snippets
...
EE_ExpressivAlgo_t upperFaceType =
    ES_ExpressivGetUpperFaceAction(eState);
EE_ExpressivAlgo_t lowerFaceType =
    ES_ExpressivGetLowerFaceAction(eState);
float upperFaceAmp =
    ES_ExpressivGetUpperFaceActionPower(eState);
float lowerFaceAmp =
    ES_ExpressivGetLowerFaceActionPower(eState);
Listing A.2: Cognitiv Detection - [9]
{
    std::ostringstream os;
    EE_CognitivAction_t actionType;
    actionType = ES_CognitivGetCurrentAction(eState);
    float actionPower;
    actionPower = ES_CognitivGetCurrentActionPower(eState);
    os << static_cast<int>(actionType) << ","
       << static_cast<int>(actionPower * 100.0f);
    sock.SendBytes(os.str());
}
...
}
...
}
Listing A.4: Cognitiv Action (i.e., push and neutral) Training Session - [9]
CognitivDemo> set_actions 0 push lift
==> Setting Cognitiv active actions for user 0 ...
CognitivDemo> Cognitiv signature for user 0 UPDATED!
CognitivDemo> training_action 0 push
==> Setting Cognitiv training action for user 0 to "push" ...
CognitivDemo> training_start 0
==> Start Cognitiv training for user 0 ...
CognitivDemo> Cognitiv training for user 0 STARTED!
CognitivDemo> Cognitiv training for user 0 SUCCEEDED!
CognitivDemo> training_accept 0
==> Accepting Cognitiv training for user 0 ...
CognitivDemo> Cognitiv training for user 0 COMPLETED!
CognitivDemo> training_action 0 neutral
==> Setting Cognitiv training action for user 0 to "neutral"
CognitivDemo> training_start 0
==> Start Cognitiv training for user 0 ...
CognitivDemo> Cognitiv training for user 0 STARTED!
CognitivDemo> Cognitiv training for user 0 SUCCEEDED!
CognitivDemo> training_accept 0
==> Accepting Cognitiv training for user 0 ...
CognitivDemo> Cognitiv training for user 0 COMPLETED!
CognitivDemo>
BIBLIOGRAPHY
[1] N. Birbaumer et al. The thought translation device (ttd) for completely para-
lyzed patients. IEEE Transactions on Rehabilitation Engineering, 8(2):190–193,
June 2000.
[2] C.M. Bishop. Pattern recognition and machine learning. Information science
and statistics. Springer, 2006.
[3] Benedict Carey. Monkeys think, moving artificial arm as own. The New York
Times, May 29, 2008.
[4] Pierre Comon. Independent component analysis, a new concept? Signal Pro-
cess., 36:287–314, April 1994.
[6] Katie Drummond. Pentagon preps soldier telepathy push. Wired Magazine, 14,
2009.
[7] R.O. Duda, P.E. Hart, and D.G. Stork. Pattern classification. Pattern Classi-
fication and Scene Analysis: Pattern Classification. Wiley, 2001.
[9] Emotiv. Software Development Kit(SDK) User Manual, beta release 1.0 edition,
2011.
[10] G.E. Fabiani, D.J. McFarland, J.R. Wolpaw, and G. Pfurtscheller. Conversion
of EEG activity into cursor movement by a brain-computer interface (BCI). Neural
Systems and Rehabilitation Engineering, IEEE Transactions on, 12(3):331 –338,
sept. 2004.
[12] F. Galan et al. A brain-actuated wheelchair: Asynchronous and non-invasive
brain-computer interfaces for continuous control of robots. Clinical Neurophysiology, 119:2159–2169, 2008.
[13] D. Garrett, D.A. Peterson, C.W. Anderson, and M.H. Thaut. Comparison of lin-
ear, nonlinear, and feature selection methods for EEG signal classification. Neural
Systems and Rehabilitation Engineering, IEEE Transactions on, 11(2):141 –144,
june 2003.
[17] Philip R. Kennedy et al. Activity of single action potentials in monkey motor
cortex during long-term task learning. Brain Research, 760:251–254, June 1997.
[22] A.M. Martinez and A.C. Kak. Pca versus lda. Pattern Analysis and Machine
Intelligence, IEEE Transactions on, 23(2):228 –233, feb 2001.
[23] David Millett. Hans Berger: From psychic energy to the EEG. Perspectives in
Biology and Medicine, Johns Hopkins University Press, 44(4):522–542, 2001.
[24] Mohammad-Mahdi Moazzami. EEG pattern recognition, toward brain-
computer interface, 2011.
[27] K.G. Oweiss. Statistical Signal Processing for Neuroscience and Neurotechnol-
ogy. Academic Press. Elsevier Science & Technology, 2010.
[28] P. L. Nunez and R. Srinivasan. Electric Fields of the Brain: The Neurophysics of
EEG. Oxford University Press, 1981.
[29] D. Pyle. Data preparation for data mining. Number v. 1 in The Morgan
Kaufmann Series in Data Management Systems. Morgan Kaufmann Publishers,
1999.
[30] A. James Rowan. Primer of EEG. Elsevier Science, Philadelphia, PA, 2003.
[31] Alois Schlögl. The electroencephalogram and the adaptive autoregressive model:
theory and applications. Berichte aus der medizinischen Informatik und Bioin-
formatik. Shaker, 2000.
[33] P. Shenoy, K.J. Miller, J.G. Ojemann, and R.P.N. Rao. Generalized features
for electrocorticographic BCIs. Biomedical Engineering, IEEE Transactions on,
55(1):273 –280, jan. 2008.
[35] B. E. Swartz and E. S. Goldensohn. Timeline of the history of EEG and asso-
ciated fields. Electroencephalography and clinical Neurophysiology, 106(2):173–
176, February 1998.
[36] P.N. Tan, M. Steinbach, and V. Kumar. Introduction to data mining. Pearson
International Edition. Pearson Addison Wesley, 2006.
[37] Lucy Troup. Electroencephalogram (EEG) and event-related potentials (ERP),
January 2008. Presentation at the CSU Symposium on Imaging.
[41] Wikipedia. Parietal lobe — wikipedia, the free encyclopedia. [accessed 2-July-
2011].