
Biomedical Signal Processing and Control 68 (2021) 102648

Contents lists available at ScienceDirect

Biomedical Signal Processing and Control


journal homepage: www.elsevier.com/locate/bspc

EEG-based emotion recognition using tunable Q wavelet transform and rotation forest ensemble classifier

Abdulhamit Subasi a,b,*, Turker Tuncer c, Sengul Dogan c, Dahiru Tanko c, Unal Sakoglu d

a Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, Finland
b Department of Computer Science, College of Engineering, Effat University, Jeddah, 21478, Saudi Arabia
c Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey
d Computer Engineering, College of Science and Engineering, University of Houston – Clear Lake, Houston, TX, 77058, USA

Keywords: Electroencephalogram (EEG), Emotion recognition (ER), Tunable Q wavelet transform (TQWT), Machine learning, Rotation forest

ABSTRACT

Emotion recognition by artificial intelligence (AI) is a challenging task. A wide variety of research has been done, which demonstrated the utility of audio, imagery, and electroencephalography (EEG) data for automatic emotion recognition. This paper presents a new automated emotion recognition framework, which utilizes electroencephalography (EEG) signals. The proposed method is lightweight, and it consists of four major phases: a preprocessing phase, a feature extraction phase, a feature dimension reduction phase, and a classification phase. A discrete wavelet transform (DWT) based noise reduction method, multiscale principal component analysis (MSPCA), is utilized during the preprocessing phase, where a Symlets-4 filter is used for noise reduction. A tunable Q wavelet transform (TQWT) is utilized as the feature extractor. Six different statistical methods are used for dimension reduction. In the classification step, a rotation forest ensemble (RFE) classifier is utilized with different classification algorithms such as k-nearest neighbor (k-NN), support vector machine (SVM), artificial neural network (ANN), random forest (RF), and four different types of decision tree (DT) algorithms. The proposed framework achieves over 93 % classification accuracy with RFE + SVM. The results clearly show that the proposed TQWT and RFE based emotion recognition framework is an effective approach for emotion recognition using EEG signals.

1. Introduction

Emotions are among the most distinctive features of humans; they affect a person's behavior and actions [1,2]. Understanding and analyzing human emotions is an important part of human life. Recently, there has been increased interest in automatic emotion classification by machine learning and artificial intelligence, since this could be used in human-computer interfaces (HCI) with a variety of applications [3–5]. These studies have shown that successful understanding of human emotions can lead to successful interactions between humans and computers, and potentially artificial emotional intelligence could be implemented by computer systems [6,7].

In order for an artificial emotional intelligence system to be successful, the system needs to possess good knowledge about human emotional understanding and the relationship between affective expression and emotional expression [8]. Human-machine interfaces and collaboration exist in many domains such as health, therapy, and gaming, to name a few. Researchers are constantly trying to increase the flexibility and efficiency of the interaction between computers and humans, and they strive to achieve high levels of satisfaction among users. Therefore, HCI systems require the ability to achieve a thorough understanding of different human emotions and emotional expression. Human thoughts and emotions can be expressed through verbal or nonverbal expressions; therefore, HCI systems need to understand, discern and analyze nonverbal expressions of humans. Recently, HCI systems have shown promise in helping understand the emotional behavior of a person, which is called "emotional computing" [9].

Electroencephalography (EEG), which can measure neural activity in the brain with the use of contact electrodes placed on the scalp, has emerged as an important technology for HCI systems to utilize. EEG experiments have been used for decades to detect electrical signals from the brain cortex while participants undertake different tasks or

* Corresponding author at: Institute of Biomedicine, Faculty of Medicine, University of Turku, 20520, Turku, Finland.
E-mail addresses: [email protected], [email protected] (A. Subasi), [email protected] (T. Tuncer), [email protected] (S. Dogan),
[email protected] (U. Sakoglu).

https://doi.org/10.1016/j.bspc.2021.102648
Received 30 October 2020; Received in revised form 4 April 2021; Accepted 10 April 2021
Available online 16 April 2021
1746-8094/© 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Fig. 1. EEG based Emotion Recognition Framework for HCI.
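As a reading aid, the four-phase flow that Fig. 1 depicts can be summarized as a skeletal Python function. The phase names follow the text; every callable below is a placeholder for illustration, not the authors' implementation.

```python
import numpy as np

def recognize_emotion(eeg_segment, denoise, extract_features, reduce_dims, classify):
    """Four-phase flow of Fig. 1: denoise -> extract -> reduce -> classify.

    The four callables stand in for MSPCA denoising, TQWT feature
    extraction, statistical dimension reduction, and an RFE classifier.
    """
    cleaned = denoise(eeg_segment)        # phase 1: MSPCA noise removal
    features = extract_features(cleaned)  # phase 2: TQWT sub-band features
    reduced = reduce_dims(features)       # phase 3: statistical summaries
    return classify(reduced)              # phase 4: rotation forest ensemble

# Toy wiring with trivial stand-ins for each phase:
segment = np.random.randn(62, 200)  # 62 channels x 1 s at 200 Hz
label = recognize_emotion(segment,
                          denoise=lambda x: x,
                          extract_features=lambda x: x.ravel(),
                          reduce_dims=lambda f: f[:10],
                          classify=lambda f: "neutral")
print(label)  # -> neutral
```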

view different stimuli. EEG signals are usually highly varying and noisy voltage signals, hence the features extracted from EEG usually vary dramatically. A great advantage of the EEG, though, is its temporal resolution: EEG's temporal resolution is much finer/faster than the speed of emotional changes; therefore, EEG can potentially capture, track and discern between emotional changes. Due to the fuzzy boundaries between different emotions, EEG-based emotion recognition (ER) is still very difficult. This is why there has not been much research done on ER using ensemble classifiers. Within this article, we offer a short summary of the related work on emotional models, the role of movies in emotional induction, and different methods for EEG-based emotion classification. We utilize EEG signals and apply novel signal processing and analysis methods to evaluate the output from EEG data in classifying three emotion states. We use the TQWT technique to identify neural signs and steady trends for diverse emotions and assess the stability of our emotion recognition models. Our approach is similar to a recently developed method which combined feature extraction methods to recognize six basic emotional states, including irritation, objection, terror, sadness, happiness, and surprise [10].

A framework of the emotion recognition system is presented in Fig. 1, which establishes the methodological framework of this study. In the methodology presented in Fig. 1, multiscale principal component analysis (MSPCA) is utilized for removing the various types of artifacts and disturbances after acquisition and segmentation of the signal [11]. In the second stage, useful features are extracted using TQWT, and they are used to train the classifier. Subsequently, dimension reduction is employed to remove unnecessary features in order to accomplish better recognition performance [12]. The framework, using ensemble classifiers, can consequently achieve improved classification accuracy.

The research motivation of this study is threefold: employment of MSPCA for noise removal, TQWT [13] for feature extraction, and employment of the rotation forest ensemble classifier for classification. Even though these methods are not novel on their own, their combination represents a novel framework. Current studies are drifting towards developing ensemble classifiers [14]. For example, Tsai [15] identified the reliability of ensemble classifiers built on a mixture of different classifiers to achieve higher performance by removing each "oversight" in a single classifier [16]. Diverse harmonized models are used to build homogeneous classifier ensembles in biomedical signal classification, and the learning algorithm utilizing the harmonized model achieved a higher classification accuracy [17]. Since ensemble techniques diminish the effect of variation in the signal by averaging classifier outputs [18], there are numerous studies on optimizing the recognition rate of emotion recognition systems in terms of classification accuracy and training time. Alickovic and Subasi [19] used an ensemble model to improve the classification performance utilizing SVM-based ensembles. Nevertheless, these algorithms achieved minor improvements [20]. Subasi et al. [21] suggested a signal recognition system using bagging ensemble models with diverse classification models in order to accomplish better recognition accuracy. Subasi et al. [22] also employed an Adaboost ensemble classifier in a healthcare application.

Yang et al. [23] suggested a hierarchical network structure with multiple network branches to differentiate three human affective states or emotions: 1) positive; 2) negative; and 3) neutral. Every branch inside the network, which consists of a considerably large number of hidden nodes, may be practical as an autonomous hidden layer for representing features. The experimental findings from using two separate EEG datasets indicate that a positive outcome is achieved by employment of both single and multiple modalities of the proposed technique. Y.-J. Liu et al. [24] developed a diverse collection of 16 emotional film clips, chosen from over 1000 film excerpts. Based on the emotional classes induced by these film clips, they suggested a real-time video-induced emotion recognition system to recognize the emotional states of a person through brain wave analysis. Thirty subjects participated and watched 16 structured film clips characterizing emotional interactions in real life and emphasizing seven distinct emotions and neutrality. These findings show the advantage in terms of classification accuracy over current high-performing, real-time ER systems from EEG signals, and the potential to identify related affective states close to the 2-dimensional valence-arousal domain. Various other feature selection and feature reduction methods have been proposed in the literature [25–30]. Similar studies presented in this field are summarized in Table 1 below.

Our study employs a rotation forest ensemble classifier for emotion classification. To achieve the stated aims, a new emotion classification framework which relies on the rotation forest ensemble classifier is introduced, and it aims at improving the ER accuracy. Hence, the contribution of this research to studies on emotion recognition is employing the TQWT feature extraction technique combined with the rotation forest ensemble (RFE) classifier in emotion classification. Prior to this work, RFE models have been rarely used for EEG signal-based ER research. Moreover, usage of the TQWT-based feature extraction technique improved the accuracy of the suggested model. In


Table 1
Review of existing emotion recognition techniques from physiological signals.

Ref. | Problem | Method | Dataset | Evaluation criteria
[31] | Emotion recognition from EEG | Fast Fourier transform, Bayes' theorem and supervised learning | DEAP [32] | Accuracy
[9] | Emotion recognition from multimodal physiological signals | Ensemble deep learning model | DEAP [32] multimedia database | Accuracy, boxplot analysis
[33] | Emotion recognition from EEG | Differential entropy, rational asymmetry, power spectral density, differential asymmetry, asymmetry, differential causality | DEAP [32] and SEED [34] datasets | Accuracy
[35] | Emotional feature extraction | Dual-tree complex wavelet packet transform, SVD, SVM, F-ratio | DEAP [32] | Accuracy
[36] | EEG signal classification | Wavelet decomposition, PCA and SVM | Collected dataset | Accuracy
[37] | EEG emotion recognition | The recalibrated speech affective space model | Collected dataset | Accuracy
[38] | Emotion classification and recognition | The wavelet transform | DEAP [32] emotion database | Accuracy, Euclidean distances
[39] | Emotional state recognition | Multivariate synchrosqueezing transform | DEAP [32] | Accuracy
[40] | Emotion recognition | The deep belief network | Collected dataset | Accuracy
[41] | EEG signal classification | Circular back propagation and deep Kohonen neural networks | DEAP [32] | Accuracy, sensitivity, specificity
[42] | EEG-based emotion classification | Machine learning | DEAP [32] | Accuracy
[43] | Human emotion recognition | Deep belief network, fine Gaussian SVM | DEAP [32] | Accuracy, confusion matrix
[44] | Emotion and personality recognition | ASCERTAIN framework | Collected dataset | Accuracy

this paper, the RFE classifier is proposed to achieve emotional EEG signal classification with high accuracy. RFE classifiers can better acquire the essential features of EEG signals. The novelties of the presented work are given below.

- A new emotion classification/recognition model which combines MSPCA denoising and TQWT-based feature extraction with a rotation forest ensemble classifier is presented to provide high classification performance from EEG data.
- This model is both highly accurate and computationally simple, when compared with various computationally demanding deep-learning based models which are used for emotion recognition from EEG data.

The paper is structured as follows: In Section 2, we elaborate on the details of the dataset, our classification model architecture and the overall framework. The results of the experiment are given in Section 3. Section 4 presents the conclusions and discussions.

2. Materials and methods

2.1. Participants and dataset

The publicly available SEED dataset [34] is used in this study. Fifteen participants/subjects (seven males and eight females) with a mean age of 23.3 years and a standard deviation (SD) of 2.4 participated in the experiments. The EEG dataset is composed of signals acquired from the subjects while they were viewing emotional video tapes. To explore neural signs and notable reactions across different individuals and different EEG sessions, participants were asked to carry out the trials in three sessions each. This resulted in a total of 45 experiment sessions in this dataset. The time interval between sessions for each subject was a week or more. The facial expressions of the subjects were recorded concurrently during the EEG recording sessions ("facial videos"). EEG signals were acquired with an ESI NeuroScan System at a sampling frequency of 1000 Hz from a 62-channel active AgCl electrode cap in accordance with the standard 10–20 system. Selected emotional video clips were utilized as stimuli in the experiments for negative, positive and neutral emotions. The duration of every video clip is around four minutes. The EEG data were downsampled to 200 Hz. A lowpass frequency filter of 0–75 Hz was used [23,33,45]. Sample emotional EEG signals are given in Fig. 2.

2.2. Signal denoising with multi-scale PCA

PCA combines the variables as a linear weighted sum of transforms. The directions on the hyperplane which give the highest achievable residual variance in the studied instances are characterized by the principal components, whilst keeping orthonormality. Multi-scale PCA (MSPCA) is a combination of principal component analysis and wavelets, and it eliminates the cross-correlation among instances [19,46]. In the implementation, the Symlets-4 wavelet was utilized as the primary wavelet with a 5-level decomposition.

2.3. Feature extraction with tunable Q wavelet transform (TQWT)

Reducing the input parameters of a classification algorithm, i.e. the number of features, to a number less than the number of samples is important for a classification algorithm to perform successfully. For example, in electromyography (EMG), an electrical measurement technique similar to EEG, a small number of carefully extracted features can help diagnose neuromuscular disorders. Wavelets are among the widely used techniques to reduce the number of features in EEG and EMG. Using the TQWT, which decomposes the signal into a series of simple functions, namely wavelets, we can obtain a signal decomposition with high time resolution. The wavelets are computed from a single basis function ψ by expansion and translations of the basis function [47]. In general, the continuous wavelet transform (CWT) [48,49] for a continuous signal x(t) is expressed as

CWTx(τ, a) = (1/√a) ∫−∞..∞ x(t) ψ((t − τ)/a) dt,   (1)

where ψ(t) is the primary wavelet function, a is the scale factor which dilates the wavelet function, and τ is the translation variable which shifts the wavelet function across x(t) [48–50].

The TQWT is an important instrument for signal decomposition and analysis. The TQWT has three main tunable parameters, namely r, Q, and j: Q represents the Q-factor, r represents the oversampling rate, and j represents the number of decomposition levels. The number of wavelet oscillations is adjusted by Q, whereas r controls the unnecessary ringing to define the wavelet's temporal localization while conserving its shape [13,51]. Theoretically, the appropriate value of the wavelet transform's Q-factor depends on the anticipated oscillatory behavior of the signal. Hence, the wavelet transform must possess a comparatively high Q-factor while decomposing and studying oscillatory signals such as speech, electrocardiography (ECG), EMG, EEG


Fig. 2. Emotional EEG signals.
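The MSPCA step of Section 2.2 (wavelet decomposition, PCA per scale, reconstruction) can be sketched as below. This is a minimal illustration assuming PyWavelets and scikit-learn are available; the Symlets-4 wavelet and 5-level decomposition follow the text, while the 95 % variance-retention threshold is an assumption of this sketch, not a value reported by the authors.

```python
# Minimal multiscale-PCA (MSPCA) denoising sketch: decompose each channel,
# run PCA across channels at every scale, keep the dominant components,
# and reconstruct. Assumes PyWavelets (pywt) and scikit-learn.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def mspca_denoise(signals, wavelet="sym4", level=5, keep=0.95):
    """signals: (n_channels, n_samples) array. Returns the denoised array."""
    # Decompose every channel; coeffs[i] lists [cA5, cD5, ..., cD1].
    coeffs = [pywt.wavedec(ch, wavelet, level=level) for ch in signals]
    denoised = []
    for band in range(level + 1):
        # Stack the same scale across channels: (n_channels, n_coeffs).
        X = np.vstack([c[band] for c in coeffs])
        # PCA across channels at this scale; retain 95 % of the variance
        # (assumed threshold) and project back.
        pca = PCA(n_components=keep, svd_solver="full")
        denoised.append(pca.inverse_transform(pca.fit_transform(X)))
    # Rebuild each channel from its retained coefficients.
    return np.vstack([
        pywt.waverec([denoised[b][i] for b in range(level + 1)], wavelet)
        for i in range(signals.shape[0])
    ])
```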

etc. Most of the time, wavelet transforms offer little ability to tune the Q-factor and are therefore limited to certain applications; the TQWT is proposed to overcome this problem. Exact-reconstruction oversampled filter banks with real scaling factors are utilized to implement the TQWT. According to Selesnick [13], an effective implementation of the TQWT can be accomplished with a good over-sampling ratio. The TQWT has a significant similarity with the rational-dilation wavelet transform (RADWT) [52]. Similar to the RADWT, the TQWT is discrete-time and moderately over-sampled to keep the perfect reconstruction capability. As compared to the RADWT, the TQWT is computationally more efficient because of its construction, which is based on radix-2 fast Fourier transforms (FFTs). Furthermore, it is easily configurable as a function of the intended application, since the configuration can be done by tuning its three parameters Q, r and j [13].

2.4. Dimension reduction

One method of reducing the EEG data dimension is to use the 1st, 2nd, 3rd and 4th order statistics of the sub-bands of the signal decomposition, with the feature set thereby being minimized. Specifically, the following six statistical features were computed:

1. Mean absolute value (MAV) of each sub-band,
2. Average power of each sub-band,
3. Standard deviation of each sub-band,
4. Absolute mean values of adjacent sub-bands' ratios,
5. Skewness of each sub-band,
6. Kurtosis of each sub-band.

2.5. Rotation forest ensemble (RFE)

One ensemble learning approach that has the basic objective of constructing diverse but precise classifiers is the rotation forest ensemble (RFE). A bagging approach alongside a random sub-space approach is combined with PCA to build an ensemble of decision trees. The input variables are spread in a random manner into k disjoint subsets in each iteration. To generate a linear combination of the subset variables, which is a rotation of the principal axes, PCA is


Table 2
EEG signal classification accuracy for three different emotional states without MSPCA Denoising.
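The six sub-band statistics listed in Section 2.4 map directly onto NumPy/SciPy one-liners. A minimal sketch follows; the sub-band list would come from the TQWT decomposition, which is not shown here.

```python
# Six statistical summaries of Section 2.4, computed per sub-band.
import numpy as np
from scipy.stats import skew, kurtosis

def subband_features(subbands):
    """subbands: list of 1-D arrays (one per sub-band). Returns a flat vector."""
    mav = [np.mean(np.abs(b)) for b in subbands]        # 1. mean absolute value
    power = [np.mean(b ** 2) for b in subbands]         # 2. average power
    std = [np.std(b) for b in subbands]                 # 3. standard deviation
    ratios = [mav[i] / mav[i + 1]                       # 4. adjacent MAV ratios
              for i in range(len(mav) - 1)]
    skews = [skew(b) for b in subbands]                 # 5. skewness
    kurts = [kurtosis(b) for b in subbands]             # 6. kurtosis
    return np.concatenate([mav, power, std, ratios, skews, kurts])
```

For n sub-bands this yields 6n − 1 features (the ratio statistic pairs adjacent bands, so it contributes one fewer value).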

utilized for every subset in turn. To evaluate the values for the extracted features, k sets of principal components (PC) are utilized; at each iteration they provide the input to the tree learner. Owing to the preservation of all the components acquired on each subclass, the number of generated attributes is as many as the original ones. PCA is applied to the training examples from a randomly selected sub-set of class values in order to avoid the creation of similar coefficients as a result of selecting the same feature sub-set in different iterations, although the values of the extracted features sent into the decision tree learning model are determined from all the examples in the training set. A small sample of the dataset can be drawn in every iteration before applying the PC transformations, in order to further increase diversity. Experiments suggest that rotation forest can offer output comparable to random forests with a smaller number of trees. A recent study of diversity, measured by the kappa statistic employed to evaluate agreement against error for pairs of ensemble members, indicates a marginal improvement in diverseness and a reduction in error in rotation forests compared to bagging; this seems to translate into considerably improved results for the ensemble in general [53].

We denote the training data matrix with X, and we assume L classifiers in the classifier ensemble D1, ..., DL. F represents the feature set. The number L should be set beforehand, as in most ensemble methods. We implement the following procedure in order to establish the training examples for Di:

1. Distribute F into K arbitrary disjoint subsets (K is set at the beginning) in order to obtain the maximum chance for the highest diversity. For a given number of features N, each feature subset includes M = N/K features (assuming K to be a factor of N for the sake of simplicity; if not, every feature subset includes roughly M = N/K features each).
2. Let Fi,j denote the jth subset of features for the training set of classifier Di. Pick arbitrarily a non-empty subset of classes for each subset and draw a bootstrap sample of objects covering 75 % of the dataset. Perform PCA using only the M features in Fi,j and the chosen subset of training data X. Save the coefficients of the principal components, each of size M×1. There is a possibility of some eigenvalues being 0 (zero), so we might not get all M vectors; therefore, Mj ≤ M. We run PCA on a subset of classes so as to refrain from identical coefficients in the case that the exact same feature subset is used for various distinct classifiers.
3. Arrange the extracted coefficient vectors in a sparse rotation matrix Ri. To compute the training examples for classifier Di, the columns of Ri are reorganized to match the original order of the features. Designate the reorganized rotation matrix, which is of size N×n [54].

3. Results

3.1. Performance evaluation

For performance evaluation, we use basic performance measures such as overall accuracy, F-measure, kappa statistic (KS) and area under the ROC curve (AUC). True Negative (TN) and True Positive (TP) are the correct predictions, while a False Positive (FP), just like the name implies, is falsely predicted as positive (whereas it is actually negative), and a False Negative (FN) is falsely classified as negative (whereas it is actually positive) [53]. The ROC curve is a graphical tool for evaluating classifier efficiency. ROC curves reflect a classifier's output without taking into account the costs of error or the class distribution. In the ROC curve, the TP rate is denoted by the vertical axis while the FP rate is denoted by the horizontal axis. If the class distributions and costs are not known, the area under the ROC curve is convenient, and one model is selected to denote all cases. The KS is an evaluation metric which accounts for chance-level agreement by subtracting it from the observed performance of the classifier, expressing the result as a proportion. Therefore, KS reflects the relationship between the observed classes and the expected classes, while adjusting for a relationship that happens by chance. Nevertheless, factors like the base success rate are not taken into account [53]. Cohen [55] described the kappa statistic as an agreement index as follows:

K = (P0 − Pe) / (1 − Pe)   (2)

where P0 is the observed agreement, defined as

P0 = (TN + TP) / (TN + TP + FP + FN)   (3)

The probability of random agreement is measured by Pe [10]. The total random agreement probability is the probability of agreeing on either "Yes" or "No", i.e.:

Pe = PYES + PNO   (4)

Table 3
F-measure, ROC area (AUC) and kappa statistic for EEG signal classification without MSPCA Denoising.
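For illustration, steps 1–3 of Section 2.5 can be sketched with scikit-learn building blocks. This mirrors the rotation-forest idea (feature-subset split, 75 % bootstrap, per-subset PCA, a tree per rotated view) rather than reproducing the WEKA implementation the authors used; the values of K and the tree count below are arbitrary choices of this sketch.

```python
# Illustrative rotation-forest sketch: per tree, split the features into K
# subsets, fit PCA on a 75 % bootstrap restricted to each subset, assemble
# the loadings into a sparse rotation matrix R, and train a tree on X @ R.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def fit_rotation_forest(X, y, n_trees=10, K=3, rng=np.random.default_rng(0)):
    n, d = X.shape
    ensemble = []
    for _ in range(n_trees):
        idx = rng.permutation(d)           # step 1: random feature split
        R = np.zeros((d, d))
        for subset in np.array_split(idx, K):
            # Step 2: bootstrap 75 % of the rows, PCA on this feature subset.
            rows = rng.choice(n, size=int(0.75 * n), replace=True)
            pca = PCA().fit(X[np.ix_(rows, subset)])
            # Step 3: place the component loadings into the rotation matrix.
            R[np.ix_(subset, subset)] = pca.components_.T
        tree = DecisionTreeClassifier(random_state=0).fit(X @ R, y)
        ensemble.append((R, tree))
    return ensemble

def predict_rotation_forest(ensemble, X):
    votes = np.stack([tree.predict(X @ R) for R, tree in ensemble])
    # Majority vote across the trees (integer class labels assumed).
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```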


Table 4
EEG signal classification accuracy for three different emotional states with MSPCA Denoising.

Table 5
F-measure, ROC area (AUC) and kappa statistic for EEG signal classification with MSPCA Denoising.

where

PYES = ((FP + TP) / (TN + TP + FP + FN)) × ((FN + TP) / (TN + TP + FP + FN))   (5)

PNO = ((FN + TN) / (TN + TP + FP + FN)) × ((FP + TN) / (TN + TP + FP + FN))   (6)

3.2. Experimental results

Table 2 presents classification accuracy results using eight different classifiers, i.e., artificial neural network (ANN), k-NN, SVM, RF, C4.5, CART, REP tree, and LAD tree. The left half of the table presents single-classifier results, whereas the right half presents the RFE results. Table 3 presents the other aforementioned classification performance results, i.e., F-measure, ROC area, and kappa values. The classifiers were run several times to achieve the highest performance with a trial-and-error method. The best performance is achieved with RFE + SVM. In the implementation of the rotation forest ensemble classifier, the default values are used in WEKA.1 In the implementation of the SVM, the PUK kernel with a C value of 100 is used. As shown from the results in Tables 2–5, three main findings can be observed: a) the model with the highest performance (accuracy) uses the SVM, and the one with the lowest performance uses the LAD tree; b) the RFE always outperforms the single classification method; c) the MSPCA denoising increased the performance of the classifiers.

3.3. Discussion

This research presents a novel electroencephalography (EEG) based ER framework. The suggested approach consists of four major stages, namely MSPCA denoising, TQWT feature extraction, dimension reduction and classification with RFE. Eight different conventional classifier models are used in the classification phase and a comprehensive benchmark is obtained. According to the results in Tables 2–5, the classifier with the best result is the SVM with the RFE ensemble technique and the worst one is the LAD tree. To systematically assess the effectiveness of the suggested technique, the emotion recognition accuracy, F-measure, AUC and kappa statistics are used. To show the success of the suggested technique, different well-known methods are employed for comparison purposes. This paper is one of the first applications which utilizes a combined TQWT and RFE classifier framework for EEG-based emotion recognition. There are several studies which used various feature extraction and classification approaches. The accuracy of the suggested approach is also compared with seven widely used state-of-the-art methods in Table 6. It can be seen from the table that the suggested approach in this study outperforms the previous studies for EEG-based emotion recognition.

As shown in Table 6, the proposed framework outperforms the others in terms of emotion recognition accuracy. These results clearly show the effectiveness of the proposed framework in EEG-based emotion recognition when compared to other existing methods. Also, the approach achieved an approximately 18 % higher recognition rate than the deep learning method. The merits of the method are given below.

• A novel lightweight approach is used, because all components of the proposed framework have a basic mathematical background.
• The proposed approach can easily be employed to tackle signal processing problems because the proposed method is very simple.
• The proposed feature extraction method is very effective, because high classification rates are achieved by using the suggested TQWT-based feature extraction method. This clearly shows that the extracted features are distinctive.
• A highly accurate EEG-based emotion recognition framework is proposed. The comparisons also show the success of the proposed approach.

Demerits of the newly proposed method are:

• The method should be tested on bigger and more heterogeneous datasets.
• In this dataset, we classified positive, negative and neutral emotions. Other emotions, for instance surprise, anger, sadness, disgust, etc., can be used to test the proposed TQWT-based method.
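The evaluation metrics of Section 3.1 follow directly from Eqs. (2)–(6); a minimal sketch for binary confusion-matrix counts:

```python
# Cohen's kappa from TP/TN/FP/FN counts, translating Eqs. (2)-(6) directly.
def kappa_from_counts(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    p0 = (tn + tp) / total                              # Eq. (3): observed agreement
    p_yes = ((fp + tp) / total) * ((fn + tp) / total)   # Eq. (5)
    p_no = ((fn + tn) / total) * ((fp + tn) / total)    # Eq. (6)
    pe = p_yes + p_no                                   # Eq. (4): chance agreement
    return (p0 - pe) / (1 - pe)                         # Eq. (2)

# Perfect agreement gives kappa = 1; chance-level agreement gives 0.
print(kappa_from_counts(tp=50, tn=40, fp=0, fn=0))              # -> 1.0
print(round(kappa_from_counts(tp=25, tn=25, fp=25, fn=25), 3))  # -> 0.0
```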

1 https://www.cs.waikato.ac.nz/ml/weka/


Table 6
Comparison of the classification accuracies achieved by previous studies and the proposed study.

Study | Feature extraction method | Classifier | Classification accuracy (%)
Yan et al. [56] | Sparse learning | SVM | 69.00
Li et al. [57] | Convolutional and LSTM recurrent neural networks | SVM | 75.21
Wu et al. [58] | Fast Fourier transform | SVM | 76.34
Chai et al. [59] | Adaptive subspace feature matching | SVM | 81.09
Alazrai et al. [60] | Quadratic time-frequency distribution | SVM | 83.10
Zhao et al. [61] | Support vector machine | SVM | 86.11
Liu et al. [24] | Short-time Fourier transform | SVM | 92.26
The proposed framework | MSPCA + TQWT | Rotation forest with SVM | 93.10

4. Conclusions

This study presents a novel TQWT and rotation forest ensemble classifier-based emotion recognition framework using EEG signals. The proposed framework consists of MSPCA-based denoising, TQWT-based feature extraction, dimension reduction with statistical values, and classification using eight conventional classifiers widely considered as benchmarks. The proposed TQWT-based framework achieved 93.1 % classification accuracy using the RFE + SVM ensemble classifier. In summary, this paper presents a novel and highly accurate EEG signal processing method for emotion recognition. The proposed technique is lightweight and its mathematical models are simple. Since it is automated, no meta-heuristic optimization method is involved to increase classification accuracy.

In future studies, our novel TQWT-based RFE emotion recognition framework can be utilized to analyze other EEG datasets from different experiments, as well as other biomedical signals such as ECG and EMG. Testing and validating the proposed method on bigger and heterogeneous datasets is of interest. In this study, we classified only positive, negative and neutral emotions; testing the proposed TQWT-based RFE emotion recognition framework on a wider variety of emotions, such as surprise, anger, sadness and disgust, would also be of great interest.

Funding

This work was supported by Effat University with the Decision Number of UC#7/28 Feb. 2018/10.2-44i (to Prof. Subasi), Jeddah, Saudi Arabia.

CRediT authorship contribution statement

All algorithm codes were written and run by Abdulhamit Subasi. Part of the Introduction and Results were written by Turker Tuncer. Part of the Introduction and Conclusion were written by Sengul Dogan. Part of the Introduction, Methods and Results were written by Dahiru Tanko. Part of the Introduction, Methods, Results and Discussion were written by Abdulhamit Subasi. The whole manuscript was revised by Abdulhamit Subasi and Unal Sakoglu.

Declaration of Competing Interest

The authors declare no conflict of interest.

Appendix A. Supplementary data

Supplementary material related to this article can be found, in the online version, at https://doi.org/10.1016/j.bspc.2021.102648.

References

[1] P.C. Petrantonakis, L.J. Hadjileontiadis, Emotion recognition from EEG using higher order crossings, IEEE Trans. Inf. Technol. Biomed. 14 (2009) 186–197.
[2] M. Murugappan, R. Nagarajan, S. Yaacob, Comparison of different wavelet features from EEG signals for classifying human emotions, 2009 IEEE Symposium on Industrial Electronics & Applications: IEEE (2009) 836–841.
[3] C. Qing, R. Qiao, X. Xu, Y. Cheng, Interpretable emotion recognition using EEG signals, IEEE Access 7 (2019) 94160–94170.
[4] F. Afza, M.A. Khan, M. Sharif, S. Kadry, G. Manogaran, T. Saba, et al., A framework of human action recognition using length control features fusion and weighted entropy-variances based feature selection, Image Vis. Comput. 106 (2021) 104090.
[5] M.A. Khan, S. Kadry, P. Parwekar, R. Damaševičius, A. Mehmood, J.A. Khan, et al., Human gait analysis for osteoarthritis prediction: a framework of deep learning and kernel extreme learning machine, Complex Intell. Syst. (2021).
[6] Y. Dasdemir, E. Yildirim, S. Yildirim, Analysis of functional brain connections for positive–negative emotions using phase locking value, Cogn. Neurodyn. 11 (2017) 487–500.
[7] A. Goshvarpour, A. Goshvarpour, EEG spectral powers and source localization in depressing, sad, and fun music videos focusing on gender differences, Cogn. Neurodyn. 13 (2019) 161–173.
[8] S.N. Daimi, G. Saha, Classification of emotions induced by music videos and correlation with participants’ rating, Expert Syst. Appl. 41 (2014) 6057–6065.
[9] Z. Yin, M. Zhao, Y. Wang, J. Yang, J. Zhang, Recognition of emotions using multimodal physiological signals and an ensemble deep learning model, Comput. Methods Programs Biomed. 140 (2017) 93–110.
[10] Y. Zhang, X. Ji, S. Zhang, An approach to EEG-based emotion recognition using combined feature extraction method, Neurosci. Lett. 633 (2016) 152–157.
[11] R. Yuvaraj, M. Murugappan, Hemispheric asymmetry non-linear analysis of EEG during emotional responses from idiopathic Parkinson’s disease patients, Cogn. Neurodyn. 10 (2016) 225–234.
[12] A. Ghaemi, E. Rashedi, A.M. Pourrahimi, M. Kamandar, F. Rahdari, Automatic channel selection in EEG signals for classification of left or right hand movement in Brain Computer Interfaces using improved binary gravitation search algorithm, Biomed. Signal Process. Control 33 (2017) 109–118.
[13] I.W. Selesnick, Wavelet transform with tunable Q-factor, IEEE Trans. Signal Process. 59 (2011) 3560–3575.
[14] L. da Silva-Sauer, L. Valero-Aguayo, A. de la Torre-Luque, R. Ron-Angevin, S. Varona-Moya, Concentration on performance with P300-based BCI systems: a matter of interface features, Appl. Ergon. 52 (2016) 325–332.
[15] C.-F. Tsai, Combining cluster analysis with classifier ensembles to predict financial distress, Inf. Fusion 16 (2014) 46–58.
[16] A. Gicić, A. Subasi, Credit scoring for a microcredit data set using the synthetic minority oversampling technique and ensemble classifiers, Expert. Syst. 36 (2019), e12363.
[17] B. Blankertz, K.-R. Muller, D.J. Krusienski, G. Schalk, J.R. Wolpaw, A. Schlogl, et al., The BCI competition III: validating alternative approaches to actual BCI problems, IEEE Trans. Neural Syst. Rehabil. Eng. 14 (2006) 153–159.
[18] A. Rakotomamonjy, V. Guigue, BCI competition III: dataset II-ensemble of SVMs for BCI P300 speller, IEEE Trans. Biomed. Eng. 55 (2008) 1147–1154.
[19] E. Alickovic, A. Subasi, Effect of multiscale PCA de-noising in ECG beat classification for diagnosis of cardiovascular diseases, Circ. Syst. Signal Process. 34 (2015) 513–533.
[20] Y.-R. Lee, H.-N. Kim, A data partitioning method for increasing ensemble diversity of an eSVM-based P300 speller, Biomed. Signal Process. Control 39 (2018) 53–63.
[21] A. Subasi, E. Yaman, Y. Somaily, H.A. Alynabawi, F. Alobaidi, S. Altheibani, Automated EMG signal classification for diagnosis of neuromuscular disorders using DWT and bagging, Procedia Comput. Sci. 140 (2018) 230–237.
[22] A. Subasi, D.H. Dammas, R.D. Alghamdi, R.A. Makawi, E.A. Albiety, T. Brahimi, et al., Sensor based human activity recognition using adaboost ensemble classifier, Procedia Comput. Sci. 140 (2018) 104–111.
[23] Y. Yang, Q.J. Wu, W.-L. Zheng, B.-L. Lu, EEG-based emotion recognition using hierarchical network with subnetwork nodes, IEEE Trans. Cogn. Dev. Syst. 10 (2017) 408–419.
[24] Y.-J. Liu, M. Yu, G. Zhao, J. Song, Y. Ge, Y. Shi, Real-time movie-induced discrete emotion recognition from EEG signals, IEEE Trans. Affect. Comput. 9 (2017) 550–562.


[25] F. Afza, M.A. Khan, M. Sharif, T. Saba, A. Rehman, M.Y. Javed, Skin lesion classification: an optimized framework of optimal color features selection, 2020 2nd International Conference on Computer and Information Sciences (ICCIS): IEEE (2020) 1–6.
[26] M.A. Khan, M.S. Sarfraz, M. Alhaisoni, A.A. Albesher, S. Wang, I. Ashraf, StomachNet: optimal deep learning features fusion for stomach abnormalities classification, IEEE Access 8 (2020) 197969–197981.
[27] A. Rehman, M.A. Khan, T. Saba, Z. Mehmood, U. Tariq, N. Ayesha, Microscopic brain tumor detection and classification using 3D CNN and feature selection architecture, Microsc. Res. Tech. 84 (2021) 133–149.
[28] M.A. Khan, M. Qasim, H.M.J. Lodhi, M. Nazir, K. Javed, S. Rubab, et al., Automated design for recognition of blood cells diseases from hematopathology using classical features selection and ELM, Microsc. Res. Tech. 84 (2021) 202–216.
[29] H. Arshad, M.A. Khan, M.I. Sharif, M. Yasmin, J.M.R. Tavares, Y.D. Zhang, et al., A multilevel paradigm for deep convolutional neural network features selection with an application to human gait recognition, Expert. Syst. (2020), e12541.
[30] M.A. Khan, Y.-D. Zhang, S.A. Khan, M. Attique, A. Rehman, S. Seo, A resource conscious human action recognition framework using 26-layered deep convolutional neural network, Multimed. Tools Appl. (2020) 1–23.
[31] H.J. Yoon, S.Y. Chung, EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm, Comput. Biol. Med. 43 (2013) 2230–2237.
[32] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, et al., Deap: a database for emotion analysis; using physiological signals, IEEE Trans. Affect. Comput. 3 (2011) 18–31.
[33] W.-L. Zheng, J.-Y. Zhu, B.-L. Lu, Identifying stable patterns over time for emotion recognition from EEG, IEEE Trans. Affect. Comput. (2017).
[34] http://bcmi.sjtu.edu.cn/-seed/.
[35] D.S. Naser, G. Saha, Recognition of emotions induced by music videos using DT-CWPT, 2013 Indian Conference on Medical Informatics and Telemedicine (ICMIT): IEEE (2013) 53–57.
[36] D. Iacoviello, A. Petracca, M. Spezialetti, G. Placidi, A real-time classification algorithm for EEG-based BCI driven by self-induced emotions, Comput. Methods Programs Biomed. 122 (2015) 293–303.
[37] M. Othman, A. Wahab, I. Karim, M.A. Dzulkifli, I.F.T. Alshaikli, EEG emotion recognition based on the dimensional models of emotions, Procedia-Social Behav. Sci. 97 (2013) 30–37.
[38] G.K. Verma, U.S. Tiwary, Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals, NeuroImage 102 (2014) 162–172.
[39] A. Mert, A. Akan, Emotion recognition based on time–frequency distribution of EEG signals using multivariate synchrosqueezing transform, Digit. Signal Process. 81 (2018) 106–115.
[40] S.-K. Kim, H.-B. Kang, An analysis of smartphone overuse recognition in terms of emotions using brainwaves and deep learning, Neurocomputing 275 (2018) 1393–1406.
[41] D.J. Hemanth, J. Anitha, Brain signal based human emotion analysis by circular back propagation and Deep Kohonen Neural Networks, Comput. Electr. Eng. 68 (2018) 170–180.
[42] D.D. Chakladar, S. Chakraborty, EEG based emotion classification using “Correlation based Subset Selection”, Biol. Inspired Cogn. Archit. 24 (2018) 98–106.
[43] M.M. Hassan, M.G.R. Alam, M.Z. Uddin, S. Huda, A. Almogren, G. Fortino, Human emotion recognition using deep belief network architecture, Inf. Fusion 51 (2019) 10–18.
[44] R. Subramanian, J. Wache, M.K. Abadi, R.L. Vieriu, S. Winkler, N. Sebe, ASCERTAIN: Emotion and personality recognition using commercial sensors, IEEE Trans. Affect. Comput. 9 (2016) 147–160.
[45] W.-L. Zheng, B.-L. Lu, Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks, IEEE Trans. Auton. Ment. Dev. 7 (2015) 162–175.
[46] B.R. Bakshi, Multiscale PCA with application to multivariate statistical process monitoring, AIChE J. 44 (1998) 1596–1610.
[47] M. Vetterli, C. Herley, Wavelets and filter banks: theory and design, IEEE Trans. Signal Process. 40 (1992) 2207–2232.
[48] I. Daubechies, The wavelet transform, time-frequency localization and signal analysis, IEEE Trans. Inf. Theory 36 (1990) 961–1005.
[49] O. Rioul, M. Vetterli, Wavelets and signal processing, IEEE Signal Process. Mag. 8 (1991) 14–38.
[50] N.V. Thakor, B. Gramatikov, D. Sherman, Wavelet (time-scale) analysis in biomedical signal processing, Medical Devices and Systems, CRC Press, 2006, pp. 113–138.
[51] S. Patidar, R.B. Pachori, Classification of cardiac sound signals using constrained tunable-Q wavelet transform, Expert Syst. Appl. 41 (2014) 7161–7170.
[52] I. Bayram, I.W. Selesnick, Frequency-domain design of overcomplete rational-dilation wavelet transforms, IEEE Trans. Signal Process. 57 (2009) 2957–2972.
[53] I.H. Witten, E. Frank, M.A. Hall, C.J. Pal, Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, 2016.
[54] J.J. Rodriguez, L.I. Kuncheva, C.J. Alonso, Rotation forest: a new classifier ensemble method, IEEE Trans. Pattern Anal. Mach. Intell. 28 (2006) 1619–1630.
[55] J. Cohen, A coefficient of agreement for nominal scales, Educ. Psychol. Meas. 20 (1960) 37–46.
[56] Y. Yan, C. Li, S. Meng, Emotion recognition based on sparse learning feature selection method for social communication, Signal Image Video Process. (2019) 1–5.
[57] Y. Li, J. Huang, H. Zhou, N. Zhong, Human emotion recognition with electroencephalographic multidimensional features by hybrid deep neural networks, Appl. Sci. 7 (2017) 1060.
[58] S. Wu, X. Xu, L. Shu, B. Hu, Estimation of valence of emotion using two frontal EEG channels, 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM): IEEE (2017) 1127–1130.
[59] X. Chai, Q. Wang, Y. Zhao, Y. Li, D. Liu, X. Liu, et al., A fast, efficient domain adaptation technique for cross-domain electroencephalography (EEG)-based emotion recognition, Sensors 17 (2017) 1014.
[60] R. Alazrai, R. Homoud, H. Alwanni, M. Daoud, EEG-based emotion recognition using quadratic time-frequency distribution, Sensors 18 (2018) 2739.
[61] G. Zhao, Y. Ge, B. Shen, X. Wei, H. Wang, Emotion analysis for personality inference from EEG signals, IEEE Trans. Affect. Comput. 9 (2017) 362–371.
