Received June 1, 2019, accepted June 19, 2019, date of publication July 16, 2019, date of current version August 12, 2019.
Digital Object Identifier 10.1109/ACCESS.2019.2929266
ABSTRACT Epilepsy is a common neurological disease that can cause seizures and loss of consciousness and can have a severe negative impact on long-term cognitive function. Reducing the severity of this impact requires early diagnosis and treatment. Epilepsy is traditionally diagnosed using electroencephalography (EEG) interpreted by trained physicians or technicians, but this process is time-consuming and prone to interference, which can negatively affect accuracy. This paper develops a model for epilepsy diagnosis that uses the discrete wavelet transform to analyze sub-bands of the EEG signal and select EEG characteristics for epilepsy detection. The minimize entropy principle approach is used to build fuzzy membership functions for the characteristics of each brain wave, which are then used as the basis for constructing an associative Petri net (APN) model. The APN approach provides a diagnosis accuracy of 93.8%, outperforming similar approaches using the decision tree, support vector machine, neural network, Bayes net, naïve Bayes, and tree augmented naïve Bayes. Thus, the proposed approach shows promise for fast, accurate, and objective diagnosis of epilepsy in clinical settings.
amplitude of 100∼200 µV. Sharp waves are triangular waves which rise quickly and fall more slowly, with an amplitude of more than 200 µV. These represent transient electrical discharges originating from synchronized neurons and are associated with cerebral blood flow and cerebral metabolism that may affect cognition [5]. Seizures typically originate in the brain's temporal lobe, and spike amplitude during measurement will be limited by the anterior temporal electrode [6]. During seizures, EEG electrodes attached to the scalp provide reliable brain information for understanding seizures [7], which can also be used as a basis for early detection of epilepsy [8].

B. RELATED RESEARCH ON EPILEPSY DETECTION
For feature extraction in the frequency domain, several studies have proposed that decomposing brainwaves into several sub-bands provides more information than using the original brainwave signal [9]. The brainwave signal is usually decomposed into four sub-bands: the δ, θ, α, and β bands.
Most features extracted from the frequency domain are invalid and the resulting accuracy is generally not high. Nevertheless, there are exceptions: Übeyli and Güler (2007) used the power spectral density (PSD) as a feature and classified with a mixture of experts (ME) and a modified mixture of experts (MME) to obtain accuracy rates of 95.53% and 98.6% [10].
The use of the discrete wavelet transform (DWT) in the time domain is the most practical method for EEG signal classification [10]–[16]. To reduce the dimension of the eigenvector and the computational complexity, statistical features such as the maximum (Max), minimum (Min), mean, and standard deviation (SD) can be used to represent the signals [10], [11], [17]. Other studies [18] used different features to cut continuous energy signals into window functions. Guo et al. (2011) used the mean, SD, and curve length of asymmetric frequency bands as features [12]. Acharya et al. (2011) used the principal components of wavelet packet decomposition (WPD) as features [19]. Chen (2014) used the coefficients of the dual-tree complex wavelet transform as features [15].
For classification, the more common and accurate methods include neural networks and the probabilistic neural network (PNN), with accuracy rates over 93% [20]–[23]; the k-nearest neighbors algorithm (KNN), with an accuracy rate of 99.3% [12]; the multilayer perceptron neural network (MLPNN), with accuracy rates over 89% [17], [18], [20]–[24]; the adaptive neuro-fuzzy inference system (ANFIS), with an accuracy rate of 98.68% [11]; the support vector machine (SVM), with accuracy rates of about 99.2% [21], [24]; and the optimum path forest (OPF), with an accuracy rate of 89.2% [16].

III. RESEARCH METHODOLOGY
This study uses the discrete wavelet transform to extract features via high-frequency and low-frequency filters and uses a variety of classification methods to compare accuracy rates for epilepsy detection. The proposed approach combines the discrete wavelet transform, the minimize entropy principle approach, and the associative Petri net.

A. DISCRETE WAVELET TRANSFORM
The wavelet transform (WT) is mainly used for signal pre-processing, noise reduction, and feature extraction. The WT can analyze EEG signals at different scales and capture more detail than the short-time Fourier transform (STFT). The discrete wavelet transform (DWT) is a wavelet transform that can be used in the frequency and time domains to divide the original data into consistent data and highly variable data by running low-frequency and high-frequency filtering on the original series. The series generated by the low-frequency filter retains the consistent data of the original series, and the series generated by the high-frequency filter retains the highly variable data of the original series [25].

FIGURE 1. Process of discrete wavelet transform.

The architecture of the DWT is shown in Fig. 1: x[n] is the discrete input signal with length N; g[n] is the low-pass filter that removes the high frequencies of the input signal and outputs the low frequencies; h[n] is the high-pass filter that removes the low frequencies and outputs the high frequencies; ↓Q is the downsampling filter; and a indicates the a-th layer of the architecture.
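The filter-bank step of Fig. 1 can be sketched as follows. This is a minimal illustration we added (not the authors' implementation), assuming the db2 wavelet that the paper later adopts; pywt.dwt applies the low-pass filter g[n] and the high-pass filter h[n] and downsamples each output by 2, yielding one approximation (A1) and one detail (D1) series.

# Minimal sketch of one DWT level (assumption: db2 wavelet, as used later in the paper).
import numpy as np
import pywt

x = np.random.randn(4096)          # stand-in for one EEG segment x[n]
cA1, cD1 = pywt.dwt(x, 'db2')      # g[n]/h[n] filtering followed by downsampling by 2
print(len(x), len(cA1), len(cD1))  # each output is roughly half the input length

Repeating the same call on the approximation series gives the next layer of the architecture in Fig. 1.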
B. MINIMIZE ENTROPY PRINCIPLE APPROACH
MEPA uses the concept of entropy to minimize the level of confusion in the data and then establish fuzzy membership functions, membership degrees, and linguistic values. Entropy indicates the distribution uniformity of any kind of energy in space, with higher distribution uniformity indicating greater entropy. The probability distribution of entropy is used to measure the distribution of uncertainty. MEPA divides data into different segments via interval segmentation and then evaluates the degree of clutter of the messages in every specific data segment based on one key objective: minimizing entropy, or data randomness. This assessment identifies the interval segment that produces the minimum degree of data randomness.
Suppose we want to find the threshold segmentation line xi that divides the data set into two regions, [a, a + x] and [a + x, b], within the data collection interval [a, b]. To find the most appropriate threshold segmentation line, MEPA establishes a candidate threshold segmentation line for every different data type in the data collection interval [a, b]; therefore, a set of threshold segmentation lines X, xi, is obtained. Next, Eqs. (1), (2), (3), and (4) are used to calculate the entropy value at each established threshold xi for each different data type [26]. The threshold with the minimum entropy value is taken as the threshold that divides the entire data set into the two regions p and q. The induction is performed using the entropy minimization principle: we can find intervals in which the distribution of samples of any class is as relatively uniform as possible and obtain the optimal clusters within the interval [26].

S(x) = p(x)Sp(x) + q(x)Sq(x)   (1)
Sp(x) = −[p1(x) ln p1(x) + p2(x) ln p2(x)]   (2)
Sq(x) = −[q1(x) ln q1(x) + q2(x) ln q2(x)]   (3)
p(x) + q(x) = 1   (4)

Here pk(x) and qk(x) respectively denote the conditional probabilities of k-type samples in [a, a + x] and [a + x, b]. In the set of threshold segmentation lines, the segmentation line with the minimum entropy value is the optimum threshold. The value estimates of pk(x), qk(x), p(x), and q(x) are described in [26].
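To make the entropy-minimization step concrete, the sketch below (our own illustration, not the authors' code) scans candidate thresholds for one feature with class labels and returns the cut point minimizing S(x) from Eqs. (1)–(4); the simple proportion-based estimates of p(x), q(x), pk(x), and qk(x) are an assumption here, since the paper defers those estimates to [26].

# Illustrative MEPA threshold search (a sketch under our own assumptions).
import numpy as np

def segment_entropy(values, labels, x):
    """S(x) of Eq. (1) for a candidate threshold x on one feature."""
    values, labels = np.asarray(values), np.asarray(labels)
    in_p = values <= x                      # region [a, a + x]
    in_q = ~in_p                            # region [a + x, b]
    p, q = in_p.mean(), in_q.mean()         # Eq. (4): p(x) + q(x) = 1

    def region_entropy(mask):
        if mask.sum() == 0:
            return 0.0
        s = 0.0
        for k in np.unique(labels):
            pk = (labels[mask] == k).mean()  # estimate of pk(x) or qk(x)
            if pk > 0:
                s -= pk * np.log(pk)         # Eqs. (2)-(3)
        return s

    return p * region_entropy(in_p) + q * region_entropy(in_q)

def mepa_threshold(values, labels):
    """Return the candidate threshold with minimum entropy S(x)."""
    candidates = np.unique(values)[:-1]     # interior cut points only
    scores = [segment_entropy(values, labels, x) for x in candidates]
    return candidates[int(np.argmin(scores))]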
C. ASSOCIATIVE PETRI NET
The associative Petri net (APN) integrates the Apriori algorithm into the Petri net, forming a new type of Petri net structure. It can be constructed on an existing knowledge-based system to generate new knowledge rules and form a network structure that describes the reasoning process. The causal relationship between the input and output places is then deduced using the unique transition reasoning mechanism of the associative production rules (APR) generated by the APN [27].
An APN contains three types of nodes: (1) places, drawn as circles, which describe a certain state; (2) squares, which represent the threshold of association between the antecedent and the consequence and are used to evaluate whether an association exists between two place nodes; and (3) transitions, drawn as bars, which contain an associative function G, also referred to as a certainty function (CF), that transforms the association through which different antecedents affect consequences into an appropriate representation. When the conditions are satisfied, the antecedent is transformed into the consequence via the transition mechanism. The main concept of the APN lies in the treatment of dynamic processes; its implementation rule therefore emphasizes the ability to enable and fire transitions, that is, firing transitions that are enabled. During the transition state, inference is triggered by different conditions to further explore the influence and correlation of antecedent and consequence. The APN can be defined as a 13-tuple, APN = (P, T, S, C, D, Λ, Γ, I, O, α, β, G, W); a detailed description is given in [27] and [28].

1) ASSOCIATIVE PRODUCTION RULE (APR)
The reasoning mechanism of the APN uses different types of APR and, based on different input antecedents, takes the association function between places and transitions into consideration to reason out the possible values of the consequences via a unique reasoning algorithm. Given the antecedents or consequences and the uncertainty factor of the reasoning process, a logical approach for processing uncertainty is incorporated into the reasoning mechanism of an APN to produce five different types of associative production rules (APR). According to different conditions, an APR may comprise "and" or "or" computations and is then also called a complex APR. This study mainly uses three types of APRs; their calculations are described as follows [27]:
Type 1: IF dj THEN dk (CF = cjk). This form of APR is calculated as formula (5):
α*(pk) = α(pj)cjk when sjk ≥ τjk, cjk ≥ γjk   (5)
Type 2: IF dj1 or dj2 or ... or djn THEN dk (CF = cji). This form of APR is calculated as formula (6):
α*(pk) = Max{α(pj1)cj1, α(pj2)cj2, ..., α(pjn)cjn} when sji ≥ τji, cji ≥ γji, i = 1, 2, ..., n   (6)
Type 3: IF dj THEN dk1 or dk2 or ... or dkn (CF = cji). This form of APR is calculated as formula (7):
α*(pk1) = α(pj1)cj1, α*(pk2) = α(pj2)cj2, ..., α*(pkn) = α(pjn)cjn when sji ≥ τji, cji ≥ γji, i = 1, 2, ..., n   (7)
All types of APRs can be presented mathematically and graphically. By connecting the associated places and giving the transitions an appropriate CF, an APN reasoning model that expresses the specialized domain knowledge can be derived.

2) REASONING ALGORITHM
Each place in an APN model is represented by a triple (px, α(px), IRS(px)), where px ∈ P and P = {p1, p2, ..., pn} is a finite set of places. Here px represents a place (node) in the APN and dx denotes the proposition of px. The degree of truth of proposition dx is defined as α(px); the threshold on the degree of truth of each proposition is given as λ. If α(px) ≥ λx, then the proposition dx exists. IRS(px) and RS(px) denote the immediate reachability set and the reachability set of px, respectively. Taking into account the degree of association between every antecedent proposition and consequence proposition, this study adopted the Apriori association-rule algorithm [29] to find the associative degree between places px and py. Let sxy denote the support degree and cxy the confidence degree between places px and py. The thresholds of the support and confidence degrees are given as τxy and γxy. When the associative degrees satisfy their minimum thresholds τxy and γxy, the association is considered interesting; these threshold values can be defined from user experience or by experts in the domain. If the support and confidence values are higher than the threshold values (sxy ≥ τxy and cxy ≥ γxy), the transition txy is enabled to fire, and an appropriate certainty function and corresponding confidence value (CF = cxy) is given; otherwise, the association does not exist and the corresponding confidence value is zero (CF = 0).
When a transition ti fires, the tokens in the input places pass through all squares and move to the output places. The degree of truth of proposition dy is calculated by α(px) × cxy.
Given an APN network structure, a reasoning algorithm is adopted to generate all reasoning paths from the starting place ps to the goal place pg. Through our proposed reasoning algorithm [27], the degree of truth of the goal proposition can be predicted under different antecedents. An example of epilepsy diagnosis will be used to illustrate the reasoning algorithm in this study.
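The rule types and the Apriori gating above can be combined into a single firing step, as in the hedged sketch below (our own illustration; the Edge fields and the example support and threshold values are placeholders, not values from the paper). A transition contributes only when its support and confidence reach τ and γ, and the consequence place takes the maximum of α(pj)·cjk over the enabled inputs, per Eqs. (5)–(6).

# Sketch of APR evaluation with Apriori-style gating (illustrative; names are ours).
from dataclasses import dataclass

@dataclass
class Edge:
    alpha_in: float   # degree of truth of the antecedent place, alpha(p_j)
    cf: float         # confidence / certainty factor c_jk
    support: float    # Apriori support s_jk
    tau: float        # support threshold
    gamma: float      # confidence threshold

def fire(edges):
    """Degree of truth of the consequence place under Type 1/2 APRs (Eqs. (5)-(6))."""
    enabled = [e.alpha_in * e.cf for e in edges
               if e.support >= e.tau and e.cf >= e.gamma]
    return max(enabled) if enabled else 0.0

# Example: two antecedents feeding one consequence place (support/tau/gamma arbitrary).
print(fire([Edge(0.5926, 0.6291, 0.3, 0.1, 0.5),
            Edge(0.5167, 0.6274, 0.3, 0.1, 0.5)]))   # ~0.3728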
IV. EXPERIMENT DESIGN AND PERFORMANCE EVALUATION
A. DATA SET DESCRIPTION
The dataset used in this study is a public dataset published by the Epilepsy Center of the Department of Epileptology, University of Bonn (http://epileptologie-bonn.de/cms/front_content.php?idcat=193) [30], and has been widely used in studies related to EEG signals [10], [11], [18], [24]. Class A contains brainwaves measured from normal subjects awake with eyes open. Class B contains brainwaves measured from normal subjects awake with eyes closed. Class C contains brainwaves measured in the regions around the hippocampal formation of epileptic patients during a partial seizure. Class D contains brainwaves measured in the hippocampal formation of epileptic patients during a partial seizure. Class E is the collection of data measured in the hippocampal formation during a clinician-confirmed generalized seizure. Each class contains 100 samples, for a total of 500 samples, each lasting 23.6 seconds with a frequency range of 0.53∼40 Hz. The sampling rate is 173.61 Hz and the resolution is 12 bits. This study classified Class A and Class B as normal, for a total of 200 samples; Class C, Class D, and Class E were classified as epilepsy, for a total of 300 samples.

B. CONSTRUCTION OF EPILEPSY DIAGNOSIS MODEL
In this study, the epilepsy diagnosis model was constructed in two stages. First, the dataset was decomposed by the discrete wavelet transform (DWT) and important features were extracted. The second stage was the diagnosis of epilepsy, using the minimize entropy principle approach (MEPA) to produce fuzzy membership functions and rules and integrating the associative Petri net (APN) for classification.

1) DATA PROCESSING
This study ran the discrete wavelet transformation on the 500 samples using MATLAB. The wavelet used was db2, which is relatively smooth, and the downsampling parameter was set to 2 [11]. The discrete wavelet transformation decomposed the data; the process is shown in Fig. 1. In the first discrete wavelet transform, the higher-frequency D1 sub-band and the lower-frequency A1 sub-band were separated via the high-pass and low-pass filters. Next, the A1 sub-band was used for the second discrete wavelet transform, separating the higher-frequency D2 sub-band and the lower-frequency A2 sub-band. The A2 sub-band was then used for the third discrete wavelet transformation, and similarly the A3 sub-band was used for the fourth discrete wavelet transformation to separate the high-frequency D4 sub-band and the low-frequency A4 sub-band. The dataset was thus divided into the D1, D2, D3, D4, and A4 sub-bands.

2) FEATURE EXTRACTION
According to Übeyli et al., to reduce the dimensionality of the feature vector and the computational complexity, statistical features such as the maximum (Max), minimum (Min), mean, and standard deviation (SD) can be used to express the signals [20]–[24]. In this study, the maximum, minimum, mean, and standard deviation of each sub-band generated by the discrete wavelet transformations were used as the features for classifying epileptic states. A total of 20 features are summarized in Table 1.

TABLE 1. Features of sub-bands for epilepsy detection.
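As a concrete sketch of this pipeline (an illustration with PyWavelets rather than the MATLAB implementation used in the study), pywt.wavedec with the db2 wavelet and level=4 returns the A4, D4, D3, D2, and D1 coefficient series, and the four statistics per sub-band give the 20-dimensional feature vector of Table 1.

# Illustrative re-creation of the 20 sub-band features (not the authors' MATLAB code).
import numpy as np
import pywt

def subband_features(segment):
    """Return {subband: (Max, Min, mean, SD)} for a 1-D EEG segment."""
    cA4, cD4, cD3, cD2, cD1 = pywt.wavedec(segment, 'db2', level=4)
    bands = {'A4': cA4, 'D4': cD4, 'D3': cD3, 'D2': cD2, 'D1': cD1}
    return {name: (c.max(), c.min(), c.mean(), c.std()) for name, c in bands.items()}

segment = np.random.randn(4096)        # stand-in for one 23.6 s recording
features = subband_features(segment)   # 5 sub-bands x 4 statistics = 20 features
print(features['D1'])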
3) FUZZY MEMBERSHIP FUNCTION
In this study, MEPA is used to establish the fuzzy membership function of every feature in each sub-band. First, the entropy of the dataset was calculated; the minimum-entropy position was the optimal segmentation position and was used to divide the dataset into two datasets. The entropy values of these two datasets were then calculated in turn, and the minimum entropy was again used as a segmentation position. The entropy values at the segmentation positions were used to establish the fuzzy membership function of the feature. Taking the Max of the D1 sub-band as an example, the fuzzy membership function of the feature from low to high was divided into D1_MaxL, D1_MaxM, and D1_MaxH. Figure 2 shows the fuzzy membership function of Max in the D1 sub-band. The equations for the low, middle, and high semantic membership functions are shown below.
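A common way to realize such a low/medium/high partition is to anchor piecewise-linear membership functions at the two MEPA cut points. The sketch below is our own illustration under that assumption and is not claimed to match the exact functions of Fig. 2; t1 and t2 stand for the two segmentation thresholds found by MEPA.

# Illustrative low/medium/high membership functions from two MEPA cut points t1 < t2.
# (Assumed piecewise-linear shapes; not the paper's exact definitions.)
import numpy as np

def memberships(value, t1, t2):
    """Return (low, medium, high) membership degrees of a feature value."""
    low    = np.interp(value, [t1, t2], [1.0, 0.0])   # 1 below t1, fades to 0 at t2
    medium = np.interp(value, [t1, (t1 + t2) / 2, t2], [0.0, 1.0, 0.0])
    high   = np.interp(value, [t1, t2], [0.0, 1.0])   # 0 below t1, reaches 1 at t2
    return low, medium, high

print(memberships(0.6, t1=0.2, t2=1.0))  # degrees usable as alpha(p_x) of input places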
4) ASSOCIATIVE PETRI NET MODEL
Using the fuzzy membership functions constructed in the preceding subsection, we incorporate the linguistic values and membership degrees into the propositions of the input places and the degrees of truth in the associative Petri net. We establish the associative Petri net model using the EEG characteristics to assess the likelihood of epilepsy. The APN reasoning process is as follows:
Set p3 and p11 as the starting points and make the inference:
Step 1: moving from p3 and p11 to p14: α(p14) = Max{α(p3) × c3,14, α(p11) × c11,14} = Max{0.5926 × 0.6291, 0.5167 × 0.6274} = Max{0.3728, 0.3242} = 0.3728
Step 2: moving from p2 to p17: α(p17) = α(p2) × c2,17 = 0.5926 × 0.9605 = 0.5692
Step 3: moving from p14 and p13 to p10: α(p10) = Max{α(p14) × c14,10, α(p13) × c13,10} = Max{0.3728 × 0.9783, 0.9246 × 0.9483} = Max{0.3647, 0.8768} = 0.8768
Step 4: moving from p17 and p9 to p6: α(p6) = Max{α(p17) × c17,6, α(p9) × c9,6} = Max{0.5692 × 0.8345, 0.8599 × 0.8916} = Max{0.4750, 0.7667} = 0.7667
Step 5: moving from p10 to p12: α(p12) = α(p10) × c10,12 = 0.8768 × 0.9780 = 0.8575
Step 6: moving from p1 to p18: α(p18) = α(p1) × c1,18 = 0.8406 × 0.9855 = 0.8284
Step 7: moving from p16 to p19: α(p19) = α(p16) × c16,19 = 0.5329 × 0.7801 = 0.4157
Step 8: moving from p6, p12, and p18 to p5: α(p5) = Max{α(p6) × c6,5, α(p12) × c12,5, α(p18) × c18,5} = Max{0.7667 × 0.9556, 0.8575 × 1, 0.8284 × 0.8063} = Max{0.7327, 0.8575, 0.6679} = 0.8575
Step 9: moving from p19 and p20 to p8: α(p8) = Max{α(p19) × c19,8, α(p20) × c20,8} = Max{0.4157 × 0.7065, 1 × 0.8462} = Max{0.2937, 0.8462} = 0.8462
Step 10: moving from p5 and p8 to p4: α(p4) = Max{α(p5) × c5,4, α(p8) × c8,4} = Max{0.8575 × 0.9417, 0.8462 × 0.9742} = Max{0.8075, 0.8244} = 0.8244
Therefore, after reasoning we obtain proposition p4 and its degree of truth.
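The ten steps above are repeated applications of the Type 1 and Type 2 rules. The short script below is an illustration we added; the CF values and the initial degrees of truth are copied from the worked example, and the propagation reproduces α(p4) ≈ 0.8244 (intermediate values are carried at full precision rather than rounded at each step).

# Re-computing the worked APN example (illustrative; edge CFs and initial alphas
# are taken from the ten steps listed above).
alpha = {'p1': 0.8406, 'p2': 0.5926, 'p3': 0.5926, 'p9': 0.8599, 'p11': 0.5167,
         'p13': 0.9246, 'p16': 0.5329, 'p20': 1.0}

# (input places, output place, CF values c_xy in the same order as the inputs)
steps = [(('p3', 'p11'), 'p14', (0.6291, 0.6274)),
         (('p2',),        'p17', (0.9605,)),
         (('p14', 'p13'), 'p10', (0.9783, 0.9483)),
         (('p17', 'p9'),  'p6',  (0.8345, 0.8916)),
         (('p10',),       'p12', (0.9780,)),
         (('p1',),        'p18', (0.9855,)),
         (('p16',),       'p19', (0.7801,)),
         (('p6', 'p12', 'p18'), 'p5', (0.9556, 1.0, 0.8063)),
         (('p19', 'p20'), 'p8',  (0.7065, 0.8462)),
         (('p5', 'p8'),   'p4',  (0.9417, 0.9742))]

for inputs, out, cfs in steps:
    alpha[out] = max(alpha[p] * c for p, c in zip(inputs, cfs))  # Type 1/2 firing

print(round(alpha['p4'], 4))   # -> 0.8244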
C. EVALUATION OF EPILEPSY DIAGNOSIS MODEL
The associative Petri net (APN) used in this study was compared with other common classification methods, including the decision tree (DT), neural network (NN), support vector machine (SVM), Bayes net (BN), tree augmented naïve Bayes (TAN), and naïve Bayes (NB). The decision tree adopted the C4.5 algorithm with the minimum number of leaf nodes of each branch set to 2, a pruning threshold of 0.25, and a seed of 1; the SVM used a polynomial kernel. Each method was run with 10-fold cross-validation: the data were divided into 10 subsamples, 9 for training and 1 for testing, and the cross-validation was repeated 10 times so that each subsample was validated. The results are shown in Table 4. The precision rate, F-measure, G-mean, and accuracy of the APN diagnosis model used in this study outperformed the other machine learning methods.

TABLE 4. Evaluation of various classification methods.

This section compares the common classification methods used for epilepsy diagnosis. During the comparison process, the True Positives (TP: the number of normal samples classified as normal), True Negatives (TN: the number of epilepsy samples classified as epilepsy), False Negatives (FN: the number of epilepsy samples classified as normal), and False Positives (FP: the number of normal samples classified as epilepsy) are calculated and evaluated using the following indicators:
(1) Precision rate: TP/(TP + FP), the correct proportion of normal sample classifications.
(2) Recall rate: TP/(TP + FN), the proportion of normal samples that are correctly classified as normal.
(3) F-measure: 2TP/(2TP + FP + FN), a measure of the quality of the classification system.
(4) G-mean: √(TP/(TP + FN) × TN/(TN + FP)), an assessment of the average classification accuracy over all classes.
(5) AUC: the area under the ROC curve, where a larger value indicates that at least one class is correctly classified.
(6) Accuracy: the number of correctly classified epilepsy and normal samples divided by the total number of samples.
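For reference, the indicators above (plus the negative predictive value used later in the text) can be computed directly from the confusion-matrix counts, as in the sketch below; the TP/TN/FP/FN numbers are arbitrary placeholders, not the study's results.

# Evaluation indicators computed from confusion-matrix counts (placeholder numbers).
from math import sqrt

def evaluate(tp, tn, fp, fn):
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f_measure = 2 * tp / (2 * tp + fp + fn)
    g_mean    = sqrt((tp / (tp + fn)) * (tn / (tn + fp)))
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    npv       = tn / (tn + fn)          # negative predictive value, used in the text
    return dict(precision=precision, recall=recall, f_measure=f_measure,
                g_mean=g_mean, accuracy=accuracy, npv=npv)

print(evaluate(tp=90, tn=95, fp=10, fn=5))   # hypothetical counts, for illustration only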
This study combined MEPA and APN to perform epilepsy diagnosis and can effectively and accurately determine whether a patient suffers from epilepsy. The experimental results of the APN model proposed in this study showed a precision rate of 99% and a negative predictive value of 90.33%, where the negative predictive value was calculated as TN/(TN + FN). Compared with the other methods, the precision rate of the APN model was the best, indicating that the probability of the APN model misclassifying normal participants was lower than for the other methods. This improved accuracy will help reduce medical costs due to diagnostic errors, such as follow-up testing. The negative predictive value of the APN model was lower than those of DT, NN, and TAN, indicating a lower probability of misclassification of epilepsy.
1) COMPARISON WITH PAST WORK
This study used the APN to classify Class A and Class B as normal participants and Class C, Class D, and Class E as epilepsy patients (two classes), with an accuracy rate of 93.8%. Nunes et al. (2014) [16] classified Class A and Class B as normal subjects, Class C and Class D as patients with partial seizure, and Class E as patients with generalized seizure, for a total of three classes. The accuracy rates using OPF, Bayesian, SVM-RBF, and ANN-MLP classifiers were 89.2%, 87.4%, 84.4%, and 70.6%, respectively [16]. The data classification of Nunes' study was similar to this study, but the present study produced more accurate results.
To explore the accuracy of classifying epilepsy patients with generalized seizures, Guo et al. (2010a, 2010b) [31], [32] classified Class A, Class B, Class C, and Class D into one class and Class E into another. Using the MLPNN and ANN for classification, they achieved accuracy rates of 97.77% and 98.27%, respectively. To compare against Guo et al. [31], [32], we classified Class A, Class B, Class C, and Class D into one class and Class E into another; using the APN for classification and information gain to filter variables, we obtained an accuracy of 98.6%, thus improving on the results of Guo et al. [31], [32]. Comparison results are summarized in Table 5.

TABLE 5. Comparison of prior studies for epilepsy detection.

V. CONCLUSION
This study constructed an epilepsy diagnosis model. First, high-pass and low-pass filters were used to generate the D1∼D4 and A4 sub-bands, and the Max, Min, mean, and SD features were extracted. Then, information gain was used to filter out variables, and MEPA was adopted to construct the fuzzy membership function of each feature to understand the relationship between each feature and seizures. The information gain was also used to sort and rank the features. Finally, the APN was used for classification, achieving an accuracy rate of 93.8% and thus outperforming other common machine learning classification methods. The diagnosis model established in this study (1) can quickly classify normal and epilepsy cases in the initial diagnosis and serve as an objective indicator for epilepsy diagnosis; (2) can graphically represent the classification rules through the APN model, which can easily be transformed into a rule base for an expert system; and (3) in addition to classifying into Class AB and Class CDE for further exploration, this study also classified into Class ABCD and Class E to compare with past studies. Using information gain to sort and rank the features and then the APN for classification, we obtained an accuracy rate of 98.6%, which improved on previous results.
Since samples of patients with epilepsy were difficult to obtain, this study was limited to the use of public datasets for testing. Although the accuracy rate was good, it may be biased in clinical diagnosis. Future studies should cooperate with doctors to collect patient samples of different severity to develop a diagnosis system based on this diagnosis model, combining mobile devices and a simple EEG app as an objective basis for clinical diagnosis to reduce the costs and time needed to achieve an accurate diagnosis.

REFERENCES
[1] National Institute of Neurological Disorders and Stroke, "EEG monitoring," in The Epilepsies and Seizures: Hope Through Research. Rockville, MD, USA: Bethesda, 2004, p. 8.
[2] S. Noachtar and J. Rémi, "The role of EEG in epilepsy: A critical review," Epilepsy Behav., vol. 15, pp. 22–33, May 2009.
[3] R. Kennett, "Modern electroencephalography," J. Neurol., vol. 259, no. 4, pp. 783–789, Apr. 2012.
[4] M. de Curtis, J. G. R. Jefferys, and M. Avoli, "Interictal epileptiform discharges in partial epilepsy: Complex neurobiological mechanisms based on experimental and clinical evidence," in Jasper's Basic Mechanisms of the Epilepsies, 4th ed. Bethesda, MD, USA: NCBI, 2012.
[5] E. Rodin, T. Constantino, S. Rampp, and P. K. Wong, "Spikes and epilepsy," Clin. EEG Neurosci., vol. 40, no. 4, pp. 288–299, 2009.
[6] B. A. Dworetzky and C. Reinsberger, "The role of the interictal EEG in selecting candidates for resective epilepsy surgery," Epilepsy Behav., vol. 20, no. 2, pp. 167–171, Feb. 2011.
[7] M. Sammaritano, A. de Lotbinière, F. Andermann, A. Olivier, P. Gloor, and L. F. Quesney, "False lateralization by surface EEG of seizure onset in patients with temporal lobe epilepsy and gross focal cerebral lesions," Ann. Neurol., vol. 21, no. 4, pp. 361–369, Apr. 1987.
[8] M. Hildebrandt, R. Schulz, M. Hoppe, T. May, and A. Ebner, "Postoperative routine EEG correlates with long-term seizure outcome after epilepsy surgery," Seizure, vol. 14, no. 7, pp. 446–451, Oct. 2005.
[9] H. Adeli, S. Ghosh-Dastidar, and N. Dadmehr, "A wavelet-chaos methodology for analysis of EEGs and EEG subbands to detect seizure and epilepsy," IEEE Trans. Biomed. Eng., vol. 54, no. 2, pp. 205–211, Feb. 2007.
[10] E. D. Übeyli and İ. Güler, "Features extracted by eigenvector methods for detecting variability of EEG signals," Pattern Recognit. Lett., vol. 28, no. 5, pp. 592–603, Apr. 2007.
[11] İ. Güler and E. D. Übeyli, "Adaptive neuro-fuzzy inference system for classification of EEG signals using wavelet coefficients," J. Neurosci. Methods, vol. 148, no. 2, pp. 113–121, 2005.
[12] L. Guo, D. Rivero, J. Dorado, C. R. Munteanu, and A. Pazos, "Automatic feature extraction using genetic programming: An application to epileptic EEG classification," Expert Syst. Appl., vol. 38, no. 8, pp. 10425–10436, Aug. 2011.
[13] U. R. Acharya, S. V. Sree, A. P. C. Alvin, and J. S. Suri, "Use of principal component analysis for automatic classification of epileptic EEG activities in wavelet framework," Expert Syst. Appl., vol. 39, no. 10, pp. 9072–9078, Aug. 2012.
[14] U. R. Acharya, S. V. Sree, A. P. C. Alvin, R. Yanti, and J. S. Suri, "Application of non-linear and wavelet based features for the automated identification of epileptic EEG signals," Int. J. Neural Syst., vol. 22, no. 2, Apr. 2012, Art. no. 1250002.
[15] G. Chen, "Automatic EEG seizure detection using dual-tree complex wavelet-Fourier features," Expert Syst. Appl., vol. 41, no. 5, pp. 2391–2394, 2014.
[16] T. M. Nunes, A. L. V. Coelho, C. A. M. Lima, J. P. Papa, and V. H. C. de Albuquerque, "EEG signal classification for epilepsy diagnosis via optimum path forest—A systematic assessment," Neurocomputing, vol. 136, pp. 103–123, Jul. 2014.
[17] N. F. Güler, E. D. Übeyli, and I. Güler, "Recurrent neural networks employing Lyapunov exponents for EEG signals classification," Expert Syst. Appl., vol. 29, no. 3, pp. 506–514, Oct. 2005.
[18] A. T. Tzallas, M. G. Tsipouras, and D. I. Fotiadis, "Epileptic seizure detection in EEGs using time–frequency analysis," IEEE Trans. Inf. Technol. Biomed., vol. 13, no. 5, pp. 703–710, Sep. 2009.
[19] U. R. Acharya, S. V. Sree, and J. S. Suri, "Automatic detection of epileptic EEG signals using higher order cumulant features," Int. J. Neural Syst., vol. 21, no. 5, pp. 403–414, Oct. 2011.
[20] E. D. Übeyli, "Combined neural network model employing wavelet coefficients for EEG signals classification," Digit. Signal Process., vol. 19, no. 2, pp. 297–308, Mar. 2009.
[21] E. D. Übeyli, "Decision support systems for time-varying biomedical signals: EEG signals classification," Expert Syst. Appl., vol. 36, no. 2, Part 1, pp. 2275–2284, Mar. 2009.
[22] E. D. Übeyli, "Probabilistic neural networks combined with wavelet coefficients for analysis of electroencephalogram signals," Expert Syst., vol. 26, no. 2, pp. 147–159, May 2009.
[23] E. D. Übeyli, "Lyapunov exponents/probabilistic neural networks for analysis of EEG signals," Expert Syst. Appl., vol. 37, no. 2, pp. 985–992, Mar. 2010.
[24] E. D. Übeyli, "Analysis of EEG signals by combining eigenvector methods and multiclass support vector machines," Comput. Biol. Med., vol. 38, no. 1, pp. 14–22, Jan. 2008.
[25] I. Daubechies, "The wavelet transform, time-frequency localization and signal analysis," IEEE Trans. Inf. Theory, vol. 36, no. 5, pp. 961–1005, Sep. 1990.
[26] T. J. Ross, "Development of membership functions," in Fuzzy Logic with Engineering Applications, 3rd ed. Hoboken, NJ, USA: Wiley, 2010, pp. 200–205.
[27] D.-H. Shih, H.-S. Chiang, and B. Lin, "A generalized associative Petri net for reasoning," IEEE Trans. Knowl. Data Eng., vol. 19, no. 9, pp. 1241–1251, Sep. 2007.
[28] H.-S. Chiang and Z. W. Wu, "Online incremental learning for sleep quality assessment using associative Petri net," Appl. Soft Comput., vol. 68, pp. 774–783, Jul. 2018.
[29] R. Agrawal, T. Imielinski, and A. Swami, "Database mining: A performance perspective," IEEE Trans. Knowl. Data Eng., vol. 5, no. 6, pp. 914–925, Dec. 1993.
[30] R. G. Andrzejak, K. Lehnertz, F. Mormann, C. Rieke, P. David, and C. E. Elger, "Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state," Phys. Rev. E, Stat. Phys. Plasmas Fluids Relat. Interdiscip. Top., vol. 64, no. 6, p. 061907, 2001.
[31] L. Guo, D. Rivero, and A. Pazos, "Epileptic seizure detection using multiwavelet transform based approximate entropy and artificial neural networks," J. Neurosci. Methods, vol. 193, no. 1, pp. 156–163, Oct. 2010.
[32] L. Guo, D. Rivero, J. Dorado, J. R. Rabuñal, and A. Pazos, "Automatic epileptic seizure detection in EEGs based on line length feature and artificial neural networks," J. Neurosci. Methods, vol. 191, no. 1, pp. 101–109, 2010.

HSIU-SEN CHIANG received the Ph.D. degree. He is currently a Professor with the Department of Information Management, National Taichung University of Science and Technology, Taiwan. His current research interests include data mining, Petri nets, biomedical science, and Internet marketing. Dr. Chiang's research is published or is forthcoming in Applied Soft Computing, Information Fusion, Bioinformatics, IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, Journal of Medical Systems, Journal of Medical and Biological Engineering, Expert Systems with Applications, and a number of national and international conference proceedings.

MU-YEN CHEN received the Ph.D. degree. He is currently a Professor with the Department of Information Management, National Taichung University of Science and Technology, Taiwan. His current research interests include artificial intelligence, soft computing, bio-inspired computing, financial engineering, and data mining. Dr. Chen's research is published or is forthcoming in Information Sciences, Applied Soft Computing, Neurocomputing, Neural Computing and Applications, Journal of Educational Technology and Society, Journal of Information Science, The Electronic Library, Computers and Mathematics with Applications, Quantitative Finance, Expert Systems with Applications, Soft Computing, and a number of national and international conference proceedings. He has served as an Editor-in-Chief and an Associate Editor for international journals (e.g., International Journal of Big Data and Analytics in Healthcare, IEEE ACCESS, Journal of Information Processing Systems, International Journal of Social and Humanistic Computing), and he is an editorial board member of several SCI journals.

YU-JHIH HUANG received the M.S. degree from the Department of Information Management, National Taichung University of Science and Technology, Taiwan. His current research interests include data mining, Petri nets, biomedical science, and Internet marketing.