Moth-Flame Optimization-Bat Optimization: Map-Reduce Framework For Big Data Clustering Using The Moth-Flame Bat Optimization and Sparse Fuzzy C-Means
ORIGINAL ARTICLE
Abstract
The technical advancements in big data have become popular and most desirable among users for storing, processing, and handling huge data sets. However, clustering such big data sets has become a major challenge in big data analysis. Conventional clustering algorithms do not scale well to such huge data sets. Thus, this study proposes a technique for big data clustering using the spark architecture. The proposed technique undergoes two steps for clustering the big data, involving feature selection, performed in the initial cluster nodes of the spark architecture, and clustering. At first, the initial cluster nodes read the big data from various distributed systems, and the optimal features are selected and placed in the feature vector based on the proposed moth-flame optimization-based bat (MFO-Bat) algorithm, which is designed by integrating the MFO and Bat algorithms. Then, the selected features are fed to the final cluster nodes of spark, which use the sparse-fuzzy C-means method for performing optimal clustering. The proposed MFO-Bat outperformed other existing methods with a maximal classification accuracy of 95.806%, Dice coefficient of 99.181%, and Jaccard coefficient of 98.376%.
Keywords: big data; big data clustering; fuzzy; optimization algorithm; spark architecture
1 VNRVJIET, Hyderabad, India.
2 Department of CSE and NSS Coordinator, JNTUA University, Ananthapuramu, India.
*Address correspondence to: Vasavi Ravuri, VNRVJIET, Pragathi Nagar, Hyderabad, Telangana 500090, India, E-mail: [email protected]
are dissimilar to each other. Clustering is applicable in various scientific fields.1

Various techniques are devised for big data clustering,8 which is considered an active research field.9 Big data clustering has been widely studied in many areas, such as medicine and chemistry.10 The significant features are utilized to construct the clusters, whereas the insignificant features are not helpful for constructing the clusters.8 Many features influence the performance of an inductive learning algorithm.11 Insignificant features are noisy and can be eliminated to reduce the size of the data and yield improved clustering. This also minimizes the noise and is beneficial for storing and processing large amounts of data. Feature selection is one of the important tasks in data mining; it removes unrelated and inconsistent features and enhances the performance of learning. Various clustering algorithms are devised for clustering unsupervised data to initiate the classification.12 Feature extraction methods, such as principal component analysis, singular value decomposition, or the Karhunen-Loeve transformation, and dimensionality reduction methods are used for clustering the data. In Dash and Liu,8 the CLIQUE algorithm was devised, in which each dimension is divided into user-defined divisions and the algorithm starts by determining the dense regions in the dimensional data. In this method, k-dimensional dense regions are determined using the candidate generation algorithm named Apriori. The method is responsible for clustering the complete data. Projected clustering determines the regions of interest in subspaces of high-dimensional data. This method determines the clusters and chooses features for each cluster. Moreover, this method investigates the features by adapting a restriction on the lowest and the highest number of features.4,8

The issues of big data algorithms focus on designing algorithms that address the complexities raised by big data volumes and by complex, distributed data. The challenge consists of the following aspects, namely, heterogeneous, sparse, uncertain, and incomplete data. Different data are preprocessed using different data fusion methodologies.13 Heterogeneous data refer to any data with high variability of data formats. They are perhaps indefinite and of low quality due to missing values, high data redundancy, and untruthfulness. It is very complicated to combine heterogeneous data to meet business information demands.13 Multiple heterogeneous big data pose the uniqueness of multiple sources and dimensions, with structured, unstructured, or semistructured heterogeneous data. Hence, the storage of huge amounts of such data in a relational database becomes complicated.10,14 From data sets obtained by acquisition devices, only a small amount of data is important. There exist two types of data heterogeneity: syntactic heterogeneity and conceptual heterogeneity. Syntactic heterogeneity occurs when two data are not articulated in the same language. Likewise, conceptual heterogeneity, also named semantic heterogeneity or logical mismatch, represents the differences in modeling the domain of interest.4,14 Moreover, terminological heterogeneity depicts the dissimilarities in names when the same entities are referred to by different sources of data. In addition, semiotic heterogeneity, also termed pragmatic heterogeneity, represents the different interpretations of entities by people.13

The primary intention of this research is to develop a technique for clustering big data sets using the spark architecture. The method consists of two phases, namely, feature selection and clustering. The initial cluster nodes read the big data from various distributed systems and form a feature vector based on the proposed moth-flame optimization-based bat (MFO-Bat) algorithm, which selects the optimal features for clustering. The proposed MFO-Bat is designed by integrating the MFO algorithm and the Bat optimization algorithm to acquire the advantages of both for selecting the optimal features. The selected features are then provided to the final cluster nodes of spark, in which the clustering is performed using the sparse-fuzzy C-means (FCM) algorithm. Thus, optimal clustering is carried out on the final cluster nodes of spark using the available data.

The major contribution of the proposed method used for big data clustering is as follows:

Proposed MFO-Bat for feature selection: MFO is combined with the Bat optimization algorithm to design a novel algorithm, MFO-Bat, that selects significant features for big data clustering. The proposed MFO-Bat is utilized in the slave nodes for selecting the optimal features of big data, and sparse FCM is then introduced for clustering the big data.

The organization of the article is as follows. The Introduction section explains the introductory part based on big data clustering. A literature survey of different methods for big data clustering, along with the challenges, is given in the Literature Review section. The Proposed Big Data Clustering Based on the Spark Architecture section describes the proposed MFO-Bat technique developed for big data clustering. In the Results and Discussion section, the results and the comparative analysis are presented to evaluate the performance of the proposed technique. Finally, the article is concluded in the Conclusion section.

and changes over time, and another was about all nodes tending to be homogeneous. Also, the fuzzy logic-based clustering algorithm was heuristic in nature, which may lead to clustering failure. Zhang et al.18 designed an algorithm named the secure weighted possibilistic C-means algorithm (SWPCM) on the basis of the Brakerski-Gentry-Vaikuntanathan (BGV) encryption scheme for clustering big data in the cloud environment. Here, the BGV was utilized for encrypting the raw data to preserve privacy in the cloud infrastructure. Moreover, the Taylor theorem was utilized for approximating
improved the performance while processing the huge data sets, but the method failed to consider multilevel queues for scheduling the jobs using huge data sets.

Chormunge and Jena22 developed a method named correlation-based feature selection with clustering for solving the dimensionality problem, integrating the clustering with a correlation measure to produce a good feature subset. Initially, irrelevant features were removed using the K-means clustering method, and then the nonredundant features were chosen by a correlation measure from each cluster. The method

ing zero weights to the noisy features. Hence, the clustering results cannot be affected by noisy objects.

Among the conventional techniques, evolutionary techniques are effectively utilized in selecting the optimal features. However, the extreme increment of the individual size limits their applicability; thereby, they are not able to offer a preprocessed data set in a specific amount of time while addressing huge problems. In the existing works, there are no standard methods for addressing the problems of feature space with evolutionary big data
FIG. 1. Block diagram of big data clustering using the proposed MFO-Bat algorithm-based spark architecture.
FCM, fuzzy C-means; MFO-Bat, moth-flame optimization-based bat.
used to process huge data processing tasks using a general-purpose programming language on the big data. Spark supports several interactive data analysis tools and machine-learning algorithms, and it reuses the data in parallel operation while maintaining scalability. The spark architecture poses two main modules, namely, a master node and slave nodes. The master node is responsible for managing and distributing the task obtained from the requests of the user by partitioning the obtained tasks into different subtasks for each slave node. These subtasks are processed by the slave nodes for processing the request. In this model, assume the size of the master node is m x n, which is divided into four slave nodes, each of size p x q. In each slave node, the feature selection process is carried out using the proposed MFO-Bat for selecting the optimal features. The proposed MFO-Bat is designed by integrating the MFO and Bat optimization algorithms for effective feature selection. Each extracted feature is of size u x v, and these features are combined for initiating the clustering process, which poses size r x s. The clustering is carried out using the sparse-FCM algorithm, and each cluster is of size u x v. The sparse-

Algorithm 1. Pseudocode for Selecting the Features

Procedure Parallel_Feature selection (Solution M)
{
Master:
  Call proposed MFO-Bat algorithm to select the optimal feature
  Each block acquires the block of features of the data set
Slave (Parallel):
  Perform feature selection of each cluster node using the proposed MFO-Bat.
Master:
  M = Merge optimal features from all slaves by calling the proposed MFO-Bat algorithm
Return M
}
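To make the master-slave flow of Algorithm 1 concrete, the following is a minimal Python sketch of the same map-reduce pattern. A local process pool stands in for the spark slave nodes, and select_features_mfo_bat is a hypothetical placeholder (a simple variance filter), not the actual MFO-Bat optimizer.

```python
from multiprocessing import Pool

import numpy as np

def select_features_mfo_bat(block: np.ndarray) -> list:
    """Placeholder for the MFO-Bat optimizer: scores each feature
    (column) of the block and keeps the top half by variance."""
    scores = block.var(axis=0)
    k = max(1, block.shape[1] // 2)
    return sorted(int(i) for i in np.argsort(scores)[-k:])

def parallel_feature_selection(data: np.ndarray, n_slaves: int = 4) -> list:
    # Master: partition the feature set into one block per slave node.
    blocks = np.array_split(data, n_slaves, axis=1)
    offsets = np.cumsum([0] + [b.shape[1] for b in blocks[:-1]])
    # Slaves (parallel): each slave selects features from its own block.
    with Pool(n_slaves) as pool:
        per_slave = pool.map(select_features_mfo_bat, blocks)
    # Master: merge the locally selected feature indices (Return M).
    return sorted(int(off) + i for off, sel in zip(offsets, per_slave)
                  for i in sel)

if __name__ == "__main__":
    X = np.random.rand(1000, 16)  # toy stand-in for a big data block
    print(parallel_feature_selection(X))
```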
features using the proposed MFO-Bat algorithm. The steps involved in the proposed MFO-Bat for effective feature selection are described in this section and enlisted as follows:

Step 1: Initialization. The first step is to initialize the solution space with the positions of the moths. The solution of MFO is in the form of a vector. Thus, the solution space of the moths is given by the following:

$$R = \begin{bmatrix} R_{1,1} & R_{1,2} & \cdots & R_{1,h} \\ R_{2,1} & R_{2,2} & \cdots & R_{2,h} \\ \vdots & & & \vdots \\ R_{g,1} & R_{g,2} & \cdots & R_{g,h} \end{bmatrix}, \tag{2}$$

where $g$ represents the total number of moths, with $1 \le c \le g$, and $h$ indicates the total number of dimensions. Once the solution space of the moths is initiated, the array storing the obtained fitness values is computed for a random solution and is stored in a matrix given by the following:

$$F = \begin{bmatrix} F_1 \\ F_2 \\ \vdots \\ F_g \end{bmatrix}, \tag{3}$$

where $g$ indicates the total number of moths. Similarly, the solution space of the flames is given by the following:

$$S = \begin{bmatrix} S_{1,1} & S_{1,2} & \cdots & S_{1,h} \\ S_{2,1} & S_{2,2} & \cdots & S_{2,h} \\ \vdots & & & \vdots \\ S_{g,1} & S_{g,2} & \cdots & S_{g,h} \end{bmatrix}, \tag{4}$$

where $S_{g,h}$ indicates the $g$th flame in the $h$th dimension. Once the solution space of the flames is initiated, the array storing the obtained fitness values is computed for a random solution and is stored in a matrix given by the following:

$$F' = \begin{bmatrix} F'_1 \\ F'_2 \\ \vdots \\ F'_g \end{bmatrix} \tag{5}$$
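Under the common assumption that initial positions are sampled uniformly at random within the search bounds (the sampling rule is not stated above), the initialization of Equations (2) to (5) can be sketched as follows; the fitness routine is a placeholder for Equation (6).

```python
import numpy as np

def initialize_populations(g: int, h: int, lb: float = 0.0, ub: float = 1.0):
    """Build the moth solution space R (Eq. 2), the flame solution
    space S (Eq. 4), and their fitness arrays F and F' (Eqs. 3 and 5)."""
    rng = np.random.default_rng(seed=1)
    R = lb + (ub - lb) * rng.random((g, h))  # g moths in h dimensions
    S = lb + (ub - lb) * rng.random((g, h))  # g flames in h dimensions

    def fitness(X: np.ndarray) -> np.ndarray:
        return X.sum(axis=1)  # placeholder; Eq. (6) uses the BD fitness

    return R, S, fitness(R), fitness(S)
```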
Step 2: Fitness evaluation. The fitness is computed as the Bhattacharyya distance between two classes. Thus, the fitness of the solution is given by the following:

$$K = \{BD\}, \tag{6}$$

where $K$ represents the fitness function and $BD$ indicates the Bhattacharyya distance between two classes. Thus, the formula for the Bhattacharyya distance between two classes is given by the following:

$$BD(x_1, x_2) = \frac{1}{4}\ln\!\left(\frac{1}{4}\left(\frac{\sigma_{x_1}^2}{\sigma_{x_2}^2} + \frac{\sigma_{x_2}^2}{\sigma_{x_1}^2} + 2\right)\right) + \frac{1}{4}\left(\frac{(\mu_{x_1} - \mu_{x_2})^2}{\sigma_{x_1}^2 + \sigma_{x_2}^2}\right), \tag{7}$$

where $BD(x_1, x_2)$ is the Bhattacharyya distance between the two classes $x_1$ and $x_2$, the variances of the classes $x_1$ and $x_2$ are given by $\sigma_{x_1}^2$ and $\sigma_{x_2}^2$, and the means of the classes $x_1$ and $x_2$ are given by $\mu_{x_1}$ and $\mu_{x_2}$.
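Equations (6) and (7) translate directly into code. A minimal sketch, assuming each class is supplied as a one-dimensional sample array with nonzero variance:

```python
import numpy as np

def bhattacharyya_distance(x1: np.ndarray, x2: np.ndarray) -> float:
    """Bhattacharyya distance between two classes (Eq. 7), computed
    from the sample means and variances of the two classes."""
    v1, v2 = x1.var(), x2.var()
    m1, m2 = x1.mean(), x2.mean()
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

def fitness_K(x1: np.ndarray, x2: np.ndarray) -> float:
    # Eq. (6): the fitness K is the Bhattacharyya distance BD.
    return bhattacharyya_distance(x1, x2)
```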
Step 3: Update solution based on proposed MFO-Bat algorithm. The MFO algorithm updates the solution space on the basis of the flame intensity. The alterations in the flame intensity make the moth move in one direction. Thus, the update solution of MFO is given by the following:

$$R_c = Z_c \cdot e^{uv} \cdot \cos(2\pi v) + S_d, \tag{8}$$

where $Z_c$ is the distance between the $c$th moth and the $d$th flame, $u$ represents the constant describing the shape of the logarithmic spiral, $v$ specifies a random number in the range $[-1, 1]$, and $S_d$ represents the $d$th flame. Here, the solution update of the Bat optimization algorithm is used to formulate the update equation of the proposed MFO-Bat algorithm. The position update of a bat is based on the following equation:

$$P' = P + aT_y, \tag{9}$$

where $a$ represents a random number in $[-1, 1]$, $T_y$ is the average loudness of the bats, $P'$ is the new solution for each bat, and $P$ is the old solution of each bat. After rearranging the above equation, the value of the random number is given by the following:

$$a = \frac{P' - P}{T_y} \tag{10}$$

The random numbers $a$ and $v$ take the same value for a particular iteration, which is given by $a = v$. Thus, the above equation becomes:

$$v = \frac{P' - P}{T_y} \tag{11}$$
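Numerically, one MFO-Bat update combines Equations (8) and (11). In the sketch below, treating v elementwise and clipping it to [-1, 1] are illustrative assumptions, as are the loudness T_y and the spiral constant u a caller would pass.

```python
import numpy as np

def mfo_bat_update(Z_c: np.ndarray, S_d: np.ndarray,
                   P_old: np.ndarray, P_new: np.ndarray,
                   T_y: float, u: float = 1.0) -> np.ndarray:
    """One position update of the proposed MFO-Bat: the random number
    v comes from the Bat rearrangement (Eq. 11) and is substituted
    into the logarithmic spiral of MFO (Eq. 8)."""
    v = (P_new - P_old) / T_y          # Eq. (11)
    v = np.clip(v, -1.0, 1.0)          # keep v within [-1, 1]
    return Z_c * np.exp(u * v) * np.cos(2.0 * np.pi * v) + S_d  # Eq. (8)
```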
Step 4: Determination of the best solution. The best solution R is obtained using the fitness function. Thus, the fitness for calculating the best solution is derived using Equation (6).

Step 5: Termination. The algorithm is terminated when the maximum iteration limit $t_{max}$ is crossed; finally, at the end of the iterations, the algorithm determines the best solution. Thus, the features selected using the proposed MFO-Bat are given by the following:

$$A = \{A_1, A_2, \ldots, A_f\}, \tag{13}$$

where $f$ is the total number of features, and $A$ represents a feature vector of size r x s. The selected features are subjected as an input to the final cluster nodes of the spark that are provided with the sparse-FCM method. The optimal clustering is performed in the final cluster nodes of spark such that they form optimal clusters. Algorithm 2 specifies the steps of the proposed MFO-Bat algorithm.

sparse FCM is used to find the cluster centroids and is elaborated in the following section. Algorithm 3 illustrates the algorithmic steps of parallelized clustering.

Algorithm 3. Procedure for Performing Parallelized Clustering

Return H
}

Sparse-FCM method for clustering huge data. In this section, the sparse FCM29 is used for clustering the huge data. Numerous data pose the cluster structure, which considers limited relevant features rather than the whole feature set. However, for huge data, identifying the significant features and determining the cluster structure becomes complex. For solving these issues, sparse FCM is used for initiating the clustering process using the selected features from the previous step. The sparse FCM uses sparse regularization for assigning zero weights to the noisy features to cluster huge data. Here, the similarity is represented using a distance measure, and the method tries to determine the collection of clusters that reduces the intracluster distances and increases the intercluster distances. The data to be clustered are represented as data points, and the set of data is termed a data set.
is represented using the Euclidean distance between the data and the cluster centroid and is given as follows:

$$B_{kt} = \sum_{l=1}^{q} x_l \,(V_{kl} - V_{tl})^2, \tag{15}$$

where $n$ denotes the number of clusters and $x_l$ the weight of the objects.

Step 3: Update cluster center O. Let $x$ and $\Re$ be fixed; $e(O)$ is minimized by solving $\max_x \sum_{l=1}^{f} x_l G_l$ subject to $\|x\|_2^2 \le 1$ and $\|x\|_1 \le \ell$, obtaining $x^*$. The cluster centers are then updated as follows:

$$O_{tl} = \begin{cases} 0, & \text{if } x_l = 0 \\[2pt] \dfrac{\sum_{k=1}^{n} P_{kt}^{\,b}\, V_{kl}}{\sum_{k=1}^{n} P_{kt}^{\,b}}, & \text{if } x_l \neq 0 \end{cases} \tag{16}$$
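The loop below mirrors the structure of Algorithm 4 that follows: initialize the weights to 1/sqrt(q), then alternate the partition-matrix, cluster-center, and weight updates until the relative weight change falls below 10^-4. It is a simplified sketch rather than the exact formulation of Chang et al.29; in particular, the weight update shown is only a heuristic surrogate for the constrained subproblem of Step 3.

```python
import numpy as np

def sparse_fcm(C: np.ndarray, E: int, b: float = 2.0,
               tol: float = 1e-4, max_iter: int = 100):
    """Simplified sparse-FCM: C is the (n_points, q) data matrix and
    E the number of clusters; b is the fuzzifier (a chosen value; the
    article reuses b for the feature count in step i)."""
    n, q = C.shape
    rng = np.random.default_rng(seed=1)
    x = np.full(q, 1.0 / np.sqrt(q))              # step i: uniform weights
    O = C[rng.choice(n, E, replace=False)]        # initial centers
    for _ in range(max_iter):
        x_old = x.copy()
        diff = C[:, None, :] - O[None, :, :]      # (n, E, q) differences
        d = (diff ** 2 * x).sum(-1) + 1e-12       # weighted distances, Eq. (15)
        P = d ** (-1.0 / (b - 1.0))
        P /= P.sum(axis=1, keepdims=True)         # step ii: partition matrix
        Pb = P ** b
        O = (Pb.T @ C) / Pb.sum(axis=0)[:, None]  # step iii: centers, Eq. (16)
        G = (Pb[:, :, None] * diff ** 2).sum((0, 1))
        x = np.maximum(G.max() - G, 0.0)          # heuristic: favor compact features
        x /= np.linalg.norm(x) + 1e-12            # enforce ||x||_2 <= 1
        if np.abs(x - x_old).sum() / (np.abs(x_old).sum() + 1e-12) < tol:
            break                                 # step v: stopping criterion
    return P.argmax(axis=1), O, x
```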
Algorithm 4. Sparse-Fuzzy C-Means Algorithm

E: Number of clusters
C_{i,j}: Data matrix
Procedure sparse FCM (E, C_{i,j})
// select first centroid
// Clusters I_1, I_2, ..., I_E and x~
i) Initialize x as x~_1 = x~_2 = ... = x~_b = 1/sqrt(b)
ii) Update partition matrix P_kt
iii) Update cluster centers I
iv) Fix the value of I_1, I_2, ..., I_E and calculate G_l
v) Repeat steps ii, iii, and iv until the stopping criterion
   $\sum_{l=1}^{q} |x_l - \tilde{x}_l| \,\big/\, \sum_{l=1}^{q} |\tilde{x}_l| < 10^{-4}$
   is satisfied.

The result from the sparse FCM is the clustered data, which is of size u x v.

Results and Discussion
This section illustrates the results produced by the proposed method for big data clustering, and the effective-

Competing methods
The proposed method of big data clustering is compared with the existing methods, such as SWPCM,18 ABC,19 MFO,27 and KMHMR,21 to prove the effectiveness of the proposed method. Thus, the existing
methods are compared with the proposed MFO-Bat algorithm based on performance metrics.

Performance metric
The analysis of the existing methods with respect to the proposed method is done in terms of the Jaccard coefficient, Dice coefficient, and classification accuracy.

i. Jaccard coefficient: The Jaccard coefficient is used to measure the similarity between two different sets of data, and the value of the Jaccard coefficient ranges from 0% to 100%. The data are said to be

Analysis based on population size. Figure 2 illustrates the performance analysis of the proposed MFO-Bat with varying population sizes ranging from 10 to 50. The analysis performed with respect to the classification accuracy is depicted in Figure 2a. When the total slaves are 2, the classification accuracies computed by the proposed MFO-Bat with population sizes 10, 20, 30, 40, and 50 are 57.202%, 67.766%, 70.775%, 71.153%, and 85.905%, respectively. Similarly, for 10 slaves, the classification accuracies measured by the proposed MFO-Bat with population
FIG. 2. Performance analysis of proposed MFO-Bat in terms of population size. (a) Classification accuracy.
(b) Dice coefficient. (c) Jaccard coefficient.
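For reference, the Dice and Jaccard percentages reported in this section can be computed from two sets, e.g., a predicted cluster and its ground-truth counterpart, as in this small sketch:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient in percent: |A intersect B| / |A union B|."""
    return 100.0 * len(a & b) / len(a | b)

def dice(a: set, b: set) -> float:
    """Dice coefficient in percent: 2|A intersect B| / (|A| + |B|)."""
    return 100.0 * 2 * len(a & b) / (len(a) + len(b))

print(jaccard({1, 2, 3}, {2, 3, 4}))  # 50.0
print(dice({1, 2, 3}, {2, 3, 4}))     # 66.66...
```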
The analysis in terms of the Dice coefficient with varying feature sizes is depicted in Figure 3b. When the total slaves are 2, the corresponding Dice coefficient values measured by the proposed MFO-Bat with feature sizes 8, 10, 12, 14, and 16 are 24.638%, 74.083%, 87.144%, 90.724%, and 90.752%, respectively. Likewise, for 10 slaves, the corresponding Dice coefficient values measured by the proposed MFO-Bat with feature sizes 8, 10, 12, 14, and 16 are 84.818%, 99.429%, 99.476%, 99.519%, and 99.840%, respectively. The analysis on the basis of the Jaccard coefficient with varying feature sizes is depicted in Figure 3c. When the total slaves are 2, the corresponding values of the Jaccard coefficient computed by the proposed MFO-Bat with feature sizes 8, 10, 12, 14, and 16 are 17.849%, 58.835%, 77.218%, 83.024%, and 83.070%, respectively. Similarly, for 10 slaves, the corresponding values of the Jaccard coefficient computed by the proposed MFO-Bat with feature sizes 8, 10, 12, 14, and 16 are 74.263%, 98.865%, 98.958%, 99.043%, and 99.682%, respectively. From the above analysis, it is noted that the performance of the proposed MFO-Bat increases with the increase in feature size.

FIG. 3. Performance analysis of proposed MFO-Bat in terms of feature size. (a) Classification accuracy. (b) Dice coefficient. (c) Jaccard coefficient.

Comparative analysis
This section presents the comparative analysis of the proposed MFO-Bat with respect to the existing methodologies on the basis of performance metrics, namely, accuracy, Jaccard coefficient, and Dice coefficient.

FIG. 4. Comparative analysis. (a) Classification accuracy. (b) Dice coefficient. (c) Jaccard coefficient. ABC, artificial bee colony; KMHMR, K-Means Hadoop MapReduce; SWPCM, secure weighted possibilistic C-means algorithm.

Figure 4 depicts the comparative analysis of the existing SWPCM, ABC, MFO, and KMHMR and the proposed MFO-Bat with respect to classification accuracy, Dice coefficient, and Jaccard coefficient. The analysis of the existing and proposed methods in terms of classification accuracy is depicted in Figure 4a. When the total slaves are 2, the corresponding classification accuracies measured by the existing SWPCM, ABC, MFO, and KMHMR are 66.573%, 67.053%, 67.055%, and 69.957%, whereas the proposed MFO-Bat acquired a classification accuracy of 84.855%. Similarly, for 10 slaves, the corresponding classification accuracy values computed by the existing SWPCM, ABC, MFO, and KMHMR and the proposed MFO-Bat are 67.055%, 67.084%, 89.491%, 92.740%, and 95.806%, respectively. From the above data, the proposed MFO-Bat shows the maximum classification accuracy compared with the existing methods. The analysis of the existing SWPCM, ABC, MFO, and KMHMR and the proposed MFO-Bat based on the Dice coefficient is depicted in Figure 4b. When the total slaves are 2, the corresponding Dice coefficient values computed by the existing SWPCM, ABC, MFO, and KMHMR and the proposed MFO-Bat are 70.456%, 71.951%, 71.955%, 72.647%, and 85.462%, respectively. Likewise, for 10 slaves, the corresponding Dice coefficient values computed by the existing SWPCM, ABC, MFO, and KMHMR and the proposed MFO-Bat are 71.951%, 71.960%, 90.103%, 93.282%, and 99.181%, respectively. The analysis based on the Jaccard coefficient is depicted in Figure 4c. When the total slaves are 2, the corresponding Jaccard
coefficient values computed by the existing SWPCM,

Table 1. Comparative discussion

Methods                     | SWPCM  | ABC    | MFO    | KMHMR  | Proposed MFO-Bat
Classification accuracy (%) | 67.055 | 67.084 | 89.491 | 92.740 | 95.806
Dice coefficient (%)        | 71.951 | 71.960 | 90.103 | 93.282 | 99.181
Jaccard coefficient (%)     | 60.501 | 60.528 | 82.719 | 87.509 | 98.376

ABC, artificial bee colony; KMHMR, K-Means Hadoop MapReduce; MFO-Bat, moth-flame optimization-based bat; SWPCM, secure weighted possibilistic C-means algorithm.

fed to the final cluster nodes of spark, which use the sparse-FCM method for the clustering process. The optimal clustering is performed at the final cluster nodes of spark to obtain optimal clusters of different data. The experimentation of the proposed MFO-Bat is performed, and it confirms that the proposed method outperforms the existing methods with a maximal classification accuracy of 95.806%, a maximal Dice coefficient of 99.181%, and a maximal Jaccard coefficient of 98.376%. The proposed method handles big data with a large sample size and offers
16. Sassi Hidri M, Zoghlami MA, Ben Ayed R. Speeding up the large-scale consensus fuzzy clustering for handling Big Data. Fuzzy Sets Syst. 2017;1:1-25.
17. Wang Q, Guo S, Hu J, Yang Y. Spectral partitioning and fuzzy C-means based clustering algorithm for big data wireless sensor networks. J Wireless Com Network. 2018;54:11.
18. Zhang Q, Yang LT, Castiglione A, et al. Secure weighted possibilistic c-means algorithm on cloud for clustering big data. Inf Sci. 2019;479:515-525.
19. Ilango SS, Vimal S, Kaliappan M, Subbulakshmi P. Optimization using artificial bee colony based clustering approach for big data. Cluster Comput. 2018;22:12169-12177.
20. Bijari K, Zare H, Veisi H, Bobarshad H. Memory-enriched big bang-big crunch optimization algorithm for data clustering. Neural Comput Appl. 2018;29:111-121.
21. Sreedhar C, Kasiviswanath N, Chenna Reddy P. Clustering large datasets using K-means modified inter and intra clustering (KM-I2C) in Hadoop. J Big Data. 2017;4.
28. Yang X-S. Bat algorithm for multi-objective optimization. Int J Bioinspired Computation. 2012;3:267-274.
29. Chang X, Wang Q, Liu Y, Wang Y. Sparse regularization in fuzzy c-means for high-dimensional data clustering. IEEE Trans Cybernet. 2017;47:2616-2627.
30. Global Terrorism Database. Available online at https://www.kaggle.com/bstaff/global-terrorism-database (last accessed February 2019).

Cite this article as: Ravuri V, Vasundra S (2020) Moth-flame optimization-bat optimization: map-reduce framework for big data clustering using the moth-flame bat optimization and sparse fuzzy C-means. Big Data 3:X, 1-15, DOI: 10.1089/big.2019.0125.