Comput. Methods Appl. Mech. Engrg.: Saeed Gholizadeh, Eysa Salajegheh
Article info

Article history: Received 9 June 2008; Received in revised form 25 April 2009; Accepted 28 April 2009; Available online 6 May 2009.

Keywords: Earthquake; Particle swarm optimization; Neural network; Adaptive neuro-fuzzy inference system; Subtractive algorithm; Self-organizing map; Radial basis function.

Abstract

This paper proposes a new metamodeling framework that reduces the computational burden of structural optimization against time history loading. In order to achieve this, two strategies are adopted. In the first strategy, a novel metamodel consisting of an adaptive neuro-fuzzy inference system (ANFIS), the subtractive algorithm (SA), a self-organizing map (SOM) and a set of radial basis function (RBF) networks is proposed to accurately predict the time history responses of structures. The proposed metamodel is called fuzzy self-organizing radial basis function (FSORBF) networks. In this study, the natural periods most influential on the dynamic behavior of structures are treated as the inputs of the neural networks. In order to find the most influential natural periods among all those involved, ANFIS is employed. To train the FSORBF, the input–output samples are classified by a hybrid algorithm consisting of SA and SOM clustering, and an RBF network is then trained for each cluster using the data located in it. In the second strategy, particle swarm optimization (PSO) is employed to find the optimum design. Two building frame examples are presented to illustrate the effectiveness and practicality of the proposed methodology: a plane steel shear frame and a realistic steel space frame are designed for optimal weight using exact and approximate time history analyses. The numerical results demonstrate the efficiency and computational advantages of the proposed methodology.

© 2009 Elsevier B.V. All rights reserved.
doi:10.1016/j.cma.2009.04.010
S. Gholizadeh, E. Salajegheh / Comput. Methods Appl. Mech. Engrg. 198 (2009) 2936–2949 2937
where T is the time interval over which the constraints need to be imposed.

Because the total time interval is divided into n_gp subintervals, the constraint (9) is replaced by constraints at the n_gp + 1 time grid points:

g_i(X, Z(t_j), Ż(t_j), Z̈(t_j), t_j) ≤ 0,  j = 0, ..., n_gp.  (10)

The above constraint function can be evaluated at each time grid point after the structure has been analyzed and the stresses and displacements have been evaluated at each time point.

3. Swarm intelligence for structural optimization for time history loading

The objective function of constrained structural optimization problems is defined as follows:

f̃(X) = { f(X),            if X ∈ D̃,
       { f(X) + f_p(X),   otherwise,   (11)

where f_p(X) and D̃ are the penalty function and the feasible search space, respectively.

If the time history load is an earthquake excitation, a simple form of penalty function is employed:

f_p(X) = r_p Σ_{j=0}^{n_gp} [ Σ_{i=1}^{n_e} (max(g_qi(X, t_j), 0))² + Σ_{i=1}^{n_s} (max(g_di(X, t_j), 0))² ],  (12)

where r_p is an adjusting coefficient.

Particle swarm optimization (PSO) was inspired by the social behavior of animals such as fish schooling, insect swarming and bird flocking. It was proposed by Kennedy and Eberhart [20,21] in the mid 1990s while attempting to simulate the graceful motion of bird swarms as part of a socio-cognitive study. The PSO involves a number of particles, which are randomly initialized in the search space of an objective function; these particles are referred to as a swarm. Each particle of the swarm represents a potential solution of the optimization problem. The particles fly through the search space and their positions are updated based on the best positions of the individual particles in each iteration. The objective function is evaluated for each particle at each grid point, and the fitness values of the particles are obtained to determine the best position in the search space [22]. In iteration k, the swarm is updated using the following equations:

V_i^{k+1} = ω^k V_i^k + c_1 r_1 (P_i^k − X_i^k) + c_2 r_2 (P_g^k − X_i^k),  (13)
X_i^{k+1} = X_i^k + V_i^{k+1},  (14)

where X_i and V_i represent the current position and the velocity of the ith particle, respectively; P_i is the best previous position of the ith particle (called pbest) and P_g is the best global position among all the particles in the swarm (called gbest); r_1 and r_2 are two uniform random sequences generated from the interval [0, 1]; and c_1 and c_2 are the cognitive and social scaling parameters, respectively. Each component of V_i is constrained to a maximum value V_max and a minimum value V_min. The inertia weight used to discount the previous velocity of the particle is expressed by ω.

Due to the importance of ω in achieving efficient search behavior, the updating criterion is taken as follows:

ω = ω_max − ((ω_max − ω_min)/k_max) k,  (15)

where ω_max and ω_min are the maximum and minimum values of ω, respectively, and k_max and k are the maximum number of iterations and the present iteration number, respectively.

Shi and Eberhart [23] proposed that the parameters c_1 and c_2 be selected such that c_1 = c_2 = 2.0, so that the products c_1 r_1 and c_2 r_2 have a mean of 1. Also, in the field of structural optimization, Perez and Behdinan [24] have demonstrated that the best results can be found by setting c_1 = c_2 = 2.0. Therefore, in the present paper it is assumed that c_1 = c_2 = 2.0, and proper results have been observed.

The particle size is a problem-dependent parameter; in [25] it is mentioned that the typical range for the number of particles is 20–40. In this work, a swarm of 30 particles is chosen and proper results are found. As mentioned in the literature, the maximum number of iterations is also a problem-dependent parameter; in this paper we have found satisfactory results by assuming a value of 100.

A successful application of the binary model of PSO to time history optimization was reported in [26]. Here, the real-valued model of PSO is employed: the decimal values of the design variables are used in the optimization process instead of their binary codes. In this case the length of the particles is shortened, and therefore the convergence of the algorithm can be achieved with lower effort and higher speed.

Despite the efficiency of the PSO, the computational burden of the optimization process due to implementing the time history analyses is very high. Incorporating neural networks into the optimization process to predict the desired structural responses can substantially reduce the computing effort.

4. Self-organizing radial basis function metamodel

Conventional RBF networks are widely used in the field of civil and structural engineering due to their fast training, generality and simplicity [27–30]. The activation function of RBF neurons is as follows:

φ(Y) = exp(−(Y − C)ᵀ(Y − C)/(2σ²)),  (16)

where Y, C, and σ are the input vector, the weight (center) vector and the receptive field radius of the RBF neurons, respectively.

In a conventional RBF network all the hidden layer neurons have an equal receptive field radius (σ); the input space is covered by RBF neurons with the same σ [13]. Thus, it is probable that some parts of the input space are not properly covered by the neurons, and the performance generality of the RBF network over these parts is poor. In order to improve the performance generality of the RBF network, more RBF neurons with smaller σ may be assigned to the hidden layer. In this case the input space is smoothly covered, but the computational burden of the training due to employing many RBF neurons is high. On the other hand, if RBF neurons with various σ values could be employed instead of a single value, the input space would be covered smoothly by a small number of RBF neurons. In this case, not only will the accuracy of the prediction task be improved all over the input space, but the computational burden of the training will also be considerably reduced. To implement this simple strategy, it is necessary to classify the input space (input samples); it is then possible to assign an RBF network with a specific σ to each class. In this study, based on this simple strategy, an efficient alternative, the so-called fuzzy self-organizing radial basis function (FSORBF) metamodel, is proposed. The FSORBF comprises two processing units: clustering and predicting. In the clustering unit an efficient hybrid algorithm, a combination of the ANFIS, the SA and the SOM network, is employed to cluster the training data. In the predicting unit a set of RBF networks with various σ values is employed to achieve the prediction task.

4.1. Clustering unit

The main duty of the clustering unit is to recognize and cluster the similar training samples. Adopting a similarity criterion has a
key role in the computational performance of this unit. Based on our computational experiments, in the case of structural dynamic analysis the best criterion is the natural periods or frequencies of the structures. It is well known that the natural periods are fundamental parameters affecting the dynamic behavior of structures. The natural periods are extracted from the free vibration of an undamped system:

M Z̈(t) + K Z(t) = 0  ⇒  [K − ω_i² M] D_i = 0  ⇒  |K − ω_i² M| = 0  ⇒  tp_i = 2π (ω_i)⁻¹,  i = 1, ..., n_m,  (17)

where ω_i, tp_i, and D_i are the angular natural frequency, the period, and the modal vector of the ith mode, respectively. The number of the natural modes involved is expressed by n_m.

Eq. (17) indicates that structures with similar stiffness and mass matrices have equal periods; therefore, such structures yield the same time history structural behavior. Thus, in the clustering unit the generated structures, defined by their design variable vectors X, are clustered based on their natural periods. In this unit the input vector is therefore the natural periods of the structure, and the output is an integer presenting the corresponding cluster number:

I_cu = {tp_1, tp_2, ..., tp_{n_m}}ᵀ,  (18)
O_cu = i,  i = 1, ..., m_s,  (19)

where I_cu, O_cu, and m_s are the input, the output, and the number of clusters of the clustering unit.

In fact, the dynamic behavior of real-scale structures is usually affected by many of their natural periods. Therefore, in order to efficiently cluster the structures with low computing effort, it is necessary to identify the most influential natural periods, which seriously affect the dynamic behavior of the structures. ANFIS offers an efficient computational tool for this purpose. After determining the most influential natural periods, the SOM network is employed to cluster the structures based on these important periods. One of the most important parameters affecting the performance of the SOM network is the number of SOM neurons; it is necessary to note that each SOM neuron corresponds to a data cluster. Here, the SA is used to determine the optimal number of SOM neurons (clusters).

The potential energy of a structure is collected from its elements:

E_S = Σ_{i=1}^{n_e} E_ei,  E_ei = ½ σ_iᵀ ε_i V_ei,  (23)

where E_S and E_ei are the potential energies of the structure and of an element, respectively; σ_i, ε_i, and V_ei are the internal stress vector, the strain vector and the volume of the ith element, respectively.

The mathematical expression of the potential energy reveals that the effects of all the time history responses on the dynamic behavior of the structures are collected in the potential energy. To implement the ANFIS, a number of structures are randomly generated and their natural periods and time history responses, for a given time history loading, are evaluated by conventional FE analysis. The natural periods of the generated structures and the maximum value of their potential energy, E^m = max_{j=0}^{n_gp} [E_S(X, t_j)], are treated as the inputs and the output of the ANFIS, respectively. The provided data is divided into training and testing sets on a random basis. Then, an exhaustive search is performed within the involved inputs to select the set of inputs that have the most influence on the output. Essentially, the exhaustive search technique builds an ANFIS model for each combination of input vector components, trains it for a few epochs, and reports the performance achieved [15]. The fundamental steps of the exhaustive search by ANFIS to determine the n_i influential periods from the n_m candidates are as follows:

Step (1) The number of influential natural periods is initialized (n_i = 1).
Step (2) For each combination of n_i variables from the n_m candidates one ANFIS model is built; this leads to n_a = n_m!/(n_i!(n_m − n_i)!) ANFIS models, and the RMSE for the training and testing sets is calculated as:

RMSE = [ (1/N) Σ_{i=1}^{N} (E_i^m − E_{pr,i}^m)² ]^{0.5},  (21)

where E_i^m and E_{pr,i}^m are the actual and the predicted values of the output and N is the number of samples. Minimal training and testing errors over the combination sets identify the most influential periods, i.e. those most seriously affecting the maximum value of the potential energy, and therefore correspond to the optimal combination.
Step (3) Find and save the optimal combination of the current iteration.
Step (4) If, in the current iteration, the training and testing RMSEs of the optimal combination are less than those of the previous iteration, set n_i = n_i + 1 and go to Step (2); otherwise terminate the process.

The SA assigns each data point a potential according to the density of its neighborhood:

P_i = Σ_{j=1}^{N} exp(−α |I_cuI^i − I_cuI^j|²),  α = 4/r_a²,  (22)

where r_a is radii, a constant which defines the neighborhood radius of each point.
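The subtractive algorithm's potential computation (each point's potential is a sum of Gaussian contributions from its neighborhood, with radius r_a) can be sketched as follows. This is a minimal illustration, assuming Euclidean distances between period vectors; the point coordinates and function name are illustrative, not from the paper.

```python
import math

def potentials(points, ra):
    """Potential of each point: sum of Gaussian contributions from all
    points, with neighborhood radius ra (alpha = 4 / ra**2)."""
    alpha = 4.0 / ra ** 2
    return [sum(math.exp(-alpha * sum((a - b) ** 2 for a, b in zip(x, y)))
                for y in points)
            for x in points]

# Densely packed points receive a higher potential than isolated ones;
# the point with the highest potential becomes the first cluster center.
pts = [(0.30, 0.10), (0.31, 0.11), (0.32, 0.10), (0.90, 0.80)]
P = potentials(pts, ra=0.5)
first_center = pts[P.index(max(P))]
```

The isolated point (0.90, 0.80) receives the lowest potential, so the first center is picked from the dense group.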
Thus, points with a dense neighborhood will have a higher associated potential. After computing the potential for each point, the one with the highest potential is selected as the first cluster center. Next, the potential of all the remaining points is reduced. Defining I_cuI^1 as the first group center and denoting its potential as P_1, the potential of the remaining points is reduced as:

P_i ← P_i − P_1 exp(−β |I_cuI^i − I_cuI^1|²),  β = 4/r_b²,  (24)

where the constant r_b > 0 defines the neighborhood radius over which the reduction is sensible.

In this way, points close to the selected center have their potentials reduced more significantly, and so their probability of being chosen as centers diminishes. This procedure has the advantage of avoiding the concentration of identical clusters in denser zones. Thus, the r_b value is selected to be slightly higher than r_a, so as to avoid closely spaced clusters; typically r_b = 1.5 r_a [34].

After performing the potential reduction for all the points, the one with the highest potential is selected as the second center, after which the potential of the remaining points is again reduced. On determining the rth group center, the potential is reduced as:

P_i ← P_i − P_r exp(−β |I_cuI^i − I_cuI^r|²).  (25)

The procedure of center selection and potential reduction is repeated until the stopping criterion is met. The pseudo-code of the SA is as follows:

(a) If P_k > ε_up P_1: accept I_cuI^k as the next cluster center and continue.
(b) Otherwise, if P_k < ε_down P_1: reject I_cuI^k and finish the algorithm.
(c) Otherwise, let d_min be the shortest distance between I_cuI^k and all the centers already found.
(d) If d_min/r_a + P_k/P_1 ≥ 1: accept I_cuI^k as the next cluster center and continue.
(e) Otherwise, reject I_cuI^k and assign it the potential 0.0.
(f) Select the point with the highest potential as the new I_cuI^k and repeat the process.

In the above algorithm, ε_up specifies a threshold above which the point is selected as a center with no doubt, and ε_down specifies the threshold below which the point is definitely rejected; typically ε_up = 0.5 and ε_down = 0.15 [34]. The third case is where the point is characterized by a good trade-off between having a sufficiently high potential and being distant enough from the clusters determined before. As can be understood from the description of the algorithm, the number of clusters to obtain is not pre-specified. However, it is important to note that the parameter r_a (radii) is directly related to the number of centers found: a small r_a leads to a high number of centers, while a bigger r_a leads to a smaller one. Therefore, in practice it is necessary to test several values of r_a and select the most adequate according to the results obtained [34]. It is mentioned in [15] that good values for radii are usually between 0.2 and 0.5, i.e. r_a ∈ [0.2, 0.5]. Therefore, in the present paper various values belonging to the interval [0.2, 0.5] are examined and the number of clusters is determined according to the results. In what follows, let the number of clusters found be m_s.

4.1.3. Clustering the data by SOM network

After determining the number of SOM neurons (m_s), a SOM network is trained to classify the generated samples. SOM networks learn to classify input vectors according to how they are grouped in the input space. The neurons in the layer of a SOM network are originally arranged in physical positions according to a specific topology. A SOM network identifies a winning neuron by a simple procedure. When an input vector is presented to the network, the output value of the ith neuron is obtained by computing the Euclidean distance between its weight vector and the input vector as follows [12]:

o_i = [ Σ_{j=1}^{n_i} (tp_j − w_ij)² ]^{0.5},  i = 1, ..., m_s,  (26)

where w_ij is the weight of the SOM layer connecting the jth input to the ith neuron, and tp_j is the jth component of the input vector. The neuron i whose output satisfies Eq. (27) is the winner:

o_i = min_i(o_i),  i = 1, ..., m_s.  (27)

However, instead of updating only the winning neuron, all neurons within a certain neighborhood of the winning neuron are updated, using the Kohonen rule. Specifically, all such neurons are adjusted as [12]:

w_ij(k + 1) = w_ij(k) + α [tp_j(k) − w_ij(k)],  (28)

where α is the learning rate and k is the discrete time. The neighborhood contains all of the neurons that lie within a specific radius of the winning neuron. When an input vector is presented, the weights of the winning neuron and its neighbors move toward that vector.

In this case, by presenting I_cu^N = {I_cuI^i}_{i=1}^N to the SOM network, the data are divided into m_s clusters:

I_cu^N = {I_cuI^i}_{i=1}^N → SOM network →
  Cluster 1:   I_cu^1    = {I_cuI^j}_{j=1}^{n_C1}
  ⋮
  Cluster m_s: I_cu^{m_s} = {I_cuI^j}_{j=1}^{n_Cms}

with Σ_{i=1}^{m_s} n_Ci = N,  (29)

where n_C1, n_C2, ..., n_Cms are the numbers of data located in clusters 1, 2, ..., m_s, respectively.

4.2. Predicting unit

In this unit a set of RBF networks is employed to predict the structural time history responses. One of the most common strategies in incorporating neural networks into the optimization process is to use the design variables vector as the input vector of the neural networks. Since the most influential natural periods of the structures are extracted for the sample classification purpose, they can be employed again as the inputs in the training process of the RBF networks. Based on our computational experiments, using the most influential natural periods as the inputs, instead of the design variables vector (cross-sectional area assignments), results in better accuracy. In this paper, therefore, the most influential natural periods of the structures are taken into account as the inputs of the RBF networks. In other words, the input vector of the predicting unit is the most influential natural periods determined by ANFIS (I_cuI = {tp_1, tp_2, ..., tp_ni}ᵀ). Also, the output vector of the predicting unit, i.e. the outputs of the FSORBF metamodel, is as follows:

O_pu = {d_1(X, t), ..., d_ns(X, t), q_1(X, t), ..., q_ne(X, t)}ᵀ.  (30)

Let the training set employed in this unit be I_pu^N = I_cu^N = {I_cuI^i}_{i=1}^N, O_pu^N = {O_pu^i}_{i=1}^N. After clustering the inputs in the clustering unit and separating their corresponding outputs, an RBF network can be assigned to each data cluster found and trained using the data located in it, i.e.:
{I_pu^j}_{j=1}^{n_C1},   {O_pu^j}_{j=1}^{n_C1}   → RBF network 1
{I_pu^j}_{j=1}^{n_C2},   {O_pu^j}_{j=1}^{n_C2}   → RBF network 2
  ⋮
{I_pu^j}_{j=1}^{n_Cms},  {O_pu^j}_{j=1}^{n_Cms}  → RBF network m_s.  (31)

It should be noted that in the conventional RBF network, I_pu^N and O_pu^N are used to train the network.

In order to evaluate the accuracy of the approximate structural responses predicted by the RBF and FSORBF networks, two evaluation metrics are used: the relative root mean square error (RRMSE) and the R-square (R²) statistic [35]. The RRMSE between the exact and predicted responses is:

RRMSE = [ (1/(r − 1)) Σ_{i=1}^{r} (k_i − k̃_i)²  /  ((1/r) Σ_{i=1}^{r} k_i²) ]^{0.5},  (32)

where k_i and k̃_i are the ith components of the exact and predicted responses, respectively. The dimension of the vectors is expressed by r.

To measure how successful a fit is achieved between the exact and approximate responses, the R² statistic is employed. Statistically, R² is the square of the correlation between the predicted and the exact responses. It is defined as follows:

R² = 1 − Σ_{i=1}^{r} (k_i − k̃_i)²  /  Σ_{i=1}^{r} (k_i − k̄)²,  (33)

where k̄ is the mean of the components of the exact vector.

The performance generality of the trained parallel RBF networks is examined in terms of the RRMSE and R² through the test data. The weak RBF networks are identified, and their performance generality is improved by generating new data, adding them to the old training set, and retraining the networks. The training flow of the FSORBF metamodel is shown in Fig. 1.

[Fig. 1. Training flow of the FSORBF metamodel: selecting a number of sample structures X on a random basis; evaluating the natural periods {tp_1, ..., tp_nm}ᵀ and the time history responses {q_1(X, t), ..., q_ne(X, t), d_1(X, t), ..., d_ns(X, t)}ᵀ by conventional FE analysis; evaluating the maximum value of the potential energy E^m; determining the most influential periods {tp_1, ..., tp_ni}ᵀ by ANFIS; finding the optimal number of clusters m_s by the SA; dividing the data into m_s clusters by the SOM network; training one RBF network per cluster on {I_cuI^j}, {O_pu^j}.]

5. Structural optimization by PSO incorporating FSORBF

Incorporating the FSORBF metamodel into the optimization process necessitates that the values of the most influential natural periods be computed. Evaluating the periods by analytic methods can impose an additional computational burden on the process. In order to eliminate this computing effort, the wavelet radial basis function (WRBF) network proposed in [14] is employed to effectively predict the periods of the structures. In the WRBF network the activation function of the hidden layer neurons is the cosine-Gaussian Morlet [36] wavelet function:

φ_WRBF(Z) = (1/√a) cos(ω₀ (Z − b)/a) exp(−0.5 ((Z − b)/a)²),  (34)

where Z, a, and b are the input, the dilation factor, and the translation factor, respectively.

As demonstrated in [14], the optimal values of ω₀, a, and b are 4, 4.5, and 0, respectively. Therefore:

φ_WRBF(Z) = 0.4714 cos(0.889 Z) exp(−0.0247 Z²).  (35)

The input and output of the WRBF are the cross-sectional area assignments and the most influential natural periods of the structures, respectively.

The errors between the exact and approximate periods, er^tp, are calculated using the following equation:

er_i^tp = (|tp_i^ap − tp_i^ex| / tp_i^ex) × 100,  i = 1, ..., n_i,  (36)

where tp^ap and tp^ex represent the approximate and exact periods, respectively.
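The constants in Eq. (35) follow directly from substituting ω₀ = 4, a = 4.5, b = 0 into Eq. (34): 1/√4.5 ≈ 0.4714, 4/4.5 ≈ 0.889, and 0.5/4.5² ≈ 0.0247. A quick numerical check (the function name is illustrative):

```python
import math

def morlet_wrbf(z, omega0=4.0, a=4.5, b=0.0):
    """Cosine-Gaussian (Morlet) activation of Eq. (34)."""
    u = (z - b) / a
    return (1.0 / math.sqrt(a)) * math.cos(omega0 * u) * math.exp(-0.5 * u * u)

# The three constants of Eq. (35) recovered from omega0 = 4, a = 4.5, b = 0:
assert abs(1 / math.sqrt(4.5) - 0.4714) < 1e-4   # amplitude
assert abs(4 / 4.5 - 0.889) < 1e-3               # cosine frequency
assert abs(0.5 / 4.5 ** 2 - 0.0247) < 1e-4       # Gaussian decay
```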
The fundamental steps of the optimization by PSO using the FSORBF are as follows:

Step 1. Initializing a population of particles with random positions and velocities in the search space.
Step 2. Updating the velocity of each particle, according to Eq. (13).
Step 3. Updating the position of each particle, according to Eq. (14).
Step 4. Evaluating the values of the most influential periods of each particle by the trained WRBF.
Step 5. Evaluating the time history response of each particle by the trained FSORBF at the time grid points.
Step 6. Evaluating the fitness value of each particle according to the predicted response.
Step 7. Checking the convergence conditions. If convergence is met, terminate the process; otherwise go to Step 2.
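The PSO iteration above (Eqs. (13)–(15), with the paper's settings c1 = c2 = 2.0, 30 particles, 100 iterations) can be sketched as follows. This is a minimal sketch: the `fitness` callable is a stand-in for the WRBF/FSORBF surrogate evaluation plus the penalty of Eq. (12), and the position clamping to the variable bounds is a practical addition not spelled out in the paper.

```python
import random

def pso_minimize(fitness, bounds, n_particles=30, k_max=100,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Real-valued PSO following Eqs. (13)-(15)."""
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                       # best position per particle
    pbest_f = [fitness(x) for x in X]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]        # best global position
    for k in range(k_max):
        w = w_max - (w_max - w_min) * k / k_max     # linear inertia, Eq. (15)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))    # Eq. (13)
                X[i][d] += V[i][d]                              # Eq. (14)
                lo, hi = bounds[d]
                X[i][d] = min(max(X[i][d], lo), hi)  # clamp to search space
            f = fitness(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

# Toy check on a sphere function, standing in for the surrogate-based fitness.
random.seed(1)
best, best_f = pso_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

In the paper, Steps 4–6 would replace the toy fitness: the trained WRBF maps each particle's design variables to its influential periods, the FSORBF maps those periods to time history responses, and the penalized weight of Eq. (11) is evaluated on the predicted responses.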
[email protected]
input).
3 3
(3) RBF + ANFIS + SA-SOM (FSORBF metamodel).
5 5
4 4
3 3
2 2
1 1
0 0
1 11 21 31 1 11 21 31
Test samples No. Test samples No.
5
mean error = 0.8548%
3rd period error (%)
4
0
1 11 21 31
Table 4a
The results of testing the RBF, RBF + ANFIS and FSORBF.
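The RRMSE and R² figures reported in these tables follow Eqs. (32) and (33); a minimal sketch of their computation (function names are illustrative):

```python
import math

def rrmse(exact, pred):
    """Relative root mean square error, Eq. (32)."""
    r = len(exact)
    num = sum((k - kt) ** 2 for k, kt in zip(exact, pred)) / (r - 1)
    den = sum(k ** 2 for k in exact) / r
    return math.sqrt(num / den)

def r_square(exact, pred):
    """R-square statistic, Eq. (33)."""
    mean = sum(exact) / len(exact)
    ss_res = sum((k - kt) ** 2 for k, kt in zip(exact, pred))
    ss_tot = sum((k - mean) ** 2 for k in exact)
    return 1.0 - ss_res / ss_tot

# A perfect prediction gives RRMSE = 0 and R^2 = 1.
exact = [1.0, 2.0, 3.0, 4.0]
assert rrmse(exact, exact) == 0.0 and r_square(exact, exact) == 1.0
```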
in Table 2. It is observed that, when n_i = 2, the testing RMSE is considerably reduced in comparison with the state of n_i = 1, while the testing RMSE corresponding to n_i = 3 has only a trivial difference from that of n_i = 2. Due to this slight difference, the exhaustive search process is terminated and the first, second and third natural periods of the structures are captured as the optimal combination: I_cuI = {tp_1, tp_2, tp_3}ᵀ. The time spent on the exhaustive search by ANFIS is 0.03 min.

6.1.3. Clustering of the inputs

The most influential natural periods of the generated structures, I_cu^N = {I_cuI^i}_{i=1}^N, are investigated by employing the SA to determine the optimal number of potential centers. Values from 0.20 to 0.50 with a step of 0.025 have been examined for r_a, and the results are given in Table 3. The results indicate that, as expected, increasing the value of r_a decreases the number of centers found. According to the results, in 30.77%, 53.84%, and 15.39% of the cases the numbers of centers found are 3, 2, and 1, respectively. Therefore, in this example the number of centers is considered to be 2, i.e. m_s = 2. The total time spent executing the SA is about 0.2 min.

Therefore, a SOM network with 2 neurons is employed to classify the inputs. The SOM network is trained, and the numbers of training data located in clusters 1 and 2 are 28 and 42, respectively. By presenting the testing data (30 samples) to the trained SOM network, the numbers of testing data located in clusters 1 and 2 are found to be 12 and 18, respectively. The time spent training the SOM network is 0.3 min.

Table 4b
Mean R² and RRMSE of test data for the two clusters.

Outputs   Cluster 1              Cluster 2
          R²        RRMSE        R²        RRMSE
u5        0.9996    0.0682       0.9936    0.0657

6.1.4. Training the WRBF network to predict the values of the most influential natural periods

The WRBF network is trained using the training data, including 70 samples. The input of the WRBF is the design variables vector of the structures (i.e. the cross-sectional area assignments of the element groups) and its output is the vector containing the values of the most influential natural periods of the structures. The performance generality of the WRBF is investigated through the testing data, and the results demonstrate the high accuracy of the network for predicting the periods. The results of testing the WRBF network are shown in Fig. 4. It is observed that the mean of the errors is about 1%. Thus, the generality of the WRBF is good and it can be reliably employed in the optimization process. The time spent training the WRBF is 0.75 min.

As in the case of the RBF, all five natural periods are required; therefore another WRBF network is trained to predict all five natural periods of each structure. The accuracy of this network is very good and its training time is 0.9 min.

6.1.5. Training the RBF, RBF + ANFIS and FSORBF

The three mentioned alternatives are trained to predict the time history displacement response. In the case of the RBF, all five natural periods of each structure are considered as the input vector
Table 4c
The results of testing the RBF, RBF + ANFIS by increasing the number of the training samples.
Table 5a
Optimum designs (profiles no.) obtained by PSO employing the RBF network.
components. The size of the RBF is 5-70-750; in the case of the RBF + ANFIS, the size of the network is 3-70-750. For training the FSORBF, an RBF network is trained for each cluster to accurately predict the time history displacement response. Therefore, in this example the FSORBF includes two parallel RBF networks, with sizes 3-28-750 and 3-42-750, respectively. The performance generalities of the three networks are compared in Table 4a through the 30 test samples. The times spent training the conventional RBF, the RBF + ANFIS and the two parallel RBF networks of the FSORBF are 2.00 min, 1.75 min and 1.25 min, respectively. As given in this table, the performance generality of the FSORBF is better than those of the other networks, while the elapsed training times in all the cases are nearly equal. The mean R² and RRMSE of the responses predicted by the FSORBF are given for all the clusters in Table 4b.

It is observed that the accuracy of the responses predicted in all the clusters is good; therefore, the FSORBF possesses proper performance generality and it can be employed in the optimization process to predict the responses.

It is demonstrated that by using the FSORBF the response prediction accuracy is considerably improved compared with the RBF and RBF + ANFIS networks. Now a serious question may arise: what would the results be if no clustering was done at all, i.e. if only one network was trained on the whole data set?

To respond to this question we have examined another alternative to improve the performance generality of the RBF and RBF + ANFIS networks. In this case we have retrained the RBF and RBF + ANFIS by increasing the number of the training samples instead of employing the clustering-based idea. The number of the training samples is increased to 105 and 140, respectively. It should be noted that the 30 testing samples are fixed for all the cases. The results are given in Table 4c.

It is observed that the performance generality of the FSORBF trained with 70 training samples is even better than that of the RBF and RBF + ANFIS trained with 140 training samples. Also, the elapsed time (t_data generation + t_training) of the FSORBF is about 50% of the time elapsed by those networks. It means that, to reach the same performance generality, the computational burden of training the RBF and RBF + ANFIS networks is about double that of the FSORBF. Therefore, it is demonstrated that employing the clustering-based idea leads to better results in terms of both network performance generality and elapsed time.

6.1.6. Optimization results

The computational merits of the RBF, the RBF + ANFIS and the FSORBF are also examined in the optimization phase. In this case, all the networks, trained with 70 training samples, are incorporated into the optimization task. The results of optimization for 10
Table 5b
Optimum designs (profile no.) obtained by PSO employing the RBF + ANFIS network.

Table 5c
Optimum designs (profile no.) obtained by PSO employing the FSORBF metamodel.

Table 5d
Optimum designs obtained by PSO using exact and approximate analyses.

independent runs are given in Tables 5a, 5b and 5c, respectively. In the tables, the profile numbers listed in Table 1 are given.
The results given in Table 5a indicate that the PSO algorithm converges to feasible solutions only in runs 2, 5 and 8. In the remaining runs, due to the loss of accuracy of the conventional RBF network, the attained solutions are infeasible. Therefore, the optimal design of this case in terms of profile numbers is [8, 7, 3, 3, 3]^T with the average
The results given in Table 5b indicate that the PSO algorithm converges to feasible solutions only in runs 4, 8 and 9. In the remaining runs, due to the loss of accuracy of the RBF + ANFIS network, the attained solutions are infeasible. Therefore, the optimal design of this case in terms of profile numbers is [7, 5, 4, 3, 3]^T with the average number
In Table 5c, all the obtained solutions are feasible. Through the multiple runs, the PSO algorithm finds two different solutions: one is obtained 2 times while the other is attained 8 times. We have chosen the solution appearing more often as the optimal design of this case. Therefore, the optimal design of this case in terms of profile numbers is [7, 5, 3, 3, 2]^T with the average
are 2.00 + 0.90 = 2.90, 1.75 + 0.75 = 2.50 and 1.78 + 0.75 = 2.53 min, respectively.

Fig. 5. Fifth storey displacement (cm) versus time (s) of the optimum designs: (a) RBF, R-square = 0.9853, RRMSE = 0.0996, actual max-point (2.30 s, -10.461 cm), RBF max-point (2.30 s, -10.498 cm); (b) RBF + ANFIS, R-square = 0.9870, RRMSE = 0.1141, actual max-point (2.30 s, -10.522 cm), RBF + ANFIS max-point (2.30 s, -10.491 cm).
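The selection rule described above — keep only the feasible runs and adopt the design that appears most often across the independent PSO runs — can be sketched as below. The function name `select_optimum` and the feasibility callback are hypothetical, introduced only for illustration:

```python
from collections import Counter

def select_optimum(runs, is_feasible):
    """Pick the final design from independent PSO runs: discard infeasible
    designs, then return the design that appears most often among the rest.
    (A tie-break by lower structural weight could be added; this sketch
    uses frequency alone, as in the paper's description.)"""
    feasible = [tuple(design) for design in runs if is_feasible(design)]
    if not feasible:
        raise ValueError("no feasible design found in any run")
    best, _count = Counter(feasible).most_common(1)[0]
    return list(best)

# Hypothetical illustration: 10 runs over 5 design variables (profile numbers),
# mirroring the 8-vs-2 split reported for Table 5c.
runs = [[7, 5, 3, 3, 2]] * 8 + [[7, 5, 4, 3, 2]] * 2
print(select_optimum(runs, lambda d: True))  # [7, 5, 3, 3, 2]
```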
their corresponding actual ones in Fig. 5. The actual response is calculated with conventional FE analysis.
It is important to note that, in this example, the time of optimization employing the FSORBF, including data generation and network training, is about 0.07 times that of optimization by EA, while the errors of approximation are low.

6.2. Example 2: 6-storey steel space frame

Table 6
Available profiles for the columns and beams.

No. | Columns          | No. | Columns          | No. | Beams
1   | Box 200×200×12.5 | 9   | Box 260×260×16.0 | 17  | IPE 220
2   | Box 200×200×14.2 | 10  | Box 260×260×17.5 | 18  | IPE 240
3   | Box 220×220×12.5 | 11  | Box 280×280×14.2 | 19  | IPE 270
4   | Box 220×220×14.2 | 12  | Box 280×280×16.0 | 20  | IPE 300
5   | Box 240×240×12.5 | 13  | Box 280×280×17.5 | 21  | IPE 330
6   | Box 240×240×14.2 | 14  | Box 300×300×16.0 | 22  | IPE 360
7   | Box 240×240×16.0 | 15  | Box 300×300×17.5 | 23  | IPE 400
8   | Box 260×260×14.2 | 16  | Box 300×300×20.0 | 24  | IPE 450

Table 7
The most influential natural periods.

n_i | Optimal combination | n_a = 12!/(n_i!(12 - n_i)!) | RMSE (Training) | RMSE (Testing) | Time (min)
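The n_a column of Table 7 counts the candidate input sets: choosing n_i of the 12 natural periods as network inputs gives n_a = 12!/(n_i!(12 - n_i)!) combinations, i.e. the binomial coefficient C(12, n_i). A quick numeric check:

```python
from math import comb

# Number of candidate input sets when n_i of the 12 natural periods are
# selected as network inputs: n_a = 12! / (n_i! (12 - n_i)!) = C(12, n_i).
for n_i in (1, 2, 3, 6):
    print(f"n_i = {n_i}: n_a = {comb(12, n_i)}")  # 12, 66, 220, 924
```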
6.2.4. Training the WRBF network
The WRBF network is trained to predict the values of the most influential natural periods of the structures. The results of testing the performance generality of the WRBF show that the mean of the errors is about 0.1%. Thus, the performance generality of the WRBF is excellent and it can be reliably employed in the optimization process. The total time spent training the WRBF is 0.95 min.

6.2.5. Training the RBF + ANFIS and FSORBF
As the samples located in clusters 1–4 are identified, it is now possible to train an RBF network for each cluster. Therefore, the predicting unit of the FSORBF includes four RBF networks. The sizes of the RBF networks associated with clusters 1 to 4 are 6-19-7500, 6-20-7500, 6-27-7500, and 6-34-7500, respectively. In order to achieve a comparison between the RBF + ANFIS and the FSORBF, an RBF + ANFIS network of size 6-100-7500 is also trained to predict the ten desired time history responses. The performance generalities of the networks are investigated through the testing data and the results are given in Table 8a in terms of mean R² and mean RRMSE.
It can be observed that the FSORBF possesses a better performance generality compared with the other network. Also, the performance generalities of the four parallel RBF networks set in the predicting unit of the FSORBF metamodel are given in Table 8b, in terms of mean R² and mean RRMSE.

Table 8a
The results of testing the RBF + ANFIS and FSORBF.

Structural response | RBF + ANFIS R² | RBF + ANFIS RRMSE | FSORBF R² | FSORBF RRMSE
d1   | 0.9903 | 0.0734 | 0.9994 | 0.0179
d2   | 0.9886 | 0.0721 | 0.9996 | 0.0130
d3   | 0.9880 | 0.0694 | 0.9998 | 0.0106
d4   | 0.9883 | 0.0701 | 0.9997 | 0.0108
d5   | 0.9896 | 0.0722 | 0.9997 | 0.0111
d6   | 0.9898 | 0.0742 | 0.9995 | 0.0140
q3   | 0.9432 | 0.0896 | 0.9821 | 0.0250
q4   | 0.9554 | 0.0874 | 0.9925 | 0.0151
q5   | 0.9732 | 0.0970 | 0.9925 | 0.0177
q6   | 0.9899 | 0.0833 | 0.9998 | 0.0089
Avr. | 0.9796 | 0.0789 | 0.9965 | 0.0144

The results given in Table 8b indicate that all the parallel RBF networks possess excellent performance generality, and therefore no further improvements are required. The FSORBF metamodel can thus be reliably employed in the optimization process to accurately predict the structural response. In this example, the total times spent training the RBF + ANFIS and the four parallel RBF networks of the FSORBF are 7.3 min and 5.4 min, respectively.

Table 8b
Mean R² and RRMSE of test data for four clusters.

Outputs | Net 1 R² | Net 1 RRMSE | Net 2 R² | Net 2 RRMSE | Net 3 R² | Net 3 RRMSE | Net 4 R² | Net 4 RRMSE
d1   | 0.9999 | 0.0083 | 0.9998 | 0.0108 | 0.9993 | 0.0197 | 0.9985 | 0.0328
d2   | 0.9999 | 0.0069 | 0.9999 | 0.0074 | 0.9997 | 0.0130 | 0.9991 | 0.0246
d3   | 0.9999 | 0.0053 | 0.9999 | 0.0067 | 0.9998 | 0.0118 | 0.9995 | 0.0184
d4   | 0.9999 | 0.0057 | 0.9999 | 0.0073 | 0.9998 | 0.0121 | 0.9995 | 0.0181
d5   | 0.9999 | 0.0068 | 0.9999 | 0.0081 | 0.9999 | 0.0099 | 0.9992 | 0.0197
d6   | 0.9999 | 0.0083 | 0.9999 | 0.0095 | 0.9998 | 0.0117 | 0.9986 | 0.0263
q3   | 0.9973 | 0.0231 | 0.9865 | 0.0232 | 0.9737 | 0.0229 | 0.9408 | 0.0307
q4   | 0.9993 | 0.0137 | 0.9884 | 0.0171 | 0.9945 | 0.0128 | 0.9877 | 0.0167
q5   | 0.9978 | 0.0188 | 0.9915 | 0.0169 | 0.9913 | 0.0159 | 0.9897 | 0.0190
q6   | 0.9998 | 0.0075 | 0.9999 | 0.0057 | 0.9999 | 0.0084 | 0.9997 | 0.0143
Avr. | 0.9994 | 0.0104 | 0.9966 | 0.0113 | 0.9958 | 0.0138 | 0.9912 | 0.0221

6.2.6. Optimization results
The results of optimization using exact and approximate analyses are given in Table 9. As given in this table, the optimum design obtained using exact analysis is better than the other solutions, but it is very expensive in terms of the overall optimization time. The optimum design attained using the FSORBF is better than that obtained using the RBF + ANFIS network. In order to investigate the accuracy and feasibility of the optimum designs obtained by the networks, they are analyzed using the conventional FE method and their actual responses are calculated. The actual and approximate responses are compared in Table 10a in terms of R² and RRMSE. Also, the maximum values of the actual responses are compared with their allowable values in Table 10b.
The results provided in Tables 10a and 10b indicate that the optimal design obtained by the FSORBF has very high accuracy and is feasible; this also clearly indicates that drift constraints under the earthquake loading dominate the design. As given in Table 10b, the active constraints of the optimal designs obtained by the RBF + ANFIS and the FSORBF are d3 and d5, respectively. The responses mentioned are compared with their actual ones in Fig. 7.
All the comparisons reveal that the performance generality and the optimum design obtained by the FSORBF are significantly better than those of the RBF + ANFIS network. In this example, the time of optimization employing neural networks, including data generation and network training, is about 0.08 times that of exact optimization, while the errors of approximation are negligible.
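The predicting unit described above — parallel RBF networks, one per cluster, with a query routed to a single network — can be sketched as follows. The nearest-cluster-center dispatch rule and the flat (centers, widths, weights) network representation are assumptions for illustration only; in the paper, the cluster assignment is performed by the SA + SOM hybrid and the network sizes are those reported in Section 6.2.5:

```python
import math

def rbf_predict(centers, widths, weights, x):
    """Evaluate one Gaussian RBF network at input vector x."""
    phis = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2.0 * s ** 2))
            for c, s in zip(centers, widths)]
    return sum(w * p for w, p in zip(weights, phis))

def fsorbf_predict(cluster_centers, rbf_nets, x):
    """Route x to the RBF network of its nearest cluster center (squared
    Euclidean distance) and evaluate only that network -- a sketch of the
    clustering-based predicting unit; the paper's actual classifier is SA + SOM."""
    dists = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in cluster_centers]
    k = dists.index(min(dists))
    centers, widths, weights = rbf_nets[k]
    return rbf_predict(centers, widths, weights, x)
```

Because each query activates a single small network instead of one large monolithic RBF, both training and prediction cost scale with the cluster size, which is consistent with the training-time advantage reported for the FSORBF.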
Table 9
Optimum designs obtained by PSO using exact and approximate analyses.

Table 10a
Mean R-square and mean RRMSE of optimum designs.

Outputs | RBF + ANFIS R² | RBF + ANFIS RRMSE | FSORBF R² | FSORBF RRMSE
d1   | 0.9956 | 0.0733 | 0.9992 | 0.0282
d2   | 0.9963 | 0.0759 | 0.9996 | 0.0199
d3   | 0.9949 | 0.0723 | 0.9997 | 0.0187
d4   | 0.9934 | 0.0746 | 0.9996 | 0.0193
d5   | 0.9956 | 0.0681 | 0.9998 | 0.0103
d6   | 0.9923 | 0.0682 | 0.9998 | 0.0104
q3   | 0.9442 | 0.0605 | 0.9985 | 0.0114
q4   | 0.9754 | 0.0341 | 0.9962 | 0.0133
q5   | 0.9980 | 0.0278 | 0.9973 | 0.0120
q6   | 0.9963 | 0.0595 | 0.9998 | 0.0112
Avr. | 0.9882 | 0.0614 | 0.9990 | 0.0155

Table 10b
Comparison of maximum values of responses of optimum designs with their allowable values.

Outputs | EA      | RBF + ANFIS | FSORBF  | Allowable values
d1 | 0.00329 | 0.00240 | 0.00292 | 0.00500
d2 | 0.00497 | 0.00381 | 0.00434 | 0.00500
d3 | 0.00500 | 0.00502 | 0.00469 | 0.00500
d4 | 0.00454 | 0.00439 | 0.00393 | 0.00500
d5 | 0.00473 | 0.00461 | 0.00501 | 0.00500
d6 | 0.00279 | 0.00295 | 0.00323 | 0.00500
q3 | 0.80079 | 0.68549 | 0.78150 | 1.00
q4 | 0.62897 | 0.56838 | 0.76622 | 1.00
q5 | 0.67012 | 0.73960 | 0.73708 | 1.00
q6 | 0.36059 | 0.34670 | 0.29188 | 1.00

Fig. 7. (a) 3rd storey drift of the optimum structure obtained by RBF + ANFIS; (b) 5th storey drift of the optimum structure obtained by FSORBF (actual max-point (12.24 s, -0.00501), FSORBF max-point (12.24 s, -0.00500)).

7. Conclusions

A computationally efficient approach has been developed for the optimal design of structures subjected to time history loading using discrete design variables. To achieve this, a serial integration of an evolutionary algorithm and neural network techniques has been utilized. A real-value coded particle swarm optimization (PSO) algorithm is developed. The PSO is a global and efficient search technique, but its computational work is very intensive owing to the exact time history analysis: at each design point the structure must be analyzed for the desired earthquake to evaluate the necessary responses. In order to reduce the computing effort, a novel metamodel is proposed. The metamodel is a specific combination of an adaptive neuro-fuzzy inference system (ANFIS), the subtractive algorithm (SA), a self-organizing map (SOM) and a set of radial basis function (RBF) networks, employed to accurately predict the structural time history response. The metamodel is called the fuzzy self-organizing radial basis function (FSORBF) network. In the present paper, the conventional RBF network and the FSORBF are employed to approximate the necessary time history responses of the structures during the optimization process. The numerical results of testing the network performance generality demonstrate the computational advantages of the FSORBF compared with the conventional RBF network. A simple method is employed to treat the dynamic constraints: the time interval is divided into subintervals and the constraints are imposed at each time grid point. The numerical results of optimization show that, in the proposed methods, the time of optimization, including sample generation and neural network training, is reduced to about 0.08 times that required for exact optimization, while the errors are small. Finally, it is demonstrated that the best solution has been attained by using the FSORBF metamodel, in terms of the optimum weight, the accuracy, and the degree of feasibility of the solutions; therefore, it can be reliably incorporated into the optimization process of realistic structures subjected to time history loading.

Appendix A

The architecture of the ANFIS adapted to this study, shown in Fig. A1, consists of five layers, which perform different actions briefly explained below.

Layer 1: All the nodes in this layer are adaptive nodes. They generate membership grades of the inputs. The outputs of this layer are given by

O^1_{M_i^{j_i}} = \mu_{M_i^{j_i}}(tp_i),  i = 1, 2, ..., n_m;  j_i = 1, 2,

where M_i^{j_i} (j_i = 1, 2) represent the two MFs corresponding to the ith input tp_i, which can be triangular, trapezoidal, Gaussian or other shapes. In this study, the generalized bell-shaped MFs defined below are utilized:

\mu_{M_i^{j_i}}(tp_i; a_i^{j_i}, b_i^{j_i}, c_i^{j_i}) = 1 / (1 + |(tp_i - c_i^{j_i}) / a_i^{j_i}|^{2 b_i^{j_i}}),  i = 1, 2, ..., n_m,  (A3)

where a_i^{j_i}, b_i^{j_i}, c_i^{j_i} are the parameters of the MFs, governing the bell-shaped functions.
S. Gholizadeh, E. Salajegheh / Comput. Methods Appl. Mech. Engrg. 198 (2009) 2936–2949 2949
Fig. A1. The architecture of the ANFIS (Layers 1–5: membership-function nodes M_i^{j_i}, product nodes ∏, normalization nodes N, first-order Sugeno nodes, and a single summation node ∑).

Layer 2: The nodes in this layer are fixed nodes labeled ∏, indicating that they perform a simple multiplication. The outputs of this layer O^2_k are represented as

O^2_k = w_k = \mu_{M_1^{J_1}}(tp_1) \mu_{M_2^{J_2}}(tp_2) ... \mu_{M_{n_m}^{J_{n_m}}}(tp_{n_m}),  k = 1, 2, ..., 2^{n_m}.  (A4)

Layer 3: The nodes in this layer are also fixed nodes, labeled N, indicating that they play a normalization role in the network. The outputs of this layer O^3_k can be represented as

O^3_k = \bar{w}_k = w_k / (w_1 + w_2 + ... + w_{2^{n_m}}),  k = 1, 2, ..., 2^{n_m}.  (A5)

Layer 4: Each node in this layer is an adaptive node, whose output is simply the product of the normalized firing strength and a first-order polynomial (for a first-order Sugeno model). Thus, the outputs of this layer O^4_k are given by

O^4_k = \bar{w}_k f_k = \bar{w}_k (c_{k0} + c_{k1} tp_1 + ... + c_{k n_m} tp_{n_m}),  k = 1, 2, ..., 2^{n_m}.  (A6)

Layer 5: The single node in this layer is a fixed node labeled ∑, which computes the overall output O^5 as the summation of all incoming signals, i.e.

O^5 = \sum_{k=1}^{2^{n_m}} \bar{w}_k f_k = \sum_{k=1}^{2^{n_m}} \bar{w}_k (c_{k0} + c_{k1} tp_1 + ... + c_{k n_m} tp_{n_m})
    = \sum_{k=1}^{2^{n_m}} [ (\bar{w}_k c_{k0}) + (\bar{w}_k c_{k1}) tp_1 + ... + (\bar{w}_k c_{k n_m}) tp_{n_m} ].  (A7)

It can be observed that the ANFIS architecture has two adaptive layers: Layers 1 and 4. Layer 1 has modifiable parameters a_i^{j_i}, b_i^{j_i}, c_i^{j_i} related to the input MFs, and Layer 4 has modifiable parameters c_{k0}, c_{k1}, ..., c_{k n_m} pertaining to the first-order polynomial. The task of the learning algorithm for this ANFIS architecture is to tune all the modifiable parameters so that the ANFIS output matches the training data. Adjusting these parameters is a two-step process known as the hybrid learning algorithm; its detailed description and mathematical background can be found in [10].

References

[1] N.D. Lagaros, M. Fragiadakis, M. Papadrakakis, Y. Tsompanakis, Structural optimization: a tool for evaluating dynamic design procedures, Engineering Structures 28 (2006) 1623–1633.
[2] X.K. Zou, C.M. Chan, An optimal resizing technique for dynamic drift design of concrete buildings subjected to response spectrum and time history loadings, Computers and Structures 83 (2005) 1689–1704.
[3] F.Y. Kocer, J.S. Arora, Optimal design of H-frame transmission poles for earthquake loading, ASCE Journal of Structural Engineering 125 (1999) 1299–1308.
[4] F.Y. Kocer, J.S. Arora, Optimal design of latticed towers subjected to earthquake loading, ASCE Journal of Structural Engineering 128 (2002) 197–204.
[5] F.Y. Cheng, D. Li, J. Ger, Multiobjective optimization of dynamic structures, in: M. Elgaaly (Ed.), ASCE Structures 2000 Conference Proceedings, 2000.
[6] E. Salajegheh, A. Heidari, S. Saryazdi, Optimum design of structures against earthquake by discrete wavelet transform, International Journal for Numerical Methods in Engineering 62 (2005) 2178–2192.
[7] E. Salajegheh, A. Heidari, Optimum design of structures against earthquake by wavelet neural network and filter banks, Earthquake Engineering and Structural Dynamics 34 (2005) 67–82.
[8] H. Adeli, X. Jiang, Dynamic fuzzy wavelet neural network model for structural system identification, ASCE Journal of Structural Engineering 132 (2006) 102–111.
[9] Q.F. Wang, Numerical approximation of optimal control for distributed diffusion Hopfield neural networks, International Journal for Numerical Methods in Engineering 69 (2007) 443–468.
[10] J.S.R. Jang, ANFIS: adaptive-network-based fuzzy inference systems, IEEE Transactions on Systems, Man and Cybernetics 23 (1993) 665–685.
[11] S.L. Chiu, Fuzzy model identification based on cluster estimation, Journal of Intelligent and Fuzzy Systems 2 (1994) 267–278.
[12] T. Kohonen, Self-Organization and Associative Memory, second ed., Springer-Verlag, Berlin, 1987.
[13] P.D. Wasserman, Advanced Methods in Neural Computing, Van Nostrand Reinhold, New York, 1993.
[14] S. Gholizadeh, E. Salajegheh, P. Torkzadeh, Structural optimization with frequency constraints by genetic algorithm using wavelet radial basis function neural network, Journal of Sound and Vibration 312 (2008) 316–331.
[15] MATLAB: The Language of Technical Computing, The MathWorks Inc., 2006.
[16] J. Kennedy, The particle swarm: social adaptation of knowledge, in: Proceedings of the International Conference on Evolutionary Computation, IEEE, Piscataway, NJ, 1997, pp. 303–308.
[17] American Institute of Steel Construction, Manual of Steel Construction – Allowable Stress Design, ninth ed., Chicago, 1995.
[18] J.S. Arora, Optimization of structures subjected to dynamic loads, in: C.T. Leondes (Ed.), Structural Dynamic Systems Computational Techniques and Optimization, Gordon and Breach Science Publishers, 1999.
[19] B.S. Kang, G.J. Park, J.S. Arora, A review of optimization of structures subjected to transient loads, Structural and Multidisciplinary Optimization 31 (2006) 81–95.
[20] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 1995, pp. 39–43.
[21] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942–1945.
[22] F. van den Bergh, A.P. Engelbrecht, Using neighbourhoods with the guaranteed convergence PSO, in: 2003 IEEE Swarm Intelligence Symposium, USA, 2003, pp. 235–242.
[23] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proceedings of the IEEE International Conference on Evolutionary Computation, 1997, pp. 303–308.
[24] R.E. Perez, K. Behdinan, Particle swarm approach for structural design optimization, Computers and Structures 85 (2007) 1579–1588.
[25] R. Kathiravan, R. Ganguli, Strength design of composite beam using gradient and particle swarm optimization, Composite Structures 81 (2007) 471–479.
[26] E. Salajegheh, S. Gholizadeh, M. Khatibinia, Optimal design of structures for earthquake loads by a hybrid RBF–BPSO method, Earthquake Engineering and Engineering Vibration 7 (2008) 13–24.
[27] M.Y. Rafiq, G. Bugmann, D.J. Easterbrook, Neural network design for engineering applications, Computers and Structures 79 (2001) 1541–1552.
[28] A. Zhang, L. Zhang, RBF neural networks for the prediction of building interference effects, Computers and Structures 82 (2004) 2333–2339.
[29] J. Deng, Structural reliability analysis for implicit performance function using radial basis function network, International Journal of Solids and Structures 43 (2006) 3255–3291.
[30] N. Roy, R. Ganguli, Filter design using radial basis function neural network and genetic algorithm for improved operational health monitoring, Applied Soft Computing 6 (2006) 154–169.
[31] I.B. Topcu, M. Saridemir, Prediction of rubberized concrete properties using artificial neural network and fuzzy logic, Construction and Building Materials 22 (2008) 532–540.
[32] E.H. Mamdani, S. Assilian, An experiment in linguistic synthesis with a fuzzy logic controller, International Journal of Man–Machine Studies 7 (1975) 1–13.
[33] M. Sugeno, Industrial Applications of Fuzzy Control, Elsevier Science Pub. Co., 1985.
[34] R.P. Paiva, A. Dourado, Structure and parameter learning of neuro-fuzzy systems: a methodology and a comparative study, Journal of Intelligent and Fuzzy Systems 11 (2001) 147–161.
[35] X. Jiang, S. Mahadevan, H. Adeli, Bayesian wavelet packet denoising for structural system identification, Structural Control and Health Monitoring 14 (2006) 333–356.
[36] R.K. Martinet, J. Morlet, A. Grossmann, Analysis of sound patterns through wavelet transforms, International Journal of Pattern Recognition and Artificial Intelligence 1 (1987) 273–302.