Simplified reliability analysis of multi hazard risk in gravity dams via machine learning techniques

M.A. Hariri-Ardebili, F. Pourkamali-Anaraki

Article history:
Received 9 February 2017
Accepted 3 September 2017
Available online

Keywords:
Concrete dams
Classification
Seismic
Temporal degradation
Hydrological hazard

Abstract: Deterministic analysis does not provide a comprehensive model for concrete dam response under multi-hazard risk. Thus, the use of a probabilistic approach is usually recommended, which is problematic due to its high computational demand. This paper presents a simplified reliability analysis framework for gravity dams subjected to flooding, earthquakes, and aging. A group of time-variant degradation models are proposed for different random variables. The response of the dam is represented by explicit limit state functions. The probability of failure is directly computed by either classical Monte Carlo simulation or the refined importance sampling technique. Next, three machine learning techniques (i.e., K-nearest neighbor, support vector machine, and naive Bayes classifier) are adopted for binary classification of the structural results. These methods are then compared in terms of accuracy, applicability and computational time for prediction of the failure probability. Results are then generalized for different dam classes (based on the height-to-width ratio), various water levels, earthquake intensities, degradation rates, and cross-correlation between the random variables. Finally, a sigmoid-type function is proposed for analytical calculation of the failure probability for different classes of gravity dams. This function is then specialized for the hydrological hazard, and the failure surface is presented as a direct function of the dam's height and width.

© 2017 Politechnika Wrocławska. Published by Elsevier Sp. z o.o. All rights reserved.

E-mail addresses: [email protected] (M.A. Hariri-Ardebili), [email protected] (F. Pourkamali-Anaraki).
http://dx.doi.org/10.1016/j.acme.2017.09.003
stressing, cracking, and foundation failure [2]. Failures start with some initiating event that causes an adverse change in the structure. The initiators can be classified into the following main groups:

- Hydrological events (e.g., floods and an increase of water flow through the spillway).
- Static events (e.g., reservoir water load, ice load, equipment failure).
- Time-dependent events (e.g., erosion, alkali-aggregate reaction in concrete, increased seepage, clogged drains, degraded grout curtain).
- Seismic events (e.g., earthquake load).
- Other events (e.g., human operating errors, fire, landslides into the reservoir, vehicular impact, sabotage, and vandalism).

There are many concrete dams around the world that are entering middle age [3]. Over one-third of the United States dams are already fifty years old, and in another ten years nearly 70% of dams in the United States will have reached the half-century mark [4]. Thus, predicting their long-term behavior and service life, and quantifying the failure probability, is necessary.

Detailed deterministic numerical analysis of concrete dams with all sources of nonlinearities is computationally expensive [5]. Moreover, it does not account for the uncertainties associated with the structure itself (epistemic) and the applied loads (aleatory). Thus, the primary goal of this paper is to provide a simplified probabilistic framework to determine the failure probability of gravity dams subjected to multi-hazard risks. The secondary goal is to employ several machine learning techniques in order to facilitate classification of the results and estimate the reliability index. Both these tasks are achieved by using explicit limit state functions. Finally, these results are generalized for different gravity dam classes based on the height-to-width ratio.

1.1. Literature review

1.1.1. Reliability analysis of concrete dams
Bury and Kreuzer [6] calculated the sliding failure probability of gravity dams based on rigid body analysis. A Gumbel distributional model was assumed for both the annual peak flood and the ground acceleration. Saouma [7] combined the concept of the reliability index with fracture mechanics and determined the safety index of working dams based on nonlinear finite element analysis. Four random variables were considered: reservoir elevation, fracture toughness, cohesion, and friction angle. Carvajal et al. [8] performed reliability analysis of a roller compacted concrete (RCC) gravity dam by Monte Carlo simulation (MCS) and the first order reliability method (FORM). Statistical analysis of RCC density, analysis of scatter at different spatial scales, data unification, and a physical formulation of the RCC intrinsic curve were all considered. Sliding and cracking were considered as limit state functions. Also, Peyras et al. [9] proposed a combined method of risk, reliability analysis, and event tree methods for the safety analysis of concrete dams. Altarejos-Garcia et al. [10] proposed a methodology to improve the estimation of the conditional probability of responses in gravity dam-reservoir systems by using complex behavior models based on numerical simulation techniques, together with reliability analysis. Yu [11] introduced the first finite element based time-variant reliability assessment of gravity dams. He used stochastic ground motions along with the FORM approximation and Koo's analytical solution to compute the mean up-crossing rate of a given performance function. Krounis [12] studied the sliding stability and failure probability of concrete dams with bonded concrete-rock interfaces. The heterogeneous properties of the interface joint were considered with different spatial correlation lengths.

Nearly all prior studies on the reliability assessment of concrete gravity dams are limited to single-hazard analysis, constant pool level, and random concrete material capacity (e.g., friction angle and cohesion). The existing simplified models do not take into account the seismic or aging hazards. The simulations were performed with crude MCS, and the findings are usually limited to a specific case study (not a generic dam model) subjected to a non-intensifying hazard. Time-dependency of the structural reliability was only discussed at the theoretical level without any realistic example.

1.1.2. Machine learning for reliability analysis of concrete dams
Machine learning techniques are basically used either for "regression" or "classification" purposes. The former is mainly used to forecast the future response (dam safety monitoring) based on the collected past information (dam instrumentation). Also, different branches of machine learning are adopted for the so-called "back analysis" and determination of the mechanical properties of concrete materials. This line of work considers the prediction of continuous-time variables (regression) based on "training" data sets. For example, Salazar et al. [13,14] discussed and contrasted some of the machine learning based predictive models for dam safety assessment, i.e., random forests, boosted regression trees, neural networks (NN), support vector machines (SVM), and multivariate adaptive regression splines. The prediction accuracy in each case was compared with the conventional statistical model. Saouma et al. [15] used stepwise linear regression and K-nearest neighbor local polynomial techniques for the prediction of arch dam responses based on pendulum recordings. The results of the statistical analysis were compared with the nonlinear finite element method.

The latter group of machine learning techniques has been used for the classification of dam responses. The research in this field is very limited and there is a gap between the theoretical aspects and the real world dam engineering applications. Gaspar et al. [16] proposed a probabilistic thermal model to propagate uncertainties in some of RCC's physical properties. A thermo-chemo-mechanical model was then used to describe the RCC behavior. Moreover, a global sensitivity analysis was performed to evaluate the impact of the random variables. Mata et al. [17] proposed a method based on linear discriminant models for the construction of decision rules for the early detection of developing failure scenarios. They developed a single classification index by combining the measured physical quantities.

Thus, there is an urgent need for comprehensive research on the failure probability estimation of concrete dams by using machine learning techniques. For example, by exploring
lapped area is interpreted as Pf). There are various methods to incorporate the temporal effects of both the demand and capacity in the reliability assessment (e.g., time-integrated and discrete approaches) [21]. In the present paper, the time-variant failure probability is written as:

$P_f(t) = P[R(X(t)) < S(X(t))] = \int_0^{\infty} F_{R,t}(\delta)\, f_{S,t}(\delta)\, d\delta$   (4)

where $F_{R,t}$ and $f_{S,t}$ are the instantaneous CDF of R and the instantaneous PDF of S at time t, respectively, assuming that R and S are statistically independent [22].

Next, the time interval (t0, tf) (where t0 = 0 is the initial or construction time and tf refers to the final or expected life time of the structure) can be divided into n non-overlapping time instants, t1, t2, . . ., tn (tn = tf), and the probability of failure can be reported in two ways:

- Instant probability of failure, $P_f^I$, where the failure probability is calculated based on the instant status of both R and S, Eq. (2). This model only considers the system behavior at time t = ti and there is no condition on the previous failures.
- Cumulative probability of failure, $P_f^C$, in which the failure probability is calculated within the time interval (0, t] as:

$P_f^C(0, t] = 1 - P\left[(R(X(t_1)) > S(X(t_1))) \cap \cdots \cap (R(X(t_k)) > S(X(t_k)))\right] = 1 - \prod_{j=1}^{k} \left(1 - P_f^I(t_j)\right)$   (5)

In both cases, the time-variant capacity and demand are obtained by scaling the initial values with temporal degrading (or increasing) functions:

$R(t) = R_0\, \psi_R(t), \quad R_0 = R|_{t=0}; \qquad S(t) = S_0\, \psi_S(t), \quad S_0 = S|_{t=0}$   (6)

Detailed time dependent models for cracking, load increasing and material degradation will be discussed in Section 4.2.

2.3. Simulation based techniques

Estimation of Pf is the challenging part in reliability analysis [20]. There are many methods to achieve this goal depending on the problem type, the required accuracy and the computational time, e.g., FORM, the second-order reliability method (SORM), crude MCS, MCS with importance sampling (IS) [23], MCS with Latin Hypercube sampling (LHS) [24], and subset simulation (SS) [25]. In this paper, two simulation based techniques are used and briefly discussed below. Crude MCS is used as it is the reference method in structural reliability, while IS is used specifically for "rare events".

Rare events refer to those with a low frequency of occurrence and encompass both natural hazards (e.g. earthquakes, tsunamis, floods) and anthropogenic hazards (e.g. industrial accidents) as well as their interactions. Rare events have a "small failure probability", usually in the order of 10^-5 to 10^-2. Thus, the standard random generation techniques cannot be easily applied, as they require a large number of simulations.

2.3.1. Monte Carlo simulation
In this method, the failure probability is directly calculated based on the joint PDF of all the random variables. An unbiased estimator of Pf is given by:

$\hat{P}_f^{MCS} = \frac{1}{N_{sim}} \sum_{j=1}^{N_{sim}} I_f(x_j)$   (7)

where $I_f(x_j)$ is the indicator of failure for sample $x_j$ (equal to one if the sample falls in the failure region and zero otherwise). The confidence bounds on this estimate can be expressed as:

$\hat{P}_f^{MCS} \left[ 1 \mp \Phi^{-1}\!\left(\frac{a}{2}\right) \sqrt{\frac{1 - \hat{P}_f^{MCS}}{N_{sim}\, \hat{P}_f^{MCS}}} \; \right]$   (8)

where Φ(.) is the standard normal CDF and a ∈ [0, 1] is used to calculate the bounds with a confidence level of 1 - a.
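As a minimal illustration of Eqs. (7) and (8), the following MATLAB sketch estimates Pf and its confidence interval for a generic limit state g = R - S; the two sampling distributions are illustrative placeholders, not the dam model of Section 4.

    % Crude MCS estimate of Pf, Eq. (7), with the confidence bounds of Eq. (8).
    % The capacity/demand distributions below are illustrative placeholders.
    Nsim = 1e6;
    R  = lognrnd(0, 0.25, Nsim, 1);              % capacity samples (placeholder)
    S  = normrnd(0.7, 0.2, Nsim, 1);             % demand samples (placeholder)
    If = (R - S) <= 0;                           % failure indicator I_f
    Pf = mean(If);                               % Eq. (7)
    a  = 0.05;                                   % confidence level 1 - a = 95%
    d  = norminv(a/2)*sqrt((1 - Pf)/(Nsim*Pf));  % Eq. (8); note norminv(a/2) < 0
    CI = Pf*[1 + d, 1 - d];                      % [lower, upper] bounds on Pf
    fprintf('Pf = %.3e, CI = [%.3e, %.3e]\n', Pf, CI);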
2.3.2. Importance sampling
Importance sampling is a technique to reduce the large number of simulations (and consequently the variance of the responses) in crude MCS, specifically for rare events (i.e., small failure probabilities). In this method, which was originally proposed by Harbitz [23], the idea is to concentrate the distribution of sampling in the most important region [26]. One way this can be done is by moving the sampling center from the origin in the standard normal space to the "design point" on the limit state function [27]. The design point is the closest point from the origin to the limit state function (usually estimated by FORM). Consequently, a new sampling PDF, $h_X(x)$, is defined to obtain the samples in the desired region, and the failure probability can be approximated in analogy with Eq. (7) as:

$\hat{P}_f^{IS} = \frac{1}{N_{sim}} \sum_{j=1}^{N_{sim}} I_f(x_j)\, \frac{f_X(x_j)}{h_X(x_j)}$   (9)

As is clear, the appropriate choice of $h_X(x)$ facilitates the implementation of this method. Melchers [20,28] and Au and Beck [29] proposed different techniques for using importance sampling in structural reliability.

3. Classification techniques

In machine learning, classification is the problem of assigning new observations to one of a finite number of discrete categories. Therefore, the goal of classification is to learn a model that makes accurate predictions on new observations based on a set of training data points. To be formal, let x1, . . ., xn be a set of training observations in R^p with corresponding categories y1, . . ., yn, where yi ∈ {-1, +1}. Here, p denotes the number of features (random variables) and each category yi is a binary variable such that yi = -1 corresponds to class C1 (failure region) and yi = +1 corresponds to class C2 (safe region). Using the classification rule learned in the training phase, a new observation x ∈ R^p is assigned to one of these two categories.

In practice, the set of n training observations is used to form the training matrix Xtrn = [x1, . . ., xn]^T ∈ R^(n×p), where each row corresponds to one observation. Prior to the training phase, observations are often preprocessed since features or random variables might be measured in different units [30]. There are two main techniques to preprocess the training data: (1) rescaling and (2) standardization. In the first method, also known as Min-Max scaling, features or random variables are rescaled to the range [0, 1] by computing the maximum and minimum values. The second method, however, computes the sample mean and standard deviation of the random variables. Then, the features in each column of Xtrn are standardized by subtracting the mean and dividing by the standard deviation. As a result, the random variables are rescaled so that they have the properties of a standard normal distribution with zero mean and unit standard deviation. The second method is employed in the present work since it has been shown to be crucial in clustering analyses [31].

In this paper, three popular classification algorithms are studied and contrasted: (1) K-nearest neighbor (KNN), (2) support vector machine (SVM), and (3) naive Bayes classifier (NBC), Fig. 2. These algorithms cover both deterministic and probabilistic classification approaches, and they are among the top ten data mining algorithms [32]. The training procedure, properties, and computational complexity of these three techniques are explained in Sections 3.1, 3.2 and 3.3.

Fig. 2 – Three classification techniques in machine learning. From left to right: K-nearest neighbor (KNN), support vector machine (SVM), and naive Bayes classifier (NBC). In KNN, a new observation is classified based on the labels of the K nearest neighbors. SVM solves an optimization problem to find a separating hyperplane with maximum margin, and a new observation is classified based on its position. NBC is a probabilistic approach to find the most likely class for a new observation using Bayes theorem.
3.1. K-nearest neighbor

The K-nearest neighbor algorithm is one of the simplest classification techniques in machine learning. Given a new observation x ∈ R^p, the K training observations from the rows of Xtrn closest in distance to x are found. Then, x is classified using the majority vote among these K nearest observations from the training set. Therefore, the KNN algorithm is sensitive to the local structure of the data and its performance depends heavily on the choice of K.

To reduce the sensitivity of KNN, one possible approach is to weigh the contribution of each of the K nearest neighbors according to their distance to the observation x. For example, the class of each of the K nearest training observations is multiplied by a weight which is proportional to the inverse of the distance from x. Thus, greater weight is given to closer neighbors. The classification rule of weighted KNN for a new observation x and its K nearest neighbors x1, . . ., xK with labels yi, i = 1, . . ., K, can be written as:

$f(x) = \mathrm{sign}\left(\frac{\sum_{i=1}^{K} w_i y_i}{\sum_{i=1}^{K} w_i}\right), \qquad w_i = \frac{1}{\| x - x_i \|}$   (10)

where $w_i$ is the distance weighting function.

3.2. Support vector machine

In SVM, a new observation is classified based on its position with respect to a separating hyperplane, described by the decision function:

$f(x) = w^T \varphi(x) + b$   (11)

Under this setup, the points that are closest to the separating hyperplane, known as "support vectors", lie on the following hyperplane:

$y_i \left( w^T \varphi(x_i) + b \right) = 1$   (14)

This hyperplane and the separating hyperplane in Eq. (11) are parallel (they have the same orientation or coefficient vector w), and the distance between them is 1/||w||, where ||w|| is the Euclidean norm of w. Thus, the hyperplane which gives the maximum margin is obtained by maximizing ||w||^-1, which is equivalent to minimizing ||w||^2, subject to the constraints given in Eq. (13).

To solve the above constrained optimization problem, the Lagrange multipliers method is often used, which results in maximizing the following term with respect to a [34]:

$\tilde{L}(a) = \sum_{i=1}^{n} a_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j y_i y_j\, k(x_i, x_j)$   (15)

where b* can be found by substituting the optimal value $w^* = \sum_{i=1}^{n} a_i y_i \varphi(x_i)$ into the support vector condition of Eq. (14).
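The weighted KNN rule of Eq. (10) can be transcribed directly in MATLAB; in the following sketch, Xtrn, ytrn and the query point x are synthetic placeholders, not the dam training set:

    % Weighted KNN classification, a direct transcription of Eq. (10);
    % the training set and query point below are synthetic placeholders.
    Xtrn = randn(200, 6);                    % 200 training observations in R^6
    ytrn = sign(sum(Xtrn, 2));               % placeholder labels in {-1, +1}
    x    = randn(1, 6);                      % new observation
    K    = 5;
    d    = sqrt(sum((Xtrn - x).^2, 2));      % Euclidean distances ||x - x_i||
    [ds, idx] = sort(d);                     % sort to find the K nearest neighbors
    w    = 1 ./ ds(1:K);                     % inverse-distance weights w_i
    f    = sign(sum(w .* ytrn(idx(1:K))) / sum(w));   % predicted label, Eq. (10)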
3.3. Naive Bayes classifier

Given a new observation x, the main idea behind the naive Bayes classifier is to find the following probability for each of the two classes, known as the posterior probability:

$P(C_j \,|\, x), \quad j = 1, 2$   (18)

where $P(C_j|x)$ is the probability of class $C_j$ given the observation x.

Then, x is assigned to the class that has the maximum posterior probability:

$\hat{C} = \underset{j \in \{1,2\}}{\operatorname{argmax}}\ P(C_j \,|\, x)$   (19)

To find the posterior probability in Eq. (18), Bayes theorem is used to get the following reformulation:

$P(C_j \,|\, x) = \frac{P(x \,|\, C_j)\, P(C_j)}{P(x)}$   (20)

In this equation, the denominator does not depend on the class $C_j$, which means that the denominator is effectively constant. Therefore, the main difficulty in finding the posterior probability is to calculate the term $P(x|C_j)$ in the numerator. To solve this problem, a "naive" assumption is considered: the features of the new observation x ∈ R^p are conditionally independent given the class. Thus, the classification rule for the new observation vector x = [x1, . . ., xp]^T is given as:

$\hat{C} = \underset{j \in \{1,2\}}{\operatorname{argmax}}\ P(C_j) \prod_{i=1}^{p} P(x_i \,|\, C_j)$   (21)
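Assuming Gaussian class-conditional densities for each feature (one common choice, consistent with the default of the FITCNB function used later), the rule of Eq. (21) can be sketched in MATLAB as follows; the training data here are synthetic placeholders:

    % Naive Bayes rule, Eqs. (18)-(21), assuming Gaussian class-conditional
    % densities per feature; the training data are synthetic placeholders.
    Xtrn = randn(500, 6);  ytrn = sign(sum(Xtrn, 2));  x = randn(1, 6);
    classes = [-1, +1];  logpost = zeros(1, 2);
    for j = 1:2
        Xc = Xtrn(ytrn == classes(j), :);
        prior  = size(Xc, 1) / size(Xtrn, 1);              % P(Cj)
        loglik = sum(log(normpdf(x, mean(Xc), std(Xc))));  % naive product, in logs
        logpost(j) = log(prior) + loglik;                  % numerator of Eq. (20)
    end
    [~, jmax] = max(logpost);                              % Eq. (19)/(21)
    Chat = classes(jmax);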
The first author has performed transient analyses accounting for soil-fluid-structure interaction, material and geometric nonlinearities [5], and performance based earthquake engineering research [2] for concrete dams. However, the limit equilibrium method (LEM) is used in this paper in conjunction with explicit limit state functions to facilitate the machine learning based classifications. This method is followed by many regulators/countries and is based on experience and engineering judgment [37]. Moreover, it is the basis of several recent works, e.g., Carvajal et al. [8], Westberg [38], Altarejos-Garcia et al. [10], Westberg Wilde and Johansson [39], Huaizhi et al. [40] and, more recently, Morales-Torres et al. [41]. In its most general form, the limit state function is:

$Z = f(T, W, U, \varphi, c, A, L_{cr}, t)$   (22)

where T is the shear force, W is the weight, U is the uplift force, φ and c are the angle of friction and the cohesion at the considered plane, respectively, A is the area of rupture (for unit thickness), $L_{cr}$ is the pre-existing crack length, and t is time, to be used in time-variant problems.

In the context of LEM, two main limit state functions can be developed: (1) $Z_1(t)$, the sliding limit state (at the dam-foundation interface or along any lift joint), and (2) $Z_2(t)$, the overturning limit state (around the dam's toe):

$Z_1(t) = \left[ (W(t) - U(t)) \tan\varphi + c(t)\,(A - A_{cr}(t)) \right] - T(t)$
$Z_2(t) = \sum_{i=1}^{n} (F_R(t) \cdot d_R(t))_i - \sum_{j=1}^{m} (F_S(t) \cdot d_S(t))_j$   (23)

The uplift pressure distribution (Fig. 3) is changed based on the location of the drainage and the crack length. Moreover, the classes of gravity dams are shown in this figure, where the B1/H1 ratio varies 0.5:0.1:1.0. The one with B1/H1 = 0.7 is considered as the standard shape.

4.1. Hydrological hazard

In this example, it is assumed that the gravity dam is subjected to flooding load. The applied loads are: (1) self-weight, (2) uplift pressure, (3) hydrostatic pressure (based on the flood level), and (4) silt pressure. Note that, based on the authors' preliminary research, the ice pressure, wind load, and surface wave load do not have any dominant impact on the failure probability and thus they are ignored in this research (this is indeed achieved by performing FORM based sensitivity analysis on all the initial random variables). Waves due to a sudden landslide into the dam's reservoir may (or may not) also cause a large driving force. The magnitude of such a force depends on the topology of the valley. In any case, its hazard can be quantified similar to the flooding condition.

The stress, S, in Eq. (1) is applied by the hydrostatic, uplift and silt pressures, while the resistance, R, is provided by the dam-foundation interface. Only one limit state function (sliding at the dam-foundation interface, which is the predominant failure mode) is considered to facilitate the comparison of different models. In situations where other failure modes are taken into account (e.g., overturning), the final failure probability is composed of all limit state functions in series and/or parallel modes [40].
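To make the sliding limit state of Eq. (23) concrete, the following MATLAB sketch evaluates Z1 for a triangular monolith of unit thickness under the Section 4.1 loads; the full-base triangular uplift and the buoyant silt thrust are simplified placeholders for the paper's drain- and crack-dependent formulation:

    % Schematic evaluation of the sliding limit state Z1, Eq. (23), for a
    % 2D triangular monolith (unit thickness). Uplift and silt pressure are
    % simplified placeholders; the paper's uplift depends on drains/cracking.
    H1 = 100; B1 = 70; Hw = 95; Hs = 10;          % geometry and levels [m]
    rhoc = 2400; rhow = 1000; rhos = 1850;        % densities [kg/m^3]
    g = 9.81; c = 0.6e6; phi = 30*pi/180;         % cohesion [Pa], friction angle
    W  = 0.5*B1*H1*rhoc*g;                        % self-weight [N]
    U  = 0.5*rhow*g*Hw*B1;                        % triangular full-base uplift [N]
    T  = 0.5*rhow*g*Hw^2 + 0.5*(rhos - rhow)*g*Hs^2;  % water + buoyant silt thrust [N]
    Z1 = (W - U)*tan(phi) + c*B1 - T;             % Eq. (23) with A = B1, Acr = 0
    if Z1 > 0, disp('safe'), else, disp('fail'), end

Sampling the random variables of Section 5.1.1 in place of these fixed values, and counting the realizations with Z1 < 0, recovers the crude MCS estimator of Eq. (7).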
4.2. Aging hazard

As discussed in Section 2.2, time-variant reliability is considered by simply using a temporal degrading (or increasing) function, ψ(t) (see Eq. (6)), for the random variables. The applied loads on the system are identical to the hydrological hazard, except that the reservoir operates at the normal water level. The time dependent uplift pressure is automatically adjusted as a function of the reservoir water level, crack length and drain efficiency, Fig. 3. In this paper, six random variables are assumed to change with time, Fig. 4. The following describes the quantification of these time dependent empirical models.

Fig. 4 – Empirical time dependent models for the random variables; in (a) and (f) the black lines mark the bounds of the uniform distribution and the green bars show the histogram; in (b), (c), (d) and (e) the black solid line shows the mean, the dashed black lines are the mean ± standard deviation, and the dashed blue lines are the lower/upper bounds.

Crack length is assumed to increase linearly with time. No repair is expected within the life span of the dam. At any given time interval, the crack length is a uniform distribution bounded between the minimum and maximum values, Fig. 4(a):

$L_{cr}^{*}(t) = \begin{cases} 0 & \text{for } t \le t_s \\ \dfrac{t - t_s}{t_f - t_s}\, L_{cr}^{*} & \text{for } t > t_s \end{cases}$   (24)

where $t_s$ is the cracking start time, $t_f$ is the final (life) time, and * refers to either the upper ($L_{cr}^U$) or lower ($L_{cr}^L$) bound of the crack length. In this paper, the lower and upper bounds $L_{cr}^L$ and $L_{cr}^U$ are 5% and 40% of the total base length (B1), respectively.

Water height at the upstream face (headwater) varies based on the following regime [40]:

$H_w^m(t) = H_{w0}^m + 3.5 \left( 1 - \frac{1}{\sqrt[4]{t}} \right) H_{w0}^s, \qquad H_w^s(t) = \frac{1}{\sqrt[4]{t}}\, H_{w0}^s$   (25)

where m and s refer to the mean and standard deviation, respectively, and the subscript zero denotes the initial value (first operation). In this paper, $H_{w0}^m$ and $H_{w0}^s$ are assumed to be 90% and 1.5% of the dam height (H1), respectively. Moreover, the lower and upper bounds are 85% and 99.9% of H1. Sampling at any time interval is based on a normal distributional model, Fig. 4(b).

Silt height increases with time as sediment accumulates behind the dam. The following empirical model is proposed to account for the time-dependent alluvium height:

$\psi_{H_s}(t) = K (t - t_s)^b$   (26)

where K and b are constants optimized based on site observations. In this paper, they are assumed to be 0.0085 and 0.5, respectively. Moreover, $t_s$ refers to the delay in the silt accumulation time. It corresponds to the time interval in which the silt is not fully passed through the gates and starts to stack in the dam's reservoir. Consequently, the mean, standard deviation, and the bounds are computed as:

$H_s^{*}(t) = \begin{cases} 0 & \text{for } t \le t_s \\ \left\{\, m:\ H_1 \psi_{H_s}(t); \quad s:\ COV\, H_1 \psi_{H_s}(t); \quad U:\ UB\, H_1 \psi_{H_s}(t); \quad L:\ LB\, H_1 \psi_{H_s}(t) \,\right\} & \text{for } t > t_s \end{cases}$   (27)

where the superscript * refers to m (mean), s (standard deviation), U (upper bound), and L (lower bound). In this paper, the coefficient of variation is COV = 0.6, while LB and UB are assumed to be 0.1 and 3.0, respectively. A normal distributional model is assumed for $H_s$, as shown in Fig. 4(c).

Cohesion at the rock-concrete interface is assumed to deteriorate based on the normalized stochastic model proposed by Li et al. [22]:

$\psi_c(t_n) = 1 - \sum_{i=1}^{n} \hat{\delta}(t_i)\, e(t_i), \qquad \hat{\delta}(t_i) = k\, t_i^m\, \Delta t, \qquad a(t) = \frac{\hat{\delta}(t)}{\xi}, \quad b(t) = \frac{\xi}{\hat{\delta}(t)}$   (28)

where $\hat{\delta}(t_i)$ is the time-dependent mean degradation occurring between $t_{i-1}$ and $t_i$ (describing the shape of the degradation model), Δt is the time separation between $t_{i-1}$ and $t_i$ (usually taken as one year), and k and m are the scale factor and the shape factor, respectively. The parameter $e(t_i)$ is a sequence of independent random variables described by a Gamma distribution with time-dependent shape factor a(t) and scale factor b(t). Finally, ξ is a constant parameter defining the variation associated with $\psi_c(t_n)$.

The degradation function can be simplified as a Gamma distributional model with the following mean and standard deviation [22]:

$\psi_{c_{rc}}^m(t) = 1 - \frac{k}{m+1}\, t^{m+1}, \qquad \psi_{c_{rc}}^s(t) = \sqrt{\frac{k\, \xi}{m+1}\, t^{m+1}}$   (29)

In this paper, k, m and ξ are assumed to be 0.00007, 0.8 and 0.4, respectively. The initial mean and standard deviation are $c_{rc0}^m = 0.65$ and $c_{rc0}^s = 0.2$ MPa, respectively. Finally, constant lower and upper bounds equal to 0.5 and 1.05 times $c_{rc0}^m$ are considered for sampling purposes, Fig. 4(d).
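Read as a gamma process whose independent increments have mean $\hat{\delta}(t_i)$ and variance $\xi\,\hat{\delta}(t_i)$ (which reproduces the moments of Eq. (29)), the cohesion model of Eq. (28) can be simulated in MATLAB as follows; this is a minimal reading of the model under the stated parameters, not the authors' exact implementation:

    % One realization of the cohesion degradation of Eq. (28), simulated as
    % a gamma process whose increments have mean dhat and variance xi*dhat,
    % which reproduces the analytical moments of Eq. (29). Minimal sketch.
    k = 0.00007; m = 0.8; xi = 0.4;            % model constants used in the paper
    t = (1:100)';                              % time in years, dt = 1
    dhat = k * t.^m;                           % mean increments, delta_hat(ti)
    psi  = 1 - cumsum(gamrnd(dhat/xi, xi));    % sampled degradation path, Eq. (28)
    psim = 1 - (k/(m+1)) * t.^(m+1);           % analytical mean, Eq. (29)
    psis = sqrt((k*xi/(m+1)) * t.^(m+1));      % analytical std, Eq. (29)
    plot(t, psi, t, psim, 'k', t, psim+psis, 'k--', t, psim-psis, 'k--');
    xlabel('time [years]'); ylabel('\psi_c(t)');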
Friction angle is assumed to deteriorate based on a survivor function proposed by Yang et al. [42]:

$\psi_{\phi}(t) = e^{-\lambda_{\phi} (t - t_s)^{\gamma}}$   (30)

where $\lambda_{\phi}$ is the failure rate, γ is the shape factor, and $t_s$ is the delay in the deterioration starting time. Consequently, the mean, standard deviation and the bounds are computed as:

$\phi_{rc}^{*}(t) = \begin{cases} \left\{\, m:\ \phi_{rc0}^m; \quad s:\ \phi_{rc0}^s; \quad U:\ \phi_{rc0}^m + UB; \quad L:\ \phi_{rc0}^m - LB \,\right\} & \text{for } t \le t_s \\ \left\{\, m:\ \phi_{rc0}^m\, \psi_{\phi}(t); \quad s:\ \phi_{rc0}^s\, \psi_{\phi}^{-1}(t); \quad U:\ \phi_{rc}^m(t) + UB; \quad L:\ \phi_{rc}^m(t) - LB \,\right\} & \text{for } t > t_s \end{cases}$   (31)

where the superscript * refers to m (mean), s (standard deviation), U (upper bound), and L (lower bound). In this paper, $\phi_{rc0}^m = 30$, $\phi_{rc0}^s = 5$, and LB = UB = 15 (all in degrees). Moreover, $\lambda_{\phi}$ and γ are assumed to be 0.005 and 1, respectively. A normal distributional model is assumed for $\phi_{rc}$, as shown in Fig. 4(e).

Drain efficiency is assumed to reduce with time due to clogged drains, and no remediation action on the pipes is expected. At any given time interval, the drain efficiency is a uniform distribution bounded between the minimum and maximum values, Fig. 4(f).

4.3. Seismic hazard

In this example, it is assumed that the gravity dam is subjected to earthquake loading. There are various methods to model the loads/stresses due to an earthquake event on the dam. Based on Hariri-Ardebili [2], one may categorize these methods as: (1) pseudo-static analysis, (2) pseudo-dynamic analysis, (3) linear time history analysis, (4) nonlinear time history analysis, (5) narrow-range nonlinear analyses, and (6) wide-range nonlinear analyses. Many aspects influence the selection and application of an appropriate method [43]; however, in this paper, pseudo-static analysis (also known as the seismic coefficient method) is used. The main reason can be attributed to the simplicity of this method and its straightforward combination with explicit limit state functions, providing a fully analytical model for the reliability assessment.

In the seismic coefficient method (which was originally developed for design, not analysis), the earthquake loading is treated as an inertial force applied statically to the structure [44]. Two additional loads are applied to the system (compared to the hydrological hazard): (1) the inertia force due to the horizontal acceleration of the dam, and (2) the hydrodynamic force. The former is computed by the principle of mass times the earthquake acceleration and acts through the center of mass. The seismic coefficient, $a_{gm}$, is defined as the ratio of the horizontal ground acceleration to gravity (and in no case can it be related directly to the acceleration from a strong motion instrument) [44].

On the other hand, the hydrodynamic pressure, $P_{hyd}$, and consequently the force, $F_{hyd}$, on the upstream face of the dam may be computed by means of the Westergaard [45] parabolic approximation:

$P_{hyd}(y) = C_e\, a_{gm}\, K_u \sqrt{H_w}\, y^{1/2}, \qquad F_{hyd}(y) = \frac{2}{3}\, C_e\, a_{gm}\, K_u \sqrt{H_w}\, y^{3/2}$   (33)

where y is the water depth measured from the free surface down to the dam's base, $K_u$ is a correction factor to account for the upstream slope (unity for dams with a vertical face), and $C_e$ is a correction factor to account for water compressibility, presented in SI units ([kN], [m], [s]) as:

$C_e = 7.99\, C_c, \qquad C_c = \frac{1}{\sqrt{1 - 7.75 \left( \dfrac{H_w}{1000\, t_e} \right)^2}}$   (34)

where $t_e$ is the period characterizing the ground acceleration imposed on the dam.
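A direct MATLAB transcription of Eqs. (33)-(34) is given below; the chosen values of agm, Ku and te are illustrative assumptions:

    % Westergaard hydrodynamic pressure and force, Eqs. (33)-(34), SI units
    % ([kN], [m], [s]); agm, Ku and te below are illustrative values.
    Hw = 100; agm = 0.3; Ku = 1.0; te = 1.0;
    Cc = 1 / sqrt(1 - 7.75*(Hw/(1000*te))^2);      % Eq. (34)
    Ce = 7.99 * Cc;                                % Eq. (34), [kN/m^3]
    y  = linspace(0, Hw, 101)';                    % depth below free surface [m]
    Phyd = Ce*agm*Ku*sqrt(Hw) .* y.^(1/2);         % Eq. (33), pressure [kPa]
    Fhyd = (2/3)*Ce*agm*Ku*sqrt(Hw) .* y.^(3/2);   % Eq. (33), resultant [kN/m]
    plot(Phyd, -y); xlabel('P_{hyd} [kPa]'); ylabel('depth [m]');

With these values, the base pressure is about one quarter of the hydrostatic one; since it scales linearly with agm, this is consistent with the observation below that the two become comparable around agm = 1.2.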
Fig. 5(a) shows the sensitivity of the hydrodynamic pressure for the standard dam (i.e., H1 = Hw = 100 m). As seen, the value of the hydrodynamic pressure reaches the hydrostatic one at the base under a quite large seismic coefficient value (i.e., agm = 1.2). However, as will be shown later, the probability of failure increases drastically when the dam is subjected to a seismic hazard.

5. Results

This section investigates the application of classical reliability analysis as well as the three machine learning based classification techniques, explained in Section 3, to estimate the failure probability of gravity dam classes. In each classification technique, a set of Ntrn training data points in R^6 (for the hydrological and aging hazards, and later R^8 for the seismic hazard) and their corresponding labels are given as the input. These training data points can be viewed as the rows of the matrix Xtrn ∈ R^(Ntrn×6), where each column represents one of the six random variables used in the analyses. The labels are also stored in the column vector ytrn ∈ R^Ntrn, where each entry is either +1 (for safe) or -1 (for fail).

The three classification techniques are used to learn a classifier from the training data that allows us to predict the responses (safe/fail) for a new set of test data points. Thus, the learned classification rules can be used to estimate the failure probability. Similar to the training phase, the test data points are stored as the rows of the matrix Xtest ∈ R^(Ntest×6). The goal is to predict the response vector ŷtest ∈ R^Ntest by using the three classification techniques. Finally, the failure probability can be estimated as the number of failed cases, i.e., the number of entries of ŷtest that are equal to -1, normalized by the total number of cases, Ntest. Given the estimate of the failure probability P̂f, the normalized estimation error, defined as |P̂f - Pf|/Pf, is used to measure the accuracy.

The overall training and estimation procedure is summarized in Algorithm 1. In order to make the implementation of the proposed method easier for interested readers, MATLAB's built-in functions are used in the given algorithm [46]. Here, Xtrn(:, i) denotes the i-th column of the matrix Xtrn. As mentioned in Section 3, the features or random variables (columns of Xtrn) should be standardized by computing the mean and standard deviation. The three classification methods can be implemented in MATLAB using FITCKNN, FITCSVM, and FITCNB. The learned classification rule is then used in PREDICT to estimate the responses for a set of standardized test data points.

Algorithm 1. Estimation of Pf via machine learning techniques

Input: Xtrn ∈ R^(Ntrn×6), ytrn ∈ R^Ntrn, Xtest ∈ R^(Ntest×6), classification-method
Output: estimated failure probability P̂f
Training classifier:
1: for i = 1, . . ., 6 do
2:   mi = MEAN(Xtrn(:, i))
3:   si = STD(Xtrn(:, i))
4:   X̃trn(:, i) ← (Xtrn(:, i) - mi)/si
5: end for
6: if classification-method = KNN then model ← FITCKNN(X̃trn, ytrn)
7: end if
8: if classification-method = SVM then model ← FITCSVM(X̃trn, ytrn)
9: end if
10: if classification-method = NBC then model ← FITCNB(X̃trn, ytrn)
11: end if
Estimation via trained classifier:
12: for i = 1, . . ., 6 do
13:   X̃test(:, i) ← (Xtest(:, i) - mi)/si
14: end for
15: ŷtest ← PREDICT(model, X̃test)
16: P̂f = |{i : ŷtest(i) = -1}| / Ntest
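A runnable MATLAB transcription of Algorithm 1 might look as follows; it assumes the training/test matrices have already been sampled (e.g., by crude MCS or MCS-IS) and uses the toolbox defaults, except for the polynomial SVM kernel adopted later in this section:

    function Pf = estimatePf(Xtrn, ytrn, Xtest, method)
    % Transcription of Algorithm 1: standardize the features, train the
    % selected classifier, and estimate Pf from predicted labels (-1 = fail).
    mu = mean(Xtrn, 1);  sg = std(Xtrn, 0, 1);
    Xtrn  = (Xtrn  - mu) ./ sg;                % Algorithm 1, lines 1-5
    Xtest = (Xtest - mu) ./ sg;                % lines 12-14 (same mu and sg)
    switch method
        case 'KNN', model = fitcknn(Xtrn, ytrn);                                 % line 6
        case 'SVM', model = fitcsvm(Xtrn, ytrn, 'KernelFunction', 'polynomial'); % line 8
        case 'NBC', model = fitcnb(Xtrn, ytrn);                                  % line 10
    end
    yhat = predict(model, Xtest);              % line 15
    Pf = sum(yhat == -1) / numel(yhat);        % line 16
    end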
For each hazard model, the failure probability can be discussed at three levels, i.e., (1) pilot dam, (2) generalized model, and (3) analytical solution for the generalized form. These three levels are fully explained for the hydrological hazard (Section 5.1). To avoid duplication (and due to limited page numbers), only the generalized model is discussed for the two other hazard models (Sections 5.2 and 5.3). Developing the analytical solution is straightforward, as will be discussed in Section 5.1.3, and can be easily repeated for the aging and seismic hazard models.

5.1. Hydrological hazard

5.1.1. Pilot model
This first example deals with the hydrological hazard on the standard dam (B1/H1 = 0.7). The material and modeling parameters are selected based on Table 1. More specifically, they can be summarized as follows: B1 = 70 m, H1 = 100 m, Ld = 11.25 m, Lcr = U(0, 28) m, Hw = LN(85, 20) m, Hs = N(10, 6) m, ρc = 2400 kg/m³, ρs = 1850 kg/m³, crc = LN(0.6, 0.15) MPa, φrc = N(30, 7) deg., effD = U(0, 0.99), φs = 30 deg. Note that in most cases the values of the normal and lognormal distributions are truncated to avoid unrealistic (and not necessarily negative) values.

Classical structural reliability analysis is performed first based on the crude MCS and IS techniques, Fig. 6. As seen, both methods result in the same Pf (MCS-IS underestimates crude MCS by 4%). However, the number of simulations required for crude MCS is 100 times that of MCS-IS (1e6 vs. 1e4). The variation of the mean Pf is relatively high in crude MCS up to 1e5 simulations, showing that in no case should the number of simulations fall below this value. The MCS-IS results, however, are stable even after 5000 simulations. Shown in Fig. 6 are also the confidence intervals based on Eq. (8). Although MCS-IS shows a narrower confidence interval than crude MCS at the same Nsim = 1e4, for a complete Pf assessment (with 1e6 simulations for crude MCS and 1e4 for MCS-IS) the crude MCS has a narrower confidence interval.

Fig. 6 – Estimated failure probability and the confidence intervals for the pilot hydrological hazard model; the blue line shows the mean and the pink lines are the confidence interval.

Next, the performance of the three classification methods is studied for various numbers of training data points. The value of Ntrn varies from 1e3 to 1e4 and from 1e2 to 1e3 in crude MCS and MCS-IS, respectively. A set of 1e6 test data points is used to estimate the failure probability as described in Algorithm 1, which is compared with the true value of Pf (which is assumed to be Pf^MCS in Fig. 6(a)). Fig. 7 reports the mean and standard deviation of the normalized estimation error over 50 trials. In each trial, Ntrn data points are chosen randomly from the total number of 1e6 data points for crude MCS and 1e4 data points for MCS-IS. The normalized estimation error for each trial is defined as |P̂f - Pf|/Pf, where P̂f and Pf are the estimated and true values of the failure probability, respectively. We compute the mean and standard deviation of the 50 normalized estimation errors.

It is observed that the KNN classification method has much higher accuracy on crude MCS compared to MCS-IS. This is mainly due to the fact that KNN depends heavily on the local structure of the data, which may not be preserved under the importance sampling assumptions. Moreover, based on Fig. 7(a), the KNN method with K = 1 performs more accurately compared to K = 2, 3. To explore the possibility of improving the accuracy using weighted KNN (WKNN), results are reported in Fig. 7(b) and (f). It is concluded that the WKNN method has nearly identical accuracy to KNN for both crude MCS and MCS-IS. Therefore, it is reasonable to use KNN instead of WKNN due to its lower complexity.

The accuracy of the SVM classification technique using both linear and polynomial kernel functions is reported in Fig. 7(c) and (g). It is shown that the polynomial kernel function leads to more accurate estimates of Pf for both crude MCS and MCS-IS. This means that the polynomial kernel function is capable of dealing with the nonlinearity in the structure of the data in this example. Furthermore, the accuracy of SVM using MCS-IS is nearly identical to crude MCS with an order of magnitude fewer training data points Ntrn. This comes from the fact that importance sampling provides a better sampling of the global structure and, thus, facilitates finding the separating hyperplane.

Finally, the accuracy of NBC is investigated for both crude MCS and MCS-IS in Fig. 7(d) and (h). It is observed that the naive Bayes classifier using MCS-IS has higher accuracy than crude MCS. In fact, the estimation error is reduced by a factor of 2 using MCS-IS.

5.1.2. Generalized model
The results of the pilot model in the previous section are generalized for different dam classes; see Fig. 3. Different classes of dams are distinguished by the B1/H1 ratio. The other characteristics of the model are parametrically changed based on Table 1.

Fig. 8(a) illustrates the general class of gravity dams, where H1 varies from 60 to 160 m and B1 takes values from 40 to 90 m. This wide range of the B1/H1 ratio is used to determine the failure probability of dams with a proportional water level (the mean of Hw changes proportionally to H1 while the standard deviation is kept constant). Also, in this figure, the red rectangle represents a narrow range of the B1/H1 ratio in which H1 is 100 m, while Hw varies from 80 to 110 m. Finally, the green square in this figure is the location of the standard dam with B1/H1 = 0.7 (already studied as the pilot model). Fig. 8(b) shows that increasing H1 and reducing B1 lead to an increase in the failure probability, while Fig. 8(c) shows that increasing the water level increases the failure probability for a constant B1/H1 ratio.

Based on the previous results in Section 5.1.1 for the pilot model, four sets of experiments are performed in this section: (1) SVM using crude MCS, (2) SVM using MCS-IS, (3) KNN using crude MCS, and (4) NBC using MCS-IS. In the experiments with crude MCS, a set of Ntrn = 1e4 data points is chosen randomly from the total of 1e6 data points for training the classifiers. Learning classifiers using MCS-IS is based on random sampling of Ntrn = 1e3 training data points from 1e4 data points.

In Fig. 9, the mean of the Pf estimation error over 50 trials is reported for two cases: (1) varying (H1, B1) with proportional Hw (corresponding to Fig. 8(b)), and (2) varying (Hw, B1/H1) (corresponding to Fig. 8(c)). According to Fig. 8, small values of these two parameters lead to very small failure probabilities. This means that it is more difficult to learn a reliable classification rule in this regime due to the small number of training data points from the class that corresponds to failure.
Fig. 7 – Failure probability estimation error using various machine learning techniques for the pilot hydrological hazard
model.
Fig. 8 – Estimated failure probability and the confidence intervals for different dam classes under hydrological hazard risk; the
blue line shows the mean and the pink lines are confidence interval.
This observation can easily be verified in Fig. 9, since all classifiers have larger estimation errors for smaller values of (H1, B1) or (Hw, B1/H1). However, as these quantities get larger, the estimation error decreases and gets closer to zero, as one expects.

Based on Fig. 9(b) and (f), SVM using MCS-IS has the best performance among the four classification techniques. This observation confirms that importance sampling is a successful technique to preserve the global structure of the data using a small number of training data points Ntrn. Furthermore, it is seen that SVM and KNN using crude MCS have high accuracy in estimating the failure probability Pf.

5.1.3. Analytical model
So far, the reliability analysis of gravity dams has been performed with the MCS family and the results have been estimated with three machine learning techniques. In most cases, the failure probability was presented as a direct function of one or two parameters while the other variables were selected randomly. This resulted in a group of increasing curves/surfaces similar to sigmoid-type growth curves. By definition, a sigmoid function is a bounded differentiable real function that is defined for all real input values and has a positive derivative at each point [47].

Thus, the question arises: is there an analytical model to fit the results of the reliability analysis? Hariri-Ardebili [2] already proposed a sigmoid-type curve for quantifying a dam's capacity curve, and it can be used for the reliability function too:

$S(x) = c_1 + c_2\, \frac{1 - e^{c_3 x + c_4}}{1 + e^{c_5 x + c_6}}$   (35)

where $c_i$ (i = 1, 2, . . ., 6) are the model constants obtained from nonlinear least-squares curve-fitting.

Since Fig. 8(b) is the most complete set of analyses on different dam classes, it is selected as the case study to examine the applicability of Eq. (35) in providing a general analytical model for the failure probability. To prevent over-fitting of the results, the coefficients c1 and c2 are set to zero and one, respectively. Thus, the remaining model includes four coefficients to be fitted by nonlinear least-squares optimization techniques [46]. Shown in Fig. 10(a) are the fitted curves and the original data points. The quality of the fitting is shown in Fig. 10(b), where all the residuals are limited to 1.5%. The coefficient of determination is more than 0.99 for all six curves.
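With c1 = 0 and c2 = 1, fitting Eq. (35) reduces to a four-parameter nonlinear least-squares problem; the following MATLAB sketch uses lsqcurvefit (one possible solver choice), and the (H1data, Pfdata) pairs and starting point are placeholders standing in for one curve of Fig. 10(a):

    % Nonlinear least-squares fit of the sigmoid model, Eq. (35), with
    % c1 = 0 and c2 = 1; the data and starting point are placeholders.
    sig = @(c, x) (1 - exp(c(1)*x + c(2))) ./ (1 + exp(c(3)*x + c(4)));
    H1data = (60:10:160)';                                   % dam heights [m]
    Pfdata = [0 .001 .004 .01 .03 .06 .1 .16 .24 .33 .43]';  % placeholder Pf values
    c0 = [-0.01, 0.5, -0.03, 5];                             % assumed starting point
    c  = lsqcurvefit(sig, c0, H1data, Pfdata);               % Optimization Toolbox
    plot(H1data, Pfdata, 'o', H1data, sig(c, H1data), '-');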
Fig. 9 – Failure probability estimation error for different dam classes subjected to hydrological hazard.
So far, each curve presents the dependency of Pf on H1. To establish a relationship between these coefficients and B1, the resulting coefficients (ci, i = 3, . . ., 6) are plotted versus B1, Fig. 11. As is clear, there is a (semi-)linear relationship between ci and B1. To avoid further complexity in the model, these four relations are assumed to be linear functions of B1 and can be written in the form $c_i = (p_1)_i B_1 + (p_2)_i$. Again, the two parameters, slope and intercept, are found for the four coefficients ci. Consequently, the analytical model can be wrapped up in the following form:

$P_f(H_1, B_1) = \frac{1 - e^{(0.0001888\, B_1 - 0.02008) H_1 + (-0.008561\, B_1 + 1.067)}}{1 + e^{(0.0001975\, B_1 - 0.04129) H_1 + (0.07097\, B_1 + 1.503)}}$   (36)

The potential application of quadratic functions for the ci coefficients was also evaluated. Overall, the quality of the failure surface is improved by less than 5%. However, the competing linear and quadratic functions should be compared based on the Akaike information criterion (AIC) [48]. AIC not only rewards goodness-of-fit, but also includes a penalty that is an increasing function of the number of estimated parameters. We found that the AIC for the linear relationships is better than for the quadratic forms.

This model is plotted in Fig. 12 with a very fine grid (as opposed to the results obtained from the reliability analysis based on a very coarse grid). This model can obviously be used for all dam classes having similar loading and material properties. This paper does not intend to propagate this equation for the other random properties (e.g., water level, cohesion, friction angle) and natural hazard risks (e.g., aging and seismic). This is clearly the subject of another paper with more details and a larger number of base simulations.

5.2. Aging hazard

In this section, four sets of experiments are performed to investigate the estimation of the failure probability under the aging condition: (1) SVM using crude MCS, (2) SVM using MCS-IS, (3) KNN using crude MCS, and (4) NBC using MCS-IS. Similar to Section 5.1.2, a set of Ntrn = 1e4 data points is chosen randomly from the total of 1e6 data points for training the classifiers using crude MCS. Learning classifiers using MCS-IS is based on random sampling of Ntrn = 1e3 training data points from 1e4 data points.

In Fig. 13(a), the mean and standard deviation of the estimated Pf are reported and compared with the reference value of Pf^MCS using 1e6 simulations. It is observed that all four classification strategies lead to accurate estimates of Pf (i.e., the differences between the estimated and true values are negligible). However, at small scales, the SVM family underestimates the failure probability while KNN and NBC overestimate Pf. To further investigate the performance of these methods, a plot of the estimation error in log scale is provided in Fig. 13(b). In general, the estimation error is reduced as time passes (and, therefore, the failure probability increases). Among the four methods, SVM using crude MCS has the best performance. Furthermore, SVM using MCS-IS has the most stable condition, as its estimation error has a smaller reduction rate. Finally, the Pf estimation error of NBC using MCS-IS is higher than that of SVM using crude MCS by one order of magnitude.

The reported Pf and estimation error in Fig. 13(a) and (b) were based on the instant probability of failure, Pf^I (see Section 2.2). All these results are based on the standard dam shape (B1/H1 = 0.7) subjected to the time dependent loads/degradation models explained in Section 4.2. One major assumption in these analyses is that all the random variables are uncorrelated. Fig. 13(c) investigates the impact of correlation between crc and φrc on the failure probability. These two random variables are used because they are the only variables in the system that are physically connected through the rock-concrete interface characteristics. Also shown in this figure is the comparison between the instant and cumulative failure probabilities (Eq. (5)). The correlation varies 0.0:0.25:1.0 (zero means the two variables are independent and one means they are fully correlated). Three main conclusions are:
Fig. 10 – Fitting an analytical model to discrete data points from different dam classes under hydrological hazard.
Fig. 13 – Estimated failure probability, the confidence intervals and estimation error under aging risk.
Fig. 14 – Estimated failure probability for different dam classes, correlation and water level subjected to seismic hazard.
For the full reservoir (i.e., Hw = 100 m), the probability of failure under relatively small agm values (i.e., 0.1) is already 40%.

As in Section 5.1.2, four sets of experiments are performed: (1) SVM using crude MCS, (2) SVM using MCS-IS, (3) KNN using crude MCS, and (4) NBC using MCS-IS. A set of Ntrn = 1e4 data points is chosen randomly from the total of 1e6 data points for training the classifiers using crude MCS. Learning classifiers using MCS-IS is based on random sampling of Ntrn = 1e3 training data points from 1e4 data points.

In Fig. 15, the mean of the Pf estimation error over 50 trials is reported for different combinations of (Hw, agm). From Fig. 14, it is observed that small values of Hw and agm lead to very small failure probabilities. Therefore, as mentioned before, the training procedure becomes more difficult due to the small number of training data points. However, when the values of Hw and agm increase, more training data points are available from both classes (safe/fail) to learn a reliable and accurate classification rule. This phenomenon can be seen in Fig. 15, since all classifiers have higher estimation errors for small values of (Hw, agm). However, as these quantities get larger, the estimation error decreases and gets very close to zero.

The SVM classifier using MCS-IS has the best performance among the four classification techniques based on Fig. 15(b). This observation is consistent with the results in Section 5.1.2, where SVM using MCS-IS had the highest accuracy as well. Therefore, importance sampling is a successful technique to preserve the global structure of the data using small numbers of training data points for both the hydrological and seismic hazards. Furthermore, similar to Section 5.1.2, it is concluded that SVM and KNN using crude MCS have high accuracy in estimating the failure probability Pf. Furthermore, NBC using MCS-IS provides high accuracy estimates when Pf is relatively large.

6. Concluding remarks and future work

This paper presented a simplified reliability analysis framework for concrete gravity dams subjected to multi (hydrological, aging, and seismic) hazard risk. A group of time-variant degradation models were proposed for different random variables. Limit state functions were presented in explicit form. Both the classical reliability analysis techniques and the machine learning methods were applied.

For each of the three hazard risks, three types of models are considered: (1) pilot model, (2) generalized model, and (3) analytical model. The framework for each model is entirely explained for the hydrological hazard. However, only the pilot model is explored for the aging and seismic hazards. For less experienced readers, the following algorithm explains the step by step procedure to develop all three models:

1. Quantify the uncertainties in the system and determine the distributional models, Table 1.
   a. System uncertainties, e.g. dimensions, location of drainage, crack, resistance parameters, etc.
   b. Hydrological hazard uncertainty, i.e. pool water height.
   c. Aging hazard uncertainty, i.e. aging time.
   d. Seismic hazard uncertainty, i.e. seismic coefficient.
Fig. 15 – Failure probability estimation error for different water levels subjected to seismic hazard.
2. Select the pilot model. Many gravity dams have a B1/H1 ratio of about 0.7. Thus, B1 = 70 m and H1 = 100 m can be a good starting point.
3. Determine the specific demand-affecting parameter in each hazard model (i.e. water level, age, seismic coefficient).
4. Perform a set of probabilistic simulations based on MCS and develop a sigmoid-type failure model, where the vertical axis presents Pf and the horizontal one is the demand-affecting parameter. Such plots are shown in Figs. 13(c) and 14(b).
5. Determine the possible range of dam width and height and build a matrix of dam classes, Fig. 3.
6. Repeat steps 1 to 5 for any individual combination of B1 and H1. Use the machine learning techniques to reduce the computational effort.
7. Present the "generalized" model as Pf = f(B1, H1, Di), in which Di refers to Hw in the hydrological model, t in the aging model, and agm in the seismic model. Such plots are shown in Fig. 8(b) and (c).
8. Fit a sigmoid-type curve (or surface) to the data and develop the "analytical" model, Figs. 10(a) and 12. There are different models to be used for this purpose. This paper presented a new model based on Eq. (35); however, a comprehensive set of models with their pros and cons can be found in Hariri-Ardebili [49].

The following summarizes the main observations and results:

- In the conducted experiments, SVM has the highest accuracy among the three classification techniques for estimating the failure probability. This is mainly because of two reasons: (1) SVM takes into account the global structure of the data by finding a separating hyperplane, and (2) SVM can deal with the nonlinear structure of the data using nonlinear kernels such as the polynomial kernel function.
- The performance of SVM can be improved using the importance sampling technique. This improvement is more significant for small failure probabilities, since the importance sampling technique provides a much better training data set to distinguish failure from non-failure cases. It is observed that SVM using MCS-IS requires an order of magnitude less training data compared to crude MCS.
- KNN is an extremely simple algorithm which eliminates the need to solve any optimization problem. However, KNN is based on the local structure of the data and for this reason it can only be used in conjunction with crude MCS. Therefore, KNN often requires more training data points, specifically for small failure probabilities, to achieve high accuracy.
- NBC is a simple probabilistic classification technique. However, the performance of this method for small failure probabilities is not as accurate as the other classification techniques studied in this work.
- Accounting for the cumulative failure probability increases the value of Pf considerably compared to the instant failure mode.
- Accounting for the correlation between the random variables increases the failure probability at smaller Pf, while it decreases the failure probability for near-collapse cases.
- The results of the reliability analysis are generalized for different classes of gravity dams. A sigmoid-type function is further proposed and applied only to the hydrological hazard. This model can be helpful for practitioners by providing a general solution for different dam types.

The following remarks can be considered in future research:

- Despite the superior performance of the SVM technique, it is still not a precise classification algorithm, especially for "rare events". One reason can be attributed to the fact that determining a clear separation using linear functions in a projected high-dimensional feature space is difficult to achieve. Applications of other machine learning techniques, such as artificial neural networks, can be investigated to support the findings in this paper or to improve the classification.
- Applications of advanced analysis techniques, such as finite element, finite difference, etc., can be combined with the framework proposed in this paper to improve the quality of the results and to classify based on any desired response quantity (e.g. internal stresses and pore water pressure).
- The interaction between the aging hazard and either the hydrological or the seismic one can be accounted for. This allows quantifying the reliability of existing old and deteriorated gravity dams. Such a framework is already proposed by Hariri-Ardebili [2] for the capacity function of dams and by Ghosh and Padgett [50] for the fragility function of bridges.

Ethical statement

Authors state that the research was conducted according to ethical standards.
[40] S. Huaizhi, H. Jiang, Z. Wen, Service life predicting of dam systems with correlated failure modes, ASCE J. Perform. Construct. Facil. 27 (2013) 252–269.
[41] A. Morales-Torres, I. Escuder-Bueno, L. Altarejos-Garcia, A. Serrano-Lombillo, Building fragility curves of sliding failure of concrete gravity dams integrating natural and epistemic uncertainties, Eng. Struct. 125 (2016) 227–235.
[42] S.-I. Yang, D.M. Frangopol, L.C. Neves, Service life prediction of structural systems using lifetime functions with emphasis on bridges, Reliab. Eng. Syst. Saf. 86 (1) (2004) 39–51.
[43] E. Bretas, A. Batista, J. Lemos, P. Léger, Seismic analysis of gravity dams: a comparative study using a progressive methodology, in: Proc. of the EURODYN 2014 – 9th International Conference on Structural Dynamics, Oporto, 2014.
[44] USACE, Gravity Dam Design, Tech. Rep. EM 1110-2-2200, Department of the Army, U.S. Army Corps of Engineers, Washington, D.C., USA, 1995.
[45] H. Westergaard, Water pressures on dams during earthquakes, Trans. Am. Soc. Civil Eng. 98 (1933) 418–433.
[46] MATLAB, version 9.1 (R2016b), The MathWorks Inc., Natick, MA, 2016.
[47] J. Han, C. Moraga, The influence of the sigmoid function parameters on the speed of backpropagation learning, in: International Workshop on Artificial Neural Networks, Springer, 1995, pp. 195–201.
[48] H. Akaike, A new look at the statistical model identification, IEEE Trans. Autom. Control 19 (6) (1974) 716–723.
[49] M.A. Hariri-Ardebili, Analytical failure probability model for generic gravity dam classes, Proc. Inst. Mech. Eng. Part O: J. Risk Reliab. 231 (5) (2017) 546–557.
[50] J. Ghosh, J. Padgett, Aging considerations in the development of time-dependent seismic fragility curves, J. Struct. Eng. 136 (2010) 1497–1511.